They did not stop using it due to contamination. They said it's flawed and indirectly said Anthropic's results were impossible. It's very possible they are sore losers.
Qwen3.6 runs on a single GPU and beats Claude Sonnet, both in benchmarks and in real-world tests from humans. Kimi is awesome, but most people won't be able to host it themselves.
A lot of people are slowly realizing the moat of 1T closed source models is gone as of the last few weeks. It's going to change the industry. April was a huge month for open models; it'll be interesting to see if that continues.
This Mistral submission is another nail in the coffin.
I care a good deal that I trust the people who developed my browser. It's about the most critical piece of software in my life, from banking to professional to personal use.
The people who developed Brave used Brave to impersonate people and defraud their users out of money by asking for donations using other people's names [1]. I don't trust them at all. Thus I don't use their browser.
And, unsurprisingly, this is part of a pattern of bad behavior, not a one-off criminal act by otherwise trustworthy people; see [2] for some examples.
This sounds like good advice so upvoted. I’m a big fan of Raymond Hill¹’s products so I am curious about how much benefit Adguard provides if uBlock Origin is already blocking online trackers, ads and other annoyances.
¹ In this case, the developer – not the musician. I really liked the user interface of uMatrix.
It’s really nice to have ad and tracker domains blocked systemwide, though I think you need to be more careful and set your device up as supervised to have more robust blocking (real always-on VPN functionality vs. best effort?).
And even then, when I read about defects in Apple software that mean a firewall like Little Snitch (macOS) isn’t perfect, I think an external device (mobile VPN router?) is going to be essential for some threat models.
I’m a Firefox user myself but there are some very valid arguments against it on Android as well. Firefox on Android is significantly more vulnerable to exploits, lacks internal sandboxing and doesn’t properly isolate tabs from each other.
That's not totally true. Orion supports Chrome/FF WebExtensions, for example. The engine does (practically, even in the EU) have to be WebKit, but that's not the same thing as a "Safari skin."
There is Reynard if you're motivated (Gecko-based, but not ready for prime time yet; to get good performance you'll have to resort to a workaround to enable JIT, since it does not rely on Apple's BrowserEngineKit). One of the project's goals is giving older, no-longer-updated iOS devices access to a modern browser.
Genuine question: does Brave have FF's container extension? Currently that's one of the things keeping me on FF. Another big one is that I test websites on Firefox, so as not to get carried away with features only available in Chromium.
Why would you use Brave when for many years it would surreptitiously install a VPN service on your Windows machine? The Brave devs took more than a year to even address it, let alone remove it.
More ideologically, Google and Chromium are awful for the internet as monopolistic tech.
Their whole thing looks sketchy, frankly. I'm not saying they're evil or have some deep secret ulterior motive. But their "vision" appears to be a bunch of absolutely half-baked ideas for privacy, for which Firefox has a much more boring, and consequently better, track record.
The average person has no idea this is true, and the average person cannot tell when this is the case. So we have a bunch of people making their way through school and then relying on AI when they get stuck. The future is gonna be wild.
Yep. And it doesn't help that the people selling AI products act as if they're going to build God. Going, "well AI can't do that" isn't going to fly when you are lax about communicating its limitations!
It also doesn't help when the messaging is linked to how "there will be no jobs where you use your brain anymore, everything will be automated." What motivation does the average 16-year-old have to try hard and learn anything beyond what they immediately need?
No jobs, AI Jesus is coming, and if you use AI it will use all of the world's compute power to try to convince you it's correct even when it's not.
A more robust measurement might be the (former) US Department of Education's "Adult Literacy in the United States" survey, most recently conducted in 2019. The results of this are sobering enough:
Both show that only a small fraction (5--10%) of adults operate at high levels of literacy (whether of text, numeracy, or technology), and that a large fraction (roughly 50%) operate at a minimal or below-minimal level.
Many of us witnessed the technical literacy of the general population when we rushed to show them ChatGPT 3.5 and they just kind of shrugged, like, "So? What are you showing me?"
I am asking a lot here, but school needs to be teaching people what AI is, what its weaknesses are, and how to use it... My school taught me to use a calculator. It also taught me how to check my work when I relied on the calculator.
AI is a very complicated calculator - you give it an input, magic happens, it gives you an output. Really no different, to a layman.
To be fair, this should probably be covered by basic physics/maybe cooking classes. “You can’t determine the calories in food by looking at it” isn’t really ML specific.
Won't help much if kids are AI'ing their way through physics and then, ten years later, need to go on a diet, having never applied the knowledge or exercised their critical thinking skills.
Considering the lack of basic math skills I encounter each and every day, I don't think schools did enough; they certainly aren't going to do enough w/LLMs.
Given the lack of understanding of basic chemistry and physics, like fundamental thermodynamics... I have little hope any population can be trained to understand LLMs sufficiently.
It's more complicated than a calculator. Even researchers who have dedicated their lives to the field don't know all of the limitations of any given model. That's especially unhelpful when a model is 80% correct in one area but 2% correct in another.
If even experts in the field don’t know all of the limitations then it’s even more important to stress that relying on the output of an LLM is a poor choice without additional checking and verification.
Even with calculators, I was taught that you should double check by hand sometimes to make sure you got it right.
If you're looking for a citation, the 1999 Dunning-Kruger paper "Unskilled and Unaware of It" [1] is about exactly this.
People who are unskilled at a task are unaware of what that task looks like when performed correctly. So somebody who can't count calories is also unable to tell that the AI can't perform the task correctly.
Fwiw invoking Dunning-Kruger is beyond trite at this point.
Which is a good thing because it means we can talk like normal humans ("people don't know that it's unreliable") instead of acting like we're making such a profound claim that it needs a citation and psychological dissection.
Agreed. In general the amount and variety of bugs introduced since everyone started vibing is worrying. It is probably a national security concern but I guess so is the economy tanking due to failed AI investments. Guess we will see
It's basically saying to randomly slop something and see if it gets better. Evolution has physical principles and guardrails backing it. Here there are no principles whatsoever, just slopping the slopper to see if it's somehow less sloppy than writing a gist with a slop machine.
I wouldn't call it Karpathy's loop; I'd call it slop descent. Or descent into slop. Or something like that.
Evolution very much involves random mutations that turn out useless or harmful and thus don't spread.
This is in fact less random than how genetic algorithms traditionally worked: they encoded behaviors in some data structure that then got randomly mutated or crossed with other candidates in the pool.
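For anyone unfamiliar, here's roughly what that traditional setup looks like. This is a toy sketch (not from the thread): behavior is encoded as a plain bit list, and the classic "OneMax" problem, where fitness is just the number of 1-bits, stands in for a real objective. All names and parameters here are illustrative.

```python
import random

random.seed(0)  # deterministic for the example

GENOME_LEN = 32
POP_SIZE = 20
GENERATIONS = 60
MUTATION_RATE = 0.02

def fitness(genome):
    # OneMax: count the 1-bits. A stand-in for any scoring function.
    return sum(genome)

def mutate(genome):
    # Flip each bit independently with a small probability.
    return [1 - b if random.random() < MUTATION_RATE else b for b in genome]

def crossover(a, b):
    # Single-point crossover between two parent genomes.
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def evolve():
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:POP_SIZE // 2]  # truncation selection: keep top half
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

The point of contrast: here the mutation operator is blind structural noise, while an LLM-in-the-loop rewrite is biased by everything in the model's training, so it's "less random" in that sense.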
I am aware of what biological evolution is. This isn't analogous. I love my software friends, I'm a software person now too, but the level at which people take algorithms that involve any level of biomimicry as a model for actual biology is frustrating.
Is slop verifiable? If so, we can throw it in the loop... The point is that this loop can be pointed at any verifiable work. Yeah, you are seeing it raw; the verifier is the principle you talked about. Yes, it was fully AI generated. It will be refined.