This sounds like a "perfect is the enemy of good" situation. There are certain types of reactors that can reuse uranium and further reduce the waste's half-life to around 6,000 years, so the one-million-year legal requirement is an unreasonable target.
Any material that is still radioactive after a hundred years wasn't that deadly to begin with. There is a strong link between "hotness" and short half-lives: fast-decaying, extra-spicy isotopes are... fast-decaying.
Actually, those materials can be MUCH more radioactive in the beginning compared to 'conventional' nuclear waste; the half-life is just so short that you can let them sit for a couple of decades and then deal with them.
IIRC, those "certain types of reactors" and their supporting infrastructure are (1) very handy for producing weapons-grade nuclear material, and (2) extremely difficult to operate (historically) without sundry environmental disasters.
Which problems make them considerably hotter - politically - than no-reuse type reactors.
I don't know how many different little models this uses under the hood, but I was shocked at how good it was at the couple of document extraction tasks I threw it at.
It's looking like running your own mini ecosystem is the way of the future to me. No data centers, just a decent GPU with 16-24 GB of VRAM, a CPU, and 32 GB of RAM.
I'm pretty sure there's someone somewhere who'll create a proper harness that's equivalent to one giant model (something like the sketch below). The difficulty is mostly that local hardware has a lot of memory constraints. Targeting 128 GB would seem to be the current sweet spot. If we could get the corporate market movers to stop buying up all the memory, we could maybe have more.
Regardless, the kind of work people did in the 80s to prune programs onto small devices is likely happening now. I'd bet most of the Chinese firms are doing it because of the US's silly GPU games, among other constraints.
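To make the "harness" idea concrete, here's a toy sketch of routing prompts to different small local models behind a local HTTP endpoint. The URL, model names, and request/response shape are assumptions (loosely Ollama-style), not a description of any particular product:

```python
# Toy harness: route each task type to a small specialist model instead of
# one giant model. Endpoint, model names, and JSON shape are assumptions.
import json
import urllib.request

ROUTES = {
    "extract": "small-extraction-model",  # hypothetical specialist models
    "code": "small-coding-model",
    "chat": "small-general-model",
}

def call_local_model(model: str, prompt: str,
                     url: str = "http://localhost:11434/api/generate") -> str:
    # Assumes an Ollama-style generate endpoint returning {"response": "..."}.
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(url, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

def route(task: str, prompt: str) -> str:
    # Fall back to the generalist when no specialist is registered.
    model = ROUTES.get(task, ROUTES["chat"])
    return call_local_model(model, prompt)

if __name__ == "__main__":
    print(route("extract", "Pull the invoice number out of this text: ..."))
```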
> They could easily have read it, and thought, that communicates the information that it needs to.
If they aren't self-aware enough or smart enough to determine that what they wrote is indistinguishable from text generation, how probable is it that they have something of value to add to any thought?
I don't really see a reason to complain about tool use, so long as the result is cohesive and accurate, which ultimately means a human has at least read their own output before publishing. It's a bit like receiving a supposedly personal letter that starts with "Dear [INSERT_FIRST_NAME_FIELD],". Are you really going to read such a thing?
My opinion is that literature and art will continue pushing the envelope in the places they have always pushed the envelope. LLMs will not change this; humans love making art, and they love doing it in new ways.
Corporate announcements were never the places that literature and art were pushing the envelope. They were slop before, and they're slop now.
Quantization can be very detrimental for some models, and their quality can drop considerably from the posted benchmarks, which are probably run at bf16; this is why having considerable RAM can be important.
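For a rough sense of why quantization and the memory budget are linked, here's a back-of-envelope sketch. The numbers are illustrative only; real runtimes also need room for the KV cache, activations, and framework overhead:

```python
# Approximate memory needed just to hold a model's weights at different
# quantization levels (illustrative; ignores KV cache and activations).
def weight_memory_gb(params_billion: float, bits_per_weight: float) -> float:
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1024**3

for params in (8, 32, 70):
    # 4-bit schemes usually cost a bit more than 4 bits per weight once
    # scales and zero points are included, hence 4.5 here.
    for label, bits in (("bf16", 16), ("int8", 8), ("~4-bit", 4.5)):
        print(f"{params}B @ {label}: ~{weight_memory_gb(params, bits):.0f} GB")
```

A 70B model at bf16 is roughly 130 GB of weights alone, which is why what fits in 16-24 GB of VRAM can behave quite differently from the model that was benchmarked.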
When it comes to forcing platforms outside of Greece to comply with this, those platforms will just shut their service down in Greece.
If you want to talk about the concept itself of removing anonymity: on HN the impact would not be huge, since a lot of us are not really anonymous, with links to personal sites in our profiles. Reddit is a different beast entirely.
You don't need a physical presence to be subject to another country's laws. Disobeying a judicial order would be grounds for issuing a warrant which could easily be expanded to an international warrant for the owners of the platform.
The "judicial order" in the first place violates the First Amendment, which isn't binding on Greece, but is binding on the nature of any extradition order they wish to seek in the USA.
I do wonder how easy it is to de-anonymise a typical Reddit user based on topics, ways of writing, time of writing, etc. Throw it all into some form of pattern recognition and see if it links up with other Reddit accounts, accounts on forums, things like Facebook, etc.
Throw in the information Reddit already has (IP addresses, user agents, etc.) and it's no doubt a certainty.
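For the pattern-recognition part, even a crude baseline gets you surprisingly far. Here's a minimal stylometry sketch using character n-grams; the account names and texts are placeholders, and a real linkage attack would also fold in posting times, topics, and the metadata Reddit holds:

```python
# Compare accounts by writing style via character n-gram TF-IDF similarity.
# Placeholder data only; high similarity is a hint, not proof, of a link.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

accounts = {
    "reddit_user_a": "concatenated post history of one account ...",
    "forum_user_b": "concatenated post history of another account ...",
    "facebook_user_c": "text from a third account somewhere else ...",
}

# Character 3-5 grams capture punctuation habits, typos, and spelling quirks
# that tend to persist across accounts.
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5))
matrix = vectorizer.fit_transform(accounts.values())

scores = cosine_similarity(matrix)
names = list(accounts)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        print(f"{names[i]} vs {names[j]}: {scores[i, j]:.2f}")
```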
Not quite. Large amounts of the data going into these models have already been curated; otherwise you would get a tremendous number of wrong answers for even the most basic questions.
It still produces wrong answers regardless, though, not because of the training data but because of how these models intrinsically work. The question is what an acceptable error rate is, how severe those errors are, and whether a human would make comparable errors.
But this debate has parallels with self-driving cars; even if the numbers say that self-driving cars are not perfect but safer than human drivers, anything but perfection will be considered broken or outright illegal.
You'd think they could have let the existing GitHub continue as is on its current infrastructure (maybe for paying customers) while all the new AI inrush goes to the Azure setup.
The European leaders would have no say in it. If the software from Seattle is designed to covertly exfiltrate information, they won't even know it. Even if they review the individual code changes, it can be an obfuscated attack similar to XZ, where the code itself is clean but the binary test data for the network fabric firmware is not.
Which community are we talking about? The professionals with 10+ years of experience who are using LLMs, the vibe coders that have no experience writing code, or everyone in between? If you read some of the online communities, the experiences with the models are all over the place: some compare GPT 5.5 to the second coming of JC while others think it's stupider than 5.4.
I personally don't have time to build a set of private benchmarks to compare the models that are coming out, so I'm mostly relying on other people's private and semi-private benchmarks to get a feel for how models are improving before I subscribe to a service and start using it myself. At least that's a bit more reliable than the vibes of random people and bots on Reddit.
yea lol i think the community on this one is woefully unqualified to call any shots here. the goalposts are basically teleporting, and everyone's measuring success against their own incredibly vague, personally created, non-deterministic agentic workflows. there are no real answers coming from "the community" in this space at the moment; it's vividly similar to the cryptocurrency cycles. most importantly, like you say, vibe coders are going to be the largest subset of the community and probably the most unqualified to assess performance because they're mostly clueless about how things work under the hood.