Back in the day I couldn't even dream of a PC. They were way too expensive. It took my extended family (~15 people) chipping in to buy me a C64 with tape storage. Still, it was great fun. It made me learn programming in BASIC and English at the same time (the Polish-language book included was so badly translated and full of errors it was hopeless).
It was pre-internet, obviously, so obtaining software was very difficult. For years, while I was learning assembler, I used a so-called "monitor cartridge" that did simple assembly/disassembly, but it didn't support labels and such. I could read about software like "Meta Assembler" that let you use labels and variables and think "wow, I could do so much with that..."
My first PC came sometime in the late 90s: a Celeron 233MHz with Windows 95. I wasn't a huge fan of Windows back then. I remember when one of the PC magazines I bought had Red Hat Linux install CDs. I liked it from the start. The fact that my software-only modem and Lexmark printer didn't work got me into kernel programming :-)
Fun to think of it now, but I prefer 2026 a hundred times over :-)
After 2 years of using all of these tools (Claude Code, Gemini CLI, opencode with all available models) I can tell you they are a huge enabler, but you have to provide the "expert guardrails" yourself by reviewing every single deliverable.
For someone who can design an end-to-end system by themselves these tools offer big time savings, but they come with dangers too.
Yesterday a mid-level dev on my team proudly presented a web tool he "wrote" in Python (meant to run on localhost) that runs kubectl in the background and presents things like the versions of images running in various namespaces. It looked very slick; I can already imagine the product managers asking for it to be put on the network.
So what's the problem? For one, no threading whatsoever, no auth, all queries run in a single thread, and on and on. A maintenance nightmare waiting to happen. That's the risk of a person who knows something, but not enough, building tools by themselves.
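For contrast, here's a minimal sketch (my own, not the colleague's actual code) of the two missing pieces: a shared-secret auth check and a bounded worker pool with a hard timeout around each kubectl invocation. All names (`check_token`, `KUBECTL_TIMEOUT`, the pool size) are illustrative assumptions.

```python
# Sketch: the bare minimum a kubectl-wrapping dashboard needs before
# it goes anywhere near a shared network.
import hmac
import subprocess
from concurrent.futures import ThreadPoolExecutor

KUBECTL_TIMEOUT = 10     # seconds; a hung API server must not hang the UI
API_TOKEN = "change-me"  # in reality: load from env var / secret store

# Bounded worker pool so concurrent dashboard requests don't serialize
# behind one slow kubectl call (the original ran everything in one thread).
pool = ThreadPoolExecutor(max_workers=4)

def check_token(presented: str) -> bool:
    """Constant-time comparison so the token can't be guessed byte by byte."""
    return hmac.compare_digest(presented, API_TOKEN)

def run_kubectl(*args: str) -> str:
    """Run a single read-only kubectl command with a hard timeout."""
    result = subprocess.run(
        ["kubectl", *args],
        capture_output=True, text=True,
        timeout=KUBECTL_TIMEOUT, check=True,
    )
    return result.stdout

def images_in_namespace(namespace: str) -> str:
    # Submitted to the pool; the web handler gets a Future it can wait on
    # instead of blocking every other request.
    return pool.submit(
        run_kubectl, "get", "pods", "-n", namespace,
        "-o", "jsonpath={..image}",
    ).result(timeout=KUBECTL_TIMEOUT + 1)
```

None of this is exotic; it's maybe 30 lines, which is exactly why its absence is a review smell rather than a scoping decision.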
Yup. I’m no expert so maybe I’m completely off base, but if I were OpenAI or Anthropic I’d likely just hire 1000 highly skilled engineers across multiple disciplines, tell them to build something in their domain of expertise, then critique the model’s output, iteratively work on guardrails for a month or two until the model one-shots the problem, and package that into the new release.
Any comments on how copyright issues are handled in corporate settings? I mean both staying clear of lawsuits and ensuring what we produce remains safe from copying.
Similar to my favourite OVH servers, but I have unlimited traffic at 0.5Gb/s, 64GB RAM and dual NICs. Similar price (with VAT, in Poland).
If you wanted to run the same workloads on AWS it would cost you a few hundred euros a month.
I see a silver lining in all this. At least maybe the silly "throw more horizontal scaling at it" will stop being the default response to every performance problem, and people who can squeeze more processing out of the same hardware will be sought after again.
> If 5 companies want to buy a quadrillion RAM chips to build datacenters, why is this manipulation more so than a million companies each wanting to buy 100 RAM chips?
Because they are 5 companies, especially when it can be shown they work in unison (i.e. formed a cartel).
The excuse "we need to raise prices because we have more demand" is BS. They should be truthful and say "we can increase prices and people will pay because they want to be EU-based".
To be honest, for anything more serious than a personal Minecraft server, Hetzner has been beaten by OVH for ages on bandwidth: OVH gives you all-you-can-eat traffic limited only by speed (for example 500 Mbit/s), instead of Hetzner's 20 TB cap.
For this reason Hetzner is always a "backup DC" in my eyes and never the primary.
Also I heard they are extremely sensitive regarding abuse allegations so don't even think of hosting something someone may not like seeing...
They get a lot of hype, but there are many competitors worth looking at.
Is it? Gemini 3-pro-preview and 3-flash-preview, respectively the top 2 and top 3, had 44% and 37% true positive rates and a whopping 65% and 86% false positive rates. That's worse than a coin toss. Any false positive rate above 0% (3% to be generous) is useless in the real world. This leaves only Grok and GPT, with 18%, 9% and 2% success rates.
In fact, this is what authors said themselves: "However, this approach is not ready for production. Even the best model, Claude Opus 4.6, found relatively obvious backdoors in small/mid-size binaries only 49% of the time. Worse yet, most models had a high false positive rate — flagging clean binaries." So I'm not sure if we're even discussing the same article.
I also don't see a comparison with any other methodology. What is the success rate of ./decompile binary.exe | grep -E "(exec|system).*/bin/sh"? What is the success rate of state-of-the-art alternative approaches?
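To make the baseline concrete, here's a hedged sketch of that kind of dumb static check: extract printable strings from the binary and flag it if any suspicious token appears. The token list and minimum string length are my own illustrative assumptions, not anything from the article.

```python
# Naive string-matching baseline for "does this binary look backdoored?",
# roughly `strings binary | grep -E ...` in pure Python.
import re

# Illustrative token list; a real baseline would tune this.
SUSPICIOUS = [rb"/bin/sh", rb"execve", rb"system\(", rb"reverse_shell"]

def strings(blob: bytes, min_len: int = 4) -> list:
    """Extract printable-ASCII runs, like the `strings` utility."""
    return re.findall(rb"[\x20-\x7e]{%d,}" % min_len, blob)

def naive_flag(blob: bytes) -> bool:
    """Flag the binary if any extracted string matches a suspicious token."""
    patterns = [re.compile(p) for p in SUSPICIOUS]
    return any(p.search(s) for s in strings(blob) for p in patterns)
```

A baseline this crude obviously misses anything obfuscated and fires on legitimate shells, but that's exactly the point: without reporting its true/false positive rates next to the LLMs', we can't tell how much the models actually add.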
I applaud their bravery in remaining nonviolent, but I'm not sure that's the best strategy when the state has shown its willingness to just kill everyone.
Would organising an armed resistance be more effective? The state disappears people. Have them organise and disappear the leaders of the Revolutionary Guard, or at the very least help another state (like Israel) target them.
Nonviolence works only in democracies and other systems where the rulers care about what people think.
Protest of any kind only works in systems where the rulers aren’t insulated from the sentiment of their populace by a steady stream of natural resources money.
Nonviolence works where the rulers have a conscience (or at least where those who carry out the rulers' will do).
Would armed resistance be more effective? How many guns can they get their hands on? I don't know the answer to that, but my expectation is, not many. (I am open to correction.)
I mean, with dictators, that's usually what it comes down to. But it often takes years or decades of unrest and repression before someone with enough guns decides they want to be on the right side of history.
It's a fascinating if morbid process we go through every now and then... sort of, building consensus by sacrificing livelihoods and lives.
Iran is one of the most oppressive regimes remaining on this planet, so I really hope this does it. The problem is that revolutionary governments are usually not dumb and do their best to make sure that another revolution can't overthrow them too easily - hardline loyalists with benefits in the military, etc. So this probably ends with a military intervention by other countries or some other sequence of events that will spell even more misery.
The whole history of the Iranian revolution is pretty wacky. It's easy to take a knee-jerk position that "the West did it", and we definitely set some pieces in motion, but Iran wasn't really hurting prior to the revolution, which is why it caught everyone by surprise. The shah made a number of political missteps, there was some sentiment against the UK and the US, and people wanted change... but almost no one wanted a theocratic dictatorship instead. And yet...
Is there a shittier summary anywhere, please? Or did the author reach the peak of enshittification?
Honestly, did the bot implementation have bugs or was it a proper implementation that crashed the network due to sheer numbers?
Also, how does changing the encryption standard affect anything if the bots tried to integrate correctly with the network?
Is the problem "fixed" or is it not? Elsewhere I found that a large number of botnet devs got pissed off with this botnet operator and 600k nodes went offline. Might that have much more to do with the situation getting better than the change of encryption?
Also, was there any suggestion a quantum breaking attack was attempted? No. So why put the emphasis on "post quantum" in this article?