> Japan is one of the only countries to have privatized parking. In Europe and North America, vast quantities of parking space is socialized: municipalities own the streets and allow people to park on them at low or zero cost. Initially with the intention of encouraging the provision of more parking spaces, Japan made it illegal to park on public roads or pavements without special permission. Before someone buys a car, they must prove that they have a reserved night-time space on private land, either owned or leased.
This has got to be a huge factor. Making everyone pay for "free parking" through inefficient use of space is such a waste. I strongly recommend that everyone read Donald Shoup's "The High Price of Free Parking".
We dropped Claude. It's pretty clear this is a race to the bottom, and we don't want a hard dependency on another multi-billion dollar company just to write software.
We'll be keeping an eye on open models (of which we already make good use). I think that's the way forward. Actually, it would be great if everybody put more focus on open models; perhaps we can come up with something like the "linux/postgres/git/http/etc" of LLMs: something we can all benefit from without it being monopolized by a single billionaire company. Wouldn't it be nice if we didn't need to pay for tokens? Paying for infra (servers, electricity) is already expensive enough.
When I did my Computer Science degree the vast majority of courses were 50% final, 30% midterm - even programming exams were hand written, proctored by TAs in class or in the gymnasium - assignments/labs/projects were a small part of your grade but if you didn’t do them the likelihood you’d pass the term exams was pretty darn low.
One of the fun features that I developed for Warcraft (the RTS) was to fade the screen to grayscale when the game is paused.
Since the game uses a 256 color palette, it was only necessary to update a few bytes of data (3x256) instead of redrawing the whole screen, so the effect was quick.
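The trick can be sketched like this (a hypothetical `to_grayscale` helper in Python for illustration; the actual game would have rewritten the VGA palette registers directly):

```python
# Sketch of the pause-to-grayscale palette trick. With a 256-color
# palette, only the 256 (r, g, b) entries (3*256 = 768 bytes) need to
# change; every pixel on screen picks up the new colors automatically.

def to_grayscale(palette):
    """Map each (r, g, b) palette entry to a gray of similar brightness."""
    gray_palette = []
    for r, g, b in palette:
        # Standard luminance weights for perceived brightness.
        y = int(0.299 * r + 0.587 * g + 0.114 * b)
        gray_palette.append((y, y, y))
    return gray_palette
```

Restoring the saved original palette un-pauses the colors just as cheaply.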
I also used this trick when the game stalled due to missing network packets from other players. Initially the game would still be responsive when no messages were received, so that you could still interact and send commands. After a few seconds the game would go into a paused state with the grayscale screen to signal to the player that things were stuck. Then, several seconds after that, a dialog box would appear allowing the player to quit the game.
This was much less disruptive than displaying a dialog box immediately on network stall.
I've been part of a response team on a security incident and I really feel for them. However, this initial communication is terrible.
Something happened, we won't say what, but it was severe enough to notify law enforcement. What floors me is that the only actionable advice is to "review environment variables". What should a customer even do with that advice? Make sure the variables are still there? How would you know if any of them were exposed or leaked?
The advice should be to IMMEDIATELY rotate all passwords, access tokens, and any sensitive information shared with Vercel. And then begin to audit access logs, customer data, etc, for unusual activity.
The only reason to dramatically overpay for the hosting resources they provide is because you expect them to expertly manage security and stability.
I know there is a huge fog of uncertainty in the early stages of an incident, but it spooks me how intentionally vague they seem to be here about what happened and who has been impacted.
Say whatever you want about the merits of prediction markets. But I just don't see a way those benefits outweigh the societal dangers of these constant reminders that people in or close to power can freely profit from their positions in ways the rest of the population can't. There's always talk about the dangers of disincentivizing job creators, but what happens when a society routinely disincentivizes job havers in this way? We're just getting a constant barrage of information telling us that if we show up to our job and simply work as we're expected, we're stooges who won't get ahead. You'll need to look for your own individual scheme, ethics be damned, if you just want to keep up with the rest of the population. That's not healthy on an individual level or cumulatively at a societal level.
Kdenlive hits the perfect sweet spot for me. It's much more capable than basic editors like iMovie, but doesn't have the overwhelming learning curve (or steep hardware requirements) of DaVinci Resolve.
Like others have mentioned, pairing it with OBS for screen recording and Audacity for audio makes for an incredibly powerful, 100% FOSS media creation stack. It's amazing to see how far open-source video editing has come.
Is there any constant more misused in compsci than ieee epsilon? :)
It's defined as the difference between 1.0 and the smallest number larger than 1.0. More usefully, it's the spacing between adjacent representable float numbers in the range 1.0 to 2.0.
Because the spacing between representable floats doubles at every integer power of two, it's impossible for two distinct numbers greater than or equal to 2.0 to be epsilon apart. The spacing between 2.0 and the next larger number is 2*epsilon.
That means `abs(a - b) <= epsilon` is equivalent to `a == b` for any a and b greater than or equal to 2.0. And if you use `<` instead, the threshold drops to 1.0.
Epsilon is the wrong tool for the job in 99.9% of cases.
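A quick sketch of the points above, using `math.nextafter` to find adjacent floats:

```python
# Demonstration that an absolute machine-epsilon test degenerates to
# plain equality at 2.0 and above, and that a relative tolerance scales
# with the operands instead.
import math
import sys

eps = sys.float_info.epsilon             # 2**-52 for 64-bit floats

one_up = math.nextafter(1.0, math.inf)   # smallest float > 1.0
assert one_up - 1.0 == eps               # the definition of epsilon

two_up = math.nextafter(2.0, math.inf)   # smallest float > 2.0
assert two_up - 2.0 == 2 * eps           # spacing doubles at 2.0

# For values >= 2.0, the absolute-epsilon test is just `==`:
# even two *adjacent* floats fail it.
assert not (abs(two_up - 2.0) <= eps)

# A relative tolerance grows with the magnitude of the operands:
assert math.isclose(two_up, 2.0, rel_tol=4 * eps)
```

`math.isclose` (or a hand-rolled relative comparison) is usually what people actually want when they reach for epsilon.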
Everyone talking about magenta and brown, but you can see an illusory color right now even without lasers! https://dynomight.net/colors/ behold, some kind of hyper-turquoise
One of the things that impressed me in Quake (the first one) was the demo recording system. The system was deterministic enough that it could record your inputs/the game state and just play them back to get a gameplay video. Especially given that Quake had state-of-the-art graphics at the time, while video playback on computers was otherwise a low-res, resource-intensive affair, it was way cool.
It always surprised me how few games had that feature - though a few important ones, like StarCraft, did - and it only became rarer over the years.
“Japan’s liberal land use regulation makes it straightforward to build new neighborhoods next to railway lines, giving commuters easy access to city centers. It also enables the densification of these centers, which means that commuters have more places they want to go.”
This is the most important paragraph in the article. It can’t be overstated how ingenious Japan’s system of zoning is and how much this has benefitted their society in ways we can only dream about here in the West.
I moved two servers, one from Linode and the other from DO to Hetzner a few months ago, with similar savings. The best part was that the two servers had tens of different sites running, implemented in different languages, with obsolete libraries, MySQL and Redis instances. A total mess. Well: Claude Code migrated it all, sometimes rewriting parts when the libraries were no longer available. Today complex migrations are much simpler to perform, which, I believe, will increase mobility across providers a lot.
The UAE doesn't have a self-advancement culture, it's a capital-backed monarchy that imports pretty much all of its research and production; in other words it piggy-backs on the knowledge produced in other societies. There is no advancement through dialog in the country itself.
> publishing information deemed harmful to state interests
Is the charge, which I think kind of speaks for itself. Full on: "You embarrassed us, straight to jail."
In most of the world such photos would be deemed of public interest and shared by the media; then we'd reflect on whether our routing is safe/correct and make proportional changes for safety. Not a big deal, nobody is fired, life moves on.
I feel like actions like this are going to hurt the UAE themselves, because how can you improve if there is no dialog? No information to even start a dialog? A lot of hard conversations are NOT going to be had because I guess it is a state secret?
Combined with the announcement that they're killing the old Kindles as well... this is 100% about preventing people from liberating their books from DRM. Full stop. They are closing each and every remaining hole.
lol yes. At least in agency world, a common approach in the last X years has been that designers create entire pixel-perfect, component-based sources-of-truth in Figma (which evolve! they aren't delivered static and complete). These are also what the client sees and approves, or at the very least they see branded deck slides that incorporate the Figma designs.

Anyways, front end then re-implements from Figma into CSS, except it's usually best-approximation (not pixel-perfect), partially because, despite Figma allowing you to "copy CSS" for an element, the result is unusable, almost inline CSS (usually not aware of its ancestors and descendants, any variables you're maintaining in CSS, or any class hierarchies, etc.), and partially because the units of measurement aren't always identical on either side. You'll also often have multiple FE devs recreating components independently of each other (as a team effort), which can lead to drift and different implementations, which is fun.

Then, depending upon the tech stack, FE might be building these components in something like Storybook [0] as a "front end source of truth", which is then either directly injected into a React or NextJS app or whatever, or sometimes partially or fully re-implemented again into BE components in the CMS (ex. Sitefinity). Then people ask which one is the source of truth, but really it's a chain of sources of truth that looks more like the telephone game than a canonical "brand bible".

Then throw in any out-of-the-box future client efforts (say, a promotional landing page hosted outside of the main project) and you may have yet another reimplementation of part of the same design, but in a completely different system.
From 1988-91, I was a volunteer teacher in Africa. I lived in a hut without running water or electricity, and I had a subscription to Byte.
There was also almost nothing to read, so when my monthly issue of Byte appeared (2-3 months later than most people would receive it), I devoured that thing. I would read it literally cover to cover, including all those ads, several times.
I wasn't (then) working in IT, so a lot of the content (like Steve Ciarcia's Circuit Cellar) went way over my head but it didn't matter, I read it anyway, often by the light of my kerosene lantern. I learned a huge amount: object-oriented programming, this new thing called the Internet (capitalized back then, and before the WWW), and how Jerry Pournelle was a self-important jerk (but boy, did I envy the toys he got to play with!).
This was the age of big, fold-out Gateway 2000 ads, 20MB hard drives, and Turbo Pascal kicking other compilers' butts.
I would read the magazine, then write out programs (in BASIC, the only language I had learned at that point). On my monthly trips to the capital city I would go to a local NGO and in exchange for helping with their IT issues they would let me play (i.e type out my programs and try to get them working) on their computers.
Thank you Michael Rabin for your excellent work. Rest in Peace.
Rabin Fingerprinting is one of my favorites of his contributions. It's a "rolling hash" that allows you to quickly compute a 32-bit (or larger) hash at *every* byte offset of a file. It is used most notably to do file block matching/deduplication when those matching blocks can be at any offset. It's tragically underappreciated.
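The sliding-window idea can be sketched in a few lines. Note this is a Rabin-Karp-style polynomial hash modulo a prime, not Rabin's actual GF(2) irreducible-polynomial construction, but the O(1)-per-offset rolling update is the same:

```python
# Rolling hash sketch: each step removes the outgoing byte's contribution
# and shifts in the incoming byte, so every window offset costs O(1).

BASE = 257
MOD = (1 << 61) - 1  # a Mersenne prime
WINDOW = 48          # dedup chunkers typically use windows of tens of bytes

def direct_hash(window: bytes) -> int:
    """Hash one window from scratch in O(len(window))."""
    h = 0
    for b in window:
        h = (h * BASE + b) % MOD
    return h

def rolling_hashes(data: bytes, w: int = WINDOW):
    """Yield the hash of every w-byte window of data, one per offset."""
    if len(data) < w:
        return
    h = direct_hash(data[:w])
    yield h
    top = pow(BASE, w - 1, MOD)  # weight of the byte leaving the window
    for i in range(w, len(data)):
        h = (h - data[i - w] * top) % MOD  # drop the outgoing byte
        h = (h * BASE + data[i]) % MOD     # shift in the incoming byte
        yield h

# Every rolled hash agrees with hashing that window from scratch:
data = bytes(range(256)) * 2
assert all(
    h == direct_hash(data[i:i + WINDOW])
    for i, h in enumerate(rolling_hashes(data))
)
```

In deduplication, offsets where the fingerprint satisfies some condition (e.g. its low bits are all zero) become chunk boundaries, so identical content yields identical chunks even when it shifts position within a file.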
I've been meaning to write up a tutorial as part of my Galois Field series. Someday..
Claude Code defaulting to a certain set of recommended providers[0] and frameworks is making the web more homogeneous, and that lack of diversity is increasing the blast radius of incidents.
You'd have to be spectacularly stupid to bet on these kinds of things without having insider knowledge, because you ought to know good and damn well by now that the people with insider knowledge are DEFINITELY betting on them.
> The new <acting_vs_clarifying> section includes: When a request leaves minor details unspecified, the person typically wants Claude to make a reasonable attempt now, not to be interviewed first.
Uff, I've tried stuff like this in my prompts, and the results are never good. I much prefer the agent to prompt me upfront to resolve ambiguity before it "attempts" whatever it wants. Kind of surprised to see that they added that.
I used it today to take a look at my previously built design system with Logos, branding, fonts, and everything else. After a lot of annoying tweaking back and forth, finally, I got something that was satisfactory.
Then I looked at the usage and it said I had used 95% of my Claude design usage for the week!
This isn't a real tool. This is a plaything, if that's what they're providing as examples.
For a fair comparison you need to look at the total cost, because 4.7 produces significantly fewer output tokens than 4.6, and seems to cost significantly less on the reasoning side as well.
Here is a comparison for 4.5, 4.6 and 4.7 (Output Tokens section):
Notably the cost of reasoning has been cut almost in half from 4.6 to 4.7.
I'm not sure what that looks like for most people's workloads, i.e. what the cost breakdown looks like for Claude Code. I expect it's heavy on both input and reasoning, so I don't know how that balances out, now that input is more expensive and reasoning is cheaper.
On reasoning-heavy tasks, it might be cheaper. On tasks which don't require much reasoning, it's probably more expensive. (But for those, I would use Codex anyway ;)
It wasn't really that much to do with determinism. Quake uses a client-server network model all the time, even when you're only playing a local single-player game. What the demo recording system does is capture all of the network packets that are being sent from the server to the client. When playing back a demo, all the game has to do is run a client and replay the packets that it originally received from the server. It's a very elegant system that naturally flows out of the rather forward-looking decision to build the entire engine around a robust networking model.
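The record/replay idea can be sketched like this (hypothetical `Client`/`play` stand-ins for illustration, not Quake's actual code or demo format):

```python
# Demo recording as packet capture: recording logs every server->client
# packet; playback feeds the captured packets to a fresh client, which
# processes them exactly as it would a live session.

class Client:
    def __init__(self):
        self.state = []

    def handle_packet(self, packet: bytes):
        # In a real engine this would update entities, view angles, etc.
        self.state.append(packet)

def play(client, packets, demo_log=None):
    """Run a client over a stream of server packets, optionally recording."""
    for packet in packets:
        if demo_log is not None:
            demo_log.append(packet)   # demo recording: log the raw packet
        client.handle_packet(packet)  # normal client-side processing

live, replay = Client(), Client()
demo = []
play(live, [b"spawn", b"move", b"fire"], demo_log=demo)  # "live" session
play(replay, demo)                                       # demo playback
assert replay.state == live.state  # playback reproduces the session
```

Because the client's state is driven entirely by the packet stream, replaying the log reproduces the session without storing any video at all.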
> And the men that had spent longer looking after babies showed the largest drops in testosterone. Those that shared a bed with their infants also had lower levels.
Dad here. Maybe…it’s the lack of sleep? Involved fathers tend to have less sleep.
Seriously. Why am I reading about this here and not via an email? I've been a paying customer for over a year now. My online news aggregator informs me before the actual company itself does?
>we don't want a hard dependency on another multi-billion dollar company just to write software
One of two main reasons why I'm wary of LLMs. The other is fear of skill atrophy. These two problems compound. Skill atrophy is less bad if the replacement for the previous skill does not depend on a potentially less-than-friendly party.
I had the honor and pleasure to take a class from the venerable professor, JPL director, and Voyager project scientist Ed Stone at Caltech in 2018. He excitedly told us a "secret" on November 1st that Voyager 2 had reached interstellar space, and he showed us the actual data proving it. But we had to keep it a secret until the press release that Monday, November 5. It was a special moment to see his passion for the project almost 50 years in, and felt incredibly special to hear it directly from him. RIP professor.
This explanation is relatively reductive when it comes to its criticism of computational geometry.
The thing with computational geometry is that it's usually someone else's geometry, i.e. you have no control over its quality or intention. In other words, whether two points or planes or lines actually align, or align within 1e-4, is no longer really mathematically interesting, because it's all about the intention of the user: does the user think these planes overlap?
This is why most geometry kernels (see Open CASCADE) sport things like "fuzzy boolean operations"[0] that lean into epsilons. These epsilons mask the error-prone supply chain of the meshes that arrive in your program by allowing some tolerance.
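As a toy illustration of the idea (an arbitrary tolerance and a hypothetical helper, not Open CASCADE's actual API):

```python
# Tolerance-based ("fuzzy") coincidence test of the kind geometry
# kernels lean on: points that are mathematically distinct are treated
# as the same point if they fall within the user's intended tolerance.
import math

def points_coincide(p, q, tol=1e-4):
    """Treat two points as identical if they lie within tol of each other."""
    return math.dist(p, q) <= tol

a = (1.0, 2.0, 3.0)
b = (1.0, 2.0, 3.00005)   # off by 5e-5, e.g. from a lossy export
assert points_coincide(a, b)   # fuzzy: same point by intention
assert a != b                  # exact: different points
```

The choice of `tol` encodes exactly the "intention of the user" problem: it depends on where the mesh came from, not on any mathematical property of the geometry itself.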
Finally, the remark "There are many ways of solving this problem" is also overly reductive. Everyone reading this should understand that the topic is being actively researched right now in 2026, hence there are currently no blessed solutions to this problem; otherwise the research would not be needed. Even more so, to some extent this problem is fundamentally unsolvable, depending on what you mean by "solvable": because your input is inexact, not all geometrical operations are topologically valid, hence an "exact", let alone "correct along some dimension", result cannot be achieved for all (combinations of) inputs.