
Not on mobile Chrome either.

MS put in $10B for 50% if I remember correctly. OpenAI is worth many multiples of that.

> OpenAI is worth many multiples of that

Valued at, which I'd say is a reasonable distinction to make right about now.


Their revenue is $20B, so they're still worth multiples of $10B regardless of valuation, even if you apply the basic 5x revenue multiple.

https://www.reuters.com/business/openai-cfo-says-annualized-...
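The napkin math here, as a rough sketch: the $20B figure is the annualized revenue cited in the thread, and the 5x multiple is the commenter's own assumption, not a real valuation.

```python
# Naive revenue-multiple napkin valuation.
# $20B annualized revenue (figure cited in the thread), 5x multiple (assumed).
revenue = 20e9
multiple = 5
implied_valuation = revenue * multiple             # $100B under this assumption
multiples_of_ms_stake = implied_valuation / 10e9   # vs Microsoft's $10B
print(multiples_of_ms_stake)  # 10.0
```

Which is where "multiples of 10B" comes from, with the caveat noted below that a flat revenue multiple ignores profitability.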


"The basic 5x revenue valuation" doesn't work for businesses that aren't profitable.

It is also unclear to me how much real debt they carry. They have famously been signing many deals: RAM, datacenters, maybe nuclear power plants; I no longer know what is a joke or not. They must be carrying hundreds of billions in paper debt obligations, which is tough to pay back at $20B revenue.

I'm giddy about reading their S-1 in the near future. We're about to have another "We What the Fuck" moment.

> Their revenue is 20B, so they still worth multiples of 10B regardless of valuation...

I can easily generate double that revenue, by selling $20 bills for $10.


When they put $10B in, they got weird tiered revenue shares and other rights. That has since been simplified to 27% of OpenAI today. I don't know what their $10B would have been worth before dilution in later rounds.

> OpenAI is worth many multiples of that.

How?


Because they recently issued shares at a price many multiples of that, and people bought them. How else would you define financial worth?

I would use your number adjusted by some demand elasticity curve.

The "back-of-the-napkin" only has enough room to estimate based on recently issued share price. Seems reasonable to me.

Sure, for napkin-level math you can go with this and multiply by some simple multiplier; I like 70%.
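A minimal sketch of that adjustment, with a hypothetical last-round valuation plugged in; the 70% haircut is the only number taken from the comment.

```python
# Haircut applied to the value implied by the most recent share issuance.
# last_round_valuation is a hypothetical placeholder, not a real figure.
last_round_valuation = 500e9   # hypothetical post-money valuation
haircut = 0.70                 # the simple 70% multiplier suggested above
adjusted_worth = last_round_valuation * haircut  # roughly $350B here
```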

Does speculation equal worth?

They just announced their new chip, and they are the ones who created transformers, yet they're investing this amount in a competitor?

I don’t know what to make of it


I wonder if Google regrets publishing that article on transformers.

Urs used to talk (internally) about not publishing "industry-enabling papers", which is why most Google infrastructure papers described something that had already been turned off, or was already in the process of being replaced by the next system (GFS, Vitess, etc). The things that did get published were either things not considered key advantages, things other companies simply couldn't do, things other companies wouldn't bother doing, or experiments that never worked at all. There were exceptions, of course. But it led to a public perception of the Google stack as involving mostly technologies that were long dead or never adopted.

"Attention Is All You Need" was a very very different thing and I also wonder if they are glad they published it. But I imagine if they hadn't, the motivation for researchers to leave Google would have been even larger.


> I also wonder if they are glad they published it

https://youtu.be/ue9MWfvMylE

Jeff Dean is asked this question by Geoffrey Hinton at 37:35; might be worth watching. Overall an interesting video.



So Google allowed publishing the Attention paper because they didn't understand its value.

They patented it. When the dumb money stops sloshing around, we'll start to see the fallout from that.

It makes every bit as much sense as investing in Snap while still operating their own social network product. Seems to have worked out fine (for Google, not Snap).

FWIW I’d buy SNAP now that they are at rock bottom

Given that Anthropic is probably paying it all back to them in compute bills, they may not be giving them anything.

Why do you think Google considers Anthropic a competitor?

Google makes a product that competes with Claude's main product? So competing, in fact, that they have to ban Googlers from using Claude in order to get enough dogfooders.

hedge your bets, I know I would

The tech job market is in a dark state now; there really aren't many options if you're after something "meaningful".

I feel for those who will hear the bad news, whether at Meta or other companies, and I hope they will cope and find other roles.


There are plenty of jobs outside of software development.

More at the 7th minute if you're using Opus.

I used Pro via the API (DeepSeek's API, not OpenRouter) with Claude Code, and the planning, visual solution, and understanding were fantastic.

I would say I wouldn't have noticed this wasn't Opus 4.6. What I asked was to look at a recently implemented feature and how it could be improved. It consumed 3.3 million tokens and created a much better flow.

It had a bug when I started the implementation, though, related to the API, which I suppose is something they didn't catch when making their API compatible with CC.


Are governments required to comply with GDPR and data breach laws?

There are carve-outs that allow governments to make exceptions, but it's beside the point.

If the government were to hold itself to account, it would fine itself some amount N and pay itself N using your taxes. It also wastes other finite resources on all the paperwork and legal action involved that could be used for something else.

Speaking pragmatically, there's no point trying to hold the government to its own laws. The only time citizens do hold the government accountable, it's always done in the form of hangings, or the guillotine in France's case.


Yes, but unelected bureaucrats only impose fines on the private sector.

what would be the point of the government fining itself though?

Now that I'm thinking of it, it would create the need for an extra gaggle of bureaucrats to oversee the process, so I suppose someone might see a point to it ...


You may think you're funny or something, but boy do I have news for you.

There absolutely are fines for French administrations. And, knowing the French tax system, they've probably found a way to levy VAT and some other taxes on top of those fines.


Do you mean fines for tiny companies?

If these models reach the quality of Opus 4.5, then DGX could be a good alternative for serious dev teams running local models. It's not that expensive, and the time to ROI would be short.

Memory bandwidth is the biggest L on the DGX Spark; it's half that of my MacBook from 2023, and that's the biggest tok/sec bottleneck.

I agree. The problem is the noise ratio, not how the platform was implemented.

My test for image models is asking it to create an image showing chess openings. Both this model and Banana pro are so bad at it.

While the image looks nice, the actual details are always wrong, such as pawns in the wrong locations, missing pawns, etc.

Try it yourself with this prompt: Create a poster to show opening game for Queen's Gambit to teach kids to play chess.


It almost nailed it for me (two squares have both white and black color). All pieces and the position look correct.

What move? Whose turn is it? Declined or accepted? Garbage in, garbage out.

In some cases I would agree with this, but image model releases including this one are beginning to incorporate and market the thinking step. It is not a reach at this point to expect the model to take liberties in order to deliver a faithful and accurate representation of your request. A model could still be accurate while navigating your lack of specificity.

What do you mean? Parent clearly describes the Queen's Gambit. 1.d4 d5 2.c4 There is no room for ambiguity here.

The King's Indian Defense would be a better prompt, as "Queen's Gambit" can now refer to e.g. some scene from the Netflix series.

Kasparov vs Karpov ‘87 Olympiad. Move 6
