Hacker News | Liftyee's comments

Neat little example of what's possible even using a restricted and standardised language. One could imagine using this as an interface layer for humans to interact with robots or industrial systems today. Of course, it would still be slower than an old-fashioned control panel with tactile, individual controls - but there may be some niches in which this language-based contextual control method has advantages.

For industrial systems, PLC controllers programmed visually [0] are an alternative to text-based programming. It's surprisingly capable! I think this sort of fits the situation better, since every state the program can be in is visible all at once (each horizontal line is a pattern-match case for the current state of the machine), and your inputs and outputs are immediately clear. In text, you're going to have to somehow introspect what nouns are available and what verbs they can do. That starts to feel like Smalltalk or something, with an object browser, [1] in which case, why not just use something general?
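As a rough illustration of that "every rung is visible at once" idea, here's a minimal sketch of a ladder-style scan loop in Python. The rung conditions and tag names (`start`, `estop`, `motor`, `alarm`) are made up for the example; this is not any vendor's PLC language, just the shape of the evaluation model:

```python
# Minimal ladder-logic-style scan loop (illustrative sketch only;
# rung and tag names here are hypothetical, not a real PLC API).
def scan(inputs, rungs):
    """Evaluate every rung against the current inputs; return coil states."""
    outputs = {}
    for condition, coil in rungs:
        outputs[coil] = condition(inputs)  # each rung is checked every cycle
    return outputs

# Rungs: (condition over inputs, output coil) pairs, all visible at once.
rungs = [
    (lambda i: i["start"] and not i["estop"], "motor"),
    (lambda i: i["estop"],                    "alarm"),
]

print(scan({"start": True, "estop": False}, rungs))
# {'motor': True, 'alarm': False}
```

A real PLC repeats this scan continuously, so the program's behavior is fully described by the rung list you can see on screen.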

Trying to handle a text-based programming language with an implicitly English subject/verb/object order also feels like it makes it a bit harder to grok for the Average Person (worldwide). For English speakers this is natural, but for people used to different grammar, it's nearly the same difficulty as learning a general-purpose programming language.

[0] https://en.wikipedia.org/wiki/Ladder_logic

[1] https://en.wikipedia.org/wiki/Smalltalk#Browser


If ASML could do it, China will also do it. It's just a matter of development time and resources, both of which are plentiful in China.


Didn’t Japan just recently restart their own project in this area?

https://www.rapidus.inc/en/news_topics/information/rapidus-b...

https://www.semimedia.cc/18196.html

If China and Japan are currently working on it, certainly South Korea is not far behind.


Neat project. These popular "commodity" devboard designs have been remixed and copied so much that it was just missing an open-source design to slot into many existing projects. I can imagine designing a board using one of these designs as a "template" but adding whatever capabilities I need, then knowing it fits a standard footprint.

Yeah, I've designed PCBs around PCBs—most recently around the LILYGO T-Display because it had an integrated LCD. I ended up adding my own DACs to the "mother" board though. It would be nice to have a single PCB that combined the best of both.

(I still wonder if I could compete on final cost though.)


I'm not familiar with the details of real software development, so I don't know why it's not possible to just "not give the SVG part of the code internet access" or "perform sanitization on post-decoding (url, hex, etc) data".

Is it because the SVG parser/renderer being used is an entire library, and it would be prohibitive to write your own SVG parser/renderer or insert your own code into the existing one?


Some of the suggestions are kind of exactly that. But instead of changing the default behavior, they specify a new behavior gated on the presence of a new attribute.

You could change the default behavior to the “safer” behavior. And then add some sort of “danger mode” attribute. But… devs are usually hesitant to do something that would break legitimate code, such as changing the default behavior would do.


Praised by most customers, probably. As an engineer I appreciate Bike Friday's attention to detail and I own a good few "artisan" devices myself, but the reality is that most people want a mass-produced bike that is "good enough" within their budget.

There's no doubt that your bike is higher quality than the Decathlon one, but the average customer doesn't appreciate how well engineered it is or how many patents (??) are involved.


Having lived in Italy and used the btwin folder quite a lot, I can assure you there are lots of basic folders in its category and price range which are much better. I'd look into Dahon and Tern for a basic folding bike.

Folding bikes are complex and hard to make safely, and the folding mechanism is costly to engineer right. This means that the manufacturer of a cheap bike is either providing you with a dangerous folding mechanism, or is putting a lot of the cost of the bike into the folding mechanism, so there's not much money left for the rest of the parts. Either way, it means that cheap folding bikes are a bad choice, and the btwin folder is a good example of that.

As to patents: yes, there are patents.


I have several bikes. My decathlon foldy is great.

Out of curiosity, what do you use the higher 20gbps transfer speeds for? Video production?

I use USB-C displays, but they run in DP Alt mode. I don't have many (any?) storage devices that can max out a 20gbps connection, and usually don't exceed 5gbps


This goes back to another point I've historically made, which is that except for storage devices, pretty much nothing supports those speeds. I think there are some USB adapters that don't use alt mode, which can have advantages on some hosts, but usually that's a disadvantage.

USB interface chips are, as far as I've seen, a Cypress/Infineon FX3 or, a bit more rarely, an FTDI FT600/FT601. I even talked with the FTDI guys at a conference and they said nobody's asking for higher than 5gbps. Infineon just recently, after I think 10+ years, came out with 10 and 20gbps chips. But only for receive. Seems to be for cameras mainly. So surprisingly yes, video production.

But I want it for other reasons professionally. For example, if you look at the signalhound (which uses the fx3) series of products, they often cap out at 40 Msamples/sec for USB. This is a classic 5gbps limit. To compete with the big boys they need 250 MHz if not more. That's 8 gbps before protocol overhead. It doesn't help that USB is extremely dependent on host compute capability to keep throughput up but assuming your PC is up to the task, 20 gbps could interface some serious data to the real world.
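A quick back-of-envelope check on that 8 gbps figure. The comment doesn't state the sample format, so assuming the common 16-bit I + 16-bit Q complex samples (32 bits/sample, an assumption on my part):

```python
# Raw payload rate for a 250 MHz complex-sample stream, assuming
# 16-bit I + 16-bit Q per sample (the sample width is an assumption;
# the comment above doesn't specify it).
sample_rate = 250e6        # samples per second
bits_per_sample = 16 * 2   # 16-bit I + 16-bit Q
gbps = sample_rate * bits_per_sample / 1e9
print(gbps)  # 8.0, matching the "8 gbps before protocol overhead" figure
```

Anything less than a 10 or 20 gbps link (after protocol and host overhead) simply can't carry that stream.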


Besides storage devices, i.e. external SSDs, which are very frequently used and need a USB port as fast as possible, the other frequent application that needs the fastest USB ports is the use of USB Ethernet interfaces.

Also eGPU. I have a tiny NUC-size system with decent internal GPU and a (physically much larger) game system with a slower CPU that idles at only a bit under twice the maximum power of the NUC. It would be handy to be able to just plug in an eGPU when needed. The power and cooling requirements of fancy GPUs are so much higher than those of CPUs that large cases designed around the CPU don't make much sense. Even the physical stability of a large GPU in an ATX style case is not ideal.

> Out of curiosity, what do you use the higher 20gbps transfer speeds for?

Images, videos, movies, file transfer/backup. 50 megapixel RAW images from a DSLR that can capture up to 20 images a second get big. My daughter is a much better volleyball player than I am sports photographer, so I have to spray-and-pray to capture those high-speed hits at the net.

Transferring a few hundred such photos via a card reader was so glacially slow it was worth adding a 20Gbps USB3.2 2x2 port to my home server (Ryzen 5600x) via a dedicated PCI-E card. The USB3 ports on a good enthusiast-class mobo for that generation (only 4 yrs ago) max out at 5Gbps (theoretical). I would have added a 40Gbps Thunderbolt port instead but then I'd have to take a hit on the top speed of my second NVMe drive due to sharing PCI-E lanes.
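To put rough numbers on why the 20Gbps port was worth it (file size, photo count, and effective link efficiency below are my assumptions, not figures from the comment: ~60 MB per 50-megapixel RAW, 500 photos, ~80% of nominal link rate after USB overhead):

```python
# Rough transfer-time estimate for a burst-shooting photo dump.
# All three inputs are assumptions for illustration, not measured values.
raw_mb = 60          # assumed size of one 50 MP RAW file, in MB
photos = 500         # assumed batch size
efficiency = 0.8     # assumed fraction of nominal rate actually achieved

total_bits = raw_mb * photos * 8e6   # total payload in bits (~30 GB)
for gbps in (0.48, 5, 20):           # USB 2.0 card reader, Gen 1, 20G 2x2
    secs = total_bits / (gbps * 1e9 * efficiency)
    print(f"{gbps} Gbps: {secs:.1f} s")
```

Under these assumptions the same batch drops from roughly ten minutes over a USB 2.0-class card reader to about a minute at 5 Gbps and a quarter of that at 20 Gbps, which matches the "glacially slow" experience described above.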

While the increasing deployment of true USB4 ports is wonderful, it's not quite a panacea. Just because a port is labeled USB4 doesn't mean you necessarily get 40Gbps performance. USB bandwidth is shared across multiple ports via internal hubs and then the PCI-E lanes the hub is connected to might be shared with other peripherals (GFX, NVME, I/O cards). And different USB ports have different trade-offs depending on how they're internally connected, which isn't always documented well by mobo makers.

Sadly, in consumer systems the lack of PCI-E bandwidth can still be an issue if you want your expensive GPU to run maximally fast and have multiple fast NVMe drives. You have to spec your system carefully, get the latest generation hardware or pay 3-4x for HEDT/Enterprise chipset motherboards. Getting even 10s or 100s of gigs of data in or out of a PC reasonably quickly and conveniently has always been a bottleneck that's only getting somewhat better quite recently.


Any external NVMe SSD from the last 7-8 years easily saturates a 20 Gbps connection, because even then NVMe SSDs were able to saturate a 32 Gb/s PCIe 3.0 four-lane connection.
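That 32 Gb/s figure is easy to verify from the PCIe 3.0 numbers: 8 GT/s per lane with 128b/130b line coding, times four lanes:

```python
# PCIe 3.0 x4 usable bandwidth: 8 GT/s per lane, 128b/130b encoding.
gts_per_lane = 8e9        # transfers per second, 1 bit per transfer
encoding = 128 / 130      # 128b/130b line-code efficiency
lanes = 4
gbps = gts_per_lane * encoding * lanes / 1e9
print(round(gbps, 1))  # 31.5, i.e. the ~32 Gb/s figure quoted above
```

So even a PCIe 3.0-era drive has headroom over a 20 Gbps USB link.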

For at least 7-8 years I have been using USB external enclosures for M.2 Key-M NVMe SSDs, which always saturated whatever kind of USB port they were connected to, i.e. 5/10/20 Gb/s.

I do not remember when I last used a SATA SSD, which is slower than 10 and 20 Gb/s USB, but I think that was about a decade ago.


The wire count seems to be the number of conductors in the cable (i.e. the number of wires you'll find if you cut a cable in half, including ground and power).

It's true that the actual data is sent over a lower number of diffpairs.

I suspect the shield is not included in the number of wires, since all USB cables have a shield (not sure if usb 3.0 has an extra return ground wire for high speed).


It still wouldn't be right. Full-featured USB-C has 8x superspeed (tx1p, tx1n, rx1p, rx1n, tx2p, tx2n, rx2p, rx2n), 2x high-speed (dp, dn), 2x power (vbus, gnd), 2x SBU, 1x CC. That's 15 wires.

A full-featured USB-C connector has 24 pins, as shown in a diagram in the parent article.

The "12-wire" count of the parent article refers only to the main wires, i.e. the 4 USB 2.0 wires + 4 differential pairs for USB 3 or 4.

Similarly, the "8-wire" count for Type A connectors refers only to the main wires, i.e. 4 USB 2.0 wires + 2 differential pairs for USB 3.
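Tallying the two counts from the comments above (the groupings follow the comments themselves, not the spec's own wording):

```python
# Conductor tallies for a full-featured USB-C cable, as discussed above.
main_wires = {
    "usb2": 4,               # vbus, gnd, dp, dn
    "superspeed": 4 * 2,     # 4 differential pairs
}
full_featured = 8 + 2 + 2 + 2 + 1   # SS + HS + power + SBU + CC

print(sum(main_wires.values()), full_featured)  # 12 15
```

The article's "12-wire" figure counts only `main_wires`; adding the SBU and CC conductors gives the 15-wire total from the correction above.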


I wonder if the real problem is short-term thinking, embedded in culture and incentivised by markets. By optimising next quarter's profits over investing in long-term growth and capability, things like this happen.

What hardware do you use?

Most of those have custom quants for Mac Studio M3 Ultra 512GB. You'll typically see them mention it by name.

All of that list but the last three run at these sizes. For the last three, look for a custom quant, e.g. 9.5 bits, and/or the Ultra M3 512GB mention.

Not sure in which direction I'm surprised, but the MacBook Pro M5 Max ticks over these models at the same speed. With "only" 128GB, look for models of 116 GB (the absolute max that retains reasonable stability) or less.


I think the answer to this is: "yes"

a Beowulf cluster of 256 x Raspberry Pi 3.

I used to maintain a 2,000-node Pi 4 cluster, before LLMs were relevant, with around 6 GB of free RAM per node. I wonder what I could have done with something like this.

All of it.

I can speak Chinese fluently but I need to improve my reading hugely. This sounds like exactly my use case. I would actually be willing to pay for this (though less certainly for a subscription; preferably a Pleco-like one-time fee, even if larger).
