I wasn't advocating for trade secrets as "equal" or "the way to go"; I was trying to explain in simple terms how to think about copyright issues within the existing legal structures.
People here without much experience were intellectually reinventing wheels, and I wanted to save them time in structuring their arguments. I have been exposed to various tips of the legal iceberg, was thrilled by what I learned, and am trying to pass it on.
> Oh, I do understand it - laws are contradictory and can do whatever people shout out the most that they should do (but they don't always work that way). I just think that it is extremely bad when laws work this way.
You are completely misunderstanding GP's distinction between ownership and liability.
In short, if you use someone else's car to kill someone, you are still liable for killing that person even though you don't own the car.
Aren't you agreeing with him?
He pushed the boulder up the hill, thus he is responsible and liable for what happens. He is the author of the work of pushing the boulder up the hill.
In your analogy: He was driving the car, he is liable for the death. He is the author of the work of driving the car.
You are somewhat unnecessarily introducing the creation of an object used for the work. Whoever created the car/boulder is not liable for what happened.
So the author is not whoever made the LLM, but whoever used it to create the code.
> If that were true, a developer may own copyright over the source code, but nothing on the compiled binaries, and I could download practically all software available as compiled binaries and use for free.
If the compiled binaries (output) were produced by running the input (source code) over every program written, then sure.
But that's not what's happening with compilers, is it? The output of a prompt is dependent on copyrighted work of others every single time it is run.
The output of a compiler is not dependent on the copyrighted output of every other program.
1. The "every"s in your comment are not to be taken literally either. :-)
>> If the compiled binaries (output) were produced by running the input (source code) over every program written, then sure.
2. More importantly, the above seems circularly dependent on whether output from generative AI is deemed to be in the public domain or not, which I consider an open question as of now. It is not so 'sure' as yet. :-)
> I'm not sure where in our lawbooks there are laws that specifically target humans to the exclusion of human-operated tools.
They don't need to. Laws are for humans.
Laws don't give rights to chainsaws. Or lawnmowers. Or kitchen knives, hammers, screwdrivers, and spades.
You can't use any of those to commit a crime and then claim that the law specifically did not exclude those tools.
Why are you seemingly in favour of carving out an exemption for LLMs?
Laws are for humans.
Arguing that, because the law did not specifically address "intentionally killing a person by tickling them till they died", you have found a loophole that can be used to kill people is...
I don’t understand how the hostility and insulting tone is considered reasonable now.
The comment is not at all just saying “their usage of their own AI is causing these issues”; it’s mostly hostility, and I don’t see the value of this kind of insult.
lol "hostility" - they sell a very high profile product and the issues seem to reflect bad engineering culture. therefore, I say their culture smells bad.
I didn't downvote, but I feel the last two points show a lack of nuance. It's saying "Rust doesn't prevent 100% of the bugs, like all other programming languages", while failing to acknowledge that if a programming language prevents entire classes of bugs, it's a very significant improvement.
Nobody disputes that Rust is one of the programming languages that prevent several classes of frequent bugs, which is a valuable feature when compared with C/C++, even if that is a very low bar.
What many do not accept among the claims of the Rust fans is that rewriting a mature and very big codebase from another language into Rust is likely to reduce the number of bugs of that codebase.
For some buggier codebases, a rewrite in Rust or any other safer language may indeed help, but I agree with the opinion expressed by many others that in most cases a rewrite from scratch is much more likely to have bugs, regardless of the programming language it is written in.
If someone has the time to do it, a rewrite is useful in most cases, but it should be expected to take a long time after the project's completion until it has as few bugs as mature projects do.
As other people have mentioned, the goal of uutils was not "let's reduce bugs in coreutils by rewriting it in Rust", it was "it's 2013 and here's a pre-1.0 language that looks neat and claims to be a credible replacement for C, let's test that hypothesis by porting coreutils, giving us an excuse to learn and play with a new language in the process". It seems worth emphasizing that its creation was neither ideologically motivated nor part of some nefarious GPL-erasure scheme, it was just some people hacking on a codebase for fun.
Whether or not it was wise for Canonical to attempt to then take that codebase and uplift it into Ubuntu is a different story altogether, but one that has no bearing on the motivations of the people behind the original port itself.
You can see an alternative approach with the authors of sudo-rs. Rather than porting all of userspace to Rust for fun, they identified a single component of a particularly security-critical nature (sudo), and then further justified their rewrite by removing legacy features, thereby producing an overall simpler tool with less surface area to attack in the first place. It was not "we're going to rewrite sudo in Rust so it has fewer bugs", it was "we're going to rewrite sudo with the goal of having fewer bugs, and as one subcomponent of that, we're going to use Rust". And of course sudo-rs has had fresh bugs of its own, as any rewrite will. But the mere existence of bugs does not invalidate their hypothesis, which is that a conscientious rewrite of a tool can result in fewer bugs overall.
But are the current uutils developers the same as the 2013 developers? At least based on GitHub's graphs, that's not the case (it looks fairly bimodal to me), and so it wouldn't be unreasonable to treat the 2013-era project differently to the 2020-era project. So judging the 2020-era project for its current and ongoing failures does not seem unreasonable.
Similarly, sudo-rs dropping "legacy" features leaves a bad taste in my mouth. Multiple privilege escalation tools already exist (doas being the first that comes to mind), and doing something better without claiming the "sudo" name (instead providing a compat mode, à la podman for docker) would seem to me a better long-term path than causing more breakage (and as uutils shows, breakage in "core" utils can very easily lead to security issues).
I personally find uutils' lack of care concerning because I've been writing (as a very low-priority side project) a network utility in Rust, and while it doesn't aim to be a drop-in rewrite of anything, I would much rather not attract the same drama.
doas and sudo-rs occupy different niches, specifically doas aims for extreme minimalism and deliberately sacrifices even more compatibility than sudo-rs, which represents a middle ground.
No, once you have an MIT-licensed codebase without a copyright assignment scheme, you no longer have the freedom to relicense it at will. You could attempt to have a mixed-license codebase, which is supported by the GPL, and specify that all new contributions must accept the GPL, but this is tantamount to an incompatible fork of the project from the perspective of any downstream users, and anyone who insists on contributing code under the GPL has the freedom to perform this fork themselves.
This is simply false. You can accept GPL contributions and clearly indicate the names of the contributors as required by MIT. There is no "incompatible fork".
No, GPL and MIT have significantly different compliance requirements. You cannot suddenly begin shipping code with stricter compliance requirements to downstream users without potentially exposing them to legal liability.
Because the bugs were caused by programmer error, not anything inherent to Rust. It was more notable due to Cloudflare being a critical dependency for half the internet, but that particular issue could've happened in any language.
This kind of melodramatic reaction to Rust code is fatiguing, honestly. Rust does not bill itself as some programming panacea or as a bug-free language, and neither do any of the people I know using it. That's a strawman that just won't go away.
Rust applies constraints regarding memory use and that nearly eliminates a class of bugs, provided safe usage. And that's compelling to enough people that it warrants migration from other languages that don't focus on memory safety. Bugs introduced during a rewrite aren't notable. It happens, they get fixed, life moves on.
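A minimal sketch of that class of constraint (illustrative names, not from any project discussed here): the borrow checker refuses to let you mutate a `Vec` while a shared borrow of it is live, which is what rules out iterator invalidation and use-after-free in safe code.

```rust
// Sketch: mutation is only allowed once no shared borrow is alive.
fn sum_then_extend(mut v: Vec<i32>) -> (i32, usize) {
    let total: i32 = v.iter().sum(); // shared borrow of `v` ends here
    v.push(total);                   // mutation is fine once it does
    (total, v.len())
}

fn main() {
    let (total, len) = sum_then_extend(vec![1, 2, 3]);
    println!("total = {total}, len = {len}");

    // The rejected shape, for contrast (does not compile if uncommented):
    // let v = vec![1, 2, 3];
    // let first = &v[0];
    // v.push(4);            // error[E0502]: cannot borrow `v` as mutable
    // println!("{first}");
}
```

The point is not that the code above is clever, but that the second, commented-out shape is a compile error rather than latent undefined behaviour.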
> caused by programmer error, not anything inherent to Rust
Your argument does not work as praise for Rust, because the bugs in any program are caused by programmer errors, except in the very rare cases when there are bugs in the compiler toolchain, which are caused by the errors of other programmers.
The bugs in a C or C++ program are also caused by programmer errors; they are not inherent to C/C++. It is rather trivial to write C/C++ carefully, so as to make out-of-bounds access, numeric overflow, use-after-free, etc. impossible.
The problem is that many programmers are careless, especially when they might be pressed by tight time schedules, so they make some of these mistakes. For the mass production of software, it is good to use more strict programming languages, including Rust, where the compiler catches as many errors as possible, instead of relying on better programmers.
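As a concrete sketch of what "the compiler catches as many errors as possible" means in practice (illustrative function name, not from any codebase mentioned here): an out-of-bounds read that would be undefined behaviour in C is either a deterministic panic in Rust, or, with the checked accessor, an explicit `None` the caller must handle.

```rust
// Sketch: `get` returns None instead of reading past the end of the slice,
// so the out-of-bounds case must be handled explicitly.
fn lookup(v: &[i32], i: usize) -> Option<i32> {
    v.get(i).copied()
}

fn main() {
    let v = [10, 20, 30];
    match lookup(&v, 7) {
        Some(x) => println!("v[7] = {x}"),
        None => println!("index 7 is out of bounds"),
    }
    // Plain indexing (`v[7]`) would panic deterministically rather than
    // silently read adjacent memory as C would.
}
```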
The Cloudflare bug was the equivalent of an uncaught exception caused by a malformed config file. There's no recovery from a malformed config file; the software couldn't possibly have done its job. What's salient is that they were using an alternative to exceptions, because people were told exceptions were error-prone and that using this thing instead would make it easier to write bug-free code. And then they did the equivalent of not catching them anyway!
And then, it turned out to not really be any better than exceptions.
Most Rust evangelism is like this: "In Rust you do X and this makes your code have fewer bugs!" Well, no, it doesn't. Manually propagating errors still makes the program crash, requires more typing, and doesn't emit a stack trace.
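The shape of the trade-off above can be sketched in a few lines (hypothetical `parse_limit` standing in for real config parsing, not Cloudflare's actual code): a `Result` forces the caller to choose a path, but calling `.unwrap()` on it is the moral equivalent of an uncaught exception.

```rust
use std::num::ParseIntError;

// Sketch: a fallible config-value parse returns a Result the caller
// must inspect; it does not prevent the caller from panicking on it.
fn parse_limit(raw: &str) -> Result<u32, ParseIntError> {
    raw.trim().parse::<u32>()
}

fn main() {
    // Handled path: a malformed value becomes a recoverable error.
    match parse_limit("not-a-number") {
        Ok(n) => println!("limit = {n}"),
        Err(e) => eprintln!("bad config value: {e}"),
    }

    // Unhandled path: this would panic with no stack context beyond the
    // panic message -- the Result type alone doesn't save you.
    // let limit = parse_limit("not-a-number").unwrap();
}
```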
That was why I brought it up. I wasn't trying to be snarky or haughty. Thank you for filling in the gaps, I should have done that instead of the 1-liner.
> Tbf, their harness was surprisingly ahead of the curve for most of the last year..
Do a `s/harness/software/` on that statement, and it will describe most companies shipping AI-written software.
> this point, the difference is mostly made up by issues like the OP has, so you're likely better off using eg pi (-agent) and writing your own custom skills and extensions (or any of the other harnesses the providers create, even copilot-cli has gotten decent nowadays)
They (AI-written software) are all going to be ahead in some way, until they aren't because they hit the practical limits of codebase size that can be reasonably understood by an LLM.
> It seems like these supposed coreutils replacements are being written by people who don't know anything about Unix, and also didn't even bother looking at the GNU tools they were supposed to be replacing.
They're a group of people who want to replace pro-user software (GPL) with pro-business software (MIT).
> No it’s using an army of extremely well paid engineers, something I guarantee the parent comment has no access to
That's a different argument to the one I replied to, and the reply to "they have expensive infra people" is "you have to have expensive product-trained people to use them anyway".
The suggestion was to replace DDB and S3 with some VMs. Presumably those VMs would be managed by engineers in the parent commenter’s organization. They do not have access to as many engineers as AWS, nor do they pay them as well.
Not arguing about cost effectiveness here. Just pointing out how silly it is to suggest that you can replace DDB/S3 with some VMs run by a midsize organization.
> Instead of paying several engineers, that you have to vet first, to configure and maintain the services adjacent to your product you can just pay AWS or Azure or someone else to maintain the service.
Your engineers, who all have to possess AWS or similar certs before you hire them, work for free?
A move off VPS to managed services doesn't reduce your headcount or labour costs.
You are correct. Someone has to manage and plan the infra, but that is the same for on-prem or other non-cloud setups. What you don't necessarily need is several database admins, several network admins, several Kubernetes admins, etc. I don't necessarily agree, but that is the calculation: Azure hires the 24/7 admins for the service, and you pay a bit more to get a share of them. I have heard this argument in person.
I think there is a very narrow space where you need the resources this provides and it's not yet more cost-effective to have your own team of admins. At a certain headcount, the number of admins doesn't matter that much anymore.
Trade secrets aren't very well protected, though.
You can sue the person who leaked/stole your secret, but if others keep sharing it once it is leaked you can do nothing to them.