I’d be surprised, given how outspoken Sawe is about doping. He invited the AIU to test him before Berlin and Adidas also paid.
> Determined to prove he is competing clean, Adidas provided $50,000 (£36,900) to the Athletics Integrity Unit, the sport's anti-doping body, to frequently test Sawe over a 12-month period.
> That began with a reported 25 out-of-competition tests in the lead-up to Berlin in September, continuing at a similar rate as he prepared for London.
> Sawe said on Monday: "It's very important to me because it gets out the doubt in my career of athletics and yesterday's performance.
> "It shows Sabastian Sawe is clean. It shows running clean is good, and we can run clean and we can run faster.
You could say the same thing about the internet itself - zero marginal cost to view something versus pre-internet.
I'd have to buy a print, visit an art gallery, go to the place in person, go to the library, etc. That's all friction and cost to "ingest" art. Some of it costs money outright; the rest costs the time and effort of getting there.
It's not a fair comparison because it's wrong. Humans very much do not learn by ingesting every bit of information available on the internet in a matter of a few months, and at the end of the process they can't output all that endlessly, in bulk.
No, humans learn by painstakingly working through a few examples over years and decades, processing them in their brains in ways we don't fully understand, and at the end of those years maybe they're able to slowly output some similar, hopefully better or more original, works. Even then, by far most humans won't manage it after decades of trying.
Everything in our laws, regulations, and common sense revolves around what humans are capable of, and we have slowly expanded that to account for external assistance. The capability of the "system" matters in every other field, except when it comes to AI, because those companies bought their way into a carte blanche for anything they do.
One bad possibility is that AI & robotics advance to the point where they can do every job better and more cheaply than humans; and then humans are no longer employable and all die if they have insufficient capital to survive the period between unemployment and post-scarcity.
Another possibility is that, once AI exceeds human performance in all economically useful activities, including high-level planning, governance, law enforcement, and military actions, it discovers that the benefits of keeping humans around aren't worth the costs and risks.
Bad: let tech (now "AI") companies, built on the collective (often in theory IP-protected) output of humanity, own and mediate an ever increasing proportion of the value created in society. Intellectual rent-seeking, if you will.
Bad: the above but also their power and influence grows so much and governments are so ineffective (or corrupt) against them that the tech companies also become de facto governments and people rely on them to survive. Also they destroy earth even faster with nobody left to stop them. The full fat cyberpunk dystopia.
Bad: the above but with lots more fascism and war. Too many people seem to want this.
Bad: regulate AI to such an extent as to cede all growth and technological leadership to whoever doesn't.
If you end up creating something sufficiently similar, yes, in fact you do. Or rather, you have committed copyright infringement, and retroactive payment may be one of the remedies.
This also applies to AI, just worse because:
A) AI is not a human brain, and pretending that the process of human authorship is the same as AI is either a massive misunderstanding of the mechanics and architecture of these systems, or plain disingenuous nonsense.
B) AI has no capability of original thought. Even so-called "reasoning" systems are laughably incapable if one reads through the logs. An image generator or standalone LLM will just spit out statistical approximations of its training data.
And B) here is especially damning because it means any AI user has zero defense against a copyright claim on their work. This creates enormous legal risks.
The model for copyright trolling is trivial. Take a corpus of open-source code, GPL if you wish to be petty, though nearly all other licenses still demand attribution, and then simply run a search against all the code generated by AI bots on GitHub, or any repo with AI tooling config files in it.
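The search described above can be sketched in a few lines. This is a hypothetical illustration, not anyone's actual tooling: the function names, the toy corpus, and the 0.9 similarity threshold are all invented for the example, and a real system would use token-level or AST-level matching rather than plain text diffing.

```python
# Hypothetical sketch: flag AI-generated files that closely match
# a corpus of license-protected code. Thresholds and names are illustrative.
from difflib import SequenceMatcher


def similarity(a: str, b: str) -> float:
    """Text similarity ratio in [0, 1]; 1.0 means identical."""
    return SequenceMatcher(None, a, b).ratio()


def flag_matches(generated: dict[str, str],
                 corpus: dict[str, str],
                 threshold: float = 0.9) -> list[tuple[str, str, float]]:
    """Return (generated_file, corpus_file, score) for each close match."""
    hits = []
    for gen_name, gen_src in generated.items():
        for lic_name, lic_src in corpus.items():
            score = similarity(gen_src, lic_src)
            if score >= threshold:
                hits.append((gen_name, lic_name, score))
    return hits


# Toy example: a generated snippet that reproduces licensed code verbatim.
corpus = {"gpl/util.c": "int add(int a, int b) { return a + b; }"}
generated = {"bot/out.c": "int add(int a, int b) { return a + b; }"}
print(flag_matches(generated, corpus))
```

The all-pairs comparison is quadratic and only workable for a toy corpus; at GitHub scale one would index the corpus (e.g. winnowing fingerprints, as plagiarism detectors do) and query against it.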
Won't be long before the FSF does something similar.
But open models are only about 8 months behind closed models, so even aggressive copyright enforcement would only create an 8-month delay.
This is essentially a LimeWire problem. And OpenAI is essentially Spotify.
Even with revenue sharing, 99% of artists will get nothing (just like streaming), and revenue will be much lower than before (just like streaming compared to record era).
Only IP giants like Disney would see any real income.
The US isn't some global free zone where everyone has a right to come and go and do as they please.
If you came to the US legally with a visa, great. When you signed your visa documents, there were questions they asked you and some fine print that basically made you liable for "bad behavior."
I'm an American living in the UK, and I'm under no illusion that if I start doing dumb stuff here it's possible they tell me to leave. (Though apparently the UK government has a pretty lax attitude about who it asks to leave.)
If someone wants to come to my country and behaves in any way outside their best, then yes, I support the government kicking them out.
I don't think protests in general are "behaving outside your best". Now what those protests contain is an entirely different matter. I read an article about the arrest of a foreign student recently who attended numerous "death to America" protests. I can support deportation in a case like that (even if only for the complete lack of self awareness), but not for all protests.
Protesting against ethnic cleansing is a bad thing, that’s what you’re saying?
No matter what kind of mental gymnastics you try to do, this is just an obvious case of a foreign government having a huge influence and control over internal US affairs.
But that is not what is happening, and they have stated that they were at the event for a short period of time, quite possibly at the portion that didn’t occur inside the event.
The willingness to assume one version of events, and then go down that path to award consequences, is premature.