Hacker News | past | comments | ask | show | jobs | submit | CaptainFever's comments

Who reads the ToS anyway?

> AI by its very nature removes attribution.

This is incorrect. RAG preserves attribution. Training data doesn't, but it doesn't make sense to attribute that anyway, unless you want a list of every person who has ever lived.


Website-crawling services tend to have better luck accessing these websites because of their experience solving Cloudflare's challenges. Might want to try one of those.

There's probably value in a web extension that uses a small embedded LLM to filter out comments that complain about literally any AI that's used in the submission.

To make it really funny, that extension should be vibe coded.
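A toy sketch of that filter idea, assuming a browser extension content script. The classifier here is a regex stub standing in for the small embedded LLM (the function name, patterns, and `.comment` selector are all made up for illustration):

```javascript
// Stub classifier: a real version would run a small local LLM instead
// of keyword patterns. Returns true for off-topic AI complaints.
function isOffTopicAiComplaint(text) {
  const patterns = [/vibe.?cod/i, /ai slop/i, /written by (an )?ai/i];
  return patterns.some((p) => p.test(text));
}

// In the extension's content script, hide any matching comments, e.g.:
//
//   document.querySelectorAll('.comment').forEach((el) => {
//     if (isOffTopicAiComplaint(el.textContent)) el.style.display = 'none';
//   });
```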

Seriously though, it should just be against HN guidelines. It's annoying to see that 90% of the comments are just people fighting over vibe coding on a completely unrelated topic. On this submission? There's only 1 (one) on-topic comment.


^ This is a troll; new account, troll username.


None of the comments in my sample of this account were troll-sentiment. It's okay to have a viewpoint and name an account after it.


Not a troll. I’ve been doing a lot of self reflection on this topic lately. Some people seem to enjoy software for the act & craft, where the outcome / artifact is secondary or irrelevant. I don’t. Some people enjoy the artifacts it produces, for their utility or economic value. Not really me either. Often people frame it as this dichotomy, but I’ve realized my enjoyment and self-fulfillment comes from creating an artifact that is genuinely good and that I can be proud of creating. Too much AI robs me of this. I’ve created cool stuff with AI that leaves me feeling nothing because I didn’t really create it.


This is all valid. Your original comment came across as a troll because it implied that nobody could ever feel good about stuff they built with AI. Asserting that you know more about the emotional state of strangers on the internet than they know themselves is arrogant.


Well, it’s a genuine question. Like, if I have a machine in my house where I give it a recipe and it spits out the food, should I feel good about having “cooked” that food? Or what if someone prompts an AI for some art, should they feel proud of “creating” that art? I think not. And it’s the same with code. Depending on how much of the work you actually did should influence how you talk and feel about a creation. So many people lazily prompt an AI and then come here to post about something they “made” and I think that’s wrong.


I’m thinking there’s probably degrees to it. Like there is some stuff I absolutely want to hand craft, but then other stuff I don’t mind so much.

One of the interesting discussions at work (I’m in gamedev) has been about tooling and where AI fits in there.

Previously you’d spend sometimes significant time writing a tool, then polishing it up and giving it to the team (think things like editor extensions that make your workflow easier).

But AI can make this kind of bespoke tool dev so cheap now that it's possible for every single dev to have their own tool that matches exactly how they work. At that point, do you really need to spend the 80% of the effort that goes into polishing a tool and getting it ready for mass consumption?

Stuff like that is interesting. I still can’t imagine never looking at the AI-generated code, but I’ve seen people take the approach of “I’m not interested in the code, only in what the thing does. If it’s wrong, I ask the agent to fix it”.


Sure, but I’m not going to feel good about it or proud about it or share it with others under the idea that I built it.


You could have government-signed models + programs that are approved for generating CP (not CSAM). It's legal if the signature checks out. Something like https://contentauthenticity.org/ but for verifying that something is definitely made by AI.

(You need to sign both the models and the programs to make sure there's no img2img.)


You don’t even need to give them a model; just generate some images and publish them. If you find those images, it’s fine; if you find anything else, arrest them.


That works too, though it'll of course result in a smaller selection and therefore smaller impact on the real market.


We can't agree on weed or safe injection sites, you think we'll have government approved CP generation?


I totally agree, we should aim for all three harm reduction measures.


Yeah? How? What strategy do you have for government funded and provided CP that is even remotely within the realm of political possibility?


The level of tech-solutionist brain rot you need to reach to propose a state-sponsored child porn generator... This forum is a parody of itself.


So no real arguments against it, only insults. That's great news, thank you! :)


You don't need scientific arguments for everything, you know. What's your argument against consenting 10-year-old siblings having sex together if they use protection? I don't have one, but I know it's morally wrong and won't bring anything good.


This reminds me of a voting method I've seen some anarchists advocate for: the rules passed by votes should only be enforced on those who voted for them.


Rude.


We really need to add "please don't write comments witch-hunting articles for AI usage" to the guidelines at this rate.


It is useful for those of us always checking the comments first, to decide if the article is worth reading.


"Information wants to be free, as long as it's not my information."


"...and as long it's created by a human."

(Because I feel proponents of generative AI appear to play the "info wants to be free" card as well.)

