Lots of people on Lemmy really dislike AI’s current implementations and use cases.
I’m trying to understand what people would want to be happening right now.
Destroy gen AI? Implement laws? Hoping all companies use it for altruistic purposes to help all of mankind?
Thanks for the discourse. Please keep it civil, but happy to be your punching bag.
I’m aware that not everything on the internet is public domain, and I don’t expect it to be… but I think models built off of works displayed to the public should automatically become part of the public domain.
The models are not creating copies of the works they are trained on, any more than I am creating a copy of a sculpture in a park when I study it. You can’t open a model up and pull out images of everything it was trained on. The models aren’t ‘stealing’ the works they use as training data. You are correct that the works were used without concern for copyright (because the works aren’t being copied through training), licenses (because a provision like ‘you can’t use this work to influence your ability to create something with any similar elements’ isn’t really enforceable), or permission (because when you put something out for the public to view, it’s hard to argue that people need permission to view it).
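A rough back-of-envelope calculation illustrates why verbatim storage is implausible. The figures below are from memory and only order-of-magnitude (a Stable Diffusion checkpoint is roughly 2 GB at half precision, and the LAION-2B dataset it was trained on has roughly 2 billion images), so treat this as a sketch, not a citation:

```python
# Back-of-envelope: how many bytes of model weights exist per training image?
# Assumed figures (from memory, order-of-magnitude only):
#   ~2 GB of model weights, ~2 billion training images (LAION-2B).
weights_bytes = 2 * 1024**3   # ~2 GB checkpoint
num_images = 2 * 10**9        # ~2 billion training images

bytes_per_image = weights_bytes / num_images
print(f"{bytes_per_image:.2f} bytes of weights per training image")
# → about 1 byte per image, far too little to store a copy of anything
```

A JPEG of even a thumbnail is tens of thousands of bytes, so at ~1 byte of weights per image the model mathematically cannot be a compressed archive of its training set, whatever else one thinks of the legal questions.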
Using illegal sources is illegal, and I’m sure that if it can be proven in court, Meta will gladly accept a few-hundred-thousand-dollar fine… before appealing it.
Putting massive restrictions on AI model creation will only ensure that the wealthiest and most powerful corporations are the ones with AI models. The best we can do is fight to keep AI models in the public domain by default. The salt has already been spilled, and wishing it hadn’t isn’t going to change things.
I don’t have much technical knowledge of AI, since I avoid it as much as I can, but I imagined it would make sense to retain the training data. It seems that doing so is beneficial after all, so I presume it’s done frequently: https://ai.stackexchange.com/questions/7739/what-happens-to-the-training-data-after-your-machine-learning-model-has-been-tra
My understanding is also that generative AI often produces plagiarized material. Here’s one academic study demonstrating this: https://www.psu.edu/news/research/story/beyond-memorization-text-generators-may-plagiarize-beyond-copy-and-paste
Finally, I think that whether putting massive restrictions on AI model creation would benefit wealthy corporations is very debatable. Generative AI is causing untold damage to many aspects of life, so it certainly deserves to be tightly controlled. However, I realize that it won’t happen. Just like climate change, it’s a collective action problem, meaning that nothing that would cause significant impact will be done until it’s way too late.