this post was submitted on 12 Jul 2024

Technology


A bipartisan group of senators introduced a new bill to make it easier to authenticate and detect artificial intelligence-generated content and protect journalists and artists from having their work gobbled up by AI models without their permission.

The Content Origin Protection and Integrity from Edited and Deepfaked Media Act (COPIED Act) would direct the National Institute of Standards and Technology (NIST) to create standards and guidelines that help prove the origin of content and detect synthetic content, like through watermarking. It also directs the agency to create security measures to prevent tampering and requires AI tools for creative or journalistic content to let users attach information about their origin and prohibit that information from being removed. Under the bill, such content also could not be used to train AI models.
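The bill doesn't specify a mechanism, and NIST's standards don't exist yet, but the idea of attaching content provenance information and detecting tampering can be sketched with a signed manifest. This is a hypothetical illustration only (real schemes like C2PA use public-key signatures, not a shared HMAC key as assumed here):

```python
import hashlib
import hmac
import json

# Hypothetical signing key; a real provenance standard would use
# public-key signatures so anyone can verify without the secret.
SECRET_KEY = b"publisher-signing-key"

def attach_provenance(content: bytes, origin: str) -> dict:
    """Build a manifest binding the content's hash to its declared origin."""
    manifest = {"origin": origin, "sha256": hashlib.sha256(content).hexdigest()}
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_provenance(content: bytes, manifest: dict) -> bool:
    """True only if neither the content nor the manifest was tampered with."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and hashlib.sha256(content).hexdigest() == manifest["sha256"])
```

Editing the content invalidates the hash, and editing the manifest (say, changing the origin) invalidates the signature, which is roughly what "tampering with content provenance information" means in practice.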

Content owners, including broadcasters, artists, and newspapers, could sue companies they believe used their materials without permission or tampered with authentication markers. State attorneys general and the Federal Trade Commission could also enforce the bill, which its backers say prohibits anyone from “removing, disabling, or tampering with content provenance information” outside of an exception for some security research purposes.

(A copy of the bill is in the article, here is the important part imo:

Prohibits the use of “covered content” (digital representations of copyrighted works) with content provenance to either train an AI- /algorithm-based system or create synthetic content without the express, informed consent and adherence to the terms of use of such content, including compensation)

[–] [email protected] 1 points 4 months ago (1 children)

So you are saying that content scraped before the law is fair game to train new models? If so, that's fucking terrible. But again, I doubt this is the case, since it would go against the interests of the big copyright holders. And if it's not the case, you are just creating a storm in a glass of water, since this affects the companies too.

As a side point, I'm really curious about LLM uses. As a programmer, the only useful product I have seen so far is Copilot and similar tools. And I ended up disabling the fucking thing because it produces too much garbage hahaha. But I'm the first to admit I haven't been following this hype cycle hahahaha, so I'm really curious what the big things will be. You clearly know so much, so do you want to enlighten me?

[–] [email protected] 2 points 4 months ago (1 children)

This bill is being built with the interests of the big tech companies in mind imo; big copyright holders are just an afterthought. I figure that since big tech spent quite a bit of money building those datasets, and since they were built before the law, they will be able to keep using them as long as they don't add anything new, but I can't be certain.

The use cases are vast. This is a huge boon for the indie gaming and animation industry. I'm seriously excited to have NPCs running on LLMs, and I don't want to be forced into a subscription just to play my games. It's also going to bring smart homes to another level. Systems can be built that are much stronger than Alexa without having to send all that insanely private data to Amazon. There's a huge privacy issue if all the available models only run on Google's or OpenAI's cloud, but I won't get into that (not to mention that these corporate LLMs will eventually be trained for advertisement and will essentially be poisoned to prefer whoever is paying their creator).

I'll give a more concrete example from my work, but it will be a bit vague to preserve my anonymity.

I work in research (I originally studied software engineering and robotics) and we have about 20 years' worth of projects. None of it is standardized and it's honestly a mess. I built a system in the space of a few days that grabs every one of those docs, reads through it with an LLM, and then classifies them doc by doc into an Excel sheet with a SharePoint link. I've got 20 columns in there: it summarizes them, chooses from a list of 30 document types I gave it, extracts related towns, people, companies, and domains, extracts the columns of any tables inside, and generally establishes a bunch of different relationships. It doesn't sound like much, but doing it by hand would have been weeks of tedious work. My computer did it in 20 minutes using a local LLM, so no sensitive client data leaves the building.
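The core of a pipeline like the one described is just "prompt the model per document, parse structured output into a row." A minimal sketch, assuming a JSON-answering model; the prompt, column names, and `ask_llm` callable are all hypothetical stand-ins for whatever local LLM wrapper (llama.cpp, Ollama, etc.) is actually used:

```python
import json

# Hypothetical prompt; the real column list and document types are assumptions.
PROMPT = (
    "Classify this document. Reply ONLY with JSON containing keys "
    '"summary", "doc_type", "towns", "people", "companies".\n\nDocument:\n'
)

def classify_document(text: str, ask_llm) -> dict:
    """Send one document to an LLM and parse its JSON answer into a row.

    `ask_llm` is any callable prompt -> str (e.g. a local llama.cpp call),
    stubbed here so the sketch stays self-contained.
    """
    raw = ask_llm(PROMPT + text)
    row = json.loads(raw)
    # Guard against the model omitting a column.
    for key in ("summary", "doc_type", "towns", "people", "companies"):
        row.setdefault(key, None)
    return row
```

Looping this over every file and appending each row to a spreadsheet (csv, openpyxl, etc.) is the whole system; the LLM call is the only non-trivial part.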

Right now I'm working on a GraphRAG system that will take all those documents and turn them into vectors; then an LLM adds relationships to those vectors. It will be incorporated into an internal chatbot, so people can ask questions and not only get a natural language answer but also the references where the information was found, with quick access to them. It's vector search on steroids and will cost nothing to run. I'm planning on eventually training the chatbot itself on our data so it has a better understanding of our research sector as well as direct access to all the documents.
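Stripped of the graph layer and the neural embeddings, the retrieval step underneath a RAG system reduces to "embed the query, rank documents by similarity, return the best match with its reference." A toy sketch using bag-of-words vectors in place of a real embedding model (which is the part that makes production RAG actually work):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; real RAG uses a neural embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: dict) -> tuple:
    """Return (doc_id, score) of the best match, so an answer can cite its source."""
    q = embed(query)
    return max(((doc_id, cosine(q, embed(text))) for doc_id, text in docs.items()),
               key=lambda pair: pair[1])
```

The returned `doc_id` is what lets the chatbot hand back "here's where I found that" alongside the natural-language answer.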

Next is building something that gets info automatically from the web. Sometimes we have to create long Excel sheets with a bunch of different data points. We usually stay at a state level, but it can sometimes mean 1000 businesses, and we have to Google each one manually to find the info. It's sometimes weeks of work and honestly sucks to do. LLMs are entirely capable of doing this kind of work and would take a few hours at most, again at no cost.
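The shape of that automation is a simple loop: look the business up, have the model pull out the data points, write a row. A sketch where `fetch` and `extract` are hypothetical stand-ins for the web search and the LLM extraction call (the column names here are invented for illustration):

```python
import csv
import io

def build_sheet(businesses, fetch, extract) -> str:
    """Turn a list of business names into CSV text, one row per business.

    `fetch` is name -> page text, `extract` is page text -> dict of
    data points; both would be a search API and an LLM call in practice.
    """
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=["name", "address", "employees"])
    writer.writeheader()
    for name in businesses:
        fields = extract(fetch(name))
        writer.writerow({"name": name,
                         "address": fields.get("address", ""),
                         "employees": fields.get("employees", "")})
    return out.getvalue()
```

Everything tedious lives inside `extract`; swapping the stub for a local model is what turns weeks of manual lookup into a batch job.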

These things are seriously great whenever you're dealing with data that isn't just numbers and is hard to quantify. I hate Reddit and will never create an account there after what happened, but I still go daily to the localllama subreddit; it's a great source of information if you want to keep abreast of what's happening.

[–] [email protected] 0 points 4 months ago* (last edited 4 months ago) (1 children)

I figure since big tech spent quite a bit of money building those datasets and since they were built before the law, they will be able to keep using them as long as they don't add anything new but I can't be certain.

This is a very weird assumption you are making, man. The quoted text you sent above pretty much says the opposite: it says everyone who wants to train their models with copyrighted data needs to get permission from the copyright holders. That is great for me, period. No one, not a big company nor the open source community, gets to steal the work of people producing art, code, etc. I honestly don't get why you assume all the data scraped before would be exempt. Again, very weird assumption.

As for ML algorithms having uses, of course they do. Hell, pretty much every company I have worked with has used them for decades. But take a look at the examples you provided. None of them requires you or your company scraping a bunch of information from randoms on the internet, especially not copyrighted art, literature, or code. And that's the point here: you are acting like all of that stops with these laws, but that's ridiculous.

[–] [email protected] 1 points 4 months ago

The article is pro corpo, I'm looking at the bill and it's quite clear where it's headed.

None of what I mentioned is possible without the LLM that's at its heart. Just training an LLM is a million or two in compute power. We don't get the next generation for free if laws like this tack on an extra 80 million. Reddit alone was 6 million, and that was back when you could scrape it for free; and that's just a drop in the bucket.