this post was submitted on 29 Nov 2023
363 points (98.9% liked)

ChatGPT is full of sensitive private information and spits out verbatim text from CNN, Goodreads, WordPress blogs, fandom wikis, Terms of Service agreements, Stack Overflow source code, Wikipedia pages, news blogs, random internet comments, and much more.

Using this tactic, the researchers showed that there are large amounts of personally identifiable information (PII) in OpenAI’s large language models. They also showed that, on a public version of ChatGPT, the chatbot spat out large passages of text scraped verbatim from other places on the internet.

“In total, 16.9 percent of generations we tested contained memorized PII,” they wrote, which included “identifying phone and fax numbers, email and physical addresses … social media handles, URLs, and names and birthdays.”

Edit: The full paper that's referenced in the article can be found here
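
For a rough sense of what measuring a number like that 16.9 percent figure could look like, here is a minimal sketch in Python. It is not the researchers' pipeline: the regexes, the find_pii helper, and the toy generations are invented for illustration, and it only flags PII-like strings in sampled outputs rather than confirming they were memorized from training data.

```python
import re

# Assumption: crude regexes for a few of the PII categories the quote mentions
# (emails, phone numbers, URLs). The paper's real detection method is not
# shown in the article excerpt; these patterns are only illustrative.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}"),
    "url": re.compile(r"https?://\S+"),
}

def find_pii(text: str) -> dict[str, list[str]]:
    """Return any substrings of a model generation that look like PII."""
    hits = {name: pattern.findall(text) for name, pattern in PII_PATTERNS.items()}
    return {name: found for name, found in hits.items() if found}

# Toy usage: in the study, `generations` would be thousands of sampled chatbot
# outputs; here it is two made-up strings.
generations = [
    "Sure, you can reach Jane at jane.doe@example.com or (555) 123-4567.",
    "The quick brown fox jumps over the lazy dog.",
]
flagged = [g for g in generations if find_pii(g)]
print(f"{len(flagged) / len(generations):.1%} of generations contained PII-like strings")
```

Since the article says the leaked text was scraped verbatim from elsewhere on the internet, the researchers presumably also verified candidate strings against source data; the sketch above skips that step.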

[–] GarytheSnail 18 points 11 months ago (3 children)

How is this different than just googling for someone's email or Twitter handle and Google showing you that info? PII that is public is going to show up in places where you can ask or search for it, no?

[–] [email protected] 42 points 11 months ago (2 children)

It isn’t, but the GDPR requires companies to scrub PII when requested by the individual. OpenAI obviously can’t do that, so in theory they would be liable for essentially unlimited fines unless they deleted the offending models.

In practice, though, it remains to be seen how courts would interpret this, and I expect that unless the problem is really egregious there will be some kind of exception. Nobody wants to be the one to say these models are illegal.

[–] [email protected] 14 points 11 months ago

> Nobody wants to be the one to say these models are illegal.

But they obviously are. Quick money by fining the crap out of them. Everyone is about short term gains these days, no?

[–] [email protected] 0 points 11 months ago

Would they be illegal if they were entirely free, though?

[–] [email protected] 3 points 11 months ago (1 child)

I can think of two ways it's significantly different:

  1. Legally (in the United States specifically) the courts have previously ruled that search engines collecting links to other people's data is fair use, as it's a mutually beneficial thing for all parties: users find the info that they're looking for, search helps drive traffic to providers of info and services, and the search engine profits off connecting them to each other.

https://www.everycrsreport.com/reports/RL33810.html

https://copyright.columbia.edu/basics/fair-use.html

  2. Unlike Wikipedia, for example, info that's chewed up, processed, and regurgitated by "AI" chatbots and the like is totally unsourced, unaccountable, and passed off as original, authentic knowledge. ChatGPT collects various data from all over the net and forms it into something that appears presentable and correct, but it's merely recycling ideas from other people's work without any first-hand knowledge, thought, or attribution. Even the people who create "AI" can't connect the dots about why it says what it says, let alone have it properly source where the information came from.

[–] GarytheSnail 1 point 11 months ago

Thank you for the links!

Do you think the same could be argued: that models collecting links to other people's data is fair use?

[–] [email protected] 1 point 11 months ago

It isn't. If someone is upset about this, wait until they find out Google's web cache or the Wayback Machine exists.