this post was submitted on 04 Oct 2023
188 points (97.5% liked)

Technology

58303 readers
8 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each another!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed

Approved Bots


founded 1 year ago
MODERATORS
 

Meta admits that it trains its AI on your Instagram and Facebook posts

all 29 comments
[–] [email protected] 54 points 1 year ago (3 children)

Every time I read a new story about another example of this, I struggle to understand why people are getting outraged. Did you have some expectation of privacy when you published your thing to the world-connected network run by profit-seeking corporations?

[–] [email protected] 20 points 1 year ago (2 children)

People have been using Facebook for about 15 years now. Of course nobody imagined back then that their posts would be used to train AI. The general population had no idea that was even possible until ChatGPT was released last year.

[–] [email protected] 11 points 1 year ago (1 children)

It's not about AI specifically. It's about the awareness that, as soon as you post it to social media, your personal information is not under your control anymore. There was and is still a good reason why you should not post personal info on the internet, even on seemingly "safe" spaces like social media pages.

But people in general don't care anyway, and they won't care about this headline either.

[–] [email protected] 3 points 1 year ago

I think it’s less that most people don’t care, or wouldn’t if you explained it, and more that nobody has the power to really do anything about it. Even if you cut Meta out of your life, they still track you and build ghost profiles of you in their data. I’ve never entered any identifying information into Meta, and my name is so generic it’s practically adjacent to “John Smith”, but it still knows who my family and former coworkers are.

[–] [email protected] 5 points 1 year ago (1 children)

It doesn't matter what it's being used for: if you post publicly, it can be used publicly, whether that's for a news article or to train the next ChatGPT.

[–] [email protected] 1 points 1 year ago

Yes - I would be concerned if they were training it off messages or DMs, as those have some expectation of privacy. Which ... they probably are, but have fun sorting through years of memes.

[–] [email protected] 0 points 1 year ago* (last edited 1 year ago) (1 children)

It's the media creating a new boogeyman. LLMs and AI give regular people a lot of crazy power and creative ability that can challenge current power dynamics. So the media is making sure regular folk fear the shit out of it.

We've reached the "AI is coming for the children" articles. AI is stealing your jobs. AI is going to murder your favorite comedians. AI will turn you into a battery. Every day it's some new fear-mongering headline.

[–] [email protected] 3 points 1 year ago

LLMs and AI give regular people a lot of crazy power and creative ability that can challenge current power dynamics.

My view is the opposite: LLMs and AI will further entrench the skewed power dynamics, as only the big companies like Microsoft, Amazon, and Google can fully exploit, drive, and afford them. Sure, I can run some LLMs on my computers at home, but you need access to a lot of data and computing power to create the models in the first place.
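The point about training compute can be made concrete with a back-of-envelope calculation, using the commonly cited ~6·N·D approximation for training FLOPs (N = parameters, D = training tokens). The model size, token count, and GPU throughput below are illustrative assumptions, not figures for any specific model:

```python
# Back-of-envelope training cost via the commonly cited ~6 * N * D
# FLOPs approximation. All concrete numbers here are assumptions
# chosen for illustration.

def training_flops(params: float, tokens: float) -> float:
    """Approximate total FLOPs to train a dense transformer."""
    return 6 * params * tokens

N = 7e9    # a 7-billion-parameter model
D = 1e12   # 1 trillion training tokens
total = training_flops(N, D)  # ~4.2e22 FLOPs

# A consumer GPU sustaining roughly 50 TFLOP/s (5e13 FLOP/s):
seconds = total / 5e13
years = seconds / (3600 * 24 * 365)
print(f"{total:.1e} FLOPs, roughly {years:.0f} years on one consumer GPU")
```

Even under these rough assumptions, the single-GPU training time comes out in decades, which is why pretraining stays with organizations that can run thousands of accelerators in parallel, while inference on a finished model fits on home hardware.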

[–] [email protected] -5 points 1 year ago

I don't understand why anyone cares. Nothing I posted on Facebook has any value to me at all, so if someone can use it to train an AI, good on them.

[–] [email protected] 30 points 1 year ago (3 children)

An LLM trained exclusively on Facebook would be hilarious. It'd be like the Monty Python argument skit.

[–] [email protected] 12 points 1 year ago

No it wouldn’t!

[–] [email protected] 4 points 1 year ago (1 children)

That's where LLaMA came from and it's actually pretty good.

[–] [email protected] 2 points 1 year ago

It uses other datasets too. I think the FB data is mostly for training it on how to deal with emotional, stupid people and incorrectness.

[–] [email protected] 1 points 1 year ago

I dunno about that, going by the shit my egg sack posts…

[–] [email protected] 25 points 1 year ago (1 children)

So it's racist and can't spell

[–] [email protected] 1 points 1 year ago
[–] [email protected] 18 points 1 year ago (1 children)

So Meta's AI is an angry, racist, self-centered, conspiracy-theory-obsessed, anti-based, flat-earth, quasi-illiterate artificial intelligence? I always wondered how and why Skynet decided to wipe out humanity. Now I know.

[–] [email protected] 15 points 1 year ago

Meta AI: Some people are just snakes and stab u in the back Im not saying more but u know who u are

Meta AI2: u ok babe

[–] AdmiralShat 9 points 1 year ago

Did anyone think otherwise? Was this ever a mystery?

Least surprising thing they could be doing right now

[–] [email protected] 8 points 1 year ago

No wonder it's terrible

[–] [email protected] 7 points 1 year ago* (last edited 1 year ago)

So the AI will keep trying to do searches in its status update bar?

[–] [email protected] 4 points 1 year ago* (last edited 1 year ago) (2 children)

I'm no fan of Meta, but a reminder that they are one of the best right now at keeping their AI developments open and available. This is thanks to Yann LeCun and other researchers pressuring Meta to keep their work on the subject more open.

Are we looking to punish them for making their work accessible?

Not to mention how important something like joint embedding predictive architecture could be for the future of alignment and real-world training/learning. Maybe go after other foundation-model developers to be more open, if we're complaining about the inevitably public nature of some information within the mountainous datasets being used.

Although I'm still of the mindset that the model's intent matters more than the use of openly available data in training. E.g., I've been shouting for the better part of a decade about models being used specifically to predict and manipulate user interactions/habits, for your "customized advertisements" and the like.

The general public and media interaction on the topic this past year has been insufferably out of touch.

[–] [email protected] 2 points 1 year ago

Not a fan of FB, but this is way overblown AFAIC: if you post something publicly, expect it to be used publicly.

[–] [email protected] 1 points 1 year ago

Isn’t it only open source because of a mistake or a leak? I thought Meta planned on it being 100% proprietary but then tried to spin the leak as them being open.

[–] [email protected] 3 points 1 year ago

I think at this point we can just assume that everything Meta does is the worst possible way that thing could have been done.

[–] [email protected] 2 points 1 year ago

Cue the "DROP TABLE fb_posts;" comments

[–] [email protected] 2 points 1 year ago

The AI is going to be weirdly emo and phrase things like "X is not sure he can do it", because my Facebook activity skews heavily toward 10ish years ago, when that was popular.

[–] TerminalLover 2 points 1 year ago

Damn, this makes me worried for the future of LLaMA.