[–] [email protected] 8 points 11 months ago (2 children)

A machine learning algorithm is only as good as the data you feed it. Flawed or incomplete data produces false or biased positives, and some systems will output whatever they think you'll accept. So if the approach really was "it's like a streetlight: when it's green, we bomb," then that's really dark Skynet stuff. But IMHO, blaming a machine just sounds like a bad PR excuse for dodging a potential crimes-against-humanity charge in The Hague.
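
To make that concrete, here's a toy sketch (synthetic data, ordinary logistic regression, nothing to do with whatever system they actually used): train a classifier on labels that are 95% "target" and it will green-light almost anything you show it.

```python
# Toy sketch, not the real system: all data here is synthetic noise,
# just to show that a model trained on skewed labels reproduces the skew.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Features are pure noise: there is genuinely nothing to learn.
X_train = rng.normal(size=(1000, 5))
# But ~95% of training examples were labeled "target" (1), e.g. because
# only already-suspicious cases ever made it into the dataset.
y_train = (rng.random(1000) < 0.95).astype(int)

model = LogisticRegression().fit(X_train, y_train)

# On fresh noise the model keeps saying "target" anyway: the learned
# intercept encodes the label skew, not anything about the inputs.
X_new = rng.normal(size=(1000, 5))
print("fraction flagged:", model.predict(X_new).mean())  # ~1.0
```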

[–] [email protected] 8 points 11 months ago* (last edited 11 months ago)

Both this use and corporate use of AI aren't really about making things better; they're about making sure nobody is responsible for anything. A human might balk at picking a target that kills 20 innocent people on the off chance a Hamas fighter is there, and might hold back if they're worried the ICC could come knocking, or that a critical newspaper article might call them a merchant of death. AI will pop out coordinates all day and night based on the thinnest evidence, or no evidence at all. Same with health insurers using AI to deny coverage, AI identifying suspects from grainy CCTV footage, etc., etc.

Nobody's responsible, because 'the machine did it' and we were just following its lead. In the same way that corporations aren't really held responsible for crimes a private person could never get away with, AI is another layer of insulation between 'externalities' and anyone facing consequences for them.

[–] [email protected] 7 points 11 months ago

My favorite ML result (details may be inaccurate; I'm recalling this from memory) was a model that analyzed MRI scan images and reported far higher confidence in the problems it detected when the image came from a machine with an older manufacture date. The training data contained very few negative results from older machines, so "an image from an old machine shows the issue" was an assumption that fit the data.

There was speculation about why the training data ended up that way, but the pattern-noticing machine sure noticed the pattern.
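
Here's a tiny synthetic recreation of that failure mode, as best I understand it (all feature names and numbers are invented; the point is just how a collection bias turns an irrelevant feature into a confident predictor):

```python
# Toy reconstruction of the effect with made-up data and feature names.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000

# One genuinely informative image feature...
signal = rng.normal(size=n)
# ...plus the machine's age in years, which should be irrelevant.
machine_age = rng.integers(1, 25, size=n)

# Ground truth depends only on the real signal.
y = (signal > 0).astype(int)

# Collection bias: healthy (negative) scans from old machines rarely
# made it into the dataset, so "old machine" co-occurs with "positive".
keep = (y == 1) | (machine_age < 10) | (rng.random(n) < 0.05)
X = np.column_stack([signal, machine_age])[keep]
y = y[keep]

model = LogisticRegression().fit(X, y)
print("learned weights [signal, machine_age]:", model.coef_[0])

# The same borderline scan, on machines of two different ages:
for age in (2, 24):
    p = model.predict_proba([[0.0, age]])[0, 1]
    print(f"machine age {age:2d} -> P(problem) = {p:.2f}")
# The model is far more confident on the old machine, purely because
# of how the training set was assembled.
```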