this post was submitted on 20 Feb 2025
67 points (89.4% liked)

Technology

top 10 comments
[–] [email protected] 73 points 2 days ago* (last edited 2 days ago) (1 children)

TL;DR: yes

It's unfortunate that LLMs are the only thing that comes to mind when AI is mentioned, though. Something that can do pattern recognition better than a human can is a good fit for this application.

[–] [email protected] 41 points 2 days ago (1 children)

Even if it were to do pattern recognition as well as or slightly worse than a human, it's still worthwhile. As the article points out: It's basically a non-tiring, always-ready second opinion. That alone helps a lot.

[–] [email protected] 14 points 2 days ago* (last edited 2 days ago) (2 children)

One issue I could see is using it not as a second opinion, but as the only opinion. That doesn't mean this shouldn't be pursued, but the incentives toward laziness and cost-cutting are obvious.

EDIT: Another potential issue is the AI detection being more accurate for certain groups (e.g., white Europeans), which could result in underdiagnosis in minority groups if the training data set doesn't include sufficient data for them. I'm not sure how likely that is with breast cancer detection, however.

[–] [email protected] 8 points 2 days ago (1 children)

Also if it's integrated poorly. If the human only serves as a secondary check on the AI, which is mostly right, you condition the human to just click through and defer to the AI. The better way would be to have the human and the AI judge each case independently and carefully review the ones where they disagree, but that won't save anyone money.
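The independent double-read workflow described above can be sketched in a few lines. This is just an illustration of the decision logic, not any real screening system; the function name and result labels are made up for the example.

```python
def double_read(ai_flagged: bool, human_flagged: bool) -> str:
    """Combine two independent reads of one case.

    Both readers score the case without seeing each other's result;
    only disagreements are escalated, so neither reader can simply
    rubber-stamp the other.
    """
    if ai_flagged == human_flagged:
        # Both readers agree: accept the shared verdict.
        return "suspicious finding" if ai_flagged else "no finding"
    # Disagreement: escalate for careful review instead of
    # deferring to either the AI or the human by default.
    return "arbitration: senior review"

# Example: (AI read, human read) for four cases
cases = [(True, True), (False, False), (True, False), (False, True)]
print([double_read(ai, human) for ai, human in cases])
```

The key design point is that agreement is resolved automatically while every disagreement costs extra reviewer time, which is exactly why this setup is safer but not cheaper.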

[–] [email protected] 5 points 2 days ago

If the court system allowed assigning partial fault for "preventable" deaths to hospitals that employ practices not in the patient's best interest, it might give them a financial incentive.

[–] [email protected] 2 points 2 days ago

Definitely. Here's hoping the accountability question will prevent that, but the incentive is there, especially in systems with for-profit healthcare.

[–] Mihies 15 points 2 days ago* (last edited 2 days ago) (1 children)

I remember, when we were learning Prolog, that back in the 70s or thereabouts they were already experimenting with AI, and it was quite good at diagnostics. However, doctors were scared of losing their jobs instead of embracing it as a tool, so it was dropped at the time. Hopefully this time around they'll use it as an additional tool and everybody profits.

[–] [email protected] 1 points 1 day ago

If you're going back that far, I remember hearing a story about the Australian military experimenting with immersive AI during a typical "give us money" event where a helicopter was flying over an area and the kangaroos scattered at the sound, disappearing over a hill...

Then reappeared with RPGs and fired them at the helicopter, taking it down. Lots of red faces and mumbling about working out some kinks. 😄

tl;dr: I'm old enough to remember when "AI" was a benign comic novelty. 🙃

[–] [email protected] 13 points 2 days ago (1 children)

My favorite AI fact is from cancer research. The New Yorker has a great article about how an algorithm used to identify and price out pastries at a Japanese bakery found surprising success as a cancer detector. https://www.newyorker.com/tech/annals-of-technology/the-pastry-ai-that-learned-to-fight-cancer