this post was submitted on 12 Jul 2023
277 points (97.6% liked)


Users of OpenAI's GPT-4 are complaining that the AI model is performing worse lately. Industry insiders say a redesign of GPT-4 could be to blame.

[–] [email protected] 18 points 1 year ago (2 children)

Good, they should be separate.

You don’t want a medical LLM trained on Internet memes, or a coding LLM trained to write poetry. Specialisation exists for a reason.

[–] [email protected] 5 points 1 year ago (2 children)

Honest question: why would you want a medical LLM anyway? Other kinds of AI, sure: diagnosis help through pattern learning on medical imaging, for example, I can understand.

How is a language-based approach that completely abstracts away actual knowledge, and just tries to sound "good enough", any kind of useful in a medical workflow?

[–] [email protected] 1 points 1 year ago

> How is a language-based approach that completely abstracts away actual knowledge, and just tries to sound “good enough”, any kind of useful in a medical workflow?

An LLM cross-referencing a list of symptoms against papers and books could be helpful, for example. There is so much medical literature available these days, and in so many languages, that no one person can hope to gain a reasonably clear overview, much less keep up with all the new material coming out.
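The retrieval half of that idea can be sketched very crudely. Everything here is hypothetical (made-up abstracts, simple keyword overlap standing in for what an LLM or embedding model would actually do), just to show the cross-referencing shape:

```python
# Crude sketch: rank papers by how many of the patient's symptoms their
# abstracts mention. Hypothetical data; a real system would use an LLM or
# embeddings rather than exact keyword overlap.
papers = {
    "Paper A": "persistent cough fever fatigue in viral pneumonia",
    "Paper B": "joint pain and fatigue in autoimmune disorders",
    "Paper C": "fever rash and joint pain in dengue infection",
}

symptoms = {"fever", "joint", "pain"}

def overlap_score(abstract: str) -> int:
    # Count how many symptom terms appear in the abstract.
    return len(symptoms & set(abstract.split()))

ranked = sorted(papers, key=lambda p: overlap_score(papers[p]), reverse=True)
print(ranked[0])  # Paper C mentions all three terms
```

The point isn't the matching method, it's the workflow: surface candidate literature for a clinician to read, rather than have the model answer directly.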

Of course, this should only be used in assistance to a trained medical professional, as all neural networks are prone to hallucinations. You should also double-check the results of NNs that interpret medical images: they may straight-up hallucinate, or pick up on correlation instead of causation (say, all the cancer images in your training set carrying a watermark from the same lab or equipment manufacturer).
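That watermark failure mode is easy to demonstrate with toy data. A minimal sketch (entirely synthetic, and the "model" is just a rule that keys on the watermark) of how a shortcut can score perfectly in training and collapse in deployment:

```python
# Illustrative sketch of shortcut learning: a "classifier" that latches onto
# a spurious lab watermark instead of the real pathology signal.
import random

random.seed(0)

def make_image(has_cancer: bool, watermarked: bool) -> dict:
    # "signal" is a crude stand-in for real pathology; "watermark" is the
    # lab's overlay, encoded as a single feature.
    signal = (0.8 if has_cancer else 0.2) + random.uniform(-0.3, 0.3)
    return {"signal": signal, "watermark": 1.0 if watermarked else 0.0}

# In the training set, every cancer scan happens to come from the one lab
# that watermarks its output — watermark and label are perfectly correlated.
train = [(make_image(c, watermarked=c), c) for c in [True, False] * 500]

# The shortcut model: predict cancer iff the watermark is present.
def shortcut_predict(img: dict) -> bool:
    return img["watermark"] > 0.5

train_acc = sum(shortcut_predict(x) == y for x, y in train) / len(train)

# At deployment, scans arrive from a lab that never watermarks.
test = [(make_image(c, watermarked=False), c) for c in [True, False] * 500]
test_acc = sum(shortcut_predict(x) == y for x, y in test) / len(test)

print(train_acc)  # 1.0 — perfect on the biased training set
print(test_acc)   # 0.5 — no better than chance once the watermark is gone
```

Which is exactly why those results need a human double-check: the training metric looks flawless right up until the confound disappears.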

[–] [email protected] 1 points 1 year ago (1 children)

I work in the assisted living field. There's frequently one nurse tending 40+ beds for 8 hours. If the next nurse is late, that's one nurse for 8+ hours until the next one shows. You can bet your ass that nurse isn't providing high-quality medical advice 12 hours into a shift. An AI can take an impartial perspective and output a baseline level of advice to keep the wheels moving.

[–] [email protected] 1 points 1 year ago

Yep, the benefit is in double checking humans, not replacing them.

[–] [email protected] 1 points 1 year ago

This isn't a person, it's a machine. It doesn't have the same limitations. Higher compute cost, but it can do multiple things at once.

It's not good if it's creating artificial demand and leading to less accessibility and higher costs.