this post was submitted on 22 Dec 2024
409 points (96.2% liked)

Technology
[–] [email protected] 12 points 4 hours ago (3 children)

At a beach restaurant the other night I kept hearing a loud American voice cut across all conversation, going on and on about “AI” and how it would get into all human “workflows” (new buzzword?). His confidence and loudness were only matched by his obvious lack of understanding of how LLMs actually work.

[–] [email protected] 4 points 2 hours ago (1 children)

Some people can only hear "AI means I can pay people less/get rid of them entirely" and stop listening.

[–] [email protected] 1 points 1 hour ago

AI means C-level jobs should be on the chopping block as well. The board can make decisions based on its output.

[–] [email protected] 2 points 3 hours ago (1 children)

I've noticed that the people most vocal about wanting to use AI get very coy when you ask them what it should actually do.

[–] [email protected] 3 points 2 hours ago

Because as a social phenomenon it promises to decide for them what it should actually do.

[–] [email protected] 1 points 4 hours ago* (last edited 4 hours ago) (1 children)

I really like the idea of an LLM narrowly configured to filter and summarize data that comes in an irregular/organic form.

You would have to run multiple instances in parallel, with different models and slightly different configurations, to reduce hallucinations (similar to sensor redundancy in industrial safety systems). But still... that alone is a game changer in "parsing the real world". The problem: the energy needed to do this right (>= 3x) gets cut by stripping out the safety and redundancy, because the hallucinations only become apparent somewhere down the line, and only sometimes.
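The redundancy scheme above could be sketched roughly like this — a minimal toy, where `redundant_query` and the stand-in "models" are my own hypothetical names, and real use would swap in actual LLM API calls with different models/temperatures:

```python
from collections import Counter
from typing import Callable, List, Optional

def redundant_query(models: List[Callable[[str], str]], prompt: str,
                    quorum: int = 2) -> Optional[str]:
    """Ask several independent models the same thing; accept an answer
    only if at least `quorum` of them agree after normalization."""
    answers = [m(prompt) for m in models]
    normalized = [a.strip().lower() for a in answers]
    value, count = Counter(normalized).most_common(1)[0]
    if count >= quorum:
        return value
    # Disagreement: treat as a likely hallucination and escalate to a human.
    return None

# Stand-ins for illustration; real ones would be different LLM backends/configs.
model_a = lambda p: "Paris"
model_b = lambda p: "Paris"
model_c = lambda p: "Lyon"   # one model hallucinates

print(redundant_query([model_a, model_b, model_c], "Capital of France?"))  # paris
```

Exact-match voting only works for short, constrained outputs; for free-form summaries you'd need a fuzzier agreement check, which is exactly where the >= 3x cost bites.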

They poison their own well because they jump directly to the enshittification stage.

So people talking about embedding it into workflow... hi... here I am! =D

[–] [email protected] 1 points 36 minutes ago

A buddy of mine has been doing this for months. As a manager, his first use case was summarizing his team members' statuses into a single team status. Arguably, hallucinations aren't critical there.