Brandolini's law, aka the "bullshit asymmetry principle": the amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it.
Unfortunately, with the advent of large language models like ChatGPT, the quantity of bullshit being produced is accelerating and is already outpacing the ability to refute it.
I'm curious to see whether AI tech can actually help fight some of the bullshit out there someday. I agree that current AI only makes it easier to produce bullshit, but with some advances it could be used to parse a long-winded batch of bullshit and summarize it, maybe with bullet points about where the source material is wrong. If they can make an AI as confident as ChatGPT, but without its tendency to make stuff up left and right, it could be useful.
THEN we just have to worry about who owns the AI that parses and summarizes the info we take in, and what kind of biases they've baked into the tech...
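To make the idea in the comment above concrete, here's a minimal sketch of what such a "summarize and flag the bullshit" pipeline might look like. It assumes the OpenAI Python SDK; the model name, prompt, and input file are illustrative placeholders, not a real fact-checking product, and the output inherits whatever biases and hallucinations the underlying model has.

```python
# Hypothetical "bullshit summarizer": feed a long-winded text to an LLM and
# ask for a short summary plus bullet points flagging dubious claims.
# Sketch only -- the model name and prompt are placeholders, and the result
# is only as trustworthy as the model producing it.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_and_flag(text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat-completion model works
        messages=[
            {
                "role": "system",
                "content": (
                    "Summarize the user's text in three sentences, then list "
                    "bullet points for any claims that look unsupported, "
                    "quoting the sentence each claim comes from."
                ),
            },
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # "long_winded_article.txt" is a hypothetical input file.
    with open("long_winded_article.txt") as f:
        print(summarize_and_flag(f.read()))
```

Of course, this just restates the ownership problem: whoever writes that system prompt and picks that model decides what counts as "unsupported."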
Those same AIs would also be the best ones at producing fake scientific papers. It's a cat-and-mouse game again: those who can detect bullshit can produce the best bullshit.