There are lots of articles about bad use cases for ChatGPT that Google has already enabled for decades.

Want to get bad medical advice for the weird pain in your belly? Google can tell you it's cancer, no problem.

Want to know how to make drugs without a lab? Google even gives you links to stores where you can buy the ingredients.

Want some racism/misogyny/other evil content? Google is your ever helpful friend and garbage dump.

What's the difference apart from ChatGPT's inability to link to existing sources?

Edit: Just to clear things up. This post is specifically not about the new use cases that AI enables. Sure, Google can't automatically generate semi-functional mini programs, and Google won't write an entire fake paper for me. I am specifically talking about the "This will change the world" articles that cover things Google can already do just as well as ChatGPT.

[email protected] 1 point 1 year ago

It'll include sources if the sentence structure suggests they should be there, but those sources are themselves just built by probabilistic insertion of words.

I've seen attempts to train an LLM on information with sources. The end result was a model that would still hallucinate false information and follow it up with a convincing-looking source that doesn't actually exist, or a link that just leads to a 404 page. The way current LLMs work makes it impossible for them to cite accurate sources by default: they don't remember full sentences or even any actual facts, they just pick up underlying patterns.
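To make that concrete, here's a toy sketch in Python (nothing like a real LLM in scale, but the same basic idea): a bigram model that only learns which word tends to follow which will happily recombine a few citations into one that matches none of them. The citations, names, and journals below are all made up for the demo.

```python
import random
from collections import defaultdict

# Three made-up "training" citations the toy model gets to see.
training_citations = [
    "Smith et al. (2019), Journal of Machine Learning Research",
    "Jones et al. (2021), Journal of Artificial Intelligence Research",
    "Smith and Lee (2020), Journal of Artificial Intelligence Research",
]

# "Training": record which word tends to follow which word.
transitions = defaultdict(list)
for citation in training_citations:
    words = citation.split()
    for current, following in zip(words, words[1:]):
        transitions[current].append(following)

# "Generation": probabilistic insertion of words, one at a time,
# until we reach a word that was never followed by anything.
random.seed(0)
word = "Smith"
output = [word]
while word in transitions:
    word = random.choice(transitions[word])
    output.append(word)

# Prints something citation-shaped, e.g. "Smith et al. (2019), Journal
# of Artificial Intelligence Research" -- a mashup that matches none of
# the three entries it was trained on.
print(" ".join(output))
```

Every individual transition the model takes is plausible, which is exactly why the stitched-together result looks so convincing.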

Currently the best you can do is let an LLM come up with search engine queries to find relevant, up-to-date information for a given question, then have it formulate an answer based on what it found, including links to the page(s) it used. The main problem here is that LLMs aren't great yet at verifying whether a source is accurate, and most people will take anything that mentions a source as hard fact without even looking at what that source is.
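A minimal sketch of that search-then-answer loop, again in Python. `generate_text` and `web_search` are hypothetical placeholders here, standing in for whatever LLM client and search API you'd actually wire in:

```python
# Sketch of the "let the LLM search, then answer with links" approach.
# generate_text() and web_search() are hypothetical stand-ins; replace
# them with a real LLM client and a real search API.

def generate_text(prompt: str) -> str:
    """Hypothetical LLM call (e.g. a chat-completion endpoint)."""
    raise NotImplementedError("plug in your LLM client here")

def web_search(query: str, max_results: int = 3) -> list[dict]:
    """Hypothetical search call returning [{'url': ..., 'snippet': ...}, ...]."""
    raise NotImplementedError("plug in your search client here")

def answer_with_sources(question: str) -> str:
    # 1. Let the LLM turn the question into a search query.
    query = generate_text(
        f"Write a short web search query for this question:\n{question}"
    )

    # 2. Fetch current pages, so the model doesn't rely on its memory.
    results = web_search(query)
    context = "\n\n".join(
        f"[{i + 1}] {r['url']}\n{r['snippet']}" for i, r in enumerate(results)
    )

    # 3. Make the model answer only from those snippets and cite them by
    #    number, so every link it outputs is a page that really exists.
    return generate_text(
        "Answer the question using only the numbered sources below. "
        "Cite them as [1], [2], ... and list their URLs at the end.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
```

Note that this only guarantees the links point to real pages; whether those pages are trustworthy is exactly the verification problem mentioned above.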

[email protected] 1 point 1 year ago

It's like a fancy interface for Google's "I'm feeling lucky" button.