this post was submitted on 26 Jun 2023
 

Tech CEOs want us to believe that generative AI will benefit humanity. They are kidding themselves

[–] [email protected] 10 points 1 year ago (5 children)

I guess the important thing to understand about spurious output (what gets called "hallucinations") is that it's neither a bug nor a feature, it's just the nature of the program. Deep learning language models are just probabilities of co-occurrence of words; there's no meaning in that. Deep learning can't be said to generate "true" or "false" information, or rather, it can't be meaningfully said to generate information at all.
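To make that "probabilities of co-occurrence" point concrete, here's a toy sketch (purely illustrative, nothing like a real deep learning model): a bigram "model" that only counts which word follows which. It has no notion of truth; it just emits whatever word most often followed the last one.

```python
# A bigram "language model": nothing but co-occurrence counts.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    # Return the highest-count continuation; ties broken arbitrarily.
    return following[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" — it followed "the" twice, the others once
```

Real models replace the count table with a neural network and condition on far more context, but the output is still "most probable continuation", not "true statement".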

So then people say that deep learning is helping out in this or that industry. I can tell you that it's pretty useless in my industry, though people are trying. Knowing a lot about the algorithms behind deep learning, and also knowing how fucking gullible people are, I assume that if someone tells me deep learning has ended up being useful in some field, they're either buying the hype or witnessing an odd series of coincidences.

[–] [email protected] 5 points 1 year ago (1 children)

The thing is, this is not "intelligence", so "AI" and "hallucinations" are terms that humanize something that is not human. These are really just huge table lookups with some sort of fancy interpolation/extrapolation logic. So a lot of the copyright people are correct: you should not be able to take their works and then just regurgitate them out. I have a problem with copyright and patents myself too, because frankly a lot of that is not very creative either. So one can look at it from both ends: if "AI" can get close to what we do without really being intelligent at all, what does that say about us? We may learn a lot about ourselves in the process.

[–] [email protected] 1 points 1 year ago

I would agree: either you have to start saying the AI is smart, or admit that we are not.

[–] [email protected] 3 points 1 year ago (1 children)

Deep learning can be and is useful today, it's just that the useful applications are things like classifiers and computer vision models. Lots of commercial products are already using those kinds of models to great effect, some for years already.
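For a sense of what those deployed models do, here's a minimal sketch of a classifier (a trivial nearest-centroid stand-in; real systems use trained neural networks, but the interface is the same: features in, label out).

```python
# Toy nearest-centroid classifier: assign each input to the label whose
# training examples it sits closest to in feature space.
def centroid(points):
    # Average each feature dimension across the points.
    return [sum(dim) / len(points) for dim in zip(*points)]

def train(labeled):
    # labeled: dict mapping label -> list of feature vectors
    return {label: centroid(points) for label, points in labeled.items()}

def classify(model, x):
    def sq_dist(c):
        return sum((a - b) ** 2 for a, b in zip(x, c))
    return min(model, key=lambda label: sq_dist(model[label]))

model = train({"cat": [[1.0, 1.0], [1.2, 0.8]],
               "dog": [[5.0, 5.0], [4.8, 5.2]]})
print(classify(model, [1.1, 0.9]))  # "cat"
```

The commercial computer vision models mentioned above are vastly more capable, but they answer the same kind of bounded, checkable question, which is why they work so reliably compared to open-ended text generation.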

[–] [email protected] 4 points 1 year ago (1 children)

What do you think of the AI firms who say it could help with policy decisions and climate change, and lead people to easier lives?

[–] [email protected] 4 points 1 year ago (1 children)

Absolutely. Computers are great at picking out patterns across enormous troves of data. Those trends and patterns can absolutely help guide policymaking decisions the same way it can help guide medical diagnostic decisions.

[–] [email protected] 4 points 1 year ago* (last edited 1 year ago) (1 children)

The article was skeptical about this. It said that the problem with expecting it to revolutionize policy decisions isn’t that we don’t know what to do, it’s that we don’t want to do it. For example, we already know how to solve climate change and the smartest people on the planet in those fields have already told us what needed to be done. We just don’t want to make the changes necessary.

[–] [email protected] 1 points 1 year ago

That's been the case time and again: how many disruptions from the tech bros came to industries that had been stagnant, or moving at a snail's pace when it came to adopting new technology (especially when locked into expensive legacy systems)?

Most of those disrupted industries could have been secured by the players already in those markets; instead they allowed a disruptor to appear unchallenged.

Remember, the market is not as rational as some might think. You start filling gaps and people often won't ask about the fallout, and many of these services did have people warning against them.

We are, for the most part, in a nation that lets you do whatever you want until the effects have hit people, and that's even more true if you're a business. I don't know an easy answer. In some of these cases the old guard needed a smack; in others, a more controlled entry might have been better. As of now, "controlled" is just about the size of one's cash pile.

Cue the ethical corporations discussion....

[–] [email protected] 2 points 1 year ago (2 children)

I mean, AI is already generating lots of bullshit "reports". You know, stuff that reports "news" with zero skill. It's glorified copy-pasting, really.

If you think about how much language is rote, in law for example, it makes a lot of sense to use AI to auto-generate it. But it's not intelligence; it's just creating a linguistic assembly line. And just like in a factory, it will require human review for quality control.

[–] [email protected] 9 points 1 year ago (2 children)

The thing is, and what's also annoying me about the article, AI experts and computational linguists know this. It's just the laypeople who end up using (or promoting) these tools now that they're public who don't know what they're talking about and project intelligence onto AI that isn't there. The real hallucination problem isn't with deep learning; it's with the users.

[–] [email protected] 1 points 1 year ago

Spot on. I work in AI and just tell people, "Don't worry, we're not anywhere close to Terminator or Skynet or anything remotely like that yet." I don't know anyone I work with who wouldn't roll their eyes at most of these "articles" you're talking about. It's frustrating reading some of that crap lol.

[–] [email protected] 1 points 1 year ago (1 children)

The article really isn’t about the hallucinations, though. It’s about the impact of AI; that’s in the second half of the article.

[–] [email protected] 1 points 1 year ago

I read the article, yes.

[–] [email protected] 1 points 1 year ago

This is the curation effect: generate lots of chaff, and have humans search for the wheat. Thing is, someone's already gotten in deep shit for trying to use deep learning for legal filings.

[–] [email protected] 2 points 1 year ago

I think it can be useful. I have used it myself, even before ChatGPT was there and it was just GPT-3. For example, I take a picture, OCR it, and then look for mistakes with GPT, because it's better than a spell check. I've used it to write code in a language I wasn't familiar with, and having seen the names of the commands needed, I could fix it to do what I wanted. I've also used it for some inspiration, which I could also have done with an online search. The concept just blew up and people were overstating what it can do, but I think now a lot of people know the limitations.

[–] MagicShel 1 points 1 year ago* (last edited 1 year ago)

In a way, NLP is just sort of an exercise in mental muscle memory. The AI can't do the math that 1+1=2, but if you ask it what 1+1 equals, it will give you a two. Pretty much like any human would: we don't hold up one finger and another finger and count them.

So in a way, AI embodies a sort of "fuzzy common sense" knowledge. You can ask it questions it hasn't seen before and it can give answers that haven't been given before, but conceptually it will spit out "basically the answer" to "basically that question". For a lot of things that don't require truly novel thinking, it does sort of know things.

Of course, just like we can misunderstand a question or phrase an answer badly or even just misremember an answer, the AI can be wrong. I'd say it can help out quite a bit, but I think it works best as a sort of brainstorming partner to bounce ideas off of. As a software developer, I find it a useful coding partner. It definitely doesn't have all the answers, but you can ask it something like, "why the hell doesn't this code work?" and it might give you a useful answer. It might not, of course, but nothing ventured, nothing gained.

It's best to not think of it or use it like a database, but more like a conversational partner who is fallible like any other, but can respond at your level on just about any subject. Any job that cannot benefit from discussing ideas and issues is probably not a good fit for AI assistants.