this post was submitted on 12 Oct 2024
223 points (95.9% liked)

Technology

[–] [email protected] 103 points 2 weeks ago (5 children)

These models are nothing more than glorified autocomplete algorithms parroting the responses to questions that already existed in their input.

They're completely incapable of critical thought or even basic reasoning. They only seem smart because people tend to ask the same stupid questions over and over.

If they receive an input that doesn't have a strong correlation to their training, they just output whatever bullshit comes close, whether it's true or not. Which makes them truly dangerous.

And I highly doubt that'll ever be fixed because the brainrotten corporate middle-manager types that insist on implementing this shit won't ever want their "state of the art AI chatbot" to answer a customer's question with "sorry, I don't know."

I can't wait for this stupid AI craze to eat its own tail.

[–] [email protected] 28 points 2 weeks ago* (last edited 2 weeks ago) (5 children)

Last I checked (which was a while ago), "AI" still couldn't pass the most basic of tasks, such as "show me a blank image" / "show me a pure white image". The LLM will output the most intense fever dream possible, but never a simple rectangle filled with #fff pixels. I'm willing to debate the potential of AI again once they manage to do that without those "benchmarks" getting special attention in the training data.
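(For what it's worth, the task itself is trivial outside a generative model; a minimal sketch using Pillow, purely as an illustration of what "a simple rectangle filled with #fff pixels" means programmatically, with an arbitrary size:)

```python
# Minimal sketch (assumes the Pillow library; the 512x512 size is arbitrary):
# the requested "pure white image" is a one-liner outside of an LLM.
from PIL import Image

img = Image.new("RGB", (512, 512), color="#ffffff")  # every pixel is #fff
img.save("blank_white.png")
```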

[–] [email protected] 32 points 2 weeks ago* (last edited 2 weeks ago) (3 children)
[–] [email protected] 21 points 2 weeks ago

I will say the next attempt was interesting, but even less of a good try.

[–] [email protected] 3 points 2 weeks ago* (last edited 2 weeks ago)

That's actually quite interesting. You could make the argument that that is an image of "a pure white, completely flat object with zero content": it's just taken your description of what you want the image to be and given you an image of an object that satisfies it.

[–] [email protected] 19 points 2 weeks ago (1 children)

Problem is, AI companies think they could solve all the current problems with LLMs if they just had more data, so they buy or scrape it from everywhere they can.

That's why you hear every day about yet more and more social media companies penning deals with OpenAI. That, and greed, is why Reddit started charging out the ass for API access and killed off third-party apps, because those same APIs could also be used to easily scrape data for LLMs. Why give that data away for free when you can charge a premium for it? Forcing more users onto the official, ad-monetized apps was just a bonus.

[–] [email protected] 6 points 2 weeks ago* (last edited 2 weeks ago)

Yep. In cryptography there was a moment when cryptographers realized that the key must be secret, the message should be secret, but the rest of the system must not be secret (essentially Kerckhoffs's principle), for the social purpose of refining said system through public scrutiny. EDIT: And that these must be separate entities.

These guys basically use lots of data instead of algorithms. Like buying something with oil money instead of money made on construction.

I just want to see the moment when it all bursts. I'll be so gleeful. I'll go and buy an IPA and laugh in every place on the Internet where I see this discussed.

[–] [email protected] 5 points 2 weeks ago* (last edited 2 weeks ago)

I tested ChatGPT; it needed some nagging, but it could do it. It needed the size, "blank", and "white" keywords.

Obviously a lot harder than it should be, but not impossible.

[–] [email protected] 3 points 2 weeks ago (1 children)

Because it's not AI, it's sophisticated pattern separation, recognition, lossy compression and extrapolation systems.

Artificial intelligence, like any intelligence, has goals and priorities. It has positive and negative reinforcements from real inputs.

That kind of AI will be possible when it's able to want something and decide something, with that decision based on entropy and not extrapolation.

[–] [email protected] 2 points 2 weeks ago (1 children)

Artificial intelligence, like any intelligence, has goals and priorities

No. Intelligence does not necessitate goals. You are able to understand math, letters, words, and the meaning of those without pursuing a specific goal.

Because it's not AI, it's sophisticated pattern separation, recognition, lossy compression and extrapolation systems.

And our brains work in a similar way.

[–] [email protected] 11 points 2 weeks ago (1 children)

I generally agree with your comment, but not on this part:

parroting the responses to questions that already existed in their input.

They're quite capable of following instructions over data where neither the instruction nor the data was anywhere in the training data.

They're completely incapable of critical thought or even basic reasoning.

Critical thought, generally no. Basic reasoning, that they're somewhat capable of. And chain of thought amplifies what little is there.

[–] [email protected] 2 points 2 weeks ago* (last edited 2 weeks ago)

I don't believe this is quite right. They're capable of following instructions that aren't in their data but that look like things which were (that is, they can probabilistically interpolate between what they've seen in training and what you prompted them with; this is why prompting can be so important).

Chain of thought is essentially automated prompt engineering: if the model has seen a similar process (e.g. from an online help forum or study materials), it can emulate that process with different keywords and phrases. The models themselves, however, are not able to perform "a is to b, therefore b is to a", arguably the cornerstone of symbolic reasoning. This is in part because they have no state model or true grounding, only probabilities of observing a token given some context.

So even with chain of thought, it is not reasoning; it's doing very fancy interpolation of the words and phrases in the initial prompt to generate a prompt that will probably give a better answer, not because of reasoning, but because of a stochastic process.
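(To make the "only probabilities of observing a token given some context" point concrete, a toy sketch; the vocabulary and the scores are invented for illustration and are not any real model's output:)

```python
# Toy next-token step: score candidate tokens for a context, softmax the
# scores into probabilities, and sample. No world model, just a distribution.
import numpy as np

context = "The capital of France is"
vocab = ["Paris", "London", "banana", "the"]
logits = np.array([9.1, 5.3, 0.2, 1.0])  # made-up scores for illustration

probs = np.exp(logits - logits.max())
probs /= probs.sum()                      # softmax over the tiny vocabulary

next_token = np.random.choice(vocab, p=probs)
print(dict(zip(vocab, probs.round(3))), "->", next_token)
```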

[–] [email protected] 5 points 2 weeks ago

Synthesis versus generation. Yes.

And I highly doubt that’ll ever be fixed because the brainrotten corporate middle-manager types that insist on implementing this shit won’t ever want their “state of the art AI chatbot” to answer a customer’s question with “sorry, I don’t know.”

It's a tower of Babel IRL.

[–] [email protected] 47 points 2 weeks ago (3 children)

Of course they don't; logical reasoning isn't just guessing which word or phrase comes next.

As much as some of these tech bros want human thinking and creativity to be reducible to mere pattern recognition, it isn't, and it never will be.

But the corpos and Capitalists don't care, because their whole worldview is based in the idea that humans are only as valuable as the profitability they generate for a company.

They don't see any value in poetry, or philosophy, or literature, or historical analysis, or visual arts unless it can be patented, trademarked, copyrighted, and sold to consumers at a good markup.

As if the only difference between Van Gogh's art and an LLM were the size of the sample data and the efficiency of an algorithm.

[–] [email protected] 18 points 2 weeks ago (1 children)

You don't have to get all philosophical, since the value of art is almost by definition debatable.

These models can't do basic logic. They already fail at this. And that's actually relevant to corpos if you can suddenly convince a chatbot to reduce your bill by 60% because bears don't eat mangos or some other nonsensical statement.

[–] [email protected] 7 points 2 weeks ago (4 children)

It's all connected: the reasons it can't do basic logical reasoning are the same reasons it can't replace human art.

It's because neither of those activities is mere pattern recognition and statistical inference, which is all LLMs will ever be.

[–] [email protected] 2 points 2 weeks ago

I'm just thinking - 12 years ago there was a lot of talk of politicians and big corpo chiefs being replaceable with a shell script. As both a joke and an argument in favor of something requiring change.

One could say the point was that these people are not needed: engineers can build their replacements.

In some sense, AI is politicians and big bosses trying to build a replacement for engineers, using the means available to them.

Maybe they noticed, got pissed, and are trying to enact revenge. Sort of a turf war between domains.

[–] [email protected] 18 points 2 weeks ago (1 children)

I work for a consulting company and they're truly going off the deep end pushing consultants to sell this miracle solution. They are now doing weekly product demos and all of them are absolutely useless hype grifts. It's maddening.

[–] [email protected] 3 points 2 weeks ago (1 children)

So... Just another Tuesday for consulting then?

[–] [email protected] 2 points 2 weeks ago

No. In the non-sales world, I've built some really cool solutions for clients.

[–] [email protected] 17 points 2 weeks ago* (last edited 2 weeks ago)

Apple's study proves that LLM-based AI models are flawed because they cannot reason

This really isn't a good title, I think. It was already understood that LLM-based models don't reason, at least not on their own.

A better one would be that researchers at Apple proposed a metric that better accounts for reasoning capability, a better sort of "score" for an AI.

[–] Timely_Jellyfish_2077 16 points 2 weeks ago
[–] [email protected] 16 points 2 weeks ago (1 children)

I still think it's better to refer to LLMs as "stochastic lexical indexes" than as AI.

[–] [email protected] 15 points 2 weeks ago (1 children)

AI in general is a shitty term. It's mostly PR. The term "intelligence" is very fuzzy and difficult to define, especially for people who are not in the field of machine learning.

[–] [email protected] 4 points 2 weeks ago (1 children)

So for those in ML it's easier?

[–] [email protected] 1 points 2 weeks ago

No it's not; that's why some smart people are starting by defining a more interesting concept: educability.

[–] [email protected] 16 points 2 weeks ago (1 children)

What, reasoning was an expected feature?

[–] [email protected] 11 points 2 weeks ago (3 children)

I still fail to see how people expect LLMs to reason. It's like expecting a slice of pizza to reason. That's just not what it does.

Although Porsche managed to make a car with the engine in the most idiotic place win literally everything on Earth, so I guess I'm leaving open a little possibility that the slice of pizza will out-reason GPT-4.

[–] Michal 3 points 2 weeks ago

LLMs keep getting better at imitating humans, so to those who don't know how the technology works, it will seem as if they think for themselves.

[–] [email protected] 8 points 2 weeks ago* (last edited 2 weeks ago) (2 children)
[–] [email protected] 3 points 2 weeks ago (1 children)

Your links give errors like this:
Unable to load conversation 670a...6ed2c

[–] [email protected] 2 points 2 weeks ago (1 children)

Sorry! I've updated my links now.

[–] [email protected] 3 points 2 weeks ago

"... So, Mary has 190 kiwifruit."
nice 😋🥝
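(Presumably a reference to the kiwi-counting problem from the paper, where an irrelevant aside that a few of the kiwis were smaller than average tempts models into subtracting them; the correct answer is just the plain sum of the three days' picks, e.g. 44 + 58 + 2 × 44 = 190 in the published version.)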

[–] [email protected] 3 points 2 weeks ago (1 children)

I wouldn't doubt that LLMs got some special input to deal with the specific examples in this paper, or similar-enough ones.

[–] [email protected] 5 points 2 weeks ago (1 children)
[–] [email protected] 3 points 2 weeks ago (1 children)

Water isn’t wet, water wets things, and watered things are wet by the wet but the water ain’t wet as it simply causes wet and thus water isn’t truly wet as water is pure water and pure water isn’t wet and water is not wet and water isn’t wet it’s not wet it’s not wet it’s not dry it’s not wet and it’s not wet it is wet it’s wet and you can see it is wet but it doesn’t look like it it’s dry it’s just wet and it’s wet so I just need it and it’s wet it’s not like it’s dry it’s wet it’s wet so it’s not dry but it’s wet it’s not wet so it’s wet it’s not dry and it’s not dry it’s wet and I just want you know how it was just to be careful that I just don’t know what to say I don’t know what you can tell him I just don’t

[–] [email protected] 6 points 2 weeks ago (1 children)

if water makes other things wet then most water is wet because it (usually) is surrounded by more water. qed

[–] [email protected] 2 points 2 weeks ago (2 children)

An alternative argument: Water generally makes things "wet" due to it forming hydrogen bonds with said things. Water also readily forms hydrogen bonds with itself. Therefore, water is wet.

[–] embed_me 5 points 2 weeks ago

AI could never

[–] [email protected] 4 points 2 weeks ago (1 children)

Do we know how human brains reason? Not really... Do we have an abundance of long chains of reasoning we can use as training data?

...no.

So we don't have the training data to get language models to talk through their reasoning then, especially not in novel or personable ways.

But also - even if we did, that wouldn't produce 'thought' any more than a book about thought can produce thought.

Thinking is relational. It requires an internal self-awareness. We can't discuss it in text so thoroughly that a book suddenly becomes conscious.

This is the idea that "sentience can't come from semantics"... more is needed than that.

[–] [email protected] 5 points 2 weeks ago

i like your comment here, just one reflection:

Thinking is relational. It requires an internal self-awareness.

i think it's like the chicken and the egg: they both come together... one could try to argue that self-awareness comes from thinking, in the fashion of "i think, therefore i am"

[–] [email protected] 3 points 2 weeks ago

@Timely_Jellyfish_2077 interesting read, thanks for sharing
