this post was submitted on 21 Sep 2024
47 points (79.0% liked)

Asklemmy

43418 readers
2494 users here now

A loosely moderated place to ask open-ended questions


If your post meets the following criteria, it's welcome here!

  1. Open-ended question
  2. Not offensive: at this point, we do not have the bandwidth to moderate overtly political discussions. Assume best intent and be excellent to each other.
  3. Not about using Lemmy or seeking support for it; see the dedicated support communities and community-finding tools instead
  4. Not ad nauseam inducing: please make sure it is a question that would be new to most members
  5. An actual topic of discussion



founded 5 years ago

Wondering whether modern LLMs like GPT-4, Claude Sonnet, and Llama 3 are closer to human intelligence or to next-word predictors. Also not sure if this graph is the right way to visualize it.

top 50 comments
[–] [email protected] 7 points 11 hours ago

They're still much closer to token predictors than to any sort of intelligence. Even the latest models "with reasoning" still can't answer basic questions most of the time and just end up spitting back an answer straight out of some SEO blogspam. If a model has never seen the answer anywhere in its training dataset, it's completely incapable of coming up with the correct one.

Such a massive waste of electricity for barely any tangible benefits, but it sure looks cool and VCs will shower you with cash for it, as they do with all fads.

[–] [email protected] 3 points 12 hours ago (1 children)

You're trying to graph something that you can't quantify.

You're also assuming that "next word predictor" and "intelligence" are a tradeoff. They could just as well be the same thing.

[–] [email protected] 0 points 12 hours ago

I agree, people who think LLMs are intelligent are as smart as phone keyboard autocomplete

[–] [email protected] 8 points 18 hours ago (5 children)

Human intelligence is a next word predictor.

Change my mind.

[–] [email protected] 1 points 19 minutes ago

Your face is a next word predictor.

[–] [email protected] 1 points 9 hours ago

What about people who don't speak any language? (Raised by wolves, etc.)

[–] [email protected] 2 points 12 hours ago

It could be.

I think intelligence is ill-defined and immeasurable, so I don't think it can be quantified and fit into a graph.

[–] [email protected] 3 points 17 hours ago

I think you point out the main issue here. What even is intelligence, as defined by this axis? IQ, which famously doesn't measure intelligence so much as predict future academic performance?

[–] [email protected] 1 points 16 hours ago (2 children)

Human intelligence created language. We taught it to ourselves. That's a higher order of intelligence than a next word predictor.

[–] Sl00k 2 points 12 hours ago

I can't seem to find it now, but there was a research paper floating around about two GPT models designing a language to use between themselves for token efficiency while still relaying all the information, which is pretty wild.

Not sure if it was peer reviewed though.

[–] [email protected] 2 points 16 hours ago (1 children)

That's like treating the "which came first, the chicken or the egg" question as a serious question.

[–] [email protected] 1 points 15 hours ago

Eggs existed long before chickens evolved.

[–] [email protected] 1 points 12 hours ago

Are you interested in this from a philosophical perspective or from a practical perspective?

From a philosophical perspective:

It depends on what you mean by "intelligent". People have been thinking about this for millennia and have come up with different answers. Pick your preference.

From a practical perspective:

This is where it gets interesting. I don't think we'll have a moment where we say "ok, now the machine is intelligent". Instead, it will just gradually take over more and more jobs by getting good at more and more tasks, and in the end it will have taken over a lot of human jobs. I think people don't like to hear this out of fear of unemployment and the like, but I think it's a realistic outcome.

[–] [email protected] 1 points 14 hours ago* (last edited 11 hours ago)

Wondering if Modern LLMs like GPT4, Claude Sonnet and llama 3 are closer to human intelligence or next word predictor.

They are good at sounding intelligent. But, LLMs are not intelligent and are not going to save the world. In fact, training them is doing a measurable amount of damage in terms of GHG emissions and potable water expenditure.

[–] [email protected] 1 points 16 hours ago (1 children)

I hold a very strong hypothesis, which I've not seen any data contradict yet, that intelligence is only possible with formal language and symbolics, and that formal language and intelligence are therefore very hard to separate. I don't think one created the other; they evolved together.

[–] [email protected] 1 points 12 hours ago

Yeah, I think the human brain is a vehicle for "mind viruses": scripts and ideas.

[–] [email protected] 5 points 23 hours ago* (last edited 23 hours ago)

the entire thing is an illusion. what is someone supposed to do with this graph

[–] [email protected] 64 points 1 day ago (9 children)

That's literally how LLMs work; they quite literally are just next word predictors. There is zero intelligence to them.

It's literally: while the token is not "stop", predict the next token.

It's just that they are pretty good at predicting the next token, so it feels like intelligence.

So on your graph, it would be a vertical line at 0.
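The loop that comment describes can be sketched in a few lines. This is a toy illustration with a made-up lookup-table "model", not any real LLM API; real models condition on the whole context with a neural network, but the outer loop has the same shape:

```python
# Toy illustration of "while token != stop, predict next token".
# The "model" is just a lookup table mapping the last token to the
# most likely next one; real LLMs use a neural network over the
# whole context, but the generation loop is the same.

TOY_MODEL = {
    "<start>": "the",
    "the": "cat",
    "cat": "sat",
    "sat": "<stop>",
}

def generate(model, max_tokens=10):
    tokens = ["<start>"]
    while len(tokens) < max_tokens:
        next_token = model[tokens[-1]]   # "predict" the next token
        if next_token == "<stop>":       # stop token ends generation
            break
        tokens.append(next_token)
    return " ".join(tokens[1:])

print(generate(TOY_MODEL))  # the cat sat
```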

[–] [email protected] 11 points 1 day ago (1 children)

What is intelligence though? Maybe I'm getting through life just by being pretty good at predicting what to say or do next...

[–] [email protected] 10 points 1 day ago (1 children)

yeah yeah, I've heard this argument before. "What is learning if not like training." I'm not going to define it here. It doesn't "think". It doesn't have nuance. It is simply a prediction engine. A very good prediction engine, but that's all it is. I spent several months of unemployment teaching myself the ins and outs, developing against LLMs, training a few of my own. I'm very aware that it is not intelligence. It's a very clever trick, and it's easy to fool people into thinking it's intelligence, but it's not.

[–] [email protected] 1 points 14 hours ago (1 children)

But how do you know that the human brain is not just a super sophisticated next-thing predictor that by being super sophisticated manages to incorporate nuance and all that stuff to actually be intelligent? Not saying it is but still.

[–] [email protected] 2 points 14 hours ago

Because we have reason and understanding. Take something as simple as the XY problem. Humans understand that there are nuances to prompts and questions. I like the XY problem as an example because a human knows to step back and ask "what are you really trying to do?". AI doesn't have that capability; it doesn't have the reasoning to say "maybe your approach is wrong".

So, I'm not the one to define what it is or on what scale. But I can say that it's not human intelligence.

load more comments (8 replies)
[–] [email protected] 3 points 22 hours ago

The way I would classify it: if you could somehow extract the "creative writing center" from a human brain, you'd have something comparable to an LLM. But LLMs lack all the other bits (reason, learning, and memory), or only badly imitate them.

If you were to combine multiple AI algorithms similar in power to LLMs but designed to do math, logic, and reasoning, and then add some kind of memory, you'd probably get much further towards AGI. I don't believe we're as far from this as people want to believe, and I think sentience is on a scale.

But it would still not be anchored to reality without some control over a camera and the ability to see and experience reality for itself. Even then it wouldn't understand empathy as anything but an abstract concept.

My guess is that eventually we'll create a kind of "AGI compiler" with a prompt to describe what kind of mind you want to create, and the AI compiler generates it. A kind of "nursing AI". Hopefully it's not about profit, but a prompt about it learning to be friends with humans and genuinely enjoy their company and love us.

[–] [email protected] 42 points 1 day ago

Intelligence is a measure of reasoning ability. LLMs do not reason at all, and therefore cannot be categorized in terms of intelligence at all.

LLMs have been engineered such that they can generally produce content that bears a resemblance to products of reason, but the process by which that's accomplished is a purely statistical one with zero awareness of the ideas communicated by the words they generate and therefore is not and cannot be reason. Reason is and will remain impossible at least until an AI possesses an understanding of the ideas represented by the words it generates.

[–] [email protected] 24 points 1 day ago* (last edited 1 day ago)

There's a preprint paper out that claims to prove that the technology used in LLMs will never be able to be extended to AGI, due to the exponentially increasing demand for resources they'd require. I don't know enough formal CS to evaluate their methods, but to the extent I understand their argument, it is compelling.

[–] [email protected] 40 points 1 day ago (9 children)

They’re still word predictors. That is literally how the technology works

load more comments (9 replies)
[–] [email protected] 14 points 1 day ago* (last edited 1 day ago) (1 children)

Shouldn't those be opposite sides of the same axis, not two different axes? I'm not sure how this graph should work.

[–] Timely_Jellyfish_2077 2 points 23 hours ago

It could have both abilities, right?

[–] [email protected] 11 points 1 day ago (5 children)

I think the real differentiation is understanding. AI still has no understanding of the concepts it knows. If you show a human a few dogs, they will likely be able to pick out any other dog with 100% accuracy once they understand what a dog is. With AI it's still just statistical models that can easily be fooled.

[–] [email protected] 7 points 1 day ago (4 children)

I disagree here. Dog breeds are so diverse that there's no way you could show someone pictures of a few dogs and have them pick out every other dog while also ruling out other dog-like creatures. Especially not with 100 percent accuracy.

load more comments (4 replies)
load more comments (4 replies)
[–] [email protected] 13 points 1 day ago (8 children)

Somewhere on the vertical axis. 0 on the horizontal. The AGI angle is just to attract more funding. We are nowhere close to figuring out the first steps towards strong AI. LLMs can do impressive things and have their uses, but they have nothing to do with AGI

load more comments (8 replies)
[–] [email protected] 21 points 1 day ago (4 children)

i think the first question to ask of this graph is: if "human intelligence" is 10, what is 9? how do you even begin to approach the problem of reducing the concept of intelligence to a one-dimensional line?

the same applies to the y-axis here. how is something "more" or "less" of a word predictor? LLMs are word predictors. that is their entire point. so are markov chains. are LLMs better word predictors than markov chains? yes, undoubtedly. are they more of a word predictor? um...
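For comparison, a bigram Markov chain word predictor can be written in a dozen lines; the names here are illustrative, not from any particular library. It does exactly the same job as an LLM's next-word prediction, just from raw bigram counts instead of a learned model over the whole context:

```python
import random
from collections import defaultdict

# A bigram Markov chain: predict the next word using only the
# previous word's co-occurrence counts in the training text.

def train_bigrams(text):
    counts = defaultdict(list)
    words = text.split()
    for prev, cur in zip(words, words[1:]):
        counts[prev].append(cur)
    return counts

def predict_next(counts, word, rng=random):
    candidates = counts.get(word)
    return rng.choice(candidates) if candidates else None

bigrams = train_bigrams("the dog chased the cat and the cat ran")
print(predict_next(bigrams, "dog"))  # chased
```

An LLM is "better" at this task because it conditions on far more context with far more parameters, but whether that makes it "more of a word predictor" is exactly the ambiguity the comment points at.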


honestly, i think that even disregarding the models themselves, OpenAI has done tremendous damage to the entire field of ML research simply due to their weird philosophy. the e/acc stuff makes them look like a cult, but it matches the normie understanding of what AI is "supposed" to be, and so it makes it really hard to talk about the actual capabilities of ML systems. i prefer to use the term "applied statistics" when giving intros to AI now, because the well is already well and truly poisoned.

load more comments (4 replies)