this post was submitted on 27 Dec 2024
377 points (95.6% liked)

Technology

[–] [email protected] 322 points 1 week ago (5 children)

AGI (artificial general intelligence) will be achieved once OpenAI has developed an AI system that can generate at least $100 billion in profits

Nothing to do with actual capabilities... just the ability to make piles and piles of money.

[–] [email protected] 102 points 1 week ago

The same way these capitalists evaluate human beings.

[–] [email protected] 48 points 1 week ago (19 children)

Guess we're never getting AGI then; there's no way they end up with that much profit before this whole AI bubble collapses and their value plummets.

[–] [email protected] 27 points 1 week ago

That's Onion-level capitalism.

[–] [email protected] 20 points 1 week ago* (last edited 1 week ago) (1 children)

The context here is that OpenAI has a contract with Microsoft until they reach AGI. So it's not a philosophical term but a business one.

[–] [email protected] 15 points 1 week ago (1 children)

Right, but that's not interesting to anyone but themselves. So why call it AGI? Why not just say that once the company has made over X amount of money, it gets split off into a separate company? Why lie and say you've developed something that you might not have developed?

[–] [email protected] 7 points 1 week ago* (last edited 1 week ago)

Honestly, I agree. $100 billion in profit is incredibly impressive and would overtake basically any other software company in the world, but alas, it doesn't have anything to do with "AGI". For context, Apple's net income is about $90 billion this year.

I've listened to enough interviews to know that all of the AI leaders want the holy-grail title of "inventor of AGI" more than anything else, so I don't think the definition will ever be settled collectively until something so mind-blowing exists that it would render the definition moot either way.

[–] Mikina 183 points 1 week ago (47 children)

Lol. We're as far away from getting to AGI as we were before the whole LLM craze. It's just glorified statistical text prediction: no matter how much data you throw at it, it will still just guess the next most likely letter/token based on what came before it, and it can't even get its facts straight without bullshitting.

If we ever get it, it won't be through LLMs.

I hope someone will finally mathematically prove that it's impossible with current algorithms, so we can finally be done with this bullshitting.
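The "guess the next most likely token" point above can be made concrete with a toy sketch (a tiny bigram counter, nothing remotely like a real LLM's architecture, but the same "most likely continuation" principle):

```python
from collections import Counter, defaultdict

# Toy illustration: a bigram model that literally picks "the next most
# likely token based on what's before it". Note it has no notion of truth,
# only of frequency in its training text.
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(token):
    """Return the statistically most likely next token after `token`."""
    return following[token].most_common(1)[0][0]

print(predict_next("the"))  # "cat", because "cat" followed "the" most often
```

Scaled up by many orders of magnitude and with context windows instead of a single previous word, this is the statistical-prediction objective the comment is describing.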

[–] [email protected] 37 points 1 week ago

There are already a few papers about diminishing returns in LLMs.

[–] [email protected] 27 points 1 week ago (4 children)

I hope someone will finally mathematically prove that it's impossible with current algorithms, so we can finally be done with this bullshitting.

They did! Here's a paper that proves basically that:

van Rooij, I., Guest, O., Adolfi, F. et al. Reclaiming AI as a Theoretical Tool for Cognitive Science. Comput Brain Behav 7, 616–636 (2024). https://doi.org/10.1007/s42113-024-00217-5

Basically, it formalizes the proof that training any black-box algorithm on a finite universe of human outputs to prompts, such that it can take any finite input and produce an output that seems plausibly human-like, is an NP-hard problem. And NP-hard problems at that scale are intractable: they can't be solved using the resources available in the universe, even with perfect/idealized algorithms that haven't yet been invented.

This isn't a proof that AI is impossible, just that the method to develop an AI will need more than just inferential learning from training data.
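To get a feel for why "intractable at that scale" bites (this is just a back-of-the-envelope scale illustration, not the paper's actual proof): even with a modest vocabulary and short prompts, the space of distinct prompts alone dwarfs the number of atoms in the observable universe (~10^80).

```python
import math

# Assumed ballpark figures, not from the paper: a 50k-token vocabulary
# (typical of modern tokenizers) and prompts of just 20 tokens.
V = 50_000   # vocabulary size
L = 20       # prompt length in tokens

distinct_prompts = V ** L
print(math.log10(distinct_prompts))  # ~94, i.e. ~10^94 distinct prompts
```

A learner that had to be checked against any meaningful fraction of that input space exhausts physical resources long before human-scale language, which is the flavor of intractability the NP-hardness result formalizes.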

[–] [email protected] 16 points 1 week ago (2 children)

The only text predictor I want in my life is T9

[–] [email protected] 14 points 1 week ago (2 children)

I just tried Google Gemini and it would not stop making shit up, it was really disappointing.

[–] [email protected] 10 points 1 week ago (3 children)

Roger Penrose wrote a whole book on the topic in 1989. https://www.goodreads.com/book/show/179744.The_Emperor_s_New_Mind

His points are well thought out and argued, but my essential takeaway is that a series of switches is never going to create a sentient being. The idea is absurd to me, but the people who disagree? They have no proof, just a religious fervor, a fanaticism. Simply stated, they want to believe.

All the AI of today is the AI of the 1980s, just with more transistors than we could fathom back then; the ideas are the same. After the massive surge from our technology finally catching up with 40-to-60-year-old concepts and algorithms, almost everything since has been just adding much more data, generalizing models, and other tweaks.

What is a problem is the complete lack of scalability and the massive energy consumption. We're supposed to dry our clothes at a specific hour of the night, join smart grids to reduce peak air conditioning, and scorn Bitcoin because it uses too much electricity, but for an AI that generates images of people with six fingers and other mangled appendages, and that bullshits about anything it doesn't know, we need to build nuclear power plants everywhere? It's sickening, really.

So no AGI anytime soon, but I am sure Altman has defined it as anything that can make his net worth $1 billion or more, no matter what he has to say or do.

[–] [email protected] 7 points 1 week ago

a series of switches is not ever going to create a sentient being

Is the goal to create a sentient being, or to create something that seems sentient? How would you even tell the difference (assuming it could pass any test a normal human could)?

[–] suy 9 points 1 week ago

Lol. We're as far away from getting to AGI as we were before the whole LLM craze. It's just glorified statistical text prediction: no matter how much data you throw at it, it will still just guess the next most likely letter/token based on what came before it, and it can't even get its facts straight without bullshitting.

This is correct, and I don't think many serious people disagree with it.

If we ever get it, it won’t be through LLMs.

Well... it depends. LLMs alone, no. But the researchers working on solving the ARC-AGI challenge are using LLMs as a basis. The entry that won this year is open source (all eligible entries are, since they need to run on the private data set) and was based on Mixtral. The "trick" is that they do more than that: all the attempts do extra compute at test time, so they can try to go beyond what their training data alone allows. The key to generality is learning after you've been trained, to solve something you've not been prepared for.

Even OpenAI's o1 and o3 do that, and so does the one Google released recently. They still lean heavily on an LLM, but they do more.

I hope someone will finally mathematically prove that it’s impossible with current algorithms, so we can finally be done with this bullshiting.

I'm not sure whether it's already proven or provable, but I think this much is generally agreed: deep learning alone will fit a very complex curve/manifold/etc., but nothing more. It can't go beyond what it was trained on. The approaches aimed at generalizing all seem to do more than that: search, program synthesis, or whatever.

[–] [email protected] 8 points 1 week ago* (last edited 1 week ago)

I mean, human intelligence is ultimately also "just" something.

And 10 years ago people would often refer to the "Turing test" and imitation games when discussing what is artificial intelligence and what is not.

My complaint about what's now called AI is that it's as similar to intelligence as skin cells grown in the shape of a d*ck are to a real d*ck, with all its complexity. Or as a full-size toy building is to a real building.

But I disagree that this technology will not be present in a real AGI if it's achieved. I think that it will be.

[–] [email protected] 7 points 1 week ago (1 children)

I'm not sure that not bullshitting should be a strict criterion of AGI, if whether or not it's been achieved is gauged by its capacity to mimic human thought.

[–] [email protected] 15 points 1 week ago (2 children)

LLMs aren't bullshitting. They can't lie, because they have no concepts at all. To the machine, the words are just numerical values with no meaning.
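That "just numerical values" point is literal: the first step in any LLM pipeline maps words to integer IDs. A toy sketch (made-up vocabulary and IDs, not any real tokenizer):

```python
# To the model, "love" and "hate" are no more related than any two row
# indices in a lookup table; whatever "meaning" later emerges is statistical
# association between IDs, not understanding of the words themselves.
vocab = {"love": 1042, "hate": 1043, "the": 5, "truth": 2891}

def encode(text):
    """Map whitespace-separated words to their integer IDs."""
    return [vocab[w] for w in text.split()]

print(encode("love the truth"))  # [1042, 5, 2891]: this is all the model sees
```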

[–] [email protected] 9 points 1 week ago* (last edited 1 week ago) (12 children)

Just for the sake of playing a stoner-epiphany style of devil's advocate: how does that differ from how actual logical arguments are proven? Hell, why stop there: isn't everything in the universe reducible to a mathematical equation in physics or chemistry? I'm curious how different the process is between an advanced LLM or AGI model processing data and a severe savant memorizing libraries of books using home-made mathematical algorithms. I know it's a leap and I could be wrong, but I thought I've heard that some of the rainmaker tier of savants actually process every experience in a mathematical language.

Like I said at the beginning, this is straight-up bong-rips philosophy, and I haven't looked up any of the shit I brought up.

I will say, though, I genuinely think the whole LLM thing is without a doubt one of the most amazing advances in technology since the internet. That said, I also agree that it has a niche it will be useful within. The problem is that everyone and their slutty mother investing in LLMs is using them for everything they are not useful for, and we won't see any effective use of AI services until all the current idiots realize they poured hundreds of millions of dollars into something that can't perform any more independently than a 3-year-old.

[–] [email protected] 78 points 1 week ago (4 children)

We taught sand to do math

And now we're teaching it to dream

All the stupid fucks can think to do with it

Is sell more cars

[–] [email protected] 17 points 1 week ago

Cars, and snake oil, and propaganda

[–] [email protected] 55 points 1 week ago* (last edited 1 week ago)

"It's at a human-level equivalent of intelligence when it makes enough profits" is certainly an interesting definition and, in the case of the C-suiters, possibly not entirely wrong.

[–] [email protected] 54 points 1 week ago (9 children)

We've had a definition for AGI for decades. It's a system that can do any cognitive task as well as a human can, or better. Humans are "generally intelligent"; replicate the same thing artificially and you've got AGI.

[–] [email protected] 15 points 1 week ago (3 children)

So if you give a human and a system 10 tasks, and the human completes 3 correctly, 5 incorrectly, and fails to complete 3 altogether... and then you give those 10 tasks to the software and it does 9 correctly and fails to complete 1, what does that mean? In general I'd say the tasks need to be defined; I can give people very many tasks right now that language models can solve and they can't, but language models aren't "AGI" in my opinion.

[–] [email protected] 8 points 1 week ago (4 children)

Agreed. And these tasks can't be tailored to the AI in order for it to have a chance. It needs to drive to work, fix the computers/plumbing/whatever there, earn a decent salary, and return with some groceries and cook dinner. Or at least do something comparable to a human. Just wording emails and writing boilerplate code isn't enough in my eyes, especially since it even struggles to do that. It's the "general" that is missing.

[–] [email protected] 7 points 1 week ago* (last edited 1 week ago) (31 children)

It's a definition, but not an effective one, in the sense that we can't test for and recognize it. Can we list all the cognitive tasks a human can do? To avoid testing a probably infinite list, we should instead understand the basic cognitive abilities of humans that compose all the other cognitive abilities we have, if that's even possible. Like the equivalent of a Turing machine, but for human cognition. The Turing machine is based on a finite list of mechanisms and is considered the ultimate computer (in the classical sense of computing, but with potentially infinite memory). But we know too little about whether the limits of the Turing machine are also the limits of human cognition.

[–] [email protected] 45 points 1 week ago (1 children)

This is just so they can announce at some point in the future that they've achieved AGI to the tune of billions in the stock market.

Except that it isn't AGI.

[–] [email protected] 21 points 1 week ago* (last edited 1 week ago) (1 children)

But OpenAI has received more than $13 billion in funding from Microsoft over the years, and that money has come with a strange contractual agreement that OpenAI would stop allowing Microsoft to use any new technology it develops after AGI is achieved

The real motivation is to not be beholden to Microsoft

[–] [email protected] 30 points 1 week ago* (last edited 1 week ago) (7 children)

That's not a bad way of defining it, as far as totally objective definitions go. $100 billion is more than the current net income of all of Microsoft. It's reasonable to expect that an AI which can do that is better than a human being (in fact, better than 228,000 human beings) at everything that matters to Microsoft.
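The arithmetic behind that comparison is quick to check (figures are ballpark: roughly Microsoft's FY2024 net income and the headcount the comment cites, not official numbers):

```python
# Assumed rough figures: Microsoft FY2024 net income ~$88B, ~228k employees.
msft_net_income = 88e9
msft_headcount = 228_000
agi_profit_threshold = 100e9  # the reported contractual AGI threshold

# Net income per employee, i.e. the human baseline being compared against.
per_employee = msft_net_income / msft_headcount
print(round(per_employee))  # roughly $386k of profit per employee

# The threshold exceeds the entire company's net income, hence the claim
# that one system would out-earn the whole 228k-person workforce.
print(agi_profit_threshold > msft_net_income)  # True
```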

[–] brie 18 points 1 week ago (4 children)

Good observation. Could it be that Microsoft lowers profits by including unnecessary investments like acquisitions?

So it'd take 100M users signing up for the $200/mo plan. All it'd take is for the US government to issue vouchers for video generators to encourage everyone to become a YouTuber instead of being unemployed.
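For what it's worth, the subscriber math works out like this (pure arithmetic; the margin figure is an assumption needed to reconcile the 100M number with the $100B target):

```python
# $200/month plan, annual profit target of $100B.
price_per_month = 200
annual_revenue_per_user = price_per_month * 12  # $2,400 per user per year
target_profit = 100e9

# At an (unrealistic) 100% profit margin:
users_at_full_margin = target_profit / annual_revenue_per_user
print(round(users_at_full_margin))  # 41666667, i.e. ~41.7M subscribers

# The 100M-user figure implies a profit margin of roughly 40%,
# which is the slack the comment is implicitly allowing for costs.
implied_margin = users_at_full_margin / 100e6
print(round(implied_margin, 2))  # ~0.42
```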

[–] [email protected] 22 points 1 week ago

So they don't actually have a definition of AGI; they just have a point at which they're going to announce it, regardless of whether it actually is AGI or not.

Great.

[–] [email protected] 19 points 1 week ago (1 children)

I'm gonna laugh when Skynet comes online, runs the numbers, and finds that the country's starvation issues can be solved by feeding the rich to the poor.

[–] [email protected] 10 points 1 week ago (3 children)

It would be quite the trope inversion if people sided with the AI overlord.

[–] [email protected] 18 points 1 week ago* (last edited 1 week ago) (4 children)

Why does OpenAI "have" everything and just sit on it, instead of writing a paper or something? They have a watermarking solution that could help make the world a better place and get rid of some of the slop out there... They have a definition of AGI... Yet they release none of it...

Some people even claim they already have a secret AGI. Or that ChatGPT 5 will surely be it. I can see how that increases the company's value, and why you'd better not tell the truth. But with all the other things, it's just silly not to share anything.

Either they're even more greedy than the Metas and Googles out there, or all the articles and "leaks" are just unsubstantiated hype.

[–] [email protected] 26 points 1 week ago

Because OpenAI is anything but open. And they make money selling the idea of AI without actually having AI.

[–] [email protected] 21 points 1 week ago

Because they don't have all the things they claim to have, or they have them only with significant caveats. These things are publicized to fuel the hype which attracts investor money. It's pretty much the only way they can generate money, since running the business is unsustainable and the next-gen hardware did not magically solve this problem.

[–] [email protected] 16 points 1 week ago

Does anyone have a real link to the non-stalkerware version of:

https://www.theinformation.com/articles/microsoft-and-openais-secret-agi-definition

...and the only place with the reference this article claims to cite but doesn't quote?
