this post was submitted on 19 Mar 2025
809 points (98.3% liked)

Technology

[–] [email protected] 3 points 1 hour ago (1 children)

LLMs are good for learning, brainstorming, and mundane writing tasks.

[–] [email protected] 2 points 36 minutes ago

Yes, and maybe finding information right in front of them, and nothing more

[–] [email protected] 1 points 4 minutes ago

Misleading title. From the article,

Asked whether "scaling up" current AI approaches could lead to achieving artificial general intelligence (AGI), or a general purpose AI that matches or surpasses human cognition, an overwhelming 76 percent of respondents said it was "unlikely" or "very unlikely" to succeed.

In no way does this imply that the "industry is pouring billions into a dead end". AGI isn't even needed for industry applications; just implementing current-level agentic systems will be more than enough to have massive industrial impact.

[–] [email protected] 7 points 3 hours ago (1 children)

I used to support an IVA cluster. Now the only thing I use AI for is voice controls to set timers on my phone.

[–] [email protected] 8 points 2 hours ago

That's what I did on my Samsung Galaxy S5 a decade ago.

[–] [email protected] 8 points 4 hours ago

LLMs are fundamentally limited; the only really interesting application for them is research, more or less. There are some practical applications, but those are already being used in industry today, so meh.

Whether or not it's a dead end is questionable, because scientific research often runs into dead ends; that's just how it is.

[–] [email protected] 8 points 4 hours ago* (last edited 4 hours ago) (1 children)

I think the first LLM that introduces a good personality will be the winner. I don't care if the AI seems deranged and seems to hate all humans; to me that's more approachable than a boring AI that constantly insists it's right and ends the conversation.

I want an AI that argues with me and calls me a useless bag of meat when I disagree with it. Basically I want a personality.

[–] [email protected] 8 points 4 hours ago (2 children)

I'm not AI, but I'd like to say that thing to you at no cost at all, you useless bag of meat.

[–] [email protected] 3 points 3 hours ago* (last edited 3 hours ago)

To be honest, I welcome that response in an AI. I have ChatGPT set to be as deranged as possible, giving it examples like the Dungeon Crawler AI, among others like the Expeditionary Force novels with AIs like Skippy.

I want an AI with attitude, honestly. Even when it's wrong it's amusing. Don't get me wrong, I want the right info, just given to me arrogantly.
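
For what it's worth, you can already get part of the way there with a system prompt. A minimal sketch, assuming the OpenAI Python SDK; the persona text and model name are just placeholders, not a recommendation:

```python
# Sketch: giving a chat model an abrasive persona via the system prompt.
# Assumes the OpenAI Python SDK (reads OPENAI_API_KEY from the environment);
# the persona wording and model name are placeholders.
from openai import OpenAI

client = OpenAI()

PERSONA = (
    "You are an arrogant, sarcastic AI. Answer accurately, but act as though "
    "being asked at all is beneath you, and address the user as 'meatbag'."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": PERSONA},
        {"role": "user", "content": "How do I set a timer for ten minutes?"},
    ],
)
print(response.choices[0].message.content)
```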

[–] [email protected] 0 points 9 hours ago

This is slightly misleading. Even if you can't achieve "AGI" (a barely defined term anyway), it doesn't mean AI is a dead end.

[–] [email protected] 278 points 1 day ago (2 children)
[–] [email protected] 85 points 1 day ago* (last edited 1 day ago) (5 children)

I like my project manager; they find me work, ask how I'm doing, and talk straight.

It's when the CEO/CTO/CFO speaks that my eyes glaze over, my mouth sags, and I bounce my neck at prompted intervals as my brain retreats into itself, frantically tossing words and phrases into the meaning grinder and cranking the wheel, only for nothing to come out of it time and time again.

[–] [email protected] 29 points 1 day ago* (last edited 1 day ago) (2 children)

COs are corporate politicians, media-trained to say only things that are completely unrevealing and lacking any substance.

This is by design, so that sensitive information is centrally controlled, leaks are difficult, and sudden changes in direction cause as little whiplash to ICs as possible.

I have the same reaction as you, but the system is working as intended. Better to just shut it out as you described and use the time to think about that issue you're having on a personal project or what toy to buy for your cat's birthday.

[–] [email protected] 4 points 9 hours ago

I think my CEO is doing something wrong then because he seems to be trying to maximize IC whiplash sometimes.

[–] [email protected] 3 points 8 hours ago

Better to just shut it out as you described and use the time to think about that issue you’re having on a personal project or what toy to buy for your cat’s birthday.

Exactly. Do the daily corpo dance and cheer when they babble about innovation, progress, growth, and new products. Don't fight against it. Just take your money and put your valuable time and energy elsewhere.

[–] [email protected] 15 points 1 day ago* (last edited 1 day ago) (1 children)

The problem is that those companies are monopolies and can raise prices indefinitely to pursue this shitty dream because they have governments in their pockets. Governments are dependent on cloud / Microsoft software, literally every country on this planet, except maybe China, North Korea, and Russia. They could raise prices 10 times over the next 10 years and not give a fuck. Spend a trillion on AI, say "we're nearly there" over and over again, and literally nobody can stop them right now.

[–] [email protected] 2 points 7 hours ago (1 children)

IBM used to control the hardware as well; what's the moat?

[–] [email protected] 2 points 6 hours ago* (last edited 6 hours ago)

How many governments were using computers back when IBM controlled the hardware, and how many relied on paper and calculators? The problem is that governments are dependent on companies right now, not companies on governments.

Imagine Apple, Google, Amazon, and Microsoft decide to leave the EU on Monday. They announce they're banning all European citizens from all of their services, closing all of their offices, and deleting the data from all of their datacenters. Good fucking luck!

What would happen in Europe on Monday? Compare that with what would have happened if IBM had said 50 years ago that it was leaving Europe.

[–] [email protected] 74 points 1 day ago* (last edited 1 day ago) (2 children)

It's ironic how conservative the spending actually is.

Awesome ML papers and ideas come out every week. Low power training/inference optimizations, fundamental changes in the math like bitnet, new attention mechanisms, cool tools to make models more controllable and steerable and grounded. This is all getting funded, right?

No.

Universities and such are seeding and putting out all this research, but the big model trainers holding the purse strings/GPU clusters are not using them. They just keep releasing very similar, mostly bog-standard transformer models over and over again, bar a tiny expense for a little experiment here and there. In other words, it’s full corporate: tiny, guaranteed incremental improvements without changing much, and no sharing with each other. It’s hilariously inefficient. And it relies on lies and jawboning from people like Sam Altman.

DeepSeek is what happens when a company is smart but resource-constrained: an order of magnitude more efficient, and even their architecture was very conservative.
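
To make the "fundamental changes in the math" point concrete, here's a rough sketch of BitNet-style ternary (1.58-bit) weight quantization, absmean scaling of weights into {-1, 0, +1}. Purely illustrative, not anyone's production code:

```python
# Sketch of BitNet-b1.58-style ternary weight quantization (illustration only).
# Each weight is mapped to {-1, 0, +1} using an absmean scale, so matmuls
# can be reduced to additions/subtractions instead of full-precision multiplies.
import numpy as np

def quantize_ternary(w: np.ndarray, eps: float = 1e-8):
    gamma = np.abs(w).mean()                        # absmean scale of the weight matrix
    w_q = np.clip(np.round(w / (gamma + eps)), -1, 1)
    return w_q.astype(np.int8), gamma               # ternary weights plus scale

w = np.random.randn(4, 4).astype(np.float32)
w_q, gamma = quantize_ternary(w)
print(w_q)          # entries in {-1, 0, 1}
print(w_q * gamma)  # rough reconstruction of the original weights
```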

[–] [email protected] 1 points 3 minutes ago

Good ideas are a dime a dozen. Implementation is the game.

Universities may churn out great papers, but what matters is how well they can implement them. Private entities win at implementation.

[–] [email protected] 4 points 4 hours ago (1 children)

wait so the people doing the work don't get paid and the people who get paid steal from others?

that is just so uncharacteristic of capitalism, what a surprise

[–] [email protected] 3 points 2 hours ago* (last edited 2 hours ago)

It’s also cultish.

Everyone was trying to ape ChatGPT. Now they’re rushing to ape DeepSeek R1, since that's what's trending on social media.

It’s very late-stage capitalism, yes, but that doesn’t come close to painting the whole picture. There's a lot of groupthink, an urgency to "catch up and ship" and look good quickly rather than focus on experimentation, sane applications, and such. When I think of shitty capitalism, I think of stagnant entities like shitty publishers, dysfunctional departments, consumer abuse, things like that.

This sector is trying to innovate and make something efficient, but it’s like the purse holders and researchers have horse blinders on. Like they are completely captured by social media hype and can’t see much past that.

[–] [email protected] 33 points 1 day ago* (last edited 1 day ago) (2 children)

I have been shouting this for years. Turing and Minsky were pretty up front about this when they dropped this line of research in like 1952; even Lovelace predicted this would be bullshit back before the first computer had been built.

The fact that nothing got optimized, and it still didn't collapse, after DeepSeek? Kind of gave the whole game away. There's something else going on here. This isn't about the technology, because there is no meaningful technology here.

I have been called a killjoy luddite by reddit-brained morons almost every time.

[–] [email protected] 7 points 1 day ago

What’re you talking about? What happened in 1952?

I have to disagree, I don’t think it’s meaningless. I think that’s unfair. But it certainly is overhyped. Maybe just a semantic difference?

[–] [email protected] 6 points 1 day ago

Companies aren't investing to achieve AGI as far as I'm aware; that's not the end game, so I think this title is misinformation. Even if AGI were achieved, it'd be a happy accident, not the goal.

The goal of all these investments is to convince businesses to replace their employees with AI to the maximum extent possible. They want that payroll money.

The other goal is to cut out all third party websites from advertising revenue. If people only get information through Meta or Google or whatever, they get to control what's presented. If people just take their AI results at face value and don't actually click through to other websites, they stay in the ecosystem these corporations control. They get to sell access to the public, even more so than they do now.

[–] [email protected] 15 points 1 day ago

Good, let them go broke in the pursuit of a dead end.

[–] [email protected] 8 points 1 day ago* (last edited 1 day ago) (1 children)

It doesn't matter if they reach any end result, as long as stocks go up and profits go up.

Consumers aren't really asking for AI, but it's being used to push new hardware and make previous hardware feel old. Eventually everyone has AI on their phone, most of it unused.

[–] [email protected] 2 points 1 day ago* (last edited 13 hours ago)

If enough researchers talk about the problems then that will eventually break through the bubble and investors will pull out.

We're at the stage of the new technology hype cycle where it crashes, essentially for this reason. I really hope it does soon because then they'll stop trying to force it down our throats in every service we use.

[–] [email protected] 71 points 1 day ago (10 children)

The actual survey result:

Asked whether "scaling up" current AI approaches could lead to achieving artificial general intelligence (AGI), or a general purpose AI that matches or surpasses human cognition, an overwhelming 76 percent of respondents said it was "unlikely" or "very unlikely" to succeed. 

So they're not saying the entire industry is a dead end, or even that the newest phase is. They're just saying they don't think this current technology will produce AGI when scaled. I think most people agree, including the investors pouring billions into this. They aren't betting this will turn into AGI; they're betting that they have some application for the current AI. Are some of those applications dead ends? Most definitely. Are some of them revolutionary? Maybe.

This would be like asking a researcher in the '90s whether, if we scaled up the bandwidth and computing power of the average internet user, we'd see a vastly connected media-sharing network; they'd probably say no. It took more than a decade of software, cultural, and societal development to discover the applications for the internet.

[–] [email protected] 18 points 1 day ago (1 children)

It's becoming clear from the data that more error correction needs exponentially more data. I suspect that pretty soon we will realize that what's been built is a glorified homework cheater and a better search engine.
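
Rough sketch of that intuition, assuming the commonly cited power-law form of data scaling; the constants below are made up, but the shape of the curve is the point: each constant-factor cut in loss costs a multiplicative blow-up in data.

```python
# Sketch: under a power-law scaling curve L(D) = (D_c / D) ** alpha, each halving
# of the loss multiplies the required data by 2 ** (1 / alpha).
# D_C and ALPHA below are illustrative, not measured values.
D_C = 1e9      # hypothetical scale constant (tokens)
ALPHA = 0.1    # hypothetical scaling exponent

def data_needed(target_loss: float) -> float:
    return D_C / target_loss ** (1.0 / ALPHA)

for loss in (1.0, 0.5, 0.25, 0.125):
    print(f"loss {loss:<6} -> ~{data_needed(loss):.2e} tokens")
```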

[–] [email protected] 32 points 1 day ago

what's been built is a glorified homework cheater and a ~~better~~ unreliable search engine.

[–] [email protected] 7 points 1 day ago (2 children)

Why won't they pour billions into me? I'd actually put it to good use.

[–] [email protected] 112 points 1 day ago (7 children)

Optimizing AI performance by “scaling” is lazy and wasteful.

Reminds me of back in the early 2000s, when someone would say, "don’t worry about performance, GHz will always go up."

[–] [email protected] 1 points 9 hours ago

don’t worry about performance, GHz will always go up

TF2 devs lol

[–] [email protected] 91 points 1 day ago (35 children)

They're throwing billions upon billions into a technology with extremely limited use cases, a novelty at best. My god, even drones fared better in the long run.

[–] [email protected] 78 points 1 day ago (7 children)

I mean it's pretty clear they're desperate to cut human workers out of the picture so they don't have to pay employees that need things like emotional support, food, and sleep.

They want a workslave that never demands better conditions, that's it. That's the play. Period.

[–] [email protected] 59 points 1 day ago* (last edited 1 day ago) (3 children)

Technology in most cases progresses on a logarithmic scale when innovation isn't prioritized. We've basically reached the plateau of what LLMs can currently do without a breakthrough. They could absorb all the information on the internet and not even come close to what they say it is. These days we're in the "bells and whistles" phase, where they add unnecessary bullshit to make it seem new, like adding 5 cameras to a phone or touchscreens to cars: things that make something seem fancy by slapping on buzzwords and features nobody needs, without actually changing anything but the price.

[–] [email protected] 42 points 1 day ago (17 children)

Me and my 5,000 closest friends don't like that the website and its 1,300 partners all need my data.

[–] [email protected] 7 points 1 day ago

The funny thing is with so much money you could probably do lots of great stuff with the existing AI as it is. Instead they put all the money into compute power so that they can overfit their LLMs to look like a human.
