this post was submitted on 05 Sep 2024

TechTakes

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

https://www.reuters.com/technology/artificial-intelligence/openai-co-founder-sutskevers-new-safety-focused-ai-startup-ssi-raises-1-billion-2024-09-04/

http://web.archive.org/web/20240904174555/https://ssi.inc/

I have nothing witty or insightful to say, but figured this probably deserved a post. I flipped a coin between sneerclub and techtakes.

They aren't interested in anything besides "superintelligence", which strikes me as an optimistic business strategy. If you are "cracked" you can join them:

We are assembling a lean, cracked team of the world’s best engineers and researchers dedicated to focusing on SSI and nothing else.

top 23 comments
[–] [email protected] 29 points 1 month ago (1 children)

ah yeah, 10 employees and “worth” $5 billion, utterly normal bubble shit

Sutskever was an early advocate of scaling, a hypothesis that AI models would improve in performance given vast amounts of computing power. The idea and its execution kicked off a wave of AI investment in chips, data centers and energy, laying the groundwork for generative AI advances like ChatGPT.

but don’t sweat it, the $1 billion they raised is going straight to doing shit that doesn’t fucking work but does fuck up the environment, trying to squeeze more marginal performance gains out of systems that plateaued when they sucked up all the data on the internet (and throwing money at these things not working isn’t even surprising, given a tiny amount of CS knowledge)

[–] [email protected] 16 points 1 month ago (2 children)

I don't get it. If scaling is all you need, what does a "cracked team" of 5 mean in the end? Nothing?

What's the difference between superintelligence being scaling, and superintelligence being whatever happens? Can someone explain to me the difference between what is and what SUPER is? When someone gives me the definition of superintelligence as "the power to make anything happen," I always beg, again, "and how is that different, precisely, from not that?"

The whole project is tautological.

[–] [email protected] 8 points 1 month ago

I'm just amused that their scaling program doesn't scale properly, due to the hungry hungry AI needing more and more data.

[–] [email protected] 4 points 1 month ago (2 children)

Superintelligence is an AI meaningfully beyond human capability.

It pretty obviously can't be achieved by brute forcing something already way past diminishing returns, though.

[–] [email protected] 6 points 1 month ago

just a few billion more bro

[–] [email protected] 3 points 1 month ago (2 children)

I'm actually not convinced that AI meaningfully beyond human capability makes any sense, either. The most likely thing is that after stopping the imitation game, an AI developed further would just… have different goals than us. Heck, it might not even look intelligent at all to half of human observers.

For instance, does the Sun count as a super intelligence? It has far more capability than any human, or humanity as a whole, on the current time scale.

[–] [email protected] 6 points 1 month ago

No, but the moon does.

[–] [email protected] 5 points 1 month ago

Capability is such a vague word, and intelligence isn't all that much better defined.

[–] [email protected] 17 points 1 month ago (1 children)

Eliezer is skeptical, can find a flaw in any alignment strategy in 2 minutes: https://x.com/ESYudkowsky/status/1803676608320192617

If you have an alignment plan I can't shoot down in 120 seconds, let's hear it. So far you have not said anything different from the previous packs of disaster monkeys who all said exactly this almost verbatim, but I'm open to hearing better.

[–] [email protected] 23 points 1 month ago (2 children)

is… is yud one of the disaster monkeys? or are we supposed to forget he spent a bunch of years running and renaming an institute that tried and failed to do this exact same alignment grift?

[–] [email protected] 5 points 1 month ago

yud is the uniquely capable person in this area. anyone who even sets foot in it should make groveling to him a high priority. these people are disaster monkeys because they aren't doing that

[–] [email protected] 12 points 1 month ago (2 children)

you know a company is very serious when it uses game balance terminology to describe its HR practices. "I'm sorry, we're going to have to nerf your salary"

[–] [email protected] 11 points 1 month ago (2 children)

“Your wages feel overtuned.”

“We’re thinking of reworking your position.”

“This company is in beta. Your position is not fixed and might not be in the final product.”

Tangent: As a first-time participant in an early access game, I came across the term “overtuned”, which made me irrationally angry. My understanding is that it is supposed to mean “you buffed this because of feedback but maybe a little too much”, but I have rarely seen it used to mean anything other than “this needs a nerf, and I won’t justify my position, but I want to sound like I’m reasonable”.

[–] [email protected] 13 points 1 month ago

“you buffed this because of feedback but maybe a little too much”, but I have rarely seen it used to mean anything other than “this needs a nerf, and I won’t justify my position, but I want to sound like I’m reasonable”

please never let me make the mistake of becoming a game developer; I absolutely would be the guy who closes this ticket with “fuck off gamer” and gets us review bombed by the worst parts of our playerbase

[–] [email protected] 8 points 1 month ago (1 children)

Tangent: As a first-time participant in an early access game, I came across the term “overtuned”, which made me irrationally angry. My understanding is that it is supposed to mean “you buffed this because of feedback but maybe a little too much”, but I have rarely seen it used to mean anything other than “this needs a nerf, and I won’t justify my position, but I want to sound like I’m reasonable”.

never, ever, ever play EVE. you might pop a vein in apoplexy

(CCP demonstrates some of the most stunning lack of systemic thinking and basic hypothesis testing on the very damn thing they control, and it frequently does my head in)

[–] [email protected] 5 points 1 month ago

I think I’ve seen a bit of it, and thought that it was a game that could kill me in exactly that manner!!!

[–] [email protected] 6 points 1 month ago

C-level team killer

[–] [email protected] 11 points 1 month ago

It’s called “safe superintelligence” because they want investors to sign SAFE agreements and feel very smart about it.

[–] [email protected] 10 points 1 month ago

Lol. The obvious pump and dump scheme is working as intended.

[–] [email protected] 5 points 1 month ago (1 children)

I’m thinking this is a POSIWID kind of deal. The mission statement of this company should just be: “A. I. Money please!”

[–] [email protected] 5 points 1 month ago

or "AI! Money, please?", in the most extreme conductor voice possible

[–] [email protected] 3 points 1 month ago

Well, look, if you no longer had a Silicon Valley executive's salary, you might have opinions about that situation too.

Weird sort of wartime to be investing new dollars into Israel, though, I thought?

Oh wait right. https://bdsmovement.net/news/israel%E2%80%99s-most-important-source-capital-california