this post was submitted on 26 Jan 2025

TechTakes

[–] [email protected] 17 points 2 months ago* (last edited 2 months ago) (4 children)

The big claim is that R1 was trained on far less computing power than OpenAI’s models at a fraction of the cost.

And people believe this … why? I mean, shouldn't the default assumption be that anything anyone in AI says is a lie?

[–] [email protected] 15 points 2 months ago (3 children)

A) Putting on my conspiracy theory hat… OpenAI has been bleeding for most of a year now, with execs hitting the door running and taking staff with them. It’s not at all implausible that somebody lower on the totem pole could have been convinced to leak some reinforcement training weights to help Deepseek along.

B) Putting on my best LessWronger hat (random brown stains, full of holes)… I estimate no less than a 25% chance that by the end of this week, Sammy-boy will be demanding an Oval Office meeting, banging the table and screaming about “theft!” and “hacking!!”

[–] [email protected] 7 points 2 months ago

Tired: moats

Wired: drawbridges

Inspired: trebuchets

[–] [email protected] 5 points 2 months ago

screaming about “theft!” and “hacking!!”

Sounds plausible. Or maybe they'll go with a "don't use it, because privacy!" take. Funny thing is, I actually agree people shouldn't give them their data. But they shouldn't give it to OpenAI either...

[–] [email protected] 5 points 1 month ago

On one hand this is true. On the other hand, I can absolutely buy that nobody in Silicon Valley was particularly trying to optimize for costs when they had access to more VC money than God.

[–] [email protected] 4 points 2 months ago

Should be, but it isn't.

[–] [email protected] 3 points 2 months ago* (last edited 2 months ago)

And people believe this … why?

Maybe people believe that all the AI stuff is just magic [insert sparkle emoji], and that can terminate further thought...

Edit: heh, turns out there's science about that notion

[–] [email protected] 17 points 2 months ago (2 children)

This shows the US is falling behind China, so you gotta give OpenAI more money!

Fear of a "bullshit gap", I guess.

Oh, and: simply perfect choice of header image on that article.

[–] [email protected] 14 points 2 months ago

Altman: Mr. President, we must not allow a bullshit gap!

Musk: I have a plan... Mein Führer, I can walk!

[–] [email protected] 12 points 2 months ago (1 children)

Kind of reminds me of the government funding Star Wars programs that never produced anything but were credited with spending the Soviet Union into its grave because it couldn't keep up. But I don't think it's going to work the same this time...

[–] [email protected] 8 points 2 months ago

I understand that the thing that worked was mostly space-based surveillance

[–] [email protected] 16 points 2 months ago* (last edited 2 months ago)

This but it's American vs Chinese slop

[–] [email protected] 4 points 2 months ago

I tried the chicken, fox and grain riddle on it. It rambled for 477 lines before I killed it.

I tried to make it easier by saying it can put three things in the boat. It did seem to realise it was a trick question. Now it's stuck repeatedly solving the classic riddle, dismissing it, then saying the correct solution and dismissing that too.
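For reference, the classic riddle the model keeps chewing on is mechanically trivial: a breadth-first search over bank states finds the minimal seven crossings in a few milliseconds. A minimal sketch (the item names and state encoding here are just illustrative, nothing to do with how any LLM reasons about it):

```python
from collections import deque

ITEMS = {"fox", "chicken", "grain"}
EVERYONE = ITEMS | {"farmer"}

def safe(bank):
    # A bank is unsafe only when the farmer is absent and
    # fox+chicken or chicken+grain are left alone together.
    if "farmer" in bank:
        return True
    return not ({"fox", "chicken"} <= bank or {"chicken", "grain"} <= bank)

def solve():
    # State: frozenset of everything still on the left bank.
    start, goal = frozenset(EVERYONE), frozenset()
    queue = deque([(start, [])])   # BFS => shortest crossing sequence
    seen = {start}
    while queue:
        left, path = queue.popleft()
        if left == goal:
            return path
        # The bank the farmer is currently on:
        here = left if "farmer" in left else EVERYONE - left
        # The farmer crosses alone or with one cargo item from his bank.
        for cargo in [None] + sorted(here - {"farmer"}):
            moving = {"farmer"} | ({cargo} if cargo else set())
            nxt = left - moving if "farmer" in left else left | moving
            if safe(nxt) and safe(EVERYONE - nxt) and nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [cargo or "nothing"]))
    return None

print(solve())  # 7 moves: chicken over, back alone, fox/grain over, ...
```

The search confirms the familiar answer: take the chicken, return empty, ferry the fox (or grain), bring the chicken back, ferry the grain (or fox), return empty, take the chicken.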

[–] [email protected] 2 points 2 months ago* (last edited 2 months ago)

wrong thread :(