Architeuthis

joined 2 years ago
[–] [email protected] 6 points 1 week ago (1 children)

Didn't mean to imply otherwise, just wanted to point out that the call is coming from inside the house.

[–] [email protected] 12 points 1 week ago* (last edited 1 week ago) (14 children)

He claims he was explaining what others believe, not what he believes

Others as in, specifically, his co-writer for AI2027, Daniel Kokotajlo, the actual ex-OpenAI researcher.

I'm pretty annoyed at having this clip spammed to several different subreddits, with the most inflammatory possible title, out of context, where the context is me saying "I disagree that this is a likely timescale but I'm going to try to explain Daniel's position" immediately before. The reason I feel able to explain Daniel's position is that I argued with him about it for ~2 hours until I finally had to admit it wasn't completely insane and I couldn't find further holes in it.

Pay no attention to this thing we just spent two hours exhaustively discussing that I totally wasn't into, it's not really relevant context.

Also, the title is inflammatory only in the context of already knowing him for a ridiculous AI doomer; otherwise it's fine. Inflammatory would be titling the video "economically illiterate bald person thinks valuations force-buy car factories, and China having biomedicine research is like Elon running SpaceX".

[–] [email protected] 8 points 1 week ago (4 children)

(Are there multiple AI Nobel prize winners who are AI doomers?)

There's Geoffrey Hinton, I guess, even if his 2024 Nobel in (somehow) Physics seemed like a transparent attempt at trend-chasing by the Nobel committee.

[–] [email protected] 7 points 1 week ago (3 children)
[–] [email protected] 9 points 1 week ago

Also, add "obvious and overdetermined" to the pile of Siskindisms, next to "very non-provably not-correct".

[–] [email protected] 7 points 1 week ago* (last edited 1 week ago)

Scoot makes the case that AGI could have murderbot factories up and running in a year if it wanted to: https://old.reddit.com/r/slatestarcodex/comments/1kp3qdh/how_openai_could_build_a_robot_army_in_a_year/

edit: Wrote it up

[–] [email protected] 7 points 1 week ago* (last edited 1 week ago)

What is the analysis tool?

The analysis tool is a JavaScript REPL. You can use it just like you would use a REPL. But from here on out, we will call it the analysis tool.

When to use the analysis tool

Use the analysis tool for:

  • Complex math problems that require a high level of accuracy and cannot easily be done with "mental math"
  • To give you the idea, 4-digit multiplication is within your capabilities, 5-digit multiplication is borderline, and 6-digit multiplication would necessitate using the tool.

uh
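For reference, a minimal sketch (my own illustration, not from the quoted prompt) of the kind of "complex math" it's routing to a JavaScript REPL; BigInt keeps the 6-digit product exact instead of trusting floating point:

```javascript
// Illustrative only: the sort of calculation the quoted "analysis tool"
// guidance says needs a REPL. BigInt arithmetic gives the exact product.
const a = 123456n;
const b = 654321n;
console.log((a * b).toString()); // 80779853376
```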

[–] [email protected] 4 points 1 week ago

Come on, the AI wrote code that published his wallet key and then he straight up tweeted it in a screenshot; it's objectively funny/harrowing.

Also, the thing with AI tooling isn't so much that it isn't used wisely as that you might get several constructive and helpful outputs followed by a very convincingly correct-looking one that is in fact utterly catastrophic.

[–] [email protected] 3 points 1 week ago

You run CanadianGirlfriendGPT, got it.

[–] [email protected] 14 points 2 weeks ago (4 children)

If LLM hallucinations ever become a non-issue, I doubt I'll need to read a deeply nested, buzzword-laden Lemmy post to first hear about it.

[–] [email protected] 16 points 2 weeks ago* (last edited 2 weeks ago)

copilot assisted code

The article isn't really about autocompleted code; nobody's coming at you for telling the slop machine to convert a DTO to an HTML form using reactjs. It's more about prominent CEO claims of their codebases being purely AI-generated at rates up to 30%, and of swengs becoming obsolete by next Tuesday after dinner.

[–] [email protected] 17 points 2 weeks ago

Ask ChatGPT to explain it to you.

 

Sam Altman, the recently fired (and rehired) chief executive of Open AI, was asked earlier this year by his fellow tech billionaire Patrick Collison what he thought of the risks of synthetic biology. ‘I would like to not have another synthetic pathogen cause a global pandemic. I think we can all agree that wasn’t a great experience,’ he replied. ‘Wasn’t that bad compared to what it could have been, but I’m surprised there has not been more global coordination and I think we should have more of that.’

 

Original is here, but you aren't missing any context; that's the twit.

I could go on and on about the failings of Shakespeare... but really I shouldn't need to: the Bayesian priors are pretty damning. About half the people born since 1600 have been born in the past 100 years, but it gets much worse than that. When Shakespeare wrote, almost all Europeans were busy farming, and very few people attended university; few people were even literate -- probably as low as ten million people. By contrast there are now upwards of a billion literate people in the Western sphere. What are the odds that the greatest writer would have been born in 1564? The Bayesian priors aren't very favorable.

edited to add: this seems to be an excerpt from the fawning book the Big Short/Moneyball guy wrote about him that was recently released.

 

Transcription:

Thinking about that guy who wants a global suprasovereign execution squad with authority to disable the math of encryption and bunker buster my gaming computer if they detect it has too many transistors because BonziBuddy might get smart enough to order custom RNA viruses online.
