this post was submitted on 25 Oct 2024
313 points (100.0% liked)

TechTakes

1489 readers
83 users here now

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago
[–] [email protected] 89 points 1 month ago (1 children)

I heard openai execs are so scared of how powerful the next model will be that they're literally shitting themselves every day thinking about it. they don't even clean it up anymore, the openai office is one of the worst smelling places on earth

[–] [email protected] 47 points 1 month ago (3 children)

dude. the AGI will simply vanish the evidence wherever they're standing

[–] [email protected] 28 points 1 month ago

Better than that, AGI will figure out a way to exponentially increase the value of their soiled pants. Blows your fucking mind.

[–] [email protected] 25 points 1 month ago

for every one of me that shit my pants the AGI is simulating ten million of me that didn't, so on average i'm doing pretty ok

[–] [email protected] 11 points 1 month ago (4 children)
[–] [email protected] 16 points 1 month ago (1 children)

Remember when wizards magicking away their shits was the stupidest thing to come out of Rowling's mouth? Pepperidge Farm remembers.

(Seriously, I was not prepared for Rowling's TERFward Turn)

[–] [email protected] 46 points 1 month ago (1 children)

Orion is so powerful and dangerous it can write a memetic virus that mindwipes any reader who sees it. It is beyond science. If you use it within three meters of a lit candle it will summon the devil.

[–] [email protected] 28 points 1 month ago* (last edited 1 month ago) (2 children)

It's crazy how these guys will burn billions of dollars and boil the oceans to speak to their invisible friends, when all you really need is a tea candle and 3 cc of mouse blood.

[–] [email protected] 14 points 1 month ago (1 children)

I just use a phone to talk to friends who are out of sight.

[–] [email protected] 11 points 1 month ago (1 children)

The reception in the Seventh Circle of Hell is pretty shite though, I think they're still on 3G

[–] [email protected] 44 points 1 month ago (2 children)

really stretching the meaning of the word release past breaking if it’s only going to be available to companies friendly with OpenAI

Orion has been teased by an OpenAI executive as potentially up to 100 times more powerful than GPT-4; it’s separate from the o1 reasoning model OpenAI released in September. The company’s goal is to combine its LLMs over time to create an even more capable model that could eventually be called artificial general intelligence, or AGI.

so I’m calling it now, this absolute horseshit’s only purpose is desperate critihype. as with previous rounds of this exact same thing, it’ll only exist to give AI influencers a way to feel superior in conversation and grift more research funds. oh of course Strawberry fucks up that prompt but look, my advance access to Orion does so well I’m sure you’ll agree with me it’s AGI! no you can’t prompt it yourself or know how many times I ran the prompt why would I let you do that

That timing lines up with a cryptic post on X by OpenAI CEO Sam Altman, in which he said he was “excited for the winter constellations to rise soon.” If you ask ChatGPT o1-preview what Altman’s post is hiding, it will tell you that he’s hinting at the word Orion, which is the winter constellation that’s most visible in the night sky from November to February (but it also hallucinates that you can rearrange the letters to spell “ORION”).

there’s something incredibly embarrassing about the fact that Sammy announced the name like a lazy ARG based on a GPT response, which GPT proceeded to absolutely fuck up when asked about. a lot like Strawberry really — there’s so much Binance energy in naming the new version of your product after the stupid shit the last version fucked up, especially if the new version doesn’t fix the problem
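and for what it’s worth, you don’t need 95 seconds of “thinking” to debunk the anagram claim — a few lines of multiset counting settle it (`can_spell` is just a throwaway helper here, not anything from OpenAI):

```python
from collections import Counter

def can_spell(word, letters):
    """Check whether `word` can be assembled from the letters in `letters`."""
    need = Counter(word.lower())
    have = Counter(c for c in letters.lower() if c.isalpha())
    return all(have[ch] >= n for ch, n in need.items())

print(can_spell("orion", "they are so great"))  # False — there's no 'i' to be had
```

"they are so great" contains one 'o', two 'r's, and zero 'i's or 'n's, so "ORION" was never in there.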

[–] [email protected] 24 points 1 month ago (2 children)

You forgot the best part, the screenshot of the person asking ChatGPT's "thinking" model what Altman was hiding:

Thought for 95 seconds ... Rearranging the letters in "they are so great" can form the word ORION.

AI is a complete joke, and I have no idea how anyone can think otherwise.

[–] [email protected] 27 points 1 month ago (1 children)

I'm already sick and tired of the "hallucinate" euphemism.

It isn't a cute widdle hallucination; it's the damn product being wrong. Dangerously, stupidly, obviously wrong.

In a world that hadn't already gone well to shit, this would be considered an unacceptable error and a demonstration that the product isn't ready.

Now I suddenly find myself living in this accelerated idiocracy where Wall Street has forced us - as a fucking society - to live with a Ready, Fire, Aim mentality in business, especially tech.

[–] [email protected] 15 points 1 month ago (2 children)

I think it's weird that "hallucination" would be considered a cute euphemism. Would you trust something that's perpetually tripping balls and confidently announcing whatever comes to them in a dream? To me that sounds worse than merely being wrong.

[–] [email protected] 12 points 1 month ago (2 children)

I think the problem is that it portrays them as weird exceptions, possibly even echoes from some kind of ghost in the machine. Instead of being a statistical inevitability when you're asking for the next predicted token instead of meaningfully examining a model of reality.

"Hallucination" applies only to the times when the output is obviously bad, and hides the fact that it's doing exactly the same thing when it incidentally produces a true statement.

[–] [email protected] 19 points 1 month ago

[ChatGPT interrupts a Scrabble game, spills the tiles onto the table, and rearranges THEY ARE SO GREAT into TOO MANY SECRETS]

[–] [email protected] 15 points 1 month ago (2 children)

teased by an OpenAI executive as potentially up to 100 times more powerful

"potentially up to 100 times" is such a peculiar phrasing too... could just as well say "potentially up to one billion trillion times!"

[–] [email protected] 9 points 1 month ago

I'd love to get an interview with saltman and ask him to explain how they measure "power" of those things. What's the methodology? Do you have charts? Or does it just somehow consume 100x more power, as in watts?

[–] [email protected] 39 points 1 month ago (2 children)

So how many ChatGPT 4s have they precariously stacked up on top of each other this time?

[–] [email protected] 16 points 1 month ago (1 children)
[–] [email protected] 10 points 1 month ago

Err 4 I suppose 🤷

[–] [email protected] 13 points 1 month ago

According to the totally unintentional and legit executive leak, they stacked 100 of them!

[–] [email protected] 36 points 1 month ago (2 children)

It's the least of this thing's problems, but I've had it with the fucking teasers and "coming soon" announcements. You woke me up for this? Shut the fuck up, finish your product and release it and we'll talk (assuming your product isn't inherently a pile of shit like AI to begin with). Teaser more like harasser. Do not waste my time and energy telling me about stuff that doesn't exist and for the love of all that is holy do not try and make it a cute little ARG puzzle.

[–] [email protected] 34 points 1 month ago

The release of this next model comes at a crucial time for OpenAI, which just closed a historic $6.6 billion funding round that requires the company to restructure itself as a for-profit entity. The company is also experiencing significant staff turnover: CTO Mira Murati just announced her departure along with Bob McGrew, the company’s chief research officer, and Barret Zoph, VP of post training.

All the problems with “AI” are suddenly solved now that Altman needs to justify his funding. I’m sure senior executives are jumping ship right on the cusp of their great triumph, because they want to spend more time with their families.

[–] [email protected] 33 points 1 month ago (1 children)

Just don't ask it to count the number of Rs in the word ORION, as that will trigger it to turn us all into paperclips and then output the wrong answer.

[–] CHKMRK 16 points 1 month ago (2 children)

Nah it can do that, probably because they wrote a workaround to route it through Python to count chars in a string, just like they did with arithmetic.
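which, if that guess is right, would amount to nothing more than this:

```python
# the whole "hard problem" once you stop predicting tokens and just run code
word = "strawberry"
print(word.count("r"))  # 3
```

billions in funding, folks.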

[–] [email protected] 16 points 1 month ago (2 children)

out of curiosity once I tried to ask it to make a colouring picture from a photo of a toy for my kids and it just ran what seemed like imagemagick filters over the photo to convert to black and white and pump up contrast to only show the hard lines - just like all the free convert to outline web tools that have existed forever. I asked it to try again but without the filters, instead to identify the object, and to draw it in a colouring book outline style, and it spat out some shitty stylised mishmash derived from all the illustration IP it stole and ingested. I still feel guilty for trying even that

[–] [email protected] 25 points 1 month ago (1 children)

I'm pretty confident they'll continue to roll out new stuff that, like the 4o release, is a mild technical improvement (if that) made to seem massive by UI stuff that has almost nothing to do with AI. SJ's voice talking to you, bouncy animations, showing "reasoning" aka loading progress.

[–] [email protected] 16 points 1 month ago (1 children)

Every model they've released (after 4) has been seemingly worse than the previous one.

[–] [email protected] 15 points 1 month ago (18 children)

they're well at the top of the S-curve and now there's only desperate over-engineering and bolting on special cases left

[–] [email protected] 13 points 1 month ago (3 children)

I still cannot believe that they couldn't special-case count 'R' in "strawberry" for their Strawberry model like what the fuck

[–] [email protected] 11 points 1 month ago* (last edited 1 month ago)

it is tickling me that this won’t even be GA but “selected companies”

best to keep ~~scamming the easy marks~~ “work with clients aligned to the technology you wish to deliver”, I guess

[–] [email protected] 14 points 1 month ago

Orion is coming?

Quick, get him a towel!

[–] [email protected] 13 points 1 month ago* (last edited 1 month ago) (2 children)

Thought for 95 seconds

Rearranging the letters in "they are so great" can form the word ORION.

That’s from the screenshot where they asked the o1 model about the cryptic tweet. There’s certainly utility in these LLMs, but it made me chuckle thinking about how much compute power was spent coming up with this nonsense.

Edit: since this is the internet and there are no non-verbal cues, maybe I should make it clear that this “chuckle” is an ironic chuckle, not a careless or ignorant chuckle. It’s pointing out how inefficient and wasteful an LLM can be, not meant to signal that wasting resources is funny or that it doesn’t matter. I thought that would be clear, but you can read it both ways.

[–] [email protected] 10 points 1 month ago

Introducing Chat-GPT version EATERY SHORTAGE

[–] [email protected] 9 points 1 month ago (9 children)

yes, the massive waste of resources involved is definitely “funny”, that’s definitely the bit of this awful shit to post a take about

[–] [email protected] 12 points 1 month ago

They've updated the article. Apparently there isn't a model releasing later this year.
