
TechTakes

[email protected] | 7 points | 1 month ago (last edited)

That’s OpenAI admitting that o1’s “chain of thought” is faked after the fact. The “chain of thought” does not show any internal processes of the LLM — o1 just returns something that looks a bit like a logical chain of reasoning.

I think it's fake "reasoning," but I don't know whether (all of) OpenAI thinks that. More likely they figure hiding this data keeps their CoT training data from being extracted. I just don't know how deep the stupid runs.