this post was submitted on 13 May 2025
459 points (100.0% liked)
TechTakes
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
you are viewing a single comment's thread
Because it's an upscaled translation tech, maybe?
These views on LLMs are simplistic. As a wise man once said, "check yoself befo yo wreck yoself"; I thus recommend more education.
LLM structures are overhyped, but they're also not that simple.
Yes, you're right, it has something like keyboard autocomplete in it as well.
Autocomplete LLMs are different from instruct LLMs
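(To spell that distinction out: a base/"autocomplete" model just continues whatever text it's given, while an instruct model has been fine-tuned to treat the input as a request. A minimal sketch using the Hugging Face transformers pipeline; gpt2 is a real but tiny base model, and the instruct model name below is only a placeholder, not a real checkpoint.)

```python
from transformers import pipeline

# Base / "autocomplete" model: it only predicts the next tokens,
# so a question often just gets continued rather than answered.
base = pipeline("text-generation", model="gpt2")
print(base("What is the capital of France?", max_new_tokens=20)[0]["generated_text"])

# Instruct / chat model: fine-tuned on (instruction, response) pairs,
# so the same text is treated as a request to fulfil.
# ("some-org/some-instruct-model" is a placeholder, swap in any instruct-tuned checkpoint.)
chat = pipeline("text-generation", model="some-org/some-instruct-model")
messages = [{"role": "user", "content": "What is the capital of France?"}]
print(chat(messages, max_new_tokens=20)[0]["generated_text"][-1]["content"])
```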
From what I know from recent articles that retrace LLMs in depth, they are indeed best suited for language translation, and that also explains the hallucinations well. And I think I've read somewhere that this was the originally intended purpose of the tech?
Ah, here, and here more tabloid-ish.
many of the proponents of things in this field will propose/argue $x thing to be massively valuable for $y
thing is, that doesn't often work out
yes, there's some value in the tech for translation outcomes. to anyone even mildly online, "so are language teaching apps/sites using this?" is probably a very nearby question. and rightly so!
and then when you go digging into how that's going in practice, wow fuck damn doesn't that Glorious AI Future sheen just fall right off...
Translation/text processing are some of the best cases of LLM performance, that is true. Although translation is much harder than other kinds of processing because of the training data involved.
But considering the new research from Anthropic on model structures, I really think it's unfair to beat these things down to just that.
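(To make the translation point concrete: this is roughly what the "translation tech" side looks like with a plain encoder-decoder translation model, the kind of task the transformer architecture was originally built for, no chat wrapper needed. The opus-mt checkpoint name is just an example; any machine-translation model works the same way.)

```python
from transformers import pipeline

# A plain seq2seq translation model, no instruct tuning involved.
# Model name is just an example; any opus-mt-* checkpoint behaves the same.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-fr")

result = translator("The hype cycle is exhausting.", max_length=64)
print(result[0]["translation_text"])
```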
Yeah, sure, it's not that simple.