this post was submitted on 05 Feb 2024
202 points (84.1% liked)
Asklemmy
you are viewing a single comment's thread
Maybe, but we are essentially throwing petabyte-sized models and enormous amounts of compute at it, and the results are roughly at the level where a three-year-old would do a better job of not giving away that they don't understand what they are talking about.
Don't get me wrong, LLMs and the other recent developments in generative AI are very impressive, but it is becoming increasingly clear that the approach is only barely useful even when we throw about as many computing resources at it as we can afford, which severely limits its potential applications. And even at that level, the results are still so unreliable that you essentially can't trust anything that falls out.
This is very far from being sufficient to fake AGI, and it has absolutely nothing to do with real AGI.