We could have AI models in a couple years that hold the entire internet in their context window.
That's a really bold claim.
Also not sure how that would be helpful. If every prompt has to rip through all those tokens first before it can predict a response, it'll be stupid slow. Even now with llama.cpp, it's annoying when it pauses to do the context window shuffle thing.
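The slowness isn't hypothetical: a rough back-of-envelope for transformer prefill shows why. The cost model below is an assumption for illustration (roughly 2 × parameter-count FLOPs per prompt token for the matmuls, plus an attention term that grows with the square of the context length), not a benchmark of any real model. The parameter values are made-up 7B-class numbers.

```python
# Toy estimate of prefill work vs. context length, under an assumed
# cost model: linear matmul work per token plus quadratic attention work.
# n_params / n_layers / d_model are made-up 7B-class values.

def prefill_flops(context_tokens: int,
                  n_params: float = 7e9,
                  n_layers: int = 32,
                  d_model: int = 4096) -> float:
    linear = 2 * n_params * context_tokens                    # matmuls: ~2 FLOPs per param per token
    attention = 2 * n_layers * d_model * context_tokens ** 2  # pairwise token-to-token attention
    return linear + attention

ratio = prefill_flops(1_000_000) / prefill_flops(4_000)
print(f"Processing a 1M-token prompt costs ~{ratio:.0f}x a 4k-token one")
```

Under these assumptions the blowup is superlinear: 250× more tokens costs thousands of times more compute, which is why rereading "the entire internet" per prompt doesn't scale without some retrieval or caching trick.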
Yeah, long-term memory where the AI can access only what it needs/wants is the way.
For now, I'd be happy with an AI that had access to and remembered the beginning of our conversation.
Anyone know what progress has been made with hallucinations?
Perplexity has pretty much solved that, since it searches the internet and grounds its answers in what it finds. But I don't know of any advances that solve it directly in LLMs.