this post was submitted on 15 Feb 2025
10 points (100.0% liked)
LocalLLaMA
Community for discussing LLaMA, the large language model created by Meta AI.
This is intended to be a replacement for r/LocalLLaMA on Reddit.
you are viewing a single comment's thread
The programs usually mmap the model file into memory. That means parts of it are loaded as they are used and unloaded when there is no memory left, which is why it doesn't show up as memory usage. Check disk I/O while it is generating the message; on Linux you can see that in htop or iotop, for Windows idk.
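If you want to see the effect yourself, here's a minimal sketch of what those programs do under the hood (plain POSIX C, with "model.gguf" as a hypothetical path): the whole file gets mapped, but pages are only pulled in from disk when they are first touched, and the kernel can drop them again under memory pressure.

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    const char *path = "model.gguf";            /* hypothetical model file */
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* Map the whole file read-only; nothing is actually read from disk yet,
       so resident memory barely changes even for a multi-GB file. */
    unsigned char *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED) { perror("mmap"); return 1; }

    /* Touching a byte faults its page in from disk on demand; under memory
       pressure the kernel can simply drop clean pages and re-read them later. */
    printf("first byte: 0x%02x\n", data[0]);

    munmap(data, st.st_size);
    close(fd);
    return 0;
}
```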
Note that I use LM Studio, which uses llama.cpp to run models. GPT4All, I think, uses a modified version of the same. It doesn't matter though; they should all be using mmap to load the file.
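For reference, llama.cpp lets you toggle this at load time (the CLI tools expose it as --no-mmap). A rough sketch of the C API usage, assuming "model.gguf" as a placeholder path; exact function names can differ between llama.cpp versions:

```c
#include "llama.h"

int main(void) {
    struct llama_model_params params = llama_model_default_params();
    params.use_mmap = true;   /* default: map the file instead of reading it all into RAM */
    /* params.use_mmap = false would force the whole file to be read up front */

    struct llama_model *model = llama_load_model_from_file("model.gguf", params);
    if (!model) return 1;

    llama_free_model(model);
    return 0;
}
```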
PS: Depending on the model I also get a couple of tokens per second on the CPU.
Edit: Didn't see that someone already said the same, I'll leave this here anyway.