bia

joined 1 year ago
[–] [email protected] 11 points 1 year ago (3 children)

Do you have a degree in theoretical physics, or do you theoretically have a degree? ;)

[–] [email protected] 4 points 1 year ago

Yeah, me too. Hopefully it'll be available after summer vacation, and I'll dig into it.

[–] [email protected] 4 points 1 year ago

I learned the hard way to never generate anything I couldn't create myself, or at least verify its validity.

[–] [email protected] 6 points 1 year ago (7 children)

I used it quite a lot at the start of the year, for software architecture and development. But the number of areas where it was useful was so small, and running it locally (which I do for privacy reasons) is quite slow.

I noticed that much of what it generated needed to be double-checked, and was sometimes just wrong, so I've basically stopped using it.

Now I'm hopeful for better code generation models, and will spend the fall building a framework around a local model, to see if that helps in guiding the model's generation. Roughly the kind of thing I mean is sketched below.
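A minimal sketch of what "guiding" could look like, assuming llama-cpp-python as the binding; the model path, prompt template, and the compile() check are illustrative placeholders, not a settled design:

```python
# Sketch: route every request through a fixed template and a cheap
# post-generation check, instead of trusting raw model output.
# Assumes llama-cpp-python; model path is a placeholder.
from llama_cpp import Llama

llm = Llama(model_path="./models/codegen.gguf", n_ctx=2048)

TEMPLATE = (
    "You are a code generator. Output a single Python function only.\n"
    "Task: {task}\n"
    "Code:\n"
)

def generate(task: str) -> str:
    out = llm(TEMPLATE.format(task=task), max_tokens=256, stop=["\n\n\n"])
    code = out["choices"][0]["text"]
    # Reject syntactically invalid output before it reaches a human:
    # compile() raises SyntaxError without executing anything.
    compile(code, "<generated>", "exec")
    return code
```

The check is deliberately dumb; the point is that the framework, not the model, decides what gets through.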

[–] [email protected] 5 points 1 year ago* (last edited 1 year ago) (1 children)

Hmm. I'd actually argue it's a good solution in some cases. We run multiple services where load is intermittent, services are short-lived, or the code is complex and hard to refactor. Just adding hardware resources can be a much cheaper solution than optimizing code.

[–] [email protected] 3 points 1 year ago

I've been using llama.cpp, gpt-llama and chatbot-ui for a while now, and I'm very happy with it. However, I'm now looking into a more stable setup using only the GPU. Is llama.cpp still a good candidate for that?
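My current understanding is that GPU-only is mostly a build flag plus offloading all layers. Something like this, assuming the llama-cpp-python binding (a different binding than gpt-llama) and a GGUF model; the path is a placeholder, and I'm not sure it's the best approach:

```python
# Sketch of a GPU-only llama.cpp setup via llama-cpp-python.
# The package needs to be built with GPU support first, e.g.:
#   CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-13b.gguf",  # placeholder path
    n_gpu_layers=-1,  # -1 offloads all layers to the GPU
    n_ctx=4096,
)

print(llm("Q: What is llama.cpp? A:", max_tokens=64)["choices"][0]["text"])
```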

[–] [email protected] 3 points 1 year ago (2 children)

I've been running Debian stable on my work laptop, gaming PC and servers for years now. Can confirm it just works!

Debian 12 upgrade coming up soon. Probably (or maybe not) some effort to upgrade everything, and then back to smooth sailing. :)