this post was submitted on 26 Feb 2024
84 points (76.6% liked)

Linux

Today I was doing the daily ritual of looking at DistroWatch. Today's review section was about a terminal called Warp: it has built-in AI for recommendations and corrections of commands (like zsh and nushell), and you can also ask a chatbot for help. I think it's a neat concept; however, the security side is what makes me a bit skittish. They say they don't collect data, and you can check that as well as opt out. But the idea of a terminal being read by an AI makes me hesitant, as does an account being needed to use Warp. What do you guys think?

[–] [email protected] 5 points 8 months ago (1 children)

So compared to plain bash without autocomplete and Ctrl+R it may be useful. It is probably a step back for everyone else.

I think it could be much worse than even a plain shell with ^R, as the LLM will be slower than the normal history search and probably has less context than the $HISTFILE.
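For context, the Ctrl+R search mentioned above is essentially an instant substring search over $HISTFILE. A minimal sketch of the idea, simulated against a throwaway history file (the file contents here are made-up examples):

```shell
#!/usr/bin/env bash
# Sketch: Ctrl+R (reverse-i-search) amounts to a substring search over
# $HISTFILE, newest match first. Simulate it with a throwaway file.
histfile=$(mktemp)
printf '%s\n' 'ls -la' 'ip -4 addr show' 'git status' > "$histfile"

# Reverse search: the most recent command containing the substring "addr".
match=$(grep 'addr' "$histfile" | tail -n 1)
echo "$match"    # -> ip -4 addr show

rm -f "$histfile"
```

No network round-trip, no model inference; it is hard for an LLM suggestion to beat this on latency.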

[–] [email protected] 2 points 8 months ago* (last edited 8 months ago) (1 children)

I think so, too. I mean, the traditional history search and command option suggestions are instant and come at no additional cost. I don't know how fast ChatGPT is; I only ever play around with local LLMs. And roughly exploring what GitHub Copilot is about just made my laptop fans spin at max and started to drain the battery really fast. It would be the same for an 'AI' terminal. And when asking LLMs for shell commands I got mixed results. They can do easy stuff. So I guess for someone who wonders how to find the IP address... it'll do the trick. But all the things I tried asking some chatbots that would have been really useful to me failed. They hallucinated parameters or did something else, and I needed to google it anyway or open the man page.

I'm not sure; I currently don't see myself using such tools. I like talking to chatbots and having them draft stuff and provide me with ideas. But I also like computers the other way: as machines that just follow my orders and don't talk back. When working in the terminal or coding, it seems to distract me if suggestions pop up and I need to read them and decide what to do, or occasionally laugh... For me it seems to work better if I think about something, have an idea in my head, and type it down without discussing it with the machine... I mean, not 100% of the time, sometimes a suggestion helps... But I think I'd rather have the chatbot in a separate window and only loosely tied into my workflow, if at all. And I don't like proprietary and cloud-based products for something like this.

[–] [email protected] 2 points 8 months ago* (last edited 8 months ago) (1 children)

It hallucinated parameters

Sounds like LLMs to me. This is not going to stop being a problem; it is the fundamental problem with LLMs: they are text prediction algorithms and have no comprehension of their output.

[–] [email protected] 1 points 8 months ago* (last edited 8 months ago)

I'm not sure. Afaik the research is happening, and AI-related stuff always happens faster than I can imagine. Ultimately I want LLMs to hallucinate: they should be able to combine ideas and come up with new and creative answers, and be more than just autocomplete. I think what we need is the LLM knowing what it knows and what is made up, plus a set screw to adjust that. I can see this happening with a higher level of intelligence and/or a clever architecture. I'm not an expert on machine learning myself; however, that is what I took from the news, from companies struggling with their chatbots, and from everyone wanting their AI assistant to provide factual information. And I don't see anything ruling it out completely. I mean, we humans also sometimes get things wrong or mis-adjust our level of creativity. But I think the concept of facts can be taught to LLMs to some degree; they already seem to grasp it. Concepts have been proposed, and things like AI agents that come up with ideas while other agents check for factuality are in active use, along with the big tech companies making their AIs cite sources. In my eyes, progress is being made.

But this is why I currently don't use LLMs for important and unsupervised stuff, and I try to avoid them when I need correctness. However... I really like to tinker with them, do AI-assisted storywriting, or have them come up with five creative ideas for a birthday party for my wife. That works well, and with a bit of trickery you can make them output more than the most obvious ideas. And I'm impressed by their ability to code, but as I said, it's still far from being useful to me. I currently don't fear for my job. I additionally struggle with the size of models I can do inference with and their respective intelligence... We're in the Linux community here, so I think I can be open... I don't like big tech companies doing my compute and providing me with closed and proprietary services. I don't use ChatGPT, only open-weight models I can run myself. They aren't as smart, but I don't want the future of humankind to be shaped by the services and goodwill of big tech companies.