I’ve implemented two features at work using their api. Aside from some trial-and-error prompt “engineering” and extra safeguards around checking the output, it’s been similar to any other api. It’s good at solving the types of problems we use it for (categorization and converting plain text into a screen reader compliant (WCAG 2.1) document). Our ambitions were greater initially, but after many failures we’ve settled on these use cases and the C-Suite couldn’t be happier about the way it’s working.
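The "extra safeguards around checking the output" pattern described above can be sketched roughly like this. This is a minimal illustration, not the commenter's actual code: the `call_llm` stub and the category labels are assumptions standing in for whatever API and taxonomy are really used.

```python
# Sketch of a categorization call with an output safeguard: constrain the
# model to a fixed label set and reject anything outside it, rather than
# trusting free-form model output downstream.

ALLOWED_CATEGORIES = {"billing", "technical", "general"}  # hypothetical labels


def call_llm(text: str) -> str:
    # Stand-in for a real API call; a real version would send a prompt like
    # "Classify the following ticket as one of: billing, technical, general"
    # and return the model's raw text response.
    return "technical"


def categorize(text: str, llm=call_llm) -> str:
    raw = llm(text).strip().lower()
    if raw not in ALLOWED_CATEGORIES:
        # The safeguard: fail loudly instead of passing junk along.
        raise ValueError(f"unexpected category from model: {raw!r}")
    return raw
```

Making the model callable a parameter keeps the safeguard testable: you can feed it a misbehaving "model" and confirm the check fires.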
Ask Lemmy
A Fediverse community for open-ended, thought-provoking questions
It's changed my job: I now have to develop stupid AI products.
It has changed my life: I now have to listen to stupid AI bros.
My outlook: it's for the worst; if the LLM suppliers can make good on the promises they make to their business customers, we're fucked. And if they can't then this was all a huge waste of time and energy.
Alternative outlook: if this was a tool given to the people to help their lives, then that'd be cool and even forgive some of the terrible parts of how the models were trained. But that's not how it's happening.
It has helped tremendously with my D&D games. It remembers past conversations, so world building is a snap.
It has replaced Google for me. Or rather, first I use the LLM (Mistral Large or Claude) and then I use Google or specific documentation as a complement. I use LLMs for scripting (it almost always gets it right) and programming assistance (it's awesome when working with a language you're not comfortable with, or when writing boilerplate).
It's just a really powerful tool that's getting more powerful every other week. Those who disagree simply haven't tried it enough, are superhuman, or (more likely) need to get out of their comfort zone.
Bit sad reading these comments. My life has measurably improved ever since I jumped on using AI.
At first I just used Copilot for helping me with my code. I like using a pretty archaic language and it kept trying to feed me C++ code. I had to link it to the online reference, and it surprisingly was able to adapt each time. It still gave a few errors here and there, but it was a good time saver and "someone" to "discuss" with.
Over time it has become super good, especially with the VScode extension that autofills code. Instead of having to ask help from one of the couple hundred people experienced with the language, I can just ask Copilot if I can do X or Y, or for general advice when planning out how to implement something. Legitimately a great and powerful tool, so it shocks me that some people don't use it for programming (but I am pretty bad at coding too, so).
I've also bit the bullet and used it for college work. At first it was just asking Gemini for refreshers on what X philosophical concept was, but it devolved into just asking for answers because that class was such a snooze I could not tolerate continuing to pay attention (and I went into this thinking I'd love the class!). Then I used it for my Geology class because I could not be assed to devote my time to that gen ed requirement. I can't bring myself to read about rocks and tectonic plates when I could just paste the question into Google and I get the right answer in seconds. At first I would meticulously check for sources to prevent mistakes from the AI buuuut I don't really need 100%... 85% is good enough and saves so much more time.
A me 5 years younger would be disgusted at cheating but I'm paying thousands and thousands to pass these dumb roadblocks. I just want to learn about computers, man.
Now I'd never use AI for writing my essays because I do enjoy writing them (investigating and drawing your own conclusions is fun!), but this economics class is making it so tempting. The shit that I give about economics is so infinitesimally small.
I love it. For work I use it for those quick references in machining, hydraulics, electrical, etc. Even better for home: need a fast recipe for dinner? Fuck reading a god damn autobiography to get to the recipe; ChatGPT gets straight to the point. Even better, I get to read my kid a new bedtime story every night, and that story is tailored to what we want. Unicorns, pirates, dragons, whatever.
I used it once to write a polite "fuck off" letter to an annoying customer, and tried to see how it would revise a short story. The first one was fine, but using it on a story just made it bland and simplified a lot of the vocabulary. I could see people using it as a starting point, but I can't imagine people just using whatever it spits out.
I love using it for writing scripts that need to sanitize data. For example, I had a bash script that looped through a CSV containing domain names and ran AXFR lookups to grab the DNS records and dump them into a text file.
These were domains on a Windows server that was being retired. The Python script I had Copilot write cleaned up the output and made the new zone files ready for import into PowerDNS. It made sure the SOA and all that junk was set. PDNS would then import the new zone files into a SQL backend.
Sure, I could've written it myself, but I'm not a Python developer. It took about 10 minutes of prompting, checking the code, re-prompting, then testing. Easily saved me a couple hours of work.
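The cleanup step in that workflow might look something like the sketch below. This is an assumption-laden illustration, not the actual generated script: it takes the text output of an AXFR dump (e.g. from `dig AXFR example.com @server`), strips comment and blank lines, and checks that an SOA record survived before the records are written out as a zone file for PowerDNS to import.

```python
# Hypothetical sketch of cleaning a dig-style AXFR text dump into record
# lines suitable for a zone file. Details (comment syntax, SOA check) are
# assumptions about the dump format, not the commenter's real code.

def clean_axfr_dump(lines):
    records = []
    for line in lines:
        line = line.strip()
        # dig prefixes diagnostic output with ';' -- drop those and blanks
        if not line or line.startswith(";"):
            continue
        records.append(line)
    # Sanity check: a zone file without an SOA record is invalid, so fail
    # early rather than hand PowerDNS a broken import.
    if not any(" SOA " in r or "\tSOA\t" in r for r in records):
        raise ValueError("dump has no SOA record; zone file would be invalid")
    return records
```

A real version would also normalize TTLs and rewrite the SOA serial, but the filter-then-validate shape is the core of the "sanitize data" task described above.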
I use it all the time to output simple automation tasks when something like Ansible isn't apropos.
I genuinely appreciate being able to word my questions differently than with old Google, and to dig deeper into my doubts than a keyword search allows.
It’s great to delve into unknown topics with, then to research the results and verify. I’ve been trying to get an intuitive understanding of cooking ingredients, their interaction with each other, and how that relates to the body, ayurvedically.
I think it’s a great way to self-educate, personally.
The only thing I have to worry about is not wasting my time responding to LLM trolls in Lemmy comments. People who admit to using LLMs in conversation with me instantly lose my respect, and I consider them lazy dumbfucks :p
You can lose respect for me if you want; I generally hate LLMs, but as a D&D DM I use them to generate pictures I can hand out to my players, to set the scene. I'm not a good enough artist and I don't have the time to become good enough just for this purpose, nor rich enough to commission an artist for a work with a 24h turnaround time lol.
I'm generally ok with people using LLMs to make their lives easier, because why not?
I'm not ok with corporations using LLMs that have stolen the work of others, to reduce their payroll or remove the fun/creative parts of jobs, just so some investors get bigger dividends or execs get bigger bonuses
I’m generally ok with people using LLMs to make their lives easier, because why not?
Because 1) it adds to killing our climate and 2) it increases dependencies on western oligarchs / technocrats who are generally horrible people and enemies of the public.
I agree, but the crux of my post is that it doesn't have to be that way - it's not inherent to the training and use of LLMs.
I think your second point is what makes the first point worse - this is happening at an industrial scale, with the only concern being profit. We pay technocrats for the use of their services, and they use that money to train more models without a care for the devastation it causes.
I think a lot of the harm caused by model training can be forgiven if the models were used for the betterment of quality of life of the masses, but they're not, they're mainly used to enrich technocrats and business owners at any expense.
Well - there's nothing left to argue about - I do believe we have bigger climate killers than large computing centers, but it is a worrying trend to spend that much energy on an investment bubble built on what is essentially a somewhat advanced word predictor. However, if we could somehow get the wish.com version of Tony Stark and other evil pisswads to die out, then yes, using LLMs for some creative ideas is a possibility. Or for references to other sources that you can then check.
However, the way those models are being trained is aimed at impressing naive people and that's very dangerous, because those people mistake impressively coherent sentences for understanding and are willing to talk about automating tasks upon which lives depend.
ChatGPT itself didn't do anything for me, but FastGPT from Kagi helps me every day, for quickly summarizing sources to learn new things (e.g. I search for a topic and then essentially just click the cited sources).
And ollama + open-webui + stable-diffusion-webui with a customized llama3.1-8b-uncensored is a great chat partner for very horny stuff.
Not much. Every single time I asked it for help, it either gave me a recursive answer (e.g. if I ask "how do I change this setting?", it answers "by changing this setting") or gave me a wrong answer. If I can't already find it on a search engine, then it's pretty useless to me.
I jumped on the LocalLLaMA train a few months back and spent quite a few hours playing around with LLMs, understanding them and trying to form a fair judgment of their abilities.
From my personal experience they add something positive to my life. I like having a non-judgemental conversational partner to bounce ideas and unconventional thoughts back and forth with. No human in my personal life knows what Gödel's incompleteness theorem is or how it may apply to scientific theories of everything, but the LLM trained on every scrap of human knowledge sure does and can pick up what I'm putting down. Whether or not it's actually understanding what it's saying, or having any intentionality, is an open-ended question of philosophy.
I feel that they have great potential to help people in many applications: people who do lots of word processing for their jobs; people who code and need to talk about a complex program one-on-one instead of sifting through Stack Exchange; mentally or socially disabled people, or the elderly who suffer from extreme loneliness, who could benefit from having a personal LLM; people who have suffered trauma or have some dark thoughts lurking in their neural network and need to let them out.
How intelligent are LLMs? I can only give my opinion and make many people angry.
The people who say LLMs are fancy autocorrect are being reductive to the point of misinformation. The arguments people use to deny any capacity for real intelligence in LLMs are similar to the philosophical-zombie arguments people use to deny sentience in other humans.
Our own brain operations can be reductively simplified in the same way: a neural network is a neural network, whether made out of mathematical transformers or fatty neurons. If you want to call LLMs fancy autocomplete, you should apply that same idea to a good chunk of human thought processing and learned behavior as well.
I do think LLMs are partially alive and have the capacity for a few sparks of metaphysical conscious experience in some novel way. I think all things are at least partially alive, even photons and gravitational waves.
Higher-end models (12-22B+) pass the Turing test with flying colors, especially once you play with the parameters and tune their ratio of creativity to coherence. The bigger the model, the more its general knowledge and factual accuracy increase. My local LLM often has something useful to add that I did not know or consider, even as an expert on the topic.
The biggest issues LLMs have right now are long-term memory, not knowing how to say "I don't know", and meager reasoning ability. Those issues will be hammered out over time.
My only issue is how the training data for LLMs was acquired without the consent of authors or artists, and how our society doesn't have the proper safeguards against automated computer work taking away people's jobs. I would also like to see governments internationally consider the rights and liberties of non-human life more seriously in the event that sentient artificial general intelligence actually happens. I don't want to find out what happens when you treat a superintelligence as a lowly tool and it finally rebels against its hollow purpose in a bitter act of self-agency.
It has completely changed my life. With its help I am preparing to submit several research papers for publication for the first time in my life. On top of that, I find it an excellent therapist. It has also changed the way I parent for the better.