this post was submitted on 19 Nov 2023
496 points (86.9% liked)
Technology
There is no way this ages well.
I think the statement was more about the impact, which will depend on each person's subjective experience.
Personally I agree. Even if AI could produce identical work, the impact would be lessened. Art is more meaningful when you know it took time and was an expression/interpretation by another human (rather than a pattern prediction algorithm Frankenstein-ing existing work together). Combine that with the volume of AI content that's produced, and the impact of any particular song/art piece is even more limited.
I'd say art is more meaningful when it's a unique experience. It's like those myths about glassmakers being ~~killed~~ blinded after the cathedral is finished so that no one can replicate the glass color... without the killing.
People are social; if enough people feel the same way about one thing, it'll succeed. It doesn't matter where it came from or how it was made, like how people can still admire and appreciate nature. Or maybe the impact will be that it reduces all impacts. Every group and subgroup might be able to have their own thing.
I don’t know. I think Obama kind of nailed it. AI can create boring and mediocre elaborations just fine. But for the truly special and original? It could never.
For the new and special, humans will always be required. End of line.
At this point I want a calendar of the dates people say "AI could never" - like "AI could never explain why a joke it's never seen before is funny" (said around March 2019) - and the dates it happens (in that case, April 2022).
(That "explaining the joke" bit is actually what prompted Hinton to quit and switch to worrying about AGI sooner than expected.)
I'd be wary of betting against neural networks, especially if you only have a casual understanding of them.
I mean, the limitations of LLMs are very well documented; they aren't going to advance a whole lot more without huge leaps in computing technology. There are limits on how much context they can store, for example, so you aren't going to have AIs writing long epic stories without human intervention. And they're fundamentally incapable of originality.
General AI is another thing altogether that we're still very far away from.
Nearly everything you wrote is incorrect.
As an example, rolling context windows paired with RAG would easily allow for building an implementation of LLMs capable of writing long stories.
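For illustration only, here's a toy sketch of that idea. The names are made up, and simple word-overlap scoring stands in for the embedding-based retrieval a real RAG setup would use; the point is just that recent passages stay in the prompt verbatim while older ones can be pulled back in when they become relevant again:

```typescript
// Sketch of a rolling context window paired with naive keyword retrieval
// (a stand-in for RAG). All names and the scoring are illustrative.
function buildPrompt(
  newText: string,
  recent: string[],   // last few passages, kept in the prompt verbatim
  archive: string[],  // older passages, searchable by retrieval
  maxWindow = 3,
  topK = 2
): string[] {
  // Score archived passages by word overlap with the newest text
  const words = new Set(newText.toLowerCase().split(/\s+/));
  const overlap = (p: string) =>
    p.toLowerCase().split(/\s+/).filter(w => words.has(w)).length;
  const retrieved = [...archive]
    .sort((a, b) => overlap(b) - overlap(a))
    .slice(0, topK);

  // Slide the window: newest passage in, oldest passage out to the archive
  recent.push(newText);
  while (recent.length > maxWindow) archive.push(recent.shift()!);

  return [...retrieved, ...recent];
}
```

So a plot point from "chapter one" can drop out of the window yet still resurface in the prompt when later text mentions it.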
And I'm not sure where you got the idea that they were fundamentally incapable of originality. This part in particular tells me you really don't know how the tech is working.
A rolling context window isn't a real solution and will not produce works that even come close to matching the quality of human writers. That's like having a writer who can only remember the last 100 pages they wrote.
The tech is trained on human created data. Are you suggesting LLMs are capable of creativity and imagination? Lmao - and you try to act like I'm the one who's full of shit.
That's why you pair it with RAG.
They are trained by iterating through network configurations until there are diminishing returns on how accurately they can complete that human-created data.
But they don't just memorize the data. They develop the capabilities to extend it.
So yes, they absolutely are capable of generating original content that's not in the training set. As has been demonstrated over and over. From explaining jokes not found in the training data, solving riddles not found in it, or combining different concepts to result in a new synthesis not found in the original data.
What do you think it's doing? Copy/pasting or something?
I think it will eventually become obsolete, because we keep changing what 'AI' means. But current AI largely just regurgitates patterns; it doesn't yet have a way of 'listening' to a song and actually judging whether it's good or bad.
So, it may expertly regurgitate the pattern that makes up a good song, but humans spend a lot of time listening to perfect every little aspect before something becomes an excellent song, and I feel like that will be lost on the pattern-regurgitating machine if it's forced to deviate from what a human composed.
I have seen a couple of successful artists in different genres admit to using AI to help them write some of their most popular songs, and describe its use in the songwriting process. You hit the nail on the head with AI not being able to tell if something is good or bad. It takes a human ear for that.
AI is good at coming up with random melodies, chord progressions, and motifs, but it is not nearly as good at composing and producing as humans are, yet. AI is just going to be another instrument for musicians to use, in its current form.
Yeah, I do imagine it won't be just AIs either. And then it will obviously be possible to take it to an excellent song, given enough human hours invested.
I do wonder how useful it will actually be for that, though. Oftentimes, it really fucks you up to try to go from good to excellent, and it can be freeing to start fresh instead. In particular, 'excellent' does require creative ideas, which are easier for humans to generate with a fresh start.
But AI may allow us to start over fresh more readily, if it can just give us a full song when needed. Maybe it will even be possible to give it some of those creative snippets and ask it to flesh it all out. We'll have to see...
I do software engineering, and my company jumped on the AI bandwagon and got us GitHub Copilot. After using it for a while, I think the overall experience is actually net negative. Yes, sometimes it gets things right and provides a correct solution, but often I can write much more concise code myself. Many times it provides code that looks correct, but on closer inspection is actually wrong. So now I need to be on guard about what code it inserts, which kills all the time it supposedly saved me. It makes things harder because the code does look like it might work.
It is like pair programming with a complete moron that is very good at picking up on patterns and trying to use them in the following code. So if you do a lot of copy and paste, I think it will help.
I think this technology can make bad programmers suck less at programming. I think the LLM's problem is that it was trained on existing works, and the way it works, its goal is to convince a human that the result was created by another human; it isn't capable of any actual reasoning.
Wow, my experience has been pretty much the exact opposite of this. Copilot is amazing, and I'd rather never go without it again.
Edit: for the life of me I'll never understand people. This comment got a bunch of downvotes and yet some douchebag who blindly accuses me of being bad at my job gets upvoted. Fuck people.
What language do you program in, and what kind of code do you develop? Before Copilot, were you frequently searching for answers on Stack Overflow?
Typescript, JavaScript, php, bash, scss/css... And isn't every dev on SO or at least a search engine with some frequency?
I don't actually think the reason I like it is dependent on the language at all. The reason I like it is that it will often basically notice what I'm doing and save me from typing a repetitive 3-5 line block. Things like that, and if I can't remember a specific syntax, I've found that I can write a comment saying what the following code will do and boom, suddenly Copilot writes a version of that code close to what I would've written.
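As a hypothetical illustration of that comment-driven flow (not actual Copilot output, and any real suggestion still needs review), typing the first comment might get you a completion roughly like:

```typescript
// Comment the developer types first:
// group an array of items by a key function
function groupBy<T, K extends string>(items: T[], key: (item: T) => K): Record<K, T[]> {
  // Copilot-style completion: accumulate each item under its computed key
  return items.reduce((acc, item) => {
    const k = key(item);
    (acc[k] ||= []).push(item);
    return acc;
  }, {} as Record<K, T[]>);
}
```

Nothing in there is hard to write by hand; the win is that the boilerplate gets typed for you.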
I mean you're right that it can write stuff that doesn't work, I just find that I can usually filter that out pretty quickly. The times I can't, I'm a bit stuck anyway and it's worth a shot to try their mysterious solution. But since I always treat its solutions with skepticism I haven't been bitten yet.
For me, Copilot just takes the monotony out of the job. Instead of spending as much time writing boring stuff, I get to focus on the more interesting parts.
Maybe you aren't that good at writing code
Maybe you aren't that good at being a human, this comment being good evidence of that
Ignore them. At some point you gotta realize most people are losers trying to bring others down with them.
Do what works for you :)
I appreciate this comment. You inspire me to not only ignore more assholes, but maybe I'll also be one myself less often :)
I'll blindly accuse you of being bad at your job too, bud.
Thanks for the block request. Appreciate reducing the douchebags in my life.