this post was submitted on 10 Mar 2024
692 points (100.0% liked)

196


Be sure to follow the rule before you head out.

Rule: You must post before you leave.


top 33 comments
[–] [email protected] 97 points 8 months ago (2 children)

Shit, another existential crisis. At least I'll forget about it soon

[–] [email protected] 22 points 8 months ago

You're a computer plugged into an organic matrix.

[–] [email protected] 13 points 8 months ago

at this rate the next meme i see is going to tell me to wake up from my coma

i'm trying

[–] [email protected] 60 points 8 months ago (1 children)

As a large language model I cannot answer that question.

[–] [email protected] 35 points 8 months ago (1 children)

Do LLMs dream of weighted sheep?

[–] ICastFist 4 points 8 months ago

I'm sorry, I cannot answer that as I was not trained enough to differentiate between all the possible weights used to weigh a sheep during my dreams.

[–] [email protected] 34 points 8 months ago (1 children)

I've always said that Turing's Imitation Game is a flawed way to determine if an AI is actually intelligent. The flaw is the assumption that humans are intelligent.

Humans are capable of intelligence, but most of the time we're just responding to stimulus in predictable ways.

[–] [email protected] 7 points 8 months ago (1 children)

There's a running joke in the field that AI is the set of things that computers cannot yet do well.

We used to think that you had to be intelligent to be a chess grandmaster. Now we know that you only have to be freakishly good at chess.

Now we're having a similar realization about conversation.

[–] [email protected] 2 points 8 months ago

Didn't really need an AI for chess to know that. A look at how crazy some grandmasters are will show you that. Bobby Fischer is the most obvious one, but there are quite a few where you wish they would stop talking about things that aren't chess.

[–] [email protected] 31 points 8 months ago

Look at this another way. We succeeded too well and instead of making a superior AI we made a synthetic human with all our flaws.

Realistically LLMs are just complex models based on our own past creations. So why wouldn't they be a mirror of their creator, good and bad?

[–] [email protected] 31 points 8 months ago

Do LLMs have ADHD?

[–] [email protected] 12 points 8 months ago (1 children)
[–] [email protected] 1 points 8 months ago

They can certainly be prompted towards it

[–] [email protected] 12 points 8 months ago (1 children)

hot take, mods should look into cracking down on baseless bot accusations. it's dehumanizing, and it's more often intended as an insult, akin to the r-slur, than as an actual concern.

(except in cases where there is actual evidence of bot activity, obviously. but there never is.)

[–] [email protected] 1 points 8 months ago (1 children)

I've been guilty of this, but I do get how it's a bad thing. It's like calling people NPCs.

[–] [email protected] 1 points 8 months ago
[–] [email protected] 9 points 8 months ago (1 children)

what if the whole universe is just the algorithm and data used to feed an LLM? we're all just chat gpt

(i don't know how LLMs work)

[–] [email protected] 7 points 8 months ago (1 children)

We basically are. We’re biological pattern recognising machines, where inputs influence everything.

The only difference is somehow our electricity has decided it’s got free will.

[–] [email protected] 5 points 8 months ago* (last edited 8 months ago) (1 children)

well that decides it, gods are real and we're their chat gpt, all our creations are just responses to their prompts lmao

it's wild though, i've heard that we don't really have free will, but i guess i'm personally mixed on it as i haven't really looked into it or thought about it much. it intuitively makes sense to me, though, that we wouldn't really have free will. i mean, we're just big walking colonies of micro-organisms, right? what is me, what is them? -- idk where i'm going with this

[–] [email protected] 7 points 8 months ago (2 children)

Welcome to philosophy... I think.

[–] [email protected] 5 points 8 months ago* (last edited 8 months ago) (1 children)
[–] [email protected] 1 points 8 months ago

I think I think, therefore, I think I think I am, I think?

This is as close to a statement of certainty as you will get from philosophy.

[–] [email protected] 2 points 8 months ago (1 children)
[–] [email protected] 1 points 7 months ago (2 children)

... is the answer Gilbert Gottfried?

[–] [email protected] 1 points 7 months ago

... uhh... I don't know who that is. Wanna infodump?

[–] [email protected] 1 points 7 months ago

A (somewhat wrong) reference to another exurb1a video on philosophy, found here with relevant timestamp. While Gilbert Gottfried is brought up in that section, the joke I was making should have ended with Emma Stone instead.

As for who he is, he's a comedian with a very specific voice, and he voiced Iago in Disney's Aladdin, if you've seen it. He got a bit canceled for making jokes that, iirc, at least bordered on racist towards the Japanese shortly after a disaster there, and he lost his role as the voice of the Aflac duck.

Oh, I forgot he passed... hmmm.

[–] [email protected] 7 points 8 months ago (1 children)

What's LLM? I feel addressed and need another rabbit hole I can delve into.

[–] [email protected] 12 points 8 months ago (1 children)

Large Language Model, ChatGPT and friends

[–] [email protected] 5 points 8 months ago

Oh, of course. Hadn't thought in that direction.

[–] [email protected] 3 points 8 months ago

Well, I never wanted to be any kind of model, but here I am.

[–] fnmain 2 points 8 months ago

Can't LLMs take an insane number of tokens as context now? (I think we're up to 1M)

Anywho, he just like me fr