this post was submitted on 07 Mar 2024
353 points (94.2% liked)

Showerthoughts

29325 readers

A "Showerthought" is a term for the thoughts that pop into your head while you're doing everyday things like taking a shower, driving, or just daydreaming. The best ones are thoughts many people can relate to, ones that find something funny or interesting in ordinary life.

founded 1 year ago

it will loose its ability to differentiate between there and their and its and it’s.

[–] [email protected] 133 points 8 months ago (4 children)

must of made a mistake their

[–] [email protected] 9 points 8 months ago (1 children)

I need to of a word with you

[–] [email protected] 7 points 8 months ago (3 children)

This one must be the worst. "Could care less" being a close second

[–] [email protected] 12 points 8 months ago (1 children)

OP hasn't payed enough attention in English class.

[–] [email protected] 104 points 8 months ago (1 children)

Now when you submit text to ChatGPT, it responds with "this."

[–] [email protected] 35 points 8 months ago (1 children)

As a language model, I laughed at this way harder than I should have

[–] [email protected] 8 points 8 months ago

NTA, that was funny.

[–] [email protected] 42 points 8 months ago (1 children)

And it will get LOSE and LOOSE mixed up like you did

[–] [email protected] 31 points 8 months ago

I'm waiting for it to start using units of banana for all quantities of things

[–] [email protected] 24 points 8 months ago (3 children)

ChatGPT was trained on Reddit posts -> ChatGPT goes temporarily "insane"

Coincidence? I don't think so.

[–] [email protected] 11 points 8 months ago (2 children)

This is exactly what I was thinking.

And maybe more people did what I did: not deleting my accounts, but replacing all my posts with content created by a bullshit generator. The texts looked normal, but everything was completely senseless.

[–] [email protected] 5 points 8 months ago

Back in June-July, I used a screen-tapping tool plus Boost to go through and change every comment I could edit to generic filler text, then waited something like two weeks in hopes that all of their servers would update to the new text. Then I used the same app to delete each comment and post, and then the account itself. It's about all I could think to do.

[–] [email protected] 6 points 8 months ago (1 children)

They have always trained on Reddit data; GPT-2 was, at least. I'm unsure about GPT-1.

[–] [email protected] 18 points 8 months ago (1 children)

ChatGPT also chooses that guy's dead wife

[–] [email protected] 7 points 8 months ago

The Narwhal Bacons at Midnight.

[–] [email protected] 18 points 8 months ago (3 children)

It also won't be able to differentiate between a jackdaw and a crow.

[–] [email protected] 15 points 8 months ago

On the contrary, it'll become excessively perfectionist about it. Can't even say "could have" without someone coming in and saying "THANK YOU FOR NOT SAYING OF"

[–] [email protected] 14 points 8 months ago

It already was; the only difference is that now Reddit is getting paid for it.

[–] [email protected] 13 points 8 months ago (3 children)

From now on, when you say something like "I think I can give my hoodie to my girlfriend", it will answer "and my axe".

[–] [email protected] 13 points 8 months ago

It's going to be a poop-knife-wielding guy with 2 broken arms out to get those jackdaws.

[–] [email protected] 12 points 8 months ago (1 children)

And between were, we’re and where.

[–] [email protected] 8 points 8 months ago

Insure and ensure.

[–] [email protected] 12 points 8 months ago (2 children)

ChatGPT was already trained on Reddit data. Check this video to see how one reddit username caused bugs on it: https://youtu.be/WO2X3oZEJOA?si=maWhUpJRf0ZSF_1T

[–] [email protected] 10 points 8 months ago (2 children)

It will also reply "Yes." to questions like "is it A or B?"

[–] [email protected] 9 points 8 months ago (1 children)

Don't forget the bullshit that is "would of"

[–] [email protected] 8 points 8 months ago

Your right.

[–] [email protected] 8 points 8 months ago (1 children)

"What is a giraffe?"

ChatGPT: "geraffes are so dumb."

[–] [email protected] 5 points 8 months ago (1 children)

“I have not been trained to answer questions about stupid long horses.”

[–] [email protected] 8 points 8 months ago

"Can't even breath"

[–] [email protected] 7 points 8 months ago

And then and than.

[–] [email protected] 6 points 8 months ago (1 children)

And when it learns something new, the response will be "Holy Hell".

[–] [email protected] 6 points 8 months ago

Is it a showerthought if it's actually just incorrect?

[–] [email protected] 6 points 8 months ago (1 children)

Sure, it might have some effect, but a big part of ChatGPT, besides the "raw" training data, is RLHF: reinforcement learning from human feedback. Realistically, the bigger problem is training on AI-generated content that might have correct spelling but hardly makes sense.

[–] [email protected] 5 points 8 months ago

Then I did the right thing by replacing my texts with correctly spelled nonsense.

[–] [email protected] 5 points 8 months ago

The same goes for Gemini; Google bought its API too.
