intensely_human

joined 1 year ago
[–] [email protected] 3 points 1 hour ago

Well, as long as we ensure training data needs to be paid for and can't just be scraped from the web, we will ensure that only large corporations with deep pockets can train models.

That is the reason there is a big "grassroots" push to stop AI from training on all our web content: it's a play to ensure no small players can make AI, and that AI is dominated by a few big players.

[–] [email protected] -2 points 2 hours ago (1 children)

It's a free market invention and, therefore, will be used for whatever a free market decides it should be used for.

[–] [email protected] 3 points 2 hours ago

The best way to ensure AI is used for good purposes is to make sure AI is in as many hands as possible. That was the original idea behind OpenAI (hence the name), which was supposed to be a nonprofit pushing open-source AI into the world to ensure a multipolar AI ecosystem.

That failed badly.

[–] [email protected] 3 points 2 hours ago

What ith thith? Marthmallow and peanut buttah?

[–] [email protected] 2 points 5 hours ago

Butt inspector

[–] [email protected] 1 points 5 hours ago

I like having a face for expressions

[–] [email protected] 1 points 5 hours ago

When you’re studying for a class you need to study for hours to hit those deadlines. In adult life you can do 5 minutes a week if you want.

[–] [email protected] 1 points 5 hours ago

It’s a pbj with banana in it

[–] [email protected] 1 points 5 hours ago

What if I told you that knowing I’m going to die in poverty isn’t helping me escape poverty?

[–] [email protected] 2 points 5 hours ago

The bun top is also typically less dense.

[–] [email protected] 1 points 5 hours ago

Why have regular chocolate when you can have hard chocolate with no flavor?

 

An O’Neill cylinder is that big rotating-cylinder space station format that uses the spin for artificial gravity.

At higher elevations (closer to the spin axis) the gravity will be lower. BMX bikes will be fun too. Make a big jump and you can sail across the center and land on the other side, or drift into the zero-gee region along the axis. The landing works out because you’re always on the inside of the curve, so the ground wraps back around toward you.
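The spin gravity falls off linearly with distance from the axis, which is why elevation matters so much here. A quick back-of-the-envelope sketch (the rim radius and the 1 g target are illustrative assumptions, not numbers from this comment):

```python
import math

R = 3200.0   # assumed rim radius in metres (roughly Island Three scale)
g = 9.81     # target gravity at the rim, m/s^2

# Spin rate needed for 1 g at the rim: g = omega^2 * R
omega = math.sqrt(g / R)        # rad/s
period = 2 * math.pi / omega    # seconds per full rotation

def gravity_at_elevation(height_above_floor):
    """Apparent gravity at a given height above the cylinder floor."""
    r = R - height_above_floor  # distance from the spin axis
    return omega ** 2 * r       # m/s^2

print(f"rotation period: {period:.0f} s")
for h in (0, 800, 1600, 3200):  # 3200 m up is the spin axis itself
    print(f"{h:>5} m up: {gravity_at_elevation(h):.2f} m/s^2")
```

Halfway up you'd weigh half as much, and at the axis itself gravity drops to zero, which is the zero-gee region the comment describes.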

 

I’ve noticed that the longer a conversation gets, the less able ChatGPT is to do precise reasoning or follow instructions.

It feels exactly like working with a student who’s getting tired and needs a rest.

Then I had the above shower thought. Pretty cool, right?

Every few months a new ChatGPT v4 is deployed. It’s got new training data, up through X date. They train up a new model on the new content in the world, including ChatGPT conversations from users who’ve opted into that (or didn’t opt out, can’t remember how it’s presented).

It’s like GPT is “sleeping” to consolidate “the day’s” knowledge into long-term memory. All the data in the current conversation is its short-term memory. After handling a certain amount of complexity in one conversation, the coherence of its responses breaks down, becoming more habitual and less responsive to nuance. It gets tired and can’t go much further.

 

I asked GPT-4 for a list of the most important threats to human civilization, their likelihood, and why they were considered threats.

GPT's output is also pasted into the comments.
