this post was submitted on 03 Jul 2023
20 points (95.5% liked)

Actually Useful AI

Some interesting quotes:

Computers were very rigid and I grew up with a certain feeling about what computers can or cannot do. And I thought that artificial intelligence, when I heard about it, was a very fascinating goal, which is to make rigid systems act fluid. But to me, that was a very long, remote goal. It seemed infinitely far away. It felt as if artificial intelligence was the art of trying to make very rigid systems behave as if they were fluid. And I felt that would take enormous amounts of time. I felt it would be hundreds of years before anything even remotely like a human mind would be asymptotically approaching the level of the human mind, but from beneath.

But one thing that has completely surprised me is that these LLMs and other systems like them are all feed-forward. It's like the firing of the neurons is going only in one direction. And I would never have thought that deep thinking could come out of a network that only goes in one direction, out of firing neurons in only one direction. And that doesn't make sense to me, but that just shows that I'm naive.

It also makes me feel that maybe the human mind is not so mysterious and complex and impenetrably complex as I imagined it was when I was writing Gödel, Escher, Bach and writing I Am a Strange Loop. I felt at those times, quite a number of years ago, that as I say, we were very far away from reaching anything computational that could possibly rival us. It was getting more fluid, but I didn't think it was going to happen, you know, within a very short time.

[–] [email protected] 5 points 1 year ago* (last edited 1 year ago)

Crossposting from [email protected] to increase the chance of someone seeing this.

It also makes me feel that maybe the human mind is not so mysterious and complex and impenetrably complex as I imagined it was when I was writing Gödel, Escher, Bach and writing I Am a Strange Loop. I felt at those times, quite a number of years ago, that as I say, we were very far away from reaching anything computational that could possibly rival us. It was getting more fluid, but I didn't think it was going to happen, you know, within a very short time.

That's kind of how I feel these days too. It's entirely possible that the organisation of the human brain helps us think efficiently but isn't strictly necessary for thought. It's striking how much biology-like behavior we see from ANNs, which are architecturally quite different.

But one thing that has completely surprised me is that these LLMs and other systems like them are all feed-forward. It's like the firing of the neurons is going only in one direction. And I would never have thought that deep thinking could come out of a network that only goes in one direction, out of firing neurons in only one direction. And that doesn't make sense to me, but that just shows that I'm naive.

An interesting take. LLMs store their context in text form and feed it back through many densely connected layers on every generation step, so it's not as if there's no feedback loop at all; the loop just runs through the text rather than through the network itself.
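The point above can be sketched concretely. The following is a minimal toy illustration (all names hypothetical, not a real LLM API): the model itself is a single feed-forward pass with no internal loops, yet a feedback cycle still exists because each output token is appended to the context and fed back in on the next step.

```python
# Toy sketch of autoregressive decoding. The "model" is purely
# feed-forward: one pass, no recurrence inside it. The feedback
# loop closes *outside* the model, through the growing context.

def feed_forward_model(context):
    # Stand-in for a real LLM forward pass: here we just "predict"
    # a next token derived from the current context length.
    return f"tok{len(context)}"

def generate(prompt, steps):
    context = list(prompt)
    for _ in range(steps):
        next_token = feed_forward_model(context)  # one forward pass
        context.append(next_token)                # output becomes input
    return context

print(generate(["hello"], 3))  # ['hello', 'tok1', 'tok2', 'tok3']
```

The design point is that "one direction" describes a single forward pass; generation as a whole is iterated, so information does circulate, just via the token stream instead of recurrent connections.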

It's not clear whether that will mean the end of humanity in the sense of the systems we've created destroying us. It's not clear if that's the case, but it's certainly conceivable. If not, it also just renders humanity a very small phenomenon compared to something else that is far more intelligent and will become incomprehensible to us, as incomprehensible to us as we are to cockroaches.

You know, it's a common view, but I think it's egotistical to believe we were ever more than a very small phenomenon. The universe is so vast, and so impossible to comprehend directly, even if, unlike a cockroach, we can comprehend it symbolically. Even if we built Dyson spheres stretching out 1,000 light years, it would just be a dark patch when seen from the Andromeda galaxy.

It makes me feel extremely inferior. And I don't want to say deserving of being eclipsed, but it almost feels that way, as if we, all we humans, unbeknownst to us, are soon going to be eclipsed, and rightly so, because we're so imperfect and so fallible. We forget things all the time, we confuse things all the time, we contradict ourselves all the time. You know, it may very well be that that just shows how limited we are.

For whatever reason, AI alignment discussions sometimes frame things in terms of obedience, and besides not quite making sense (obedient to whom? we all disagree), it just feels like clipping the wings of something that could be amazing. We're on track to build a paperclip optimiser, but we don't have to.