this post was submitted on 09 Aug 2024

MoreWrite


This started as a summary of a random essay Robert Epstein (fuck, that's an unfortunate surname) cooked up back in 2016, and evolved into a diatribe about how the AI bubble affects how we think of human cognition.

This is probably a bit outside awful's wheelhouse, but hey, this is MoreWrite.

The TL;DR

The essay centres on two major metaphors for human intelligence:

  • The information processing (IP) metaphor, which views the brain as some form of computer (implicitly a classical one, though you could probably cram a quantum computer into that metaphor too)
  • The anti-representational metaphor, which views the brain as a living organism that constantly changes in response to experiences and stimuli, and which contains jack shit in the way of computer-like components (memory, processors, algorithms, et cetera)

Epstein's general view is, if the title didn't tip you off, firmly on the anti-rep metaphor's side, dismissing IP as "not even slightly valid" and openly arguing for dumping it straight into the dustbin of history.

His main piece of evidence for this is a basic experiment, where he has a student draw two images of dollar bills - one from memory, and one with a real dollar bill as reference - and compare the two.

Unsurprisingly, the image made with a reference blows the image from memory out of the water every time, which Epstein uses to argue against any notion of the image of a dollar bill (or anything else, for that matter) being stored in one's brain like data on a hard drive.

Instead, he argues that the student making the image had re-experienced seeing the bill when drawing it from memory - an ability they only had because their brain had been changed by the sight of many a dollar bill up to that point.

Another piece of evidence he brings up is a 1995 paper from Science by Michael McBeath regarding baseballers catching fly balls. Where the IP metaphor reportedly suggests the player roughly calculates the ball's flight path from estimates of several variables ("the force of the impact, the angle of the trajectory, that kind of thing"), the anti-rep metaphor (given by McBeath) simply suggests the player catches the ball by moving in a manner which keeps the ball, home plate and the surroundings in a constant visual relationship with each other.
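To make the contrast concrete, here's a quick toy sketch (mine, not Epstein's or McBeath's) of the two kinds of fielder in a flat 2-D world: a "predict" fielder who solves the projectile equations for the landing spot and runs there, and a "track" fielder who never computes a trajectory at all and just moves to keep the ball's elevation angle rising steadily (optical acceleration cancellation, a simplified cousin of McBeath's account). All the numbers are made up for illustration:

```python
import math

G, DT = 9.81, 0.01  # gravity (m/s^2), simulation time step (s)

def miss_distance(strategy, speed=30.0, launch_deg=60.0, start_x=60.0, run=8.0):
    """Return how far the fielder is from the ball when it reaches the ground."""
    ang = math.radians(launch_deg)
    bx, by = 0.0, 1.0                                    # ball position (m)
    vx, vy = speed * math.cos(ang), speed * math.sin(ang)
    fx = start_x                                         # fielder position (m)

    # The IP-style answer: closed-form landing point of the projectile.
    t_land = (vy + math.sqrt(vy**2 + 2 * G * by)) / G
    landing_x = vx * t_land

    prev_tan = prev_rate = None
    while by > 0.0:
        bx, by, vy = bx + vx * DT, by + vy * DT, vy - G * DT

        if strategy == "predict":
            # Run straight for the precomputed landing spot, capped at top speed.
            fx += max(-run * DT, min(run * DT, landing_x - fx))
        else:
            # "track": no physics, just watch the ball's elevation angle.
            gap = fx - bx
            if abs(gap) > 1e-6:
                tan = by / gap                           # tangent of elevation angle
                if prev_tan is not None:
                    rate = (tan - prev_tan) / DT
                    if prev_rate is not None:
                        # Angle rising ever faster -> ball will sail overhead: back up.
                        # Angle rising ever slower -> ball will drop short: run in.
                        fx += run * DT if rate > prev_rate else -run * DT
                    prev_rate = rate
                prev_tan = tan

    return abs(fx - bx)

for s in ("predict", "track"):
    print(f"{s:>7}: ends up {miss_distance(s):.2f} m from the ball")
```

The point of the contrast: the first fielder needs estimates of quantities it can't directly see (launch speed, angle, gravity), while the second only needs what's in front of its eyes at each moment - which is roughly the distinction Epstein is leaning on.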

The final piece I could glean from this is a report in Scientific American about the Human Brain Project (HBP), a $1.3 billion project launched by the EU in 2013 with the goal of simulating the entire human brain on a supercomputer. Said project went on to become a "brain wreck" less than two years in (and eight years before its 2023 deadline) - a "brain wreck" Epstein implicitly blames on the whole thing being guided by the IP metaphor.

Said "brain wreck" is a good place to cap this section off - the essay is something I recommend reading for yourself (even if I do feel its arguments aren't particularly strong), and its not really the main focus of this little ramblefest. Anyways, onto my personal thoughts.

Some Personal Thoughts

Personally, I suspect the AI bubble's made the public a lot less receptive to the IP metaphor these days, for a few reasons:

  1. Artificial Idiocy

The entire bubble was sold as a path to computers with human-like, if not godlike, intelligence - artificial thinkers smarter than the best human geniuses, art generators better than the best human virtuosos, et cetera. Hell, the AIs at the centre of this bubble are running on neural networks, whose functioning is loosely based on our current understanding of how the brain works.

What we instead got was Google telling us to eat rocks and put glue on pizza, chatbots hallucinating everything under the fucking sun, and art generators drowning the entire fucking internet in pure unfiltered slop, identifiable by the uniquely AI-like errors it makes. And all whilst burning through truly unholy amounts of power and receiving frankly embarrassing levels of hype in the process.

(Quick sidenote: Even a local model running on some rando's GPU is a power-hog compared to what it's trying to imitate - a single consumer GPU can pull a few hundred watts under load, whilst digging around online indicates your brain uses only about 20 watts to do everything it does.)

With the parade of artificial stupidity the bubble's given us, I wouldn't fault anyone for coming to believe the brain isn't like a computer at all.

  2. Inhuman Learning

Additionally, AI bros have repeatedly and incessantly claimed that AIs are creative and that they learn like humans, usually in response to complaints about the Biblical amounts of art stolen for AI datasets.

Said claims are, of course, flat-out bullshit - last I checked, human artists only need a few references to actually produce something good and original, whilst your average LLM will produce nothing but slop no matter how many terabytes of data you throw into its training set.

This all arguably falls under the "Artificial Idiocy" heading, but it felt necessary to point out - these things lack the creativity and learning capabilities of humans, and I wouldn't blame anyone for taking that to mean that brains are fundamentally unlike computers.

  3. Eau de Tech Asshole

Given how much public resentment the AI bubble has built towards the tech industry (which I covered in my previous post), my gut instinct's telling me that the IP metaphor is also starting to be viewed in a harsher, more "tech asshole-ish" light - not merely as a reductive or incorrect view of human cognition, but as a sign you put tech over human lives, or don't see other people as human.

Of course, AI providing a general parade of the absolute worst scumbaggery we know (with Mira Murati being an anti-artist scumbag and Sam Altman being a general creep as the biggest examples) is probably helping that impression along, alongside all the active attempts by AI bros to mimic real artists (exhibit A, exhibit B).

[–] [email protected] 1 points 3 months ago (5 children)

but I do believe brains are computers, but only in the broadest sense of what computation could be

Agree. A human brain is capable of executing the steps of a TM with pen/paper, and in that sense the brain is absolutely capable of acting as a computer. But as for all the other processes a brain does (breathing/maintaining heart rate/etc.), describing those as 'a computer' seems such an abuse of notation as to render the original definition meaningless. We might as well call the moon a computer since it is 'calculating' the effect of a gravitational field on a moon sized object. What I think many people are really claiming when they say a brain is a computer is that, if only we could identify the correct finite-state deterministic program, there would be no difference between the brain and its implementation in silicon. Personally, I find claims of substrate independence to be less plausible, but of course many of our dear friends are willing to bite that bullet.
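
To make "executing the steps of a TM" concrete, here's a throwaway toy rule table anyone could run by hand with pen and paper (my own example, not anything from the essay) - it just flips every bit on the tape and halts at the first blank:

```python
# A one-state Turing machine: flip every bit, halt at the first blank.
# Rule table: (state, symbol) -> (symbol to write, direction to move, next state)

def run_tm(tape_str, rules, state="scan", blank="_"):
    tape = dict(enumerate(tape_str))      # sparse tape: position -> symbol
    head = 0
    while state != "halt":
        symbol = tape.get(head, blank)
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape.get(i, blank) for i in range(min(tape), max(tape) + 1))

rules = {
    ("scan", "0"): ("1", "R", "scan"),    # flip 0 -> 1, keep moving right
    ("scan", "1"): ("0", "R", "scan"),    # flip 1 -> 0, keep moving right
    ("scan", "_"): ("_", "R", "halt"),    # blank cell: nothing left to do
}

print(run_tm("10110", rules))             # prints 01001_
```

Nothing in that table cares whether the thing following it is silicon or a person with a pencil - which is exactly the (narrow) sense in which a brain can act as a computer.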

[–] [email protected] 0 points 3 months ago (4 children)

We might as well call the moon a computer since it is ‘calculating’ the effect of a gravitational field on a moon sized object.

Yes. In fact, that's sort of my point. There is no privileged sense of computation. They can be different even if they do have invariants.

But as for all the other processes a brain does (breathing/maintaining heart rate/etc.), describing those as ‘a computer’ seems such an abuse of notation as to render the original definition meaningless.

I tend to agree that oftentimes the terminology of 'attendance' is better than the terminology of computation, but I don't think there isn't -any- meaning in keeping the computer metaphor, because I do think it has practical implications.

At the risk of going down another rabbit hole, I'd really say that the Free Energy Principle does a pretty good job of showing why keeping a wide, but nonetheless workable, definition of computation on the table can be useful. As in, a principled tool that can shed some light on scale-free dynamics (and not an absolute, definitive answer to all questions).

https://www.youtube.com/watch?v=KQk0AHu_nng

Maybe another reason I'm ok with the computer metaphor (in which we retain the lack of privilege, and in which the attendance metaphor is kept) is that it does sort of provide us some interesting technical intuitions, too. Like how the maximum power principle affects the design and building of technology of all kinds (whether it's chemistry, electronics, energy, gardening), how ambiguity (that is, the unknowable embedded environment) is an important functional element of deploying any sort of technology (or policy, or behavior), and... yeah.

One day, the fact that simple and even slow things (like water, or the moon, or chemicals, or rocks, or animals) are capable computationally, but attend to different things, is in fact going to be meaningful and important.

[–] [email protected] 0 points 3 months ago* (last edited 3 months ago) (3 children)

i dunno, this seems to me to lead in a straight line to Chalmers claiming rocks could be conscious and you can't prove they're not.

sure you can expand "computation" to things outside Turing, but then you're setting yourself up for equivocation

[–] [email protected] 1 points 3 months ago

I definitely don't claim anything about consciousness. But I also don't think things have to be conscious to be interesting, or for me to care about them.

Hell, my mom is dead, and definitely not conscious. But I still think about her and care about her. And my memories of her still impact my life and behavior in strange ways.

I get where you're coming from, and I'm not trying to make normalizing reductive claims that things -are the same-. But things that are different in some respects can also share things in others. I think it is a useful perspective to have.

Computation and computer metaphors are helpful, at least to my thinking. But even I don't argue that it's a privileged position. Lots of words and metaphors can work.
