this post was submitted on 05 Aug 2023
10 points (100.0% liked)

TechTakes


The problem is that today's state of the art is far too good for low hanging fruit. There isn't a testable definition of GI that GPT-4 fails that a significant chunk of humans wouldn't also fail, so you're often left with weird ad hominems ("Forget what it can do and the results you see. It's "just" predicting the next token so it means nothing") or imaginary distinctions built on vague and ill-defined assertions ("It sure looks like reasoning but I swear it isn't real reasoning. What does "real reasoning" even mean? Well idk but just trust me bro")

a bunch of posts on the orange site (including one in the linked thread with a bunch of mask-off slurs in it) are just this: techfash failing to make a convincing argument that GPT is smart, and whenever it’s proven it isn’t, it’s actually that “a significant chunk of people” would make the same mistake, not the LLM they’ve bullshitted themselves into thinking is intelligent. it’s kind of amazing how often this pattern repeats in the linked thread: GPT’s perceived successes are puffed up to the highest extent possible, and its many(, many, many) failings are automatically dismissed as something that only makes the model more human (even when the resulting output is unmistakably LLM bullshit)

This is quite unfair. The AI doesn't have I/O other than what we force-feed it through an API. Who knows what will happen if we plug it into a body with senses, limbs, and reproductive capabilities? No doubt somebody is already building an MMORPG with human and AI characters to explore exactly this while we wait for cyborg part manufacturing to catch up.

drink! “what if we gave the chatbot a robot body” is my favorite promptfan cliche by far, and this one has it all! virtual reality, cyborgs, robot fucking, all my dumbass transhumanist favorites

There's actually a cargo cult around downplaying AI.

The high-level characteristics of this AI are something we currently cannot understand.

The lack of objectivity, creativity, imagination, and outright denial you see on HN around this topic is staggering.

no, you’re all the cargo cult! I asked my cargo and it told me so

[–] [email protected] 9 points 1 year ago (2 children)

There isn’t a testable definition of GI that GPT-4 fails that a significant chunk of humans wouldn’t also fail

Man it's so sad how this is so so so so close to the point-- they could have correctly concluded that this means GI as a concept is meaningless. But no, they have to maintain their sci-fi web of belief so they choose to believe LLMs Really Do Have A Cognitive Quality.

[–] [email protected] 9 points 1 year ago

the concept of intelligence testing is so central to rats (and therefore to a big portion of HN’s poster base via cultural osmosis) that when folks like this lose their faith in GI, they tend to abandon the site as a whole

[–] [email protected] 9 points 1 year ago (1 children)

The next comment is so peak tech hubris to me.

It's "just" predicting the next token so it means nothing

This form of argument should raise red flags for everyone. It is an argument against the possibility of emergence, that a sufficient number of simple systems cannot give rise to more complex ones. Human beings are “just” a collection of cells. Calculators are “just” a stupid electric circuit.

The fact is, putting basic components together is the only way we know how to make things. We can use those smaller components to make a more complex thing to accomplish a more complex task. And emergence is everywhere in nature as well.

This is the part of the AGI Discourse I hate, because anyone can approach this with aesthetics and analogies from any field at all to make any argument about AI, and it's just mind-grating.

This form of argument should raise red flags for everyone. It is an argument against the possibility of emergence, that a sufficient number of simple systems cannot give rise to more complex ones. Human beings are “just” a collection of cells. Calculators are “just” a stupid electric circuit.

I've never seen a non-sequitur more non. The argument is that predicting the next term is categorically not what language is. That is, it's not that there is nothing emerging, but that what is emerging is just straight up not language.

The fact is, putting basic components together is the only way we know how to make things. We can use those smaller components to make a more complex thing to accomplish a more complex task. And emergence is everywhere in nature as well.

"Look! This person thinks predicting the next token is not consciousness. I bet they must also not believe that humans are made of cells, or that many small things can make complex thing. I bet they also believe the soul exists and lives in the pineal gland just like old NON-SCIENCE PEOPLE."

[–] [email protected] 10 points 1 year ago (1 children)

This form of argument should raise red flags for everyone. It is an argument against the possibility of emergence, that a sufficient number of simple systems cannot give rise to more complex ones. Human beings are “just” a collection of cells. Calculators are “just” a stupid electric circuit.

Over and above the non-sequitur already observed, this posting is one of the most condensed examples of techbro Ignoring All Prior Knowledge In Related Fields Of Study that I've seen in a while

must be doing heavy lines of pure uncut Innovation for this vivid a performance

[–] [email protected] 8 points 1 year ago

@froztbyte @jasperty

Yeah, ignorance of history (things that happened more than 20 years ago) is strong in these people.

The basis of IQ tests is Spearman's g, a general intelligence factor. Inventing a new branch of statistical analysis, Spearman exhaustively showed that scientists can ignore errors if they believe hard enough (Mismeasure of Man, SJ Gould).

As T. Gebru points out, how can you design or verify a system you can't spec? There's no definition of g, and no evidence it exists.