this post was submitted on 10 Mar 2025
21 points (100.0% liked)

TechTakes

1689 readers
143 users here now

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[–] [email protected] 4 points 3 hours ago (1 children)
[–] [email protected] 1 points 45 minutes ago

Such a treasure of a channel

[–] [email protected] 11 points 17 hours ago (3 children)

Hacker News is truly a study in masculinity. This brave poster is willing to stand up and ask whether Bluey harms men by making them feel emotions. Credit to the top replies for simply linking him to WP's article on catharsis.

[–] [email protected] 5 points 6 hours ago

I "ugly cried" (I prefer the term "beautiful cried") at the last episode of Sailor Moon and it was such an emotional high that I've been chasing it ever since.

[–] [email protected] 7 points 15 hours ago

But Star Trek says the smartest guys in the room don't have emotions

[–] [email protected] 11 points 17 hours ago

men will literally debate children’s tv instead of going to therapy

[–] [email protected] 7 points 17 hours ago (1 children)

ICYI here's me getting a bit ranty about generative ai products https://www.youtube.com/watch?v=x5MQb-uNf2U

[–] [email protected] 3 points 3 hours ago (1 children)

With that voice? Rant all you like!

[–] [email protected] 2 points 2 hours ago
[–] [email protected] 10 points 1 day ago

The Columbia Journalism Review does a study and finds the following:

  • Chatbots were generally bad at declining to answer questions they couldn’t answer accurately, offering incorrect or speculative answers instead.
  • Premium chatbots provided more confidently incorrect answers than their free counterparts.
  • Multiple chatbots seemed to bypass Robot Exclusion Protocol preferences.
  • Generative search tools fabricated links and cited syndicated and copied versions of articles.
  • Content licensing deals with news sources provided no guarantee of accurate citation in chatbot responses.
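The Robot Exclusion Protocol the study says chatbots bypassed is trivial for a compliant crawler to honor — Python even ships a parser for it in the standard library. A minimal sketch (the `ExampleBot` user-agent and paths are hypothetical, purely for illustration):

```python
from urllib import robotparser

# Parse a site's robots.txt rules (here supplied inline; a real crawler
# would fetch https://example.com/robots.txt first).
rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: ExampleBot",
    "Disallow: /articles/",
])

# A compliant crawler checks can_fetch() before requesting a page; the
# CJR finding is that several chatbot crawlers simply skip this step.
print(rp.can_fetch("ExampleBot", "https://example.com/articles/story"))  # False
print(rp.can_fetch("ExampleBot", "https://example.com/about"))           # True
```

The protocol is purely advisory — nothing technically stops a crawler from ignoring it, which is exactly what the study observed.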
[–] [email protected] 8 points 1 day ago (1 children)
[–] [email protected] 6 points 1 day ago (1 children)

this article will most likely be how I (hopefully very rarely) start off conversations about rationalism in real life should the need once again arise (and somehow it keeps arising, thanks 2025)

but also, hoo boy what a painful talk page

[–] [email protected] 6 points 1 day ago (1 children)

it's not actually any more painful than any wikipedia talk page, it's surprisingly okay for the genre really

remember: wikipedia rules exist to keep people like this from each others' throats, no other reason

[–] [email protected] 4 points 1 day ago (1 children)

that’s fair, and I can’t argue with the final output

[–] [email protected] 7 points 1 day ago (1 children)

wikipedia articles: hey this is pretty good!
wikipedia talk pages: what is wrong with you people

[–] [email protected] 8 points 21 hours ago (1 children)

wikipedia talk pages: what is wrong with you people

Sorry this remark is a WP:NAS, WP:SDHJS, WP:NNNNNNANNNANNAA and WP:ASDF violation.

[–] [email protected] 6 points 19 hours ago

no WP:NaNaNaNaNaNaNaBATMAN?

[–] [email protected] 9 points 2 days ago (2 children)

the btb zizians series has started

surprisingly it's only 4 episodes

[–] [email protected] 12 points 1 day ago (1 children)

On one hand: all of this stuff entering greater public awareness is vindicating, i.e. I knew about all this shit before so many others, I’m so cool

On the other hand: I want to stop being right about everything please, please just let things not become predictably worse

[–] [email protected] 9 points 1 day ago (1 children)

I maintain that our militia ought to be called the Cassandra Division

[–] [email protected] 5 points 1 day ago

Even just The Cassandras would work well (that way all the weird fucks who are shitty about gender would hate the name even more)

[–] [email protected] 8 points 2 days ago

David Gborie! One of my fave podcasters and podcast guests. Adding this to the playlist

[–] [email protected] 23 points 2 days ago (1 children)

was discussing a miserable AI related gig job I tried out with my therapist. doomerism came up, I was forced to explain rationalism to him. I would prefer that all topics I have ever talked to any of you about be irrelevant to my therapy sessions

[–] [email protected] 20 points 2 days ago (1 children)

Regrettably I think that the awareness of these things is inherently the kind of thing that makes you need therapy, so...

[–] [email protected] 13 points 1 day ago

Sweet mother of Roko it's an infohazard!

I never really realized that before.

[–] [email protected] 14 points 2 days ago* (last edited 2 days ago) (4 children)

Tech stonks continuing to crater 🫧 🫧 🫧

I'm sorry for your 401Ks, but I'd pay any price to watch these fuckers lose.

spoiler(mods let me know if this aint it)

[–] [email protected] 9 points 1 day ago

(mods let me know if this aint it)

the only things that ain’t it are my chances of retiring comfortably, but I always knew that’d be the case

[–] [email protected] 13 points 2 days ago (1 children)

it's gonna be a massive disaster across the wider economy, and - and this is key - absolutely everyone saw this coming a year ago if not two

[–] [email protected] 10 points 1 day ago

In b4 there's a 100k word essay on LW about how intentionally crashing the economy will dry up VC investment in "frontier AGI labs" and thus will give the 🐀s more time to solve "alignment" and save us all from big 🐍 mommy. Therefore, MAGA harming every human alive is in fact the most effective altruism of all! Thank you Musky, I just couldn't understand your 10,000 IQ play.

[–] [email protected] 6 points 1 day ago (1 children)

This kind of stuff, which seems to hit a lot harder than the anti-Trump stuff, makes me feel that a Vance presidency would implode quite quickly due to other MAGA toadies trying to backstab toadkid here.

[–] [email protected] 9 points 1 day ago (1 children)

I no longer remember what this man actually looks like

[–] [email protected] 4 points 21 hours ago

I still can never tell when Charlie Kirk's face has been photoshopped to be smaller and when not.

[–] [email protected] 8 points 2 days ago (2 children)

...why do I get the feeling the AI bubble just popped

[–] [email protected] 8 points 1 day ago

For me it feels like this is pre AI/cryptocurrency bubble pop. But with luck, the MAGA government's infusions into both will fail and actually quicken the downfall (Musk/Trump like it, so it must be iffy). Sadly it won't be like the downfall of Enron, since this is all very distributed, so I fear how much will be pulled under.

[–] [email protected] 9 points 2 days ago* (last edited 2 days ago)

Mr. President, this is simply too much winning, I cannot stand the winning anymore 😭

[–] [email protected] 11 points 2 days ago (1 children)
[–] [email protected] 7 points 2 days ago (1 children)

I've been beating this dead horse for a while (since July of last year AFAIK), but it's clear to me that the AI bubble's done horrendous damage to the public image of artificial intelligence as a whole.

Right now, using AI at all (or even claiming to use it) will earn you immediate backlash/ridicule under most circumstances, and AI as a concept is viewed with mockery at best and hostility at worst - a trend I expect that'll last for a good while after the bubble pops.

To beat a slightly younger dead horse, I also anticipate AI as a concept will die thanks to this bubble, with its utterly toxic optics as a major reason why. With relentless slop, nonstop hallucinations and miscellaneous humiliation (re)defining how the public views and conceptualises AI, I expect any future AI systems will be viewed as pale imitations of human intelligence, theft-machines powered by theft, or a combination of the two.

[–] [email protected] 10 points 1 day ago

Right now, using AI at all (or even claiming to use it) will earn you immediate backlash/ridicule under most circumstances, and AI as a concept is viewed with mockery at best and hostility at worst

it’s fucking wild how PMs react to this kind of thing; the general consensus seems to be that the users are wrong, and that surely whichever awful feature they’re working on will “break through all that hostility” — if the user’s forced (via the darkest patterns imaginable) to use the feature said PM’s trying to boost their metrics for

[–] [email protected] 20 points 3 days ago* (last edited 3 days ago) (1 children)

A hackernews doesn't think that LLMs will replace software engineers, but they will replace structural engineers:

https://news.ycombinator.com/item?id=43317725

The irony is that most structural engineers are actually de jure professionals, and an easy way for them to both protect their jobs and ensure future buildings don't crumble to dust or are constructed without sprinkler systems is to simply ban LLMs from being used. No such protection exists for software engineers.

Edit: the LW post under discussion makes a ton of good points, to the level of being worthy of posting to this forum, and then nails its colors to the mast with this idiocy:

At some unknown point – probably in 2030s, possibly tomorrow (but likely not tomorrow) – someone will figure out a different approach to AI. Maybe a slight tweak to the LLM architecture, maybe a completely novel neurosymbolic approach. Maybe it will happen in a major AGI lab, maybe in some new startup. By default, everyone will die in <1 year after that.

Gotta reaffirm the dogma!

[–] [email protected] 16 points 3 days ago* (last edited 3 days ago) (1 children)

but A LOT of engineering has a very very real existential threat. Think about designing buildings. You basically just need to know a lot of rules / tables and how things interact to know what's possible and the best practices

days since orangeposter (incorrectly) argued in certainty from 3 seconds of thought as to what they think is involved in a process: [0]

it's so fucking frustrating to know how easy this bullshit is to see through if you know a slight bit of anything, and doubly frustrating as to how much of the software world is this thinking. I know it's nothing particularly new and that our industry has been doing this for years, but scream

[–] [email protected] 15 points 2 days ago* (last edited 2 days ago)

You basically just need to know a lot of rules / tables and how things interact to know what’s possible and the best practices

And to be a programmer you basically just need to know a lot of languages / libraries and how things interact, really easy, barely an inconvenience.

The actual irony is that this is more true of programming than of any other engineering profession, since programmers uniquely are not held to any standards whatsoever, so you can have both skilled engineers and complete buffoons coexist, often within the same office. There should be a Programmers' Guild or something where the experienced master would just slap you and throw you out if you tried something idiotic like using LLMs for code generation.
