this post was submitted on 09 Aug 2024
47 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

[–] [email protected] 7 points 4 months ago (1 children)

Also, as an impromptu addendum to my extended ramble on the AI bubble crippling tech's image: I can easily see military involvement in AI building further public resentment of, and stigma against, the industry.

Any military use of AI is already gonna be seen in a warcrimey light thanks to Israel using it in their Gaza Geneva Checklist Speedrun. Add in the public being fully aware of your average LLM's, shall we say, tenuous connection to reality, and you have a recipe for people immediately assuming the worst.

[–] [email protected] 4 points 4 months ago (1 children)

That was the current example we were thinking of, though we did look up war crimes law thinking on the subject. tl;dr: you risk war crimes if there isn't a human in the loop. E.g., think of a minefield as the simplest possible stationary autonomous weapon system; the rest is that, with computers.

[–] [email protected] 4 points 4 months ago

As a personal sidenote, part of me says the “Self-Aware AI Doomsday” criti-hype might end up coming back to bite OpenAI in the arse if/when one of those DoD tests goes sideways.

Plenty of time and money has been spent building up this idea of spicy autocomplete suddenly turning on humanity and trying to kill us all. If and when one of those spectacular disasters you and Amy predicted does happen, I can easily see it leading to wild stories of ChatGPT going full Terminator or some shit like that.