gerikson

joined 2 years ago
[–] [email protected] 15 points 16 hours ago (5 children)

This broke containment at the Red Site: https://lobste.rs/s/gkpmli/if_ai_is_so_good_at_coding_where_are_open

Reader discretion is advised, lobste.rs is home to its fair share of promptfondlers.

[–] [email protected] 8 points 17 hours ago (1 children)

Of all the people he could choose to sell out to, he chose Worldcoin???

[–] [email protected] 9 points 20 hours ago* (last edited 20 hours ago) (7 children)

LWer suggests people who believe in AI doom make more efforts to become (internet) famous. Apparently not bombing on Lex Fridman's snoozecast, like Yud did, is the baseline.

The community awards the post one measly net karma point, and the lone commenter scoffs at the idea of trying to convince the low-IQ masses to the cause. In their defense, Vanguardism has been tried before with some success.

https://www.lesswrong.com/posts/qcKcWEosghwXMLAx9/doomers-should-try-much-harder-to-get-famous

[–] [email protected] 8 points 1 day ago

Yes, the rationalists are an incredibly large market and their opinion can make or break an author, sure, you betcha

[–] [email protected] 4 points 1 day ago

Thanks for posting this, it was entertaining.

[–] [email protected] 6 points 1 day ago (1 children)
[–] [email protected] 12 points 2 days ago (6 children)

Expect the "AI safety" weenuses to have a giant freakout too.

[–] [email protected] 11 points 3 days ago (4 children)

There are anti-blockchain people in the comments.

Dude, where have you been these last 3 years?

[–] [email protected] 9 points 6 days ago (1 children)

New eugenics conference just dropped

https://www.lesswrong.com/posts/8ZExgaGnvLevkZxR5/attend-the-2025-reproductive-frontiers-summit-june-10-12

"Chatham House rules" so they can happily be racist without anyone pointing fingers at them.

[–] [email protected] 7 points 1 week ago

Good on Quora members for debunking too.

[–] [email protected] 7 points 1 week ago (1 children)

should have gone with "Moldbuggery" Scott

[–] [email protected] 12 points 1 week ago* (last edited 1 week ago) (2 children)

Here's an interesting nugget I discovered today

A long LW post tries to tie AI safety and regulations together. I didn't bother reading it all, but this passage caught my eye

USS Eastland Disaster. After maritime regulations required more lifeboats following the Titanic disaster, ships became top-heavy, causing the USS Eastland to capsize and kill 844 people in 1915. This is an example of how well-intentioned regulations can create unforeseen risks if technological systems aren't considered holistically.

https://www.lesswrong.com/posts/ARhanRcYurAQMmHbg/the-historical-parallels-preliminary-reflection

You will be shocked to learn that this summary is a bit lacking in detail. According to https://en.wikipedia.org/wiki/SS_Eastland

Because the ship did not meet a targeted speed of 22 miles per hour (35 km/h; 19 kn) during her inaugural season and had a draft too deep for the Black River in South Haven, Michigan, where she was being loaded, the ship returned in September 1903 to Port Huron for modifications, [...] and repositioning of the ship's machinery to reduce the draft of the hull. Even though the modifications increased the ship's speed, the reduced hull draft and extra weight mounted up high reduced the metacentric height and inherent stability as originally designed.

(my emphasis)
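A worked note (my addition, using standard naval-architecture relations, not from the linked post): the stability margin at issue is the metacentric height GM. Putting weight high up (lifeboats on the top deck) raises the center of gravity KG, which directly shrinks GM and with it the ship's ability to right herself:

```latex
% Metacentric height: the distance from the center of gravity G to the
% metacenter M, measured from the keel K via the center of buoyancy B:
GM = \underbrace{KB + BM}_{KM} - KG
% Raising weight (e.g. lifeboats mounted high) increases KG, so GM falls.
% For small heel angles \theta the righting moment is
M_r = \Delta \, GM \sin\theta \quad (\Delta = \text{displacement})
% so a smaller GM means a weaker restoring force once the ship lists.
```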

The vessel experienced multiple listing incidents between 1903 and 1914.

Adding lifeboats:

The federal Seamen's Act had been passed in 1915 following the RMS Titanic disaster three years earlier. The law required retrofitting of a complete set of lifeboats on Eastland, as on many other passenger vessels.[10] This additional weight may have made Eastland more dangerous by making her even more top-heavy. [...] Eastland's owners could choose to either maintain a reduced capacity or add lifeboats to increase capacity, and they elected to add lifeboats to qualify for a license to increase the ship's capacity to 2,570 passengers.

So. Owners who knew they had an issue with stability chose profits over safety. But yeah, it's the fault of regulators.

 

Problem difficulty so far (up to day 23)

  1. Day 21 - Keypad Conundrum: 01h01m23s
  2. Day 17 - Chronospatial Computer: 44m39s
  3. Day 15 - Warehouse Woes: 30m00s
  4. Day 12 - Garden Groups: 17m42s
  5. Day 20 - Race Condition: 15m58s
  6. Day 14 - Restroom Redoubt: 15m48s
  7. Day 09 - Disk Fragmenter: 14m05s
  8. Day 16 - Reindeer Maze: 13m47s
  9. Day 22 - Monkey Market: 12m15s
  10. Day 13 - Claw Contraption: 11m04s
  11. Day 06 - Guard Gallivant: 08m53s
  12. Day 08 - Resonant Collinearity: 07m12s
  13. Day 11 - Plutonian Pebbles: 06m24s
  14. Day 18 - RAM Run: 05m55s
  15. Day 04 - Ceres Search: 05m41s
  16. Day 23 - LAN Party: 05m07s
  17. Day 02 - Red Nosed Reports: 04m42s
  18. Day 10 - Hoof It: 04m14s
  19. Day 07 - Bridge Repair: 03m47s
  20. Day 05 - Print Queue: 03m43s
  21. Day 03 - Mull It Over: 03m22s
  22. Day 19 - Linen Layout: 03m16s
  23. Day 01 - Historian Hysteria: 02m31s
 

Problem difficulty so far (up to day 16)

  1. Day 15 - Warehouse Woes: 30m00s
  2. Day 12 - Garden Groups: 17m42s
  3. Day 14 - Restroom Redoubt: 15m48s
  4. Day 09 - Disk Fragmenter: 14m05s
  5. Day 16 - Reindeer Maze: 13m47s
  6. Day 13 - Claw Contraption: 11m04s
  7. Day 06 - Guard Gallivant: 08m53s
  8. Day 08 - Resonant Collinearity: 07m12s
  9. Day 11 - Plutonian Pebbles: 06m24s
  10. Day 04 - Ceres Search: 05m41s
  11. Day 02 - Red Nosed Reports: 04m42s
  12. Day 10 - Hoof It: 04m14s
  13. Day 07 - Bridge Repair: 03m47s
  14. Day 05 - Print Queue: 03m43s
  15. Day 03 - Mull It Over: 03m22s
  16. Day 01 - Historian Hysteria: 02m31s
 

The previous thread has fallen off the front page, feel free to use this for discussions on current problems

Rules: no spoilers, use the handy dandy spoiler preset to mark discussions as spoilers

 

This season's showrunners are so lazy, just re-using the same old plots and antagonists.

 

“It is soulless. There is no personality to it. There is no voice. Read a bunch of dialogue in an AI generated story and all the dialogue reads the same. No character personality comes through,” she said. Generated text also tends to lack a strong sense of place, she’s observed; the settings of the stories are either overly-detailed for popular locations, or too vague, because large language models can’t imagine new worlds and can only draw from existing works that have been scraped into its training data.

 

The grifters in question:

Jeremie and Edouard Harris, the CEO and CTO of Gladstone respectively, have been briefing the U.S. government on the risks of AI since 2021. The duo, who are brothers [...]

Edouard's website: https://www.eharr.is/, and on LessWrong: https://www.lesswrong.com/users/edouard-harris

Jeremie's LinkedIn: https://www.linkedin.com/in/jeremieharris/

The company website: https://www.gladstone.ai/

 

HN reacts to a New Yorker piece on the "obscene energy demands of AI" with exactly the same arguments coiners use when confronted with the energy cost of blockchain - the product is valuable in and of itself, demands for more energy will spur investment in energy generation, and what about the energy costs of painting oil on canvas, hmmmmmm??????

Maybe it's just my newness antennae needing calibrating, but I do feel the extreme energy requirements for what's arguably just a frivolous toy are gonna cause AI boosters big problems, especially as energy demands ramp up in the US in the warmer months. Expect the narrative to adjust to counter it.
