Chthonic

joined 1 year ago
[–] [email protected] 18 points 11 months ago (4 children)

What's fucked up is that if you die here you die for real

[–] [email protected] 26 points 11 months ago (2 children)

How long will we continue to get news stories whenever a minor entity leaves X (formerly Twitter)?

[–] [email protected] 3 points 11 months ago

If he were smarter and/or not a walking ego then yeah, that would have been the move. Though if he were smart he probably wouldn't be in this mess.

[–] [email protected] 27 points 1 year ago* (last edited 1 year ago) (5 children)

It's not. He never wanted to buy Twitter; he just wanted to pump and dump the stock. But because he's stupid and the plan was obvious, they sued him to make him honor the deal.

So if he just turned around and shut the company down, it would give the SEC legal grounds to argue that his intention all along was market manipulation.

[–] [email protected] 31 points 1 year ago (8 children)

My understanding is that the SEC would have fucked him if he just shut it down, because it would indicate that he never intended to buy it in the first place and instead was just trying to manipulate the stock market (which is definitely what he was doing).

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago) (1 children)

They don't reason, they're stochastic parrots. Their internal mechanisms are well understood, no idea where you got the notion that the folks building these don't know how they work. It can be hard to predict or explain why an LLM produced a given output because of the huge training corpus and the statistical nature of neural nets in general.

LLMs work the same as any other neural net, just with massive training sets. They have no reasoning capabilities of any kind. We are naturally inclined to ascribe humanlike thought processes to them because they produce human-sounding outputs.
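The "statistics, not reasoning" point can be made concrete with a toy sketch (my own illustration, nothing from the people mentioned below): a bigram model that picks each next word purely from counts it saw in training. Real LLMs are transformers over tokens, not word bigrams, but the generation loop is the same idea: sample a statistically likely continuation.

```python
# Toy "stochastic parrot": a bigram model with no understanding,
# just frequency counts over a tiny training corpus.
import random
from collections import defaultdict, Counter

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, n_words, seed=0):
    """Sample a continuation weighted by observed bigram counts."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n_words):
        options = follows[out[-1]]
        if not options:  # dead end: word never seen with a successor
            break
        words, counts = zip(*options.items())
        out.append(rng.choices(words, weights=counts)[0])
    return " ".join(out)

print(generate("the", 5))  # fluent-looking output, zero comprehension
```

The output reads vaguely like English because the statistics of the corpus do, not because anything in there "knows" what a cat is.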

If you would like the perspective of real scientists instead of a "tech-bro" like me, I'd recommend Emily Bender and Timnit Gebru: experts without a vested interest in the massively overblown hype about what LLMs are actually capable of.

[–] [email protected] 13 points 1 year ago (4 children)

I work on chatbots for a big tech company. Every team is trying to use GenAI for everything. 90% of the stuff they try won't work. I have to explain that LLMs can't actually think at least three times a week. The hype train was too strong. Even calling it AI feels misleading.

That said, there are some genuinely great applications for LLMs that I've enjoyed looking into.

[–] [email protected] 6 points 1 year ago

If you're gonna link to That Scene from Spec Ops you gotta include a "Seriously Gnarly Shit Ahead" content warning or something.

[–] [email protected] 3 points 1 year ago

That may be true for warehouse employees, but the corporate offices are a toxic mess of shitty culture and dated ideas. I've never seen a tech department bleed so much underpaid talent to Amazon.

When I quit because they tried to force me back into the office mid-pandemic (August 2020) I had multiple offers for fully remote positions with twice the salary within a few weeks.

But yeah, if you are a cashier at a warehouse or whatever I hear it's a solid gig.

[–] [email protected] 57 points 1 year ago (6 children)

When I was at Costco, for Member Service Week they literally gave us a rock, like from the gravel outside the office, with the note: "You rock!"

[–] [email protected] 2 points 1 year ago (4 children)

That's real fuckin Nito

[–] [email protected] 11 points 1 year ago

You don't need to self-actualize through work. There is no such thing as a waste of time.

“We are here on Earth to fart around, and don’t let anybody tell you any different.” - Vonnegut
