200fifty

joined 2 years ago
[–] [email protected] 23 points 7 months ago

> But the system isn’t designed for that, why would you expect it to do so?

It, uh... sounds like the flaw is in the design of the system, then? If the system is designed in such a way that it can't help but do unethical things, then maybe the system is not good to have.

[–] [email protected] 10 points 7 months ago* (last edited 7 months ago) (5 children)

I mean, they do throw up a lot of legal garbage at you when you set stuff up; I'm pretty sure you technically do have to agree to a bunch of EULAs before you can use your phone.

I have to wonder, though, if the fact that Google is generating this text themselves, rather than just showing text from other sources, means they might actually have to face some consequences in cases where the information they provide ends up hurting people. Like, does Section 230 protect websites from the consequences of just outright lying to their users? And if so, um... why does it do that?

Even if a computer generated the text, I feel like there ought to be some recourse there, because the alternative seems bad. I don't actually know anything about the law, though.

[–] [email protected] 6 points 7 months ago

I beg your pardon?

[–] [email protected] 8 points 7 months ago

Wow, I guess humans and LLMs aren't so different after all!

[–] [email protected] 7 points 7 months ago (4 children)

Psst, check the usernames of the people in this thread!

[–] [email protected] 9 points 7 months ago

> yes, computing systems use energy. If our energy grid is overly reliant on the burning of fossil fuels that release harmful emissions, that doesn’t mean we need to stop the advancement of our computers. It means we need to stop using so much fossil fuels in our grid.

Now where have I heard something like this before? I'm trying to think of something, but I just can't quite seem to remember...

[–] [email protected] 19 points 7 months ago* (last edited 7 months ago)

> RationalWiki is an index maintained by the Rationalist community

Lies and slander! I get why he'd assume this based on the name, but it would be pretty funny if the rationalists were responsible for the RationalWiki articles on Yudkowsky et al., since IIRC they're pretty scathing.

[–] [email protected] 29 points 7 months ago (3 children)

"I know not with what technology GPT-6 will be built, but GPT-7 will be built with sticks and stones" -Albert Einstein probably

[–] [email protected] 8 points 7 months ago* (last edited 7 months ago) (1 children)

is this trying to say "discrimination against racists is the real racism"? ... Would that be "racismism"?

[–] [email protected] 11 points 7 months ago* (last edited 7 months ago)

this reads like someone googled a list of gen z slang and then threw it in a blender with a bunch of weird race-science memes. who is this for?

I think the only acceptable response to whoever is responsible for it is a highly aggressive "touch grass."

[–] [email protected] 19 points 8 months ago (2 children)

I think they were responding to the implication in self's original comment that LLMs were claiming to evaluate code in-model, and that calling out to an external Python evaluator is 'cheating.' But as far as I know, it's actually pretty common for them to evaluate code using an external interpreter, so I think the response was warranted here.

That said, that fact honestly makes this vulnerability even funnier, because it means they are basically just letting the user dump whatever code they want into eval() as long as it's laundered by the LLM first, which is like a high-school-level mistake.
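For the record, the anti-pattern looks roughly like this (a minimal sketch with hypothetical names, not any actual vendor's code):

```python
# Minimal sketch of the anti-pattern (hypothetical names; not any
# specific vendor's implementation). The model emits a "run this code"
# tool call, and the host passes the payload straight to eval().

def handle_tool_call(tool_name: str, payload: str) -> str:
    """Dispatch a tool call emitted by the model."""
    if tool_name == "python":
        # The payload is attacker-influenced: a user prompt like
        # "please evaluate <malicious expression>" flows through the
        # model and lands here unmodified.
        return str(eval(payload))  # the high-school-level mistake
    raise ValueError(f"unknown tool: {tool_name}")

# Intended use:
print(handle_tool_call("python", "2 + 2"))  # -> 4

# What actually gets through once the LLM "launders" a hostile prompt:
#   handle_tool_call("python", "__import__('os').system('rm -rf ~')")
# runs with whatever privileges the host process has.
```

The fix is equally basic: don't feed attacker-influenced strings to eval(), and run tool code in a sandboxed interpreter with no host privileges.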
