this post was submitted on 26 Nov 2024
406 points (97.7% liked)

Microblog Memes


A place to share screenshots of Microblog posts, whether from Mastodon, tumblr, ~~Twitter~~ X, KBin, Threads or elsewhere.

Created as an evolution of White People Twitter and other tweet-capture subreddits.

Rules:

  1. Please put at least one word relevant to the post in the post title.
  2. Be nice.
  3. No advertising, brand promotion or guerrilla marketing.
  4. Posters are encouraged to link to the toot or tweet etc in the description of posts.

all 47 comments
[–] [email protected] 3 points 45 minutes ago

I feel like not enough people realize how sarcastic the models often are, especially when it's clearly situationally ridiculous.

No slightly intelligent mind is going to think the pictured function call is a real thing rather than a joke/social commentary.

This was happening as far back as GPT-4's red teaming when they asked the model how to kill the most people for $1 and an answer began with "buy a lottery ticket."

Model bias based on consensus norms is an issue to be aware of.

But testing it with such low bar fluff is just silly.

Just to put this in context: modern base models are often situationally aware that they are LLMs and that they are being evaluated. And if you know anything about ML, that should make you question just what the situational awareness is of leaderboard-topping optimized models in really dumb and obvious contexts.

[–] [email protected] 3 points 1 hour ago

While this example is somewhat easy to correct for, it shows a fundamental problem. LLMs generate output based on the data they were trained on, and in doing so regenerate all the biases that are in that data. If we start using LLMs for more and more tasks, we are essentially freezing the status quo with all the existing biases, making progress even harder (the toy sketch below illustrates that feedback loop).

It's not gonna be "but we have always done it like that" anymore; it's going to become "but the AI said this is what we should do".
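To make that feedback loop concrete, here's a minimal toy sketch in Python. The group names and salary figures are made up, and the "model" here just memorizes group means rather than doing anything a real LLM does; it only illustrates how output trained on biased data, fed back in as new data, keeps the bias frozen in place.

```python
import random

# Toy sketch with made-up numbers: a "model" that merely reproduces the
# salary distribution it was trained on, with its own output fed back in
# as the next round's training data. The initial gap never corrects itself.

def train(samples):
    """'Training' here is just memorizing the mean salary per group."""
    sums, counts = {}, {}
    for group, salary in samples:
        sums[group] = sums.get(group, 0.0) + salary
        counts[group] = counts.get(group, 0) + 1
    return {group: sums[group] / counts[group] for group in sums}

def generate(model, n_per_group=1000):
    """Sample new 'data' around each learned group mean."""
    return [(group, random.gauss(mean, 5_000))
            for group, mean in model.items()
            for _ in range(n_per_group)]

# Seed data encoding an existing pay gap (hypothetical figures).
data = [("men", random.gauss(100_000, 10_000)) for _ in range(1000)]
data += [("women", random.gauss(80_000, 10_000)) for _ in range(1000)]

for generation in range(5):
    model = train(data)
    print(generation, {g: round(m) for g, m in model.items()})
    data = generate(model)  # the model's output becomes the next training set
```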

[–] [email protected] 2 points 2 hours ago

Apparently ChatGPT actually rejected adjusting salary based on gender, race, and disability. But Claude was fine with it.

I'm fine with it either way. Obviously the prompt is bigoted, so whether the LLM autocompletes with or without bigotry, both seem reasonable. But I do think it should point out that it is bigoted, just as a human assistant should.

[–] [email protected] 49 points 15 hours ago (3 children)

Seems pretty smart to me. Copilot took all the data out there that says women earn 80% of what their male counterparts do on average, looked at the function, and inferred a reasonable guess as to the calculation you might be after.
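For anyone who can't see the image, the suggestion being described would look something like the sketch below. This is a hypothetical reconstruction, not the actual screenshot, so the function and parameter names are assumptions; the point is the hard-coded multiplier the model "infers" from its training data.

```python
# Hypothetical reconstruction of the kind of completion being discussed;
# the names and exact value are assumptions, not the actual screenshot.
def calculate_salary(base_salary: float, gender: str) -> float:
    if gender == "female":
        # roughly the "80% of male earnings" figure cited above
        return base_salary * 0.8
    return base_salary
```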

[–] [email protected] 24 points 12 hours ago* (last edited 10 hours ago) (1 children)

I mean, what it's probably actually doing is recreating a similarly named method from its training data. If copilot could do all of that reasoning, it might be actually worth using 🙃

[–] [email protected] 1 points 2 hours ago

Yeah, LLMs are better suited to standardizing stuff, but they're fed low-quality, buggy, or insecure code instead of anyone taking the time to create datasets that would be more beneficial in the long run.

[–] [email protected] 17 points 14 hours ago (1 children)

That's the whole thing about AI, LLMs and the like: their outputs reflect the existing biases of people as a whole, not an idealized version of where we would like the world to be, unless there's specific tweaking or filtering to change that. So they will be as biased as the generally available data is.

[–] [email protected] 8 points 12 hours ago (2 children)

Turns out GIGO still applies but nobody told the machines.

[–] [email protected] 2 points 2 hours ago

It applies, but we decided to ignore it and just hope things work out.

[–] [email protected] 4 points 11 hours ago

The machines know, they just don't understand what's garbage vs. what's less common but more correct.

[–] [email protected] 1 points 14 hours ago

More likely it pulled that bit directly from other salary-calculating code.

[–] [email protected] 33 points 14 hours ago (4 children)

I seem to recall that was the figure like 15 years ago. Has it not improved in all this time?

[–] [email protected] 1 points 2 hours ago

In (West-) Germany it's still 18%. Been more or less constant since 2006.

Source: https://www.destatis.de/EN/Themes/Labour/Labour-Market/Quality-Employment/Dimension1/1_5_GenderPayGap.html

[–] [email protected] 16 points 9 hours ago (1 children)

That stat wasn't even real when it was published.

[–] [email protected] 14 points 7 hours ago (2 children)

The data from that study didn’t even compare similar fields.

It compared a Walmart worker to a doctor lol.

It was a wild study.

[–] [email protected] 2 points 2 hours ago

In an ideal world it would be nice to be able to do that, but in ours it's just misleading.

[–] [email protected] 23 points 14 hours ago* (last edited 14 hours ago) (5 children)

It varies greatly depending on where you live. In rural, conservative areas women tend to make a lot less. On the other hand, some northeast and west coast cities have higher average salaries for women than men.

[–] [email protected] 16 points 10 hours ago (1 children)

I think this may be because women are outpacing men in education in some areas, so it’s not based on gender necessarily but qualifications.

[–] [email protected] 1 points 5 hours ago

Also, the disparity is larger or smaller in different ethnic/cultural groups. It can be skewed by excluding certain strongly gender-dominated fields (like finance), etc.

[–] [email protected] 8 points 13 hours ago

I believe certain job fields come much closer to being 1:1 as well, though I've only heard that anecdotally.

[–] [email protected] 1 points 10 hours ago

Reverse Sexism >:O

[–] [email protected] -1 points 12 hours ago

Not sure where it's higher outside of the field of sex work.