this post was submitted on 03 Aug 2023
23 points (92.6% liked)

Technology


The student ended up with a fairer complexion, dark blonde hair and blue eyes after her Playground AI request

[–] [email protected] 8 points 1 year ago (1 children)

Just below the article there is another article claiming the AI has an Asian bias in AI-generated images. So my outrage is confused.

[–] [email protected] 11 points 1 year ago (1 children)

Image-to-image software isn't going to return an image of you unless you create a LoRA, LyCORIS, or Textual Inversion of your face for it to work with. It doesn't "know" what you look like. For things like this, it "looks" at the image based on shapes and colors alone and generates a face that fits those general dimensions. For the AI, the word "professional" might simply mean the picture was taken in a studio.

She used Playground.ai, which uses Stable Diffusion models. I'm not familiar with their interface, but it definitely relies on a good prompt for the model to give you good results. Asking it to do something in plain language isn't how diffusion models work; they weight keywords and infer from those.

[–] [email protected] 1 points 1 year ago

In addition to this, it's all about the seed too. Let's say she used the prompt "professional looking" with img2img; depending on the seed, this could give her billions of different images.
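To make the seed point concrete, here's a toy sketch in plain Python (not a real diffusion model; `toy_latent` is a hypothetical stand-in) showing that the seed fully determines the starting noise a diffusion model denoises from, so the same seed reproduces the same image while any other seed starts somewhere else entirely:

```python
import random

def toy_latent(seed, n=4):
    # Hypothetical stand-in for the initial noise latent a diffusion
    # model denoises into an image; the seed fully determines it.
    rng = random.Random(seed)
    return [round(rng.gauss(0, 1), 3) for _ in range(n)]

# Same seed reproduces the same "image"; a different seed gives another one.
assert toy_latent(42) == toy_latent(42)
assert toy_latent(42) != toy_latent(43)
```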

Between the model's training data and the seeds, there is simply little to no way to draw conclusions about a model's biases from a single result like this.

As always, check your model's biases by using a blank positive prompt and a negative prompt of "low quality", then letting the generations run for a long time.

Only then do you have a snapshot of what the model trends toward by default. And the moment you add other tokens, that can go out the window.
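A sketch of that audit loop, with a toy stand-in for the generator (a real run would invoke an actual diffusion pipeline with a blank positive prompt and a "low quality" negative prompt; `toy_generate` and the attribute labels here are made up for illustration):

```python
import random
from collections import Counter

def toy_generate(seed):
    # Hypothetical stand-in for one blank-prompt generation; a real audit
    # would call the diffusion pipeline here and classify its output.
    rng = random.Random(seed)
    return rng.choice(["attribute_a", "attribute_b", "attribute_c"])

def audit(n_seeds=1000):
    # Tally what the model defaults to across many seeds; only this
    # aggregate view says anything about the model's baseline tendencies.
    tally = Counter(toy_generate(s) for s in range(n_seeds))
    return {k: v / n_seeds for k, v in tally.items()}
```

The point of the many-seeds loop is exactly the one above: any single generation is dominated by its seed, so only the frequency distribution over a large batch reflects the model itself.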