this post was submitted on 06 Feb 2025
93 points (96.0% liked)

196

1906 readers
1828 users here now

Community Rules

You must post before you leave

Be nice. Assume others have good intent (within reason).

Block or ignore posts, comments, and users that irritate you in some way rather than engaging. Report if they are actually breaking community rules.

Use content warnings and/or mark as NSFW when appropriate. Most posts with content warnings likely need to be marked NSFW.

Most 196 posts are memes, shitposts, cute images, or even just recent things that happened, etc. There is no real theme, but try to avoid posts that are very inflammatory, offensive, very low quality, or very "off topic".

Bigotry is not allowed; this includes (but is not limited to): Homophobia, Transphobia, Racism, Sexism, Ableism, Classism, or discrimination based on things like Ethnicity, Nationality, Language, or Religion.

Avoid shilling for corporations, posting advertisements, or promoting exploitation of workers.

Proselytization, support, or defense of authoritarianism is not welcome. This includes but is not limited to: imperialism, nationalism, genocide denial, ethnic or racial supremacy, fascism, Nazism, Marxism-Leninism, Maoism, etc.

Avoid AI generated content.

Avoid misinformation.

Avoid incomprehensible posts.

No threats or personal attacks.

No spam.

Moderator Guidelines

  • Don’t be mean to users. Be gentle or neutral.
  • Most moderator actions which have a modlog message should include your username.
  • When in doubt about whether or not a user is problematic, send them a DM.
  • Don’t waste time debating/arguing with problematic users.
  • Assume the best, but don’t tolerate sealioning/just asking questions/concern trolling.
  • Ask another mod to take over cases you struggle with, if you get tired, or when things get personal.
  • Ask the other mods for advice when things get complicated.
  • Share everything you do in the mod matrix, both so several mods aren't unknowingly handling the same issues and so you can receive feedback on what you intend to do.
  • Don't rush mod actions. If a case doesn't need to be handled right away, consider taking a short break before getting to it. This is to say, cool down and make room for feedback.
  • Don’t perform too much moderation in the comments, except if you want a verdict to be public or to ask people to dial a convo down/stop. Single comment warnings are okay.
  • Send users concise DMs about verdicts about them, such as bans etc, except in cases where it is clear we don’t want them at all, such as obvious transphobes. No need to notify someone they haven’t been banned of course.
  • Explain to a user why their behavior is problematic and how it is distressing others rather than engage with whatever they are saying. Ask them to avoid this in the future and send them packing if they do not comply.
  • First warn users, then temp ban them, then finally perma ban them when they break the rules or act inappropriately. Skip steps if necessary.
  • Use neutral statements like “this statement can be considered transphobic” rather than “you are being transphobic”.
  • No large decisions or actions without community input (polls or meta posts f.ex.).
  • Large internal decisions (such as ousting a mod) might require a vote, needing more than 50% of the votes to pass. Also consider asking the community for feedback.
  • Remember you are a voluntary moderator. You don’t get paid. Take a break when you need one. Perhaps ask another moderator to step in if necessary.

founded 2 weeks ago

image description (contains clarifications on background elements): Lots of different, seemingly random images in the background, including some fries, Mr. Krabs, a girl in overalls hugging a stuffed tiger, a Mark Zuckerberg "big brother is watching" poster, two images of Fluttershy (a pony from My Little Pony), one of them reading "u only kno my swag, not my lore", a picture of Parkzer from the streamer DougDoug, and a slider gameplay element from the rhythm game osu!. The background is made light so that the text can be easily read. The text reads:

i wanna know if we are on the same page about ai.
if u disagree with any of this or want to add something,
please leave a comment!
smol info:
- LM = Language Model (ChatGPT, Llama, Gemini, Mistral, ...)
- VLM = Vision Language Model (Qwen VL, GPT4o mini, Claude 3.5, ...)
- larger model = more expensive to train and run
smol info end
- training processes for current AI systems are often
clearly unethical and very bad for the environment :(
- companies are really bad at selling AI to us and
giving it a good purpose for average-joe usage
- medical ai (e.g. protein folding) is almost only positive
- ai for disabled people is also almost only positive
- the idea of some AI machine taking our jobs is scary
- "AI agents" are scary. large companies are training
them specifically to replace human workers
- LMs > image generation and music generation
- using small LMs for repetitive, boring tasks like
classification feels okay
- using the largest, most environmentally taxing models
for everything is bad. Using a mixture of smaller models
can often be enough
- people with bad intentions using AI systems results
in bad outcomes
- ai companies train their models however they see fit.
if an LM "disagrees" with you, that's the training's fault
- running LMs locally feels more okay, since they need
less energy and you can control their behaviour
I personally think more positively about LMs, but almost
only negatively about image and audio models.
Are we on the same page? Or am I an evil AI tech sis?

IMAGE DESCRIPTION END


i hope this doesn't cause too much hate. i just wanna know what u people and creatures think <3

24 comments
[–] [email protected] 3 points 22 hours ago

i'm personally not too fond of llms, because they are being pushed everywhere, even when they don't make sense and they need to be absolutely massive to be of any use, meaning you need a data center.

i'm also hesitant to use the term "ai" at all since it says nothing and encompasses way too much.

i like using image generators for my own amusement and to "fix" the stuff i make in image editors. i never run any online models for this, i bought extra hardware specifically to experiment. and i live in a city powered basically entirely by hydro power so i'm pretty sure i'm personally carbon neutral. otherwise i wouldn't do it.

the main things that bother me are partly the scale of operations, partly the philosophy of the people driving this. i've said it before but open ai seem to want to become e/acc tech priests. they release nothing about their models, they hide them away and insinuate that we normal hoomans are unworthy of the information and that we wouldn't understand it anyway. which is why deepseek caused such a market shake, it cracked the pedestal underneath open ai.

as for the training process, i'm torn. on the one hand it's shitty to scrape people's work without consent, and i hope open ai gets their shit smacked out of them by copyright law. on the other hand i did the math on the final models, specifically on stable diffusion 1.0: it used the LAION 5B scientific dataset of tagged images, which has five billion ish data points as the name suggests. stable diffusion 1.0 is something like 4GB. this means there's on average less than eight bits in the model per image and description combination. given that the images it trained on were 512x512 on average, that gives a shocking 0.00003 bits per pixel. and stable diffusion 1.5 has more than double the training data but is the same size. at that scale there is nothing of the original image in there.
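That back-of-envelope argument is easy to sanity-check in a few lines. The figures below are the rounded assumptions from the comment above (a ~4 GB checkpoint, ~5 billion LAION image/caption pairs, 512x512 images), not exact measurements:

```python
# Back-of-envelope check of the "bits per training image" argument.
model_size_bytes = 4e9          # Stable Diffusion 1.0 checkpoint, ~4 GB (assumed)
num_images = 5e9                # LAION-5B: ~5 billion image/caption pairs
pixels_per_image = 512 * 512    # assumed average training resolution

bits_per_image = model_size_bytes * 8 / num_images
bits_per_pixel = bits_per_image / pixels_per_image

print(f"{bits_per_image:.1f} bits per image")    # ~6.4, i.e. "less than eight bits"
print(f"{bits_per_pixel:.8f} bits per pixel")    # ~0.00002, under the 0.00003 bound above
```

With these assumptions the model averages about 6.4 bits per image/caption pair, consistent with the "less than eight bits" figure in the comment.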

the environmental effect is obviously bad, but the copying argument? i'm less certain. that doesn't invalidate the people who are worried it will take jobs, because it will. mostly through managers not understanding how their businesses work and firing talented artists to replace them with what are basically noise machines.

[–] ICastFist 3 points 23 hours ago (1 children)

Honest question, how does AI help disabled people, or which kinds of disabilities?

One of the few good uses I see for audio AI is translation using the voice of the original person (though that'd deal a significant blow to dubbing studios)

[–] [email protected] 1 points 23 hours ago

fair question. i didn't think that much about what i meant by that, but here are the obvious examples

  • image captioning using VLMs, including detailed multi-turn question answering
  • video subtitles, already present in youtube and VLC apparently

i really should have thought more about that point.

[–] [email protected] 2 points 23 hours ago

pretty balanced take

[–] [email protected] 2 points 23 hours ago (1 children)

In my experience, the best uses have been less fact-based and more "enhancement" based. For example, if I write an email and I just feel like I'm not hitting the right tone, I can ask it to "rewrite this email with a more inviting tone" and it will do a pretty good job. I might have to tweak it, but it worked. Same goes for image generation. If I already know what I want to make, I can have it output the different elements I need in the appropriate style and piece them together myself. Or I can take a photograph that I took and use it to make small edits that are typically very time consuming. I don't think it's very good or ethical for having it completely make stuff up that you will use 1:1. It should be a tool to aid you, not a tool to do things for you completely.

[–] [email protected] 3 points 23 hours ago

yesyesyes, can see that completely. i might not be the biggest fan of using parts of generated images, but that still seems fine. using LLMs for fact-based stuff is like - the worst use case. You only get better output if you provide it with the facts, like in a document or a search result, so it's essentially just rephrasing or summarizing the content, which LLMs are good at.

[–] [email protected] 0 points 23 hours ago (2 children)

I don't see how AI is inherently bad for the environment. I know they use a lot of energy, but if the energy comes from renewable sources, like solar or hydroelectric, then it shouldn't be a problem, right?

[–] [email protected] 2 points 21 hours ago

The problem is that we only have a finite amount of energy. If all of our clean energy output is going toward AI then yeah it's clean but it means we have to use other less clean sources of energy for things that are objectively more important than AI - powering homes, food production, hospitals, etc.

Even "clean" energy still has downsides to the environment also like noise pollution (impacts local wildlife), taking up large amounts of space (deforestation), using up large amounts of water for cooling, or having emissions that aren't greenhouse gases, etc. Ultimately we're still using unfathomably large amounts of energy to train and use a corporate chatbot trained on all our personal data, and that energy use still has consequences even if it's "clean"

[–] [email protected] 2 points 23 hours ago (1 children)

i kinda agree. currently many places still use oil for energy generation, so that kinda makes sense.

but if powered by cool solar panels and cool wind turbine things, that would be way better. then it would only be down to the production of GPUs and the housing.

[–] [email protected] 2 points 20 hours ago* (last edited 20 hours ago)

Also cooling! Right now each interaction from each person using chatGPT uses roughly a bottle's worth of water per 100 words generated (according to a research study in 2023). This was with GPT-4 so it may be slightly more or slightly less now, but probably more considering their models have actually gotten more expensive for them to host (more energy used -> more heat produced -> more cooling needed).

Now consider how that scales with the amount of people using ChatGPT every day. Even if energy is clean everything else about AI isn't.
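As a rough sketch of how that figure scales: the per-100-words water estimate is from the 2023 study mentioned above, but the user count and words-per-user numbers below are made-up assumptions purely for illustration:

```python
# Rough scaling of the "bottle of water per 100 words" figure.
litres_per_100_words = 0.5       # "a bottle's worth" (2023 study cited above)
daily_users = 100e6              # hypothetical number of daily users
words_per_user_per_day = 500     # hypothetical average output per user

daily_litres = daily_users * words_per_user_per_day / 100 * litres_per_100_words
print(f"~{daily_litres / 1e6:.0f} million litres of cooling water per day")  # ~250
```

Even with conservative made-up inputs, the per-interaction figure multiplies out to hundreds of millions of litres per day.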
