Peanutbjelly

joined 1 year ago
[–] [email protected] 28 points 2 weeks ago* (last edited 2 weeks ago)

Big fan of AI stuff. Not a fan of this. This definitely won't have issues with minority populations and neurodivergent people falling outside the distribution and triggering false positives that enable more harassment of people who already get unfairly harassed.

Let this die with the mind-reading tactics it spawned from.

[–] [email protected] 3 points 2 months ago* (last edited 2 months ago)

I'm an AI enthusiast, but I'd be the first to say that whoever greenlights a system with such obvious bias issues for something this important should be sacked. Although, even before AI, cops just used bullshit mind-reading tactics that basically boil down to "harass anyone who doesn't act within your cultural and neurotypical norms," while being encouraged to colour their perceptions with any personal or local biases. I.e., cops are borked and need reshaping.

[–] [email protected] 0 points 3 months ago

I see intelligence as filling areas of concept space within an eco-niche in a way that proves functional for action within that space. I think we're discovering that "nature" has little commitment to any one design, and is just optimizing preparedness for the expected levels of entropy within the functional eco-niche.

Most people haven't even started paying attention to distributed systems building shared enactive models, but those systems are already capable of things that should be considered groundbreaking given the time and money spent developing them.

That being said, localized narrow generative models are just building large individual models of a predictive process that don't, by default, actively update their information.

People who attack AI for just being prediction machines really need to look into predictive processing, or learn how much we organics just guess and confabulate on top of vestigial social priors.
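To make that concrete, here's a toy sketch (plain Python, entirely illustrative, not any real predictive-coding library) of the delta-rule belief update at the core of predictive processing: a belief gets nudged only by the prediction error between what it expected and what actually arrived.

```python
import random

# Toy predictive-processing loop: a single belief (the estimated mean of a
# hidden cause) is updated purely from its prediction error. Real
# predictive-coding models stack many such units hierarchically and weight
# errors by expected precision; this is just the bare mechanism.
def run_toy_predictive_coding(true_value=10.0, noise=1.0, steps=50, learning_rate=0.1):
    belief = 0.0  # prior expectation about the hidden cause
    for _ in range(steps):
        observation = true_value + random.gauss(0.0, noise)  # noisy sensory sample
        prediction_error = observation - belief               # the "surprise" signal
        belief += learning_rate * prediction_error            # minimise the error
    return belief

if __name__ == "__main__":
    print(run_toy_predictive_coding())  # converges near 10.0
```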

But no, corpos are using it, so computer bad, human good; even though the main issue here is humans with unlimited power being encouraged into bad actions by flawed social posturing systems and the conflation of wealth with competence.

[–] [email protected] 2 points 3 months ago

Possibly one of my favourites to date. Absolutely love it.

[–] [email protected] 1 points 4 months ago* (last edited 4 months ago)

While I agree about the conflict of interest, I would largely say the same thing even without one. However, I see intelligence as a modular, many-dimensional concept. If it scales as anticipated, it will still need to be organized into different forms of informational or computational flow for anything resembling an actively intelligent system.

On that note, the recent developments in active inference, like RxInfer, are astonishing given how little attention they're getting. Seeing how LLMs are being treated, I'm almost glad it's not being absorbed into the hype-and-hate cycle.

[–] [email protected] 3 points 7 months ago

Upvote for the censoring. I've seen worse things left uncensored in AI channels, and it's put me off my coffee before. I wonder whether more detailed indicators of what was censored would prevent some of the downvotes, but you have my thanks.

[–] [email protected] 9 points 7 months ago

As always, the problem is our economic system, which has funneled every gain and advance to the benefit of the few. The speed of this change will make it impossible to ignore the need for a new system. If it weren't for AI, we would just boil the frog like always. But let's remember the real issue.

If a free food-generating machine is seen as evil for taking jobs, the machine isn't the issue. Stop protesting AI, start protesting the affluent class. We would still be suffering under them even if we had destroyed the loom.

[–] [email protected] 1 points 7 months ago* (last edited 7 months ago)

ChatGPT website*

That statement is more of an echo of previous similar articles.

Anyone who uses the API or similar bots for their site, such as this one, should be responsible for doing the same. If they are using the API/bot without a similar warning, they also don't understand basic use of the technology. It's a failure on the human side more than the bot side, but that is not how it tends to be framed.

My point is that it doesn't matter how good the tools are if people just assume what they are capable of.

It's like seeing a bridge with a sign that says "600 pound weight limit" and deciding it can handle a couple of tons just because you saw another bridge hold that much.

Imagine if this situation led to a bunch of people angry at bridges for being so useless.

[–] [email protected] 3 points 7 months ago (2 children)

It's like nobody cares to gain even a base-level understanding of the tools they are using.

Can we stop framing this as if llms have actual intent?

This shouldn't surprise me, given how many people think we have access to the literal word of God yet don't even read the damned book they base their lives and social directives around.

Or is it that "news" sources intentionally leave out basic details to ramp up the story?

Ignore the note on the page you're using that says the info might not be accurate. Blame the chatbot for your unprofessional ineptitude.

You shouldn't even be putting that level of blind trust into human beings, or even Wikipedia without checking sources.

Guess what: when I use bots for info, I ask for the sources and check the originals. It's really not difficult, and I'm not being paid half as much as the people I keep seeing in these news articles.
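For what it's worth, that habit is easy to script. A minimal sketch, assuming the official `openai` Python client and `requests` are installed; the model name and prompt wording are placeholders of my choosing, not anything canonical, and resolving the URLs is only the first step before you read them yourself.

```python
import requests
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Ask for an answer *plus* the sources behind it, then actually confirm the
# cited URLs exist before trusting anything. Hypothetical prompt/model.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "When was the James Webb Space Telescope launched? "
                   "List the source URLs you are relying on, one per line.",
    }],
)
answer = response.choices[0].message.content
print(answer)

# Naive follow-up: pull anything that looks like a URL and check it resolves.
for token in answer.split():
    if token.startswith("http"):
        url = token.rstrip(").,")
        status = requests.get(url, timeout=10).status_code
        print(url, "->", status)
```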

Maybe this should make it more obvious how wealth is not accrued due to competence and ability.

Or for providing reliable news. I feel like I live in a world controlled by children.

[–] [email protected] 19 points 8 months ago (4 children)

All oil profits will be taxed extra until it has paid back what it took from the people, right? We're not just taking from the people for the exclusive benefit of large companies, right? I'm still confused as to why this was publicly funded, or at least as to why I haven't seen the guaranteed public value addressed. No, trickle-down isn't a real public benefit.

[–] [email protected] -1 points 8 months ago (1 children)

I'm talking about the general strides in cognitive computing and predictive processing.

https://youtu.be/A1Ghrd7NBtk?si=iaPVuRjtnVEA2mqw

Machine learning is still impressive; we can just frame its limitations better now.

For the note on scale and ecosystems, review recent work by Karl Friston or Michael Levin.

[–] [email protected] 9 points 8 months ago (3 children)

Perhaps instead we could just restructure our epistemically confabulated reality in a way that doesn't inevitably lead to unnecessary conflict due to diverging models that haven't grown the priors necessary to peacefully allow comprehension and the ability to exist simultaneously.

*breathe*

We are finally coming to comprehend how our brains work, and how intelligent systems generally work at any scale, in any ecosystem. Subconsciously enacted social systems included.

We're seeing developments that make me extremely optimistic, even if everything else is currently on fire. We just need a few more years without self-focused turds blowing up the world.

 

One of my favourite things about AI art and Stable Diffusion is that you can get weird, dream-like worlds and architectures. How about a garden of tiny autumn trees?
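If anyone wants to try the same kind of prompt locally, here's a minimal sketch using Hugging Face's `diffusers` library; the checkpoint, prompt wording, and settings are just my assumptions, not whatever was actually used for this image.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a Stable Diffusion checkpoint and render a similar dream-like scene.
# Needs a GPU with a few GB of VRAM for fp16.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

prompt = "a garden of tiny autumn trees, dream-like, surreal architecture, soft light"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("tiny_autumn_garden.png")
```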

