this post was submitted on 30 Sep 2023
1846 points (97.4% liked)

Mildly Infuriating

[–] [email protected] 11 points 11 months ago* (last edited 11 months ago) (2 children)

Nah, I don't know if AI will ever be 100% perfect, and I don't want to trust it fully. AI is human-built, and it's my personal belief that humans aren't perfect, so AI will never be perfect either.

Also, you will always want a qualified driver to be able to take over should some part of the car's sensor systems fail.

Sensors, unlike humans, have a tendency to fail quickly, sometimes instantly, and even AI and autopilot can behave erratically if they get bad or false inputs from failing sensors.

It's like in an airliner: even though the autopilot is at this point practically capable of flying the plane all the way from takeoff to landing, there will always be pilots on duty in the cockpit to account for unforeseen circumstances and failures, even if they never actually hand-fly the plane in normal operation.

[–] [email protected] 3 points 11 months ago (1 children)

AI doesn't need to be perfect, it just needs to be better than your average human driver. Which, you know, isn't a very high bar...

Comparing it to an airline pilot isn't the same: a pilot goes through years of training to be able to fly passengers (well, beyond a dinky Cessna or whatever), and you need years of experience on top of that before you're even considered by the big airlines.

A human driver can get a license in as little as a few days.

[–] [email protected] 6 points 11 months ago

Or hear me out... What if we had really long cars, sometimes chained together, put them on rails, and had just one human drive hundreds of them?

[–] [email protected] -2 points 11 months ago* (last edited 11 months ago) (1 children)

Oh, seems I wasn't clear. Sentient AI should drive us. Give it 30 years and I bet we'll be close to that outcome, if not on the cusp of it.

[–] [email protected] 10 points 11 months ago (1 children)

Even if we somehow manage to create a sentient AI, it will still have to rely on the information it receives from the various sensors in the car. If those sensors fail and it doesn't have the information it needs to do the job, it could still make a mistake due to missing or completely incorrect data, or, if it manages to realise the data is erroneous, it could flatly refuse to work. I'd rather keep people in the loop as a final failsafe just in case that should ever happen.

[–] [email protected] -2 points 11 months ago* (last edited 11 months ago) (1 children)

I see your point on this, but when should a sentient AI be able to decide for itself? What makes it different from a human at that point? We humans rely on sensors too to react to the world. We also make mistakes, even dangerous ones. I guess we just want to make sure this sentient AI is not working against us?

[–] [email protected] 6 points 11 months ago (1 children)

That's why you have layers of safety. Humans have a natural instinct: usually we can tell if our eyesight is getting worse. And any mistake we make is most likely due to us not noticing something or not reacting in time, something that the AI should be able to compensate for.

The only time this isn't true is when we have a medical episode, like a grand mal seizure or something. But everyone knows safety is always relative, and we mitigate that with redundancy. Sensors will have redundancies, and we ourselves are an additional redundancy on top of that. Heck, we could even put in sensors that monitor the occupants' vitals. That raises a question of privacy again, but really that's all we should need to protect against that case.

A sentient AI, not counting any potential issues with its own sentience, would still have trouble with suddenly failed or poorly maintained sensors. Usually when a sensor fails, it either zeroes out, maxes out, or starts outputting completely erratic values.

If any of those outputs look the same as normal readings, it can be hard for the AI to tell that anything is wrong. We can reconcile those sensors against our own human senses and tell when they've failed. A car only has its sensors to know what it needs to know, so if one fails, will it be able to tell? Sure, sensor redundancy helps, but there is still a small chance that all the redundant sensors fail in a way the AI cannot detect, and in that case the driver should be there to take over.
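
To make that concrete, here's a rough toy sketch of the kind of plausibility check I mean (the sensor type, thresholds, and function names are all made up for illustration, not any real automotive API):

```python
# Toy example only: crude plausibility checks for a redundant wheel-speed sensor pair.
# Every name and limit here is invented for illustration.

MAX_PLAUSIBLE_KMH = 300.0   # readings at or above this look like a maxed-out sensor
MAX_JUMP_KMH = 15.0         # biggest change we treat as physically possible per tick

def plausible(current_kmh, previous_kmh):
    """Reject the classic failure modes: stuck at zero, pegged at max, erratic jumps."""
    if current_kmh <= 0.0 and previous_kmh > 5.0:       # sudden drop to zero while moving
        return False
    if current_kmh >= MAX_PLAUSIBLE_KMH:                # pegged at the top of the range
        return False
    if abs(current_kmh - previous_kmh) > MAX_JUMP_KMH:  # physically impossible jump
        return False
    return True

def fused_speed(primary_kmh, backup_kmh, previous_kmh):
    """Prefer the primary sensor, fall back to the backup, otherwise signal a handover."""
    if plausible(primary_kmh, previous_kmh):
        return primary_kmh
    if plausible(backup_kmh, previous_kmh):
        return backup_kmh
    return None  # caller should alert and hand control back to the human driver
```

The catch is exactly the problem above: a sensor that fails while still producing values inside those limits sails straight through checks like these.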

Again I will refer to aircraft, because even if it's a one-in-a-billion chance, there have been a few instances where this has happened and the autopilot nearly pitched the plane into the ground or the ocean, and the plane was only saved by the pilots taking over. In one of those cases a faulty sensor reported that the angle of attack was pitched up too steeply, so the stick pusher mechanism tried to pitch the nose down to save the plane, when in fact the nose was already down. An autopilot, even an AI one, has no choice but to trust its sensors, because those are the only mechanism it has.

When it comes to a faulty redundant sensor, the AI also has to work out which sensor to trust, and if it picks the wrong one, well, you're fucked. It might not be able to work out which sensor is more trustworthy...
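
For the "which sensor do I trust" problem, one common pattern is a median vote across three redundant sensors. A toy sketch, with an arbitrary disagreement threshold I picked just for the example:

```python
# Toy median-of-three vote for redundant sensors; the threshold is arbitrary.
# One wildly wrong sensor gets outvoted, but two failing the same way still win the vote.

def vote(a, b, c, disagreement_limit=5.0):
    low, median, high = sorted([a, b, c])
    if high - low > disagreement_limit:
        # The sensors disagree: use the median, but flag the fault for a human handover.
        return median, "degraded"
    return median, "ok"

# Example: a sensor stuck at zero is outvoted, and the spread flags the fault.
print(vote(102.4, 0.0, 101.9))   # (101.9, 'degraded')
```

And as I said, if two of the three fail the same way, the vote happily picks the wrong value, which is why I want the human to stay in the loop.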

We keep ourselves safe with layered safety mechanisms and redundancy, ourselves included, so that if any one layer fails, another can hopefully catch the failure.

[–] [email protected] 2 points 11 months ago

Wow, I appreciate the response, it must have taken a while to write.