this post was submitted on 02 Jul 2023
[–] [email protected] 1 points 1 year ago (1 children)

Interesting. Is there an evidence-based way to look at the trolley problem too, or is it just too removed from reality for that? I always feel the trolley problem gets far too much attention relative to its actual applicability.

[–] [email protected] 4 points 1 year ago* (last edited 1 year ago) (1 children)

At its core, the Trolley Problem is a paradox of deontological ethics, that is, ethics based on codes and creeds.

Note that Batman is always framed to choose: you can save Robin, or these five innocents, but you don't have time for both. And then he usually chooses a third option. (He never has to kill Robin to save anyone and then process breaking his code.) It'd be super neat to see Batman in a situation where he has to make a harsh choice, and to see how he processes it. Comics are not often that brave.

Note that deontological ethicists struggle with lying to Nazi Jew-hunters to protect Jewish refugees ( Once upon a time in Nazi-occupied France... ). Kant, writing well before the German Reich, confronted the murderer-at-the-door problem, but his justification for going ahead and directing the killer to his victim didn't feel entirely sound to his contemporaries.

But the Trolley problem is less about a right answer and more about how the answer changes with variations. Most people find it easy enough to pull the lever in the basic scenario, but will find it more challenging to, say:

- Carve up a stranger and harvest his organs so that five transplant patients can live

- Accept the offer of militants in an undeveloped country who will spare a band of refugees from summary execution: if you personally choose and kill one of them, they'll let the rest go free

The scenarios are meant to illustrate that our moral choices are informed by how we feel rather than by any formula, code, or ideology. Only when the stakes get super high (e.g. evading nuclear holocaust, or considering eating our dead at Donner Pass) do we actually invoke intellectual analysis to make moral decisions.

Edit: Completed a thought. Fixed markup.

[–] [email protected] 1 points 1 year ago

The way I was taught it, the trolley problem was used to explain the vulnerabilities of AI decision-making when the rival's strategy isn't considered. The issue is that regardless of what the rival does, betraying is more profitable on average.

If the rival stays silent, betraying is best for us; if they betray us, betraying is again the best move. Considering the average scores, betraying is always the best move. However, if both players cooperate by remaining silent, that is the best outcome overall, but reaching it requires a more sophisticated AI.
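The payoff logic described here is the classic prisoner's dilemma. A minimal sketch in Python, assuming the standard sentence lengths (the exact numbers are illustrative, not from the comment above; lower is better):

```python
# Assumed classic prisoner's-dilemma payoffs, in years of prison.
PAYOFFS = {
    # (my move, rival's move): (my sentence, rival's sentence)
    ("silent", "silent"): (1, 1),
    ("silent", "betray"): (3, 0),
    ("betray", "silent"): (0, 3),
    ("betray", "betray"): (2, 2),
}

def my_sentence(me, rival):
    """My sentence for a given pair of moves."""
    return PAYOFFS[(me, rival)][0]

def average_sentence(me):
    """My average sentence across both possible rival moves."""
    return sum(my_sentence(me, r) for r in ("silent", "betray")) / 2

# Whatever the rival does, betraying gives me a shorter sentence...
for rival in ("silent", "betray"):
    assert my_sentence("betray", rival) < my_sentence("silent", rival)

# ...so betraying is better on average (1.0 vs 2.0 years)...
assert average_sentence("betray") < average_sentence("silent")

# ...yet mutual silence beats mutual betrayal for the pair combined.
assert sum(PAYOFFS[("silent", "silent")]) < sum(PAYOFFS[("betray", "betray")])
```

The assertions make the dilemma concrete: betrayal dominates for each individual player, even though mutual cooperation is the better joint outcome.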