This is a thought-provoking article, thank you for sharing it. One paragraph that particularly stood out to me discusses the limitations of AI in dealing with rare events:
The ability to imagine different scenarios could also help to overcome some of the limitations of existing AI, such as the difficulty of reacting to rare events. By definition, Bengio says, rare events show up only sparsely, if at all, in the data that a system is trained on, so the AI can't learn about them. A person driving a car can imagine an occurrence they've never seen, such as a small plane landing on the road, and use their understanding of how things work to devise potential strategies to deal with that specific eventuality. A self-driving car without the capability for causal reasoning, however, could at best default to a generic response for an object in the road. By using counterfactuals to learn rules for how things work, cars could be better prepared for rare events. Working from causal rules rather than a list of previous examples ultimately makes the system more versatile.
On a different note, I asked GPT-4 to visualize the cause-and-effect flow for lighting a fire. It isn't super detailed, but it's not wrong either:
(Though I think being able to draw a graph like this correctly and actually understanding causality aren't necessarily related.)
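For what it's worth, a flow like that can also be written down as plain data rather than a drawing. Here's a minimal sketch in Python that encodes a fire-lighting cause-and-effect graph and checks that causes always come before their effects; the specific nodes and edges are my own illustration, not GPT-4's actual output:

```python
# Hypothetical example: a fire-lighting cause-and-effect flow as a
# directed graph. Nodes and edges are invented for illustration.
from graphlib import TopologicalSorter

# Map each cause to the effects it contributes to.
causes = {
    "gather fuel": ["build fire structure"],
    "find ignition source": ["apply spark"],
    "build fire structure": ["apply spark"],
    "apply spark": ["flame"],
    "oxygen present": ["flame"],
    "flame": ["heat", "light"],
}

# TopologicalSorter wants each node mapped to its predecessors,
# so invert the cause -> effects map.
predecessors = {}
for cause, effects in causes.items():
    predecessors.setdefault(cause, set())
    for effect in effects:
        predecessors.setdefault(effect, set()).add(cause)

# A valid ordering exists only if the graph has no causal cycles.
order = list(TopologicalSorter(predecessors).static_order())
print(order)
```

Printing the topological order is a cheap sanity check: if the graph GPT-4 drew contained a cycle (an effect causing its own cause), this would raise an error instead.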
If you tell me the original prompts you used, we can test them in GPT-4 and see how well it performs.