Causes of Effects: Learning Individual Responses from Population Data

Scott Mueller, Ang Li, Judea Pearl

Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Main Track. Pages 2712-2718. https://doi.org/10.24963/ijcai.2022/376

The problem of individualization is crucial in almost every field of science. Identifying the causes of specific observed events is likewise essential for accurate decision making and for explanation. However, such tasks invoke counterfactual relationships and are therefore indeterminable from population data. For example, the probability of benefiting from a treatment concerns an individual having a favorable outcome if treated and an unfavorable outcome if untreated; it cannot be estimated from experimental data, even when conditioned on fine-grained features, because we cannot test both possibilities for the same individual. Tian and Pearl provided bounds on this and other probabilities of causation using a combination of experimental and observational data. Those bounds, though tight, can be narrowed significantly when structural information is available in the form of a causal model. This added information may provide the power to solve central problems, such as explainable AI, legal responsibility, and personalized medicine, all of which demand counterfactual logic. This paper derives, analyzes, and characterizes these new bounds, and illustrates some of their practical applications.
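
For reference, a sketch of the Tian-Pearl bounds on the probability of benefit (the probability of necessity and sufficiency, PNS) mentioned in the abstract is given below. The notation is assumed here rather than quoted from the paper: P(y_x) and P(y_{x'}) denote experimental probabilities of the favorable outcome under treatment and under control, P(y) the observational probability of the favorable outcome, and P(x, y), P(x', y') observational joint probabilities of treatment choice and outcome.

% Tian-Pearl bounds on PNS = P(y_x, y'_{x'}), combining experimental
% quantities P(y_x), P(y_{x'}) with observational quantities P(y), P(x, y), ...
\[
\max\bigl\{\, 0,\;\; P(y_x) - P(y_{x'}),\;\; P(y) - P(y_{x'}),\;\; P(y_x) - P(y) \,\bigr\}
\;\le\; \mathrm{PNS} \;\le\;
\min\bigl\{\, P(y_x),\;\; P(y'_{x'}),\;\; P(x, y) + P(x', y'),\;\; P(y_x) - P(y_{x'}) + P(x, y') + P(x', y) \,\bigr\}
\]

The paper's contribution, as described above, is to narrow bounds of this kind further when a causal model supplies additional structural information.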
Keywords:
Knowledge Representation and Reasoning: Causality
Uncertainty in AI: Causality, Structural Causal Models and Causal Inference
Uncertainty in AI: Graphical Models