Sander Beckers, Hana Chockler, Joseph Y. Halpern
Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence
Main Track. Pages 363-371. https://doi.org/10.24963/ijcai.2023/41
In earlier work, we defined a qualitative notion of harm: either harm is caused, or it is not. For practical applications, we often need to quantify harm; for example, we may want to choose the least harmful of a set of possible interventions. We first present a quantitative definition of harm in a deterministic context involving a single individual. We then consider the issues involved in dealing with uncertainty regarding the context and in going from a notion of harm for a single individual to a notion of "societal harm", which involves aggregating the harm to individuals. We show that the "obvious" way of doing this (taking the expected harm for each individual and then summing the expected harm over all individuals) can lead to counterintuitive or inappropriate answers, and we discuss alternatives, drawing on work from the decision-theory literature.
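The "obvious" aggregation the abstract cautions against can be made concrete with a minimal sketch. This is a hypothetical illustration, not the paper's definition: harm values, context names, and function names below are all assumed for the example, with contexts weighted by their probabilities.

```python
# Hypothetical sketch of the naive aggregation: take each individual's
# expected harm over contexts, then sum across individuals.
# All names and numbers are illustrative, not from the paper.

def expected_harm(harm_by_context, context_probs):
    """Expected harm for one individual: sum over contexts u of P(u) * harm(u)."""
    return sum(context_probs[u] * h for u, h in harm_by_context.items())

def naive_societal_harm(individuals, context_probs):
    """The 'obvious' societal harm: sum of expected harms over all individuals."""
    return sum(expected_harm(h, context_probs) for h in individuals)

# Two equally likely contexts; two individuals, each harmed in only one context.
probs = {"u1": 0.5, "u2": 0.5}
alice = {"u1": 1.0, "u2": 0.0}  # harmed only in context u1
bob   = {"u1": 0.0, "u2": 1.0}  # harmed only in context u2

print(naive_societal_harm([alice, bob], probs))  # 1.0
```

Note how this measure is blind to how harm is distributed: the outcome above (someone is certainly harmed in every context) scores the same as one where both individuals face harm 0.5 in both contexts, which is the kind of counterintuitive answer that motivates the alternatives discussed in the paper.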
AI Ethics, Trust, Fairness: ETF: Ethical, legal and societal issues
Uncertainty in AI: UAI: Causality, structural causal models and causal inference
Uncertainty in AI: UAI: Decision and utility theory