IJCAI-97 TUTORIALS

Modeling with Defaults: Causal and Temporal Reasoning

Hector Geffner

Course Description

A robot pushes a block and expects the block to move. The block, however, does not move. The robot pushes again, this time harder. The block moves.
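The scenario above is an instance of default (non-monotonic) inference: a conclusion is drawn in the absence of information to the contrary and withdrawn when that information arrives. The following minimal Python sketch, which is not part of the tutorial material and whose rule and fact names are purely illustrative, makes the pattern concrete.

    # Toy default rule: "if the block is pushed and nothing abnormal is known,
    # conclude that it moves." Conclusions are retracted, not contradicted,
    # when new information (here, an abnormality) is added.

    def expected_outcomes(facts):
        """Apply the default 'pushed and not abnormal -> moves' to a set of facts."""
        conclusions = set(facts)
        if "pushed" in facts and "abnormal" not in facts:
            conclusions.add("moves")
        return conclusions

    # First push: nothing abnormal is known, so the robot expects the block to move.
    print(expected_outcomes({"pushed"}))              # {'pushed', 'moves'}

    # The block did not move: an abnormality is now assumed, and the default
    # conclusion 'moves' is no longer drawn.
    print(expected_outcomes({"pushed", "abnormal"}))  # {'pushed', 'abnormal'}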

Inferences of this type are easy for people but hard for robots. Part of the problem is that the modeling languages used in AI do not deal with uncertainty in a natural way. Logical languages, for example, do not handle uncertainty at all, while probabilistic languages deal with uncertainty at a precision and cost that are seldom needed.

Default languages are a new type of modeling language that aims to fill the gap between logical and probabilistic languages, providing modelers with the means to map soft inputs into soft outputs in a meaningful and principled way. Default models combine the convenience of logical languages, the flexibility and clarity of a probabilistic semantics, and the transparency of argumentation algorithms. The goal of the tutorial is to provide a coherent and self-contained survey of such work.

We view default reasoning in two ways: as an extended form of deductive inference and as a qualitative form of probabilistic inference. In each case, we lay out the main concepts, intuitions and algorithms. We then consider the specific problems that arise when reasoning about causality and time, and analyze what works, what doesn't work, and why. We make use of the basic ideas that underlie two probabilistic models: Bayesian Networks and Markov Processes. This allows us to shed light on a number of issues like the distinction between laws and facts, the role of causality, and the conditions for efficient reasoning.

We also illustrate the use of default languages for modeling in areas such as qualitative reasoning, decision making, and planning and control.

Prerequisite Knowledge

The tutorial is intended for people interested in common-sense modeling, planning, decision making, and control. There are no prerequisites beyond a basic knowledge of logic and probability.

About the Lecturer

Hector Geffner received his Ph.D. from UCLA with a dissertation on default reasoning that was co-winner of the 1990 ACM Dissertation Award. He then worked as a Research Staff Member at the IBM T.J. Watson Research Center in New York for two years before returning to the Universidad Simon Bolivar in Caracas, Venezuela, where he currently teaches. He has served on the program committees of the major AI conferences and is a member of the editorial board of the Journal of Artificial Intelligence Research.