Proceedings Abstracts of the Twenty-Fourth International Joint Conference on Artificial Intelligence

Inference and Learning for Probabilistic Description Logics / 4411
Riccardo Zese

Abstract

Recent years have seen rapidly growing interest in methods for combining probability with Description Logics (DLs). These methods are very useful for modeling real-world domains, where incompleteness and uncertainty are common, and this combination has become a fundamental component of the Semantic Web.

Our work started with the development of a probabilistic semantics for DLs, called DISPONTE, that applies the distribution semantics to DLs. Under DISPONTE we annotate the axioms of a theory with a probability, which can be interpreted as the degree of our belief in the corresponding axiom, and we assume that each axiom is independent of the others.

Several algorithms have been proposed for supporting the development of the Semantic Web. Efficient DL reasoners, such as Pellet, are able to extract implicit information from the modeled ontologies. Despite the availability of many DL reasoners, the number of probabilistic reasoners is quite small. We developed BUNDLE, a reasoner based on Pellet that computes the probability of queries. BUNDLE, like most DL reasoners, uses an imperative language to implement its reasoning algorithm. However, reasoning algorithms typically rely on non-deterministic operators for inference: one of the most widely used approaches is the tableau algorithm, which applies a set of consistency-preserving expansion rules to an ABox, but some of these rules are non-deterministic.

In order to manage this non-determinism, we developed the system TRILL, which performs inference over DISPONTE DLs. It implements the tableau algorithm in the declarative Prolog language, whose search strategy is exploited to handle the non-determinism of the reasoning process. Moreover, we developed a second version of TRILL, called TRILL^P, which implements some optimizations that reduce the running time.

The parameters of probabilistic KBs are difficult to set, so it is necessary to develop systems that automatically learn these parameters from the information available in the KB. We presented EDGE, which learns the parameters of a DISPONTE KB, and LEAP, which learns the structure together with the parameters of a DISPONTE KB.

Our main objective is to apply the developed algorithms to Big Data. The size of the data, however, requires algorithms able to handle it, making it necessary to exploit approaches based on parallelization and cloud computing. We are currently working on improving EDGE and LEAP in order to parallelize them.
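As a brief sketch of the DISPONTE semantics described above (this is the standard distribution-semantics formulation, not an excerpt from the paper): a world w is obtained by deciding, for each probabilistic axiom E_i annotated with probability p_i, whether to include it in the theory; under the independence assumption, the probability of a world and of a query Q are

\[ P(w) = \prod_{E_i \in w} p_i \prod_{E_i \notin w} (1 - p_i), \qquad P(Q) = \sum_{w \models Q} P(w). \]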
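The following Python sketch makes this computation concrete by brute-force enumeration of worlds. The toy knowledge base, axiom names, and entailment check are hypothetical placeholders introduced only for illustration; the actual reasoners (BUNDLE, TRILL) do not enumerate worlds explicitly and are far more efficient.

```python
from itertools import product

# Toy DISPONTE-style KB: each probabilistic axiom is a (name, probability) pair.
axioms = [("fluffy_is_cat", 0.9), ("cats_are_pets", 0.8)]

def entails(selected, query):
    # Placeholder entailment check: the query holds only when both example
    # axioms are part of the chosen world.
    return query == "fluffy_is_pet" and {"fluffy_is_cat", "cats_are_pets"} <= selected

def query_probability(axioms, query):
    """Sum the probabilities of all worlds (axiom subsets) that entail the query."""
    total = 0.0
    for choices in product([True, False], repeat=len(axioms)):
        selected = {name for (name, _), chosen in zip(axioms, choices) if chosen}
        weight = 1.0
        for (_, p), chosen in zip(axioms, choices):
            weight *= p if chosen else 1.0 - p
        if entails(selected, query):
            total += weight
    return total

print(query_probability(axioms, "fluffy_is_pet"))  # 0.9 * 0.8 ≈ 0.72
```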