Random vs. Best-First: Impact of Sampling Strategies on Decision Making in Model-Based Diagnosis

Authors

  • Patrick Rodler, University of Klagenfurt, Austria

DOI:

https://doi.org/10.1609/aaai.v36i5.20531

Keywords:

Knowledge Representation And Reasoning (KRR), Reasoning Under Uncertainty (RU), Search And Optimization (SO), Domain(s) Of Application (APP)

Abstract

Statistical samples, in order to be representative, have to be drawn from a population in a random and unbiased way. Nevertheless, it is common practice in the field of model-based diagnosis to draw estimates from (biased) best-first samples. One example is the computation of a few most probable fault explanations for a defective system, which are then used to assess which aspect of the system, if measured, would yield the highest information gain. In this work, we scrutinize whether these statistically not well-founded conventions, to which both diagnosis researchers and practitioners have adhered for decades, are indeed reasonable. To this end, we empirically analyze various sampling methods that generate fault explanations. We study the representativeness of the produced samples in terms of their estimates about fault explanations and how well they guide diagnostic decisions, and we investigate the impact of sample size, the optimal trade-off between sampling efficiency and effectiveness, and how approximate sampling techniques compare to exact ones.
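For illustration only, the following minimal sketch (not taken from the paper) shows how a one-step information-gain estimate for a candidate measurement might be computed from a sample of probability-weighted fault explanations; the function names `expected_info_gain` and `predicted_outcome` and the toy data are hypothetical, and the entropy-based scoring is the standard one-step lookahead commonly used in sequential diagnosis.

```python
from math import log2

def entropy(probs):
    """Shannon entropy of a normalized probability distribution."""
    return -sum(p * log2(p) for p in probs if p > 0)

def expected_info_gain(diagnoses, predicted_outcome):
    """Estimate a measurement's information gain from a sample of diagnoses.

    diagnoses        : list of (diagnosis, probability) pairs; the probabilities
                       are renormalized over the sample.
    predicted_outcome: function mapping a diagnosis to the measurement outcome
                       it entails (e.g. 'ok' / 'faulty').
    """
    total = sum(p for _, p in diagnoses)
    probs = [p / total for _, p in diagnoses]
    prior_entropy = entropy(probs)

    # Group the sample's probability mass by the outcome each diagnosis predicts.
    mass_by_outcome = {}
    for (diag, _), p in zip(diagnoses, probs):
        outcome = predicted_outcome(diag)
        mass_by_outcome[outcome] = mass_by_outcome.get(outcome, 0.0) + p

    # Expected posterior entropy: condition the sample on each possible outcome.
    expected_posterior = 0.0
    for outcome, mass in mass_by_outcome.items():
        conditioned = [p / mass for (d, _), p in zip(diagnoses, probs)
                       if predicted_outcome(d) == outcome]
        expected_posterior += mass * entropy(conditioned)

    return prior_entropy - expected_posterior

# Toy usage: three sampled fault explanations (sets of suspected components)
# and a probe whose outcome depends on whether 'gate1' is assumed faulty.
sample = [({'gate1'}, 0.4), ({'gate2'}, 0.35), ({'gate1', 'gate3'}, 0.25)]
gain = expected_info_gain(sample, lambda d: 'faulty' if 'gate1' in d else 'ok')
print(f"estimated information gain: {gain:.3f} bits")
```

Whether such an estimate is trustworthy when the sample consists of a few most probable (best-first) diagnoses rather than a random sample is exactly the question the paper studies empirically.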

Published

2022-06-28

How to Cite

Rodler, P. (2022). Random vs. Best-First: Impact of Sampling Strategies on Decision Making in Model-Based Diagnosis. Proceedings of the AAAI Conference on Artificial Intelligence, 36(5), 5869-5878. https://doi.org/10.1609/aaai.v36i5.20531

Section

AAAI Technical Track on Knowledge Representation and Reasoning