Detecting and repairing anomalous evolutions in noisy environments. Logic programming formalization and complexity results (Q645074)

scientific article

    Statements

    8 November 2011
    This paper studies the diagnosis and repair of anomalous observations in a noisy system. In particular, it studies the problem of an agent reasoning about its environment through potentially faulty sensors, such as looking for a parking spot in a garage equipped with potentially faulty cameras, and ways to identify and repair the faulty sensors. The focus of this work is therefore not on agent planning based on observations made through the sensors, but on agent reasoning about the collective sensor behavior: identifying and repairing faulty sensors, based on the agent's knowledge of the system, while it carries out such a plan; this process is referred to as an \textit{agent evolution} in the paper.

    Technically speaking, the authors adopt a framework \(\langle \mathcal{F}, \mathcal{S}, \mathcal{D}, \mathcal{K}\rangle\) to represent an agent, where \(\mathcal{F}\) is a set of propositional letters, each taking a value among true, false, and unknown, used to describe the agent's world; \(\mathcal{S}\) is the set of available sensors that the agent may observe; \(\mathcal{D}\) is a collection of state-transition rules associated with a set of given actions that the agent may take; and \(\mathcal{K}\) is the internal knowledge that the agent may use to reason about the current state of its world. Under this setting, an anomaly is a discrepancy between observations made with the sensors occurring in \(\mathcal{D}\) and the trusted knowledge stored in \(\mathcal{K}\), while a repair replaces faulty sensor observations with alternative values that eliminate all such discrepancies, without having to reverse actions that have already taken place. The transition rules in \(\mathcal{D}\) are represented in the action language \(\mathcal{A}_k\) under the \(0\)-approximation semantics, while the knowledge in \(\mathcal{K}\) is represented as a propositional logic program with negation by default, under the answer set semantics.

    Among the problems of interest are the \textit{anomaly existence problem}: given an agent and an action plan, does there exist an anomaly? The \textit{repair existence problem}: if there is such an anomaly, does there exist a repair, i.e., an alternative action plan for the agent? And the \textit{anomaly and repair checking problem}: given an agent, an action plan together with the associated observations, and another action plan, is this set of observations anomalous and, if so, is the latter plan a repair for the former? The authors then spend more than half of the paper (twenty-nine of its forty-nine pages) discussing and proving the complexity results for these three problems. When the knowledge in \(\mathcal{K}\) involves no negation, both the anomaly existence and the repair existence problems are in P, but the anomaly and repair checking problem is NP-complete; otherwise, all three problems are intrinsically difficult. For example, the anomaly existence problem is \(\Sigma_2^{\text{P}}\)-complete under the ``cautious'' semantics for the logic program, where a program entails a literal if the literal belongs to every answer set of the program, while it is still NP-complete under the ``brave'' semantics, where a program entails a literal if the literal belongs to at least one answer set.
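    To illustrate the difference between the two entailment modes, which drives the gap in these complexity results, consider the following standard textbook program (my own illustration, not an example taken from the paper):
    \[
    a \leftarrow \mathrm{not}\; b, \qquad b \leftarrow \mathrm{not}\; a.
    \]
    This program has exactly two answer sets, \(\{a\}\) and \(\{b\}\). Under the brave semantics it entails both \(a\) and \(b\), since each belongs to at least one answer set, whereas under the cautious semantics it entails neither, since neither belongs to every answer set. The universal quantification over answer sets inherent in cautious entailment is, intuitively, what pushes the corresponding decision problems higher in the polynomial hierarchy.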
This self-contained paper is very well written and surprisingly readable, considering its daunting subject. I truly enjoyed reading it. It presents a general but reasonable framework for an interesting problem with possible applications, although I doubt that exact algorithms exist, given the complexities exposed for the more complicated but realistic scenario. I notice that a framework for multiple cooperative agents is part of the future work, which should provide a more realistic characterization of the situation. Besides the problems discussed and characterized, another problem of practical significance could be the following: given an agent and an action plan, beyond asking whether an anomaly exists, we are certainly interested in identifying \textit{which sensors}, among those in \(\mathcal{S}\), lead to the observed anomalies. Intuitively, this is more challenging than the anomaly existence problem, and thus should fall into a higher complexity class. Is it really? Other than the negation-by-default model assumed in this paper, another frequently adopted model of negation in logic programming is negation as failure, i.e., once a program fails to entail a literal \(l\), we take it that the program entails \(\neg l\). This model does lead to incompleteness for the more general class of logic programs in which variables are involved, but does no harm at all for propositional logic programs. This seems to be another direction for future research. An issue often studied in the more practical, but seemingly related, setting of interconnection networks is, given a fault diagnostic model such as the comparison model [\textit{M. Malek}, ``A comparison connection assignment for diagnosis of multiprocessors systems'', in: Proceedings of the 7th annual symposium on computer architecture (ISCA 1980). New York, NY: Association for Computing Machinery (ACM). 31--35 (1980; \url{doi:10.1145/800053.801906}); \textit{J. Maeng} and \textit{M. Malek}, ``A comparison connection assignment for self-diagnosis of multiprocessors systems'', in: Proceedings of the 11th international symposium on fault tolerant computing. Portland, ME: IEEE Computer Society. 173--175 (1981); \textit{A. Sengupta} and \textit{A. Dahbura}, ``On self-diagnosable multiprocessor systems: diagnosis by the comparison approach'', IEEE Trans. Comput. 41, No. 11, 1386--1396 (1992; \url{doi:10.1109/12.177309})], what is an upper bound, or even an exact bound, on the number of faulty nodes that can necessarily be detected in such a network [\textit{C.-K. Lin}, \textit{J. J. M. Tan}, \textit{L.-H. Hsu}, \textit{E. Cheng} and \textit{L. Lipták}, ``Conditional diagnosability of Cayley graphs generated by transposition trees under the comparison diagnosis model'', J. of Interconnection Networks 9, No. 1--2, 83--97 (2008; \url{doi:10.1142/S0219265908002175})]. As a matter of fact, before taking up this paper for review, I had just finished a paper on this topic. I cannot help wondering whether the same question(s) can be asked about the number of anomalous sensors within such a framework.
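    As a small illustration of the negation-as-failure convention just described (again my own example, not taken from the paper): for the one-rule propositional program
    \[
    p \leftarrow q,
    \]
    the program entails neither \(q\) nor \(p\), so under this reading it entails both \(\neg q\) and \(\neg p\). In the propositional case this causes no difficulty; as noted above, the incompleteness only enters once variables are involved.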
    reasoning under uncertainty
    agent technology
    complexity results