Temporal Models for History-Aware Explainability

Research output: Chapter in Book/Report/Conference proceeding › Conference publication

Abstract

On one hand, there has been growing interest in the application of AI-based learning and evolutionary programming for self-adaptation under uncertainty. On the other hand, self-explanation is one of the self-* properties that has been neglected. This is paradoxical, as self-explanation is inevitably needed when using such techniques. In this paper, we argue that a self-adaptive autonomous system (SAS) needs an infrastructure and capabilities to look at its own history in order to explain and reason about why the system has reached its current state. This infrastructure and these capabilities need to be built on the right conceptual models, in such a way that the system's history can be stored and queried for use in the context of the decision-making algorithms.

The explanation capabilities are framed in four incremental levels, from forensic self-explanation to automated history-aware (HA) systems. Incremental capabilities imply that the capabilities at Level n must be available to support those at Level n + 1. We demonstrate our current, encouraging results for Level 1 and Level 2, using temporal graph-based models. Specifically, we explain how Level 1 supports forensic analysis after the system's execution. We also show how Level 2 enables on-line historical analyses while the self-adaptive system is running. We present an architecture that records temporal data which can be queried to explain behaviour, and we discuss the overheads that live analysis would impose. Future research opportunities are also envisioned.
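To make the idea concrete: the paper's approach rests on recording a system's state changes over time so that past states can be queried after execution (Level 1) or while the system runs (Level 2). The following is a minimal illustrative sketch of that underlying idea, not the authors' implementation; the paper itself uses temporal graph-based models, and the class and method names here (`TemporalProperty`, `value_at`, `changes_between`) are hypothetical.

```python
from dataclasses import dataclass, field
from bisect import bisect_right

@dataclass
class TemporalProperty:
    """Append-only history of (timestamp, value) pairs; old values are never overwritten."""
    timestamps: list = field(default_factory=list)
    values: list = field(default_factory=list)

    def record(self, t, value):
        """Record that the property took `value` at time `t` (timestamps must be increasing)."""
        self.timestamps.append(t)
        self.values.append(value)

    def value_at(self, t):
        """Value that held at time t — the kind of forensic query Level 1 supports."""
        i = bisect_right(self.timestamps, t)
        return self.values[i - 1] if i else None

    def changes_between(self, t0, t1):
        """All recorded changes in [t0, t1] — the kind of on-line query Level 2 supports."""
        return [(ts, v) for ts, v in zip(self.timestamps, self.values) if t0 <= ts <= t1]

# A SAS recording its adaptation mode over time:
mode = TemporalProperty()
mode.record(0, "normal")
mode.record(5, "degraded")
mode.record(9, "normal")
print(mode.value_at(7))             # the mode that held at t=7
print(mode.changes_between(4, 10))  # changes observed between t=4 and t=10
```

A real history-aware infrastructure would persist such records in a temporal graph database rather than in memory, but the query pattern — "what was the state at time t, and why did it change?" — is the same.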
Original language: English
Title of host publication: Proceedings of the 12th System Analysis and Modelling Conference, SAM 2020
Publisher: ACM
Pages: 155–164
Number of pages: 10
ISBN (Electronic): 978-1-4503-8140-6
DOIs
Publication status: Published - 19 Oct 2020

Bibliographical note

Permission to make digital or hard copies of all or part of this work for personal or
classroom use is granted without fee provided that copies are not made or distributed
for profit or commercial advantage and that copies bear this notice and the full citation
on the first page. Copyrights for components of this work owned by others than ACM
must be honored. Abstracting with credit is permitted. To copy otherwise, or republish,
to post on servers or to redistribute to lists, requires prior specific permission and/or a
fee. Request permissions from permissions@acm.org.
SAM ’20, October 19–20, 2020, Virtual Event, Canada
© 2020 Association for Computing Machinery

Funding: The work was partially funded by the Leverhulme Trust Research Fellowship RF-2019-548 and the EPSRC Research Project Twenty20Insight (Grant No. EP/T017627/1).
