From a Series of (Un)fortunate Events to Global Explainability of Runtime Model-Based Self-Adaptive Systems

Juan Marcelo Parra-Ullauri, Antonio Garcia-Dominguez, Nelly Bencomo

Research output: Chapter in Book/Published conference output › Conference publication

Abstract

Self-adaptive systems (SAS) increasingly use AI-based approaches for their flexible decision-making, which often appear to users as 'black boxes'. These systems can exhibit unexpected and surprising behaviours that may violate imposed constraints. Runtime models (RTMs) have been used for SAS management in order to provide the capabilities needed to explain why the system presents its current emergent behaviour. Existing work on explanations derived from RTMs has focused on justifying why the system exhibited a specific behaviour at a given time. Nevertheless, we argue that a more general scope is required to understand the entire evolution of the system, rather than its behaviour in a single instance or situation. From the point of view of Explainable AI (XAI), explanations of this broader kind are called global explanations, whereas explanations of a single decision are called local explanations. Global explanations tend to promote trust in the system as a whole, while local explanations tend to promote trust in a specific decision. In this paper, we propose the use of event graph models to construct global explanations from evolving RTMs. Event graphs represent the system behaviour as a state-time diagram, indicating the occurrence of events and their relationships. RTMs are incrementally queried to look for situations of interest (i.e. events), using Complex Event Processing (CEP) to analyse and correlate real-time events and thereby derive conclusions. The approach is applied to an AI-enhanced SAS in the domain of mobile communications. The encouraging results show that event graphs allow the system to present a summarised overview of its behaviour, promoting understandability and trustworthiness.
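The following is a minimal, illustrative sketch (in Python) of the idea summarised in the abstract, not the authors' implementation: a stream of runtime-model snapshots is scanned for situations of interest with simple CEP-style rules, and the detected events are linked into an event graph that summarises the system's evolution as a state-time diagram. All names (Snapshot, EventGraph, the overload threshold, etc.) are hypothetical and chosen only for illustration.

# Hypothetical sketch of event detection over RTM snapshots and event-graph construction.
from dataclasses import dataclass, field

@dataclass
class Snapshot:
    time: int                 # logical timestamp of the runtime model snapshot
    load: float               # an example monitored property
    config: str               # the configuration the SAS chose at this time

@dataclass
class EventGraph:
    nodes: list = field(default_factory=list)   # detected events: (time, label)
    edges: list = field(default_factory=list)   # (earlier_event, later_event, relation)

    def add_event(self, time, label):
        node = (time, label)
        if self.nodes:                           # link each event to its predecessor in time
            self.edges.append((self.nodes[-1], node, "followed-by"))
        self.nodes.append(node)
        return node

def detect_events(snapshots, graph, overload=0.8):
    """Very simple CEP-like rules: flag overloads and reconfigurations."""
    previous = None
    for snap in snapshots:
        if snap.load > overload:
            graph.add_event(snap.time, f"overload (load={snap.load:.2f})")
        if previous and snap.config != previous.config:
            graph.add_event(snap.time, f"reconfigured {previous.config} -> {snap.config}")
        previous = snap
    return graph

# Usage: replay a short (made-up) history and print the summarised global view.
history = [Snapshot(0, 0.4, "A"), Snapshot(1, 0.9, "A"),
           Snapshot(2, 0.85, "B"), Snapshot(3, 0.3, "B")]
graph = detect_events(history, EventGraph())
for earlier, later, rel in graph.edges:
    print(f"{earlier} --{rel}--> {later}")

In the paper's setting, the snapshots would instead be incremental queries over the evolving RTMs and the detection rules would be expressed as CEP patterns; the sketch only shows how detected events and their temporal relations can be assembled into a graph that gives a global overview of the system's behaviour.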

Original language: English
Title of host publication: 2021 ACM/IEEE International Conference on Model Driven Engineering Languages and Systems Companion (MODELS-C)
Publisher: IEEE
Pages: 807-816
Number of pages: 10
ISBN (Electronic): 978-1-6654-2484-4
ISBN (Print): 978-1-6654-2485-1
DOIs
Publication status: Published - 20 Dec 2021
Event: 24th International Conference on Model-Driven Engineering Languages and Systems, MODELS-C 2021 - Virtual, Online, Japan
Duration: 10 Oct 2021 – 15 Oct 2021

Conference

Conference: 24th International Conference on Model-Driven Engineering Languages and Systems, MODELS-C 2021
Country/Territory: Japan
City: Virtual, Online
Period: 10/10/21 – 15/10/21

Bibliographical note

Funding Information:
This work has been partially sponsored by The Leverhulme Trust Grant No. RF-2019-548/9 and the EPSRC Research Project Grant No. EP/T017627/1.

Keywords

  • CEP
  • Event Graph Models
  • Global Explainability
  • Runtime Models
  • Self-Adaptive Systems
  • XAI
