Event-driven temporal models for explanations-ETeMoX: explaining reinforcement learning

Juan Parra*, Antonio Garcia-Dominguez, Nelly Bencomo, Changgang Zheng, Chen Zhen, Juan Boubeta-Puig, Guadalupe Ortiz, Shufan Yang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Modern software systems are increasingly expected to show higher degrees of autonomy and self-management to cope with uncertain and diverse situations. As a consequence, autonomous systems can exhibit unexpected and surprising behaviours. This is exacerbated by the ubiquity and complexity of Artificial Intelligence (AI)-based systems. This is the case for Reinforcement Learning (RL), where autonomous agents learn through trial-and-error how to find good solutions to a problem. Thus, the underlying decision-making criteria may become opaque to users who interact with the system and may require explanations about the system’s reasoning. Available work for eXplainable Reinforcement Learning (XRL) offers different trade-offs: e.g. for runtime explanations, the approaches are model-specific or can only analyse results after-the-fact. Different from these approaches, this paper aims to provide an online model-agnostic approach for XRL towards trustworthy and understandable AI. We present ETeMoX, an architecture based on temporal models to keep track of the decision-making processes of RL systems. In cases where resources are limited (e.g. storage capacity or response time), the architecture also integrates complex event processing, an event-driven approach, for detecting matches to event patterns that need to be stored, instead of keeping the entire history. The approach is applied to a mobile communications case study that uses RL for its decision-making. In order to test the generalisability of our approach, three variants of the underlying RL algorithms are used: Q-Learning, SARSA and DQN. The encouraging results show that, using the proposed configurable architecture, RL developers can obtain explanations about the evolution of a metric and the relationships between metrics, and can track situations of interest happening over time windows.
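To illustrate the event-driven idea described in the abstract (detecting matches to event patterns instead of storing the entire history), the following is a minimal sketch in Python. All names, the window size, and the threshold are illustrative assumptions, not taken from the paper or its implementation:

```python
from collections import deque

# Illustrative sketch (not the paper's implementation): rather than
# persisting the full history of RL training events, keep a sliding
# window over the event stream and store only the windows that match
# a pattern of interest -- here, an average reward dropping below a
# threshold.
def detect_reward_drops(events, window_size=5, threshold=0.3):
    """Return only the matched windows; all other events are discarded."""
    window = deque(maxlen=window_size)
    matches = []
    for event in events:  # each event is a (timestep, reward) pair
        window.append(event)
        if len(window) == window_size:
            avg = sum(r for _, r in window) / window_size
            if avg < threshold:
                matches.append(list(window))
    return matches

# Hypothetical reward trace with a temporary performance drop
rewards = list(enumerate([0.9, 0.8, 0.1, 0.2, 0.1, 0.2, 0.9, 0.9, 0.9, 0.9]))
drops = detect_reward_drops(rewards)
```

In a CEP engine such as the one the architecture integrates, this kind of pattern would be declared as a query over time windows rather than coded by hand; the sketch only shows the storage-saving principle.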
Original language: English
Journal: Software and Systems Modeling
Early online date: 18 Dec 2021
DOIs
Publication status: E-pub ahead of print - 18 Dec 2021

Bibliographical note

This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Funding: This work has been partially sponsored by The Leverhulme Trust Fellowship “QuantUn: quantification of uncertainty using Bayesian surprises” (Grant No. RF-2019-548/9), the EPSRC Research Project Twenty20Insight (Grant No. EP/T017627/1), The Royal Society of Edinburgh project “A Reinforcement Learning Based Resource Management System for Long Term Care for Elderly People” (Grant No. 961_Yang), the Spanish Ministry of Science and Innovation and the European Regional Development Funds under project FAME (Grant No. RTI2018-093608-B-C33), and the Research Plan from the University of Cadiz and Grupo Energético de Puerto Real S.A. under project GANGES (Grant No. IRTP03_UCA).
