Next steps in variability management due to autonomous behaviour and runtime learning

Nelly Bencomo

    Research output: Chapter in Book/Published conference output › Conference publication

    Abstract

    One of the basic principles in product lines is to delay design decisions related to offered functionality and quality to later phases of the life cycle [25]. Instead of deciding in advance what system to develop, a set of assets and a common reference architecture are specified and implemented during the Domain Engineering process. Later on, during Application Engineering, specific systems are developed to satisfy the requirements, reusing the assets and the architecture [16]. Traditionally, it is during Application Engineering that the delayed design decisions are resolved. The realization of this delay relies heavily on the use of variability in the development of product lines and systems. However, as systems become more interconnected and diverse, software architects cannot easily foresee the software variants and the interconnections between components. Consequently, a generic a priori model is conceived to specify the system's dynamic behaviour and architecture, and the corresponding design decisions are left to be resolved at runtime [13].
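    To make such a delayed decision concrete, the following is a minimal sketch in Python (the names VariationPoint, Variant, and bind are hypothetical, not from the paper) of a variation point that is specified during Domain Engineering and bound only later, at Application Engineering time or even at runtime:

        # Hypothetical sketch: a variation point whose binding is delayed.
        class Variant:
            def __init__(self, name, impl):
                self.name = name
                self.impl = impl              # callable realizing the feature

        class VariationPoint:
            """A delayed design decision: which variant realizes a feature."""
            def __init__(self, feature, variants):
                self.feature = feature
                self.variants = {v.name: v for v in variants}
                self.bound = None             # unresolved during Domain Engineering

            def bind(self, name):
                # Binding may happen at Application Engineering time,
                # or be redone at runtime (dynamic variability).
                self.bound = self.variants[name]

            def run(self, *args):
                if self.bound is None:
                    raise RuntimeError(f"variation point '{self.feature}' is unbound")
                return self.bound.impl(*args)

        # Domain Engineering: specify the reusable assets.
        vp = VariationPoint("compression", [
            Variant("fast", lambda data: data),                    # placeholder impls
            Variant("small", lambda data: data[: len(data) // 2]),
        ])

        # Application Engineering (or runtime reconfiguration): resolve the decision.
        vp.bind("fast")
        vp.run(b"payload")
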
    Surprisingly, few research initiatives have investigated variability models at runtime [9]. Further, they have been applied only at the level of goals and architecture, which contrasts with the needs expressed by the variability community, i.e., Software Product Lines (SPLC) and Dynamic Software Product Lines (DSPL) [2, 10, 14, 22]. In particular, the vision of DSPLs, with their ability to support runtime updates with virtually zero downtime for the products of a software product line, makes obvious the need for variability models to be used at runtime to adapt the corresponding programs. A main challenge in dealing with runtime variability is that it should support a wide range of product customizations under scenarios that may be unknown until execution time, as new product variants can be identified only at runtime [10, 11]. Contemporary variability models face the challenge of representing runtime variability so as to allow the modification of variation points during the system's execution and to underpin the automation of the system's reconfiguration [15]. The runtime representation of feature models (i.e., the runtime model of features) is required to automate the decision making [9].
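    As one possible reading of what a runtime representation of a feature model could look like (a minimal sketch under assumed names and a simplified constraint format, not from the paper): a feature configuration that is kept alive at runtime, can be checked for validity, and can be reconfigured while the system executes:

        # Hypothetical runtime feature model: features plus simple constraints,
        # kept at runtime so reconfiguration decisions can be automated.
        class RuntimeFeatureModel:
            def __init__(self, features, requires=(), excludes=()):
                self.active = set(features)     # currently bound features
                self.requires = list(requires)  # (a, b): a requires b
                self.excludes = list(excludes)  # (a, b): a excludes b

            def valid(self):
                ok_req = all(b in self.active
                             for a, b in self.requires if a in self.active)
                ok_exc = all(not (a in self.active and b in self.active)
                             for a, b in self.excludes)
                return ok_req and ok_exc

            def reconfigure(self, enable=(), disable=()):
                candidate = (self.active | set(enable)) - set(disable)
                trial = RuntimeFeatureModel(candidate, self.requires, self.excludes)
                if not trial.valid():
                    raise ValueError("reconfiguration violates feature constraints")
                self.active = candidate  # switch to the new variant at runtime

        fm = RuntimeFeatureModel(
            {"logging", "encryption"},
            requires=[("encryption", "keystore")],
            excludes=[("low_power", "encryption")],
        )
        print(fm.valid())                    # False: encryption requires keystore
        fm.reconfigure(enable={"keystore"})  # valid variant; applied at runtime
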
    Software automation and adaptation techniques have traditionally required a priori models of the dynamic behaviour of systems [17]. Given the uncertainty present in the scenarios involved, such an a priori model is difficult to define [20, 23, 26]. Even if it can be foreseen, its maintenance is labour-intensive and, due to architecture decay, it is also prone to becoming out of date. However, the use of models@runtime does not necessarily require defining the system's behaviour model beforehand. Instead, techniques such as machine learning, or mining software component interactions from system execution traces, can be used to build a model which is in turn used to analyze, plan, and execute adaptations [18], and to synthesize emergent software on the fly [7].
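    A strongly simplified illustration of this idea (the trace format, threshold, and planning rule are assumptions for the sketch, not the paper's method): mine component-interaction frequencies from execution traces to build a behaviour model at runtime, then use it in an analyze-and-plan step:

        # Hypothetical sketch: build a behaviour model from execution traces
        # instead of defining it a priori, then use it to analyze and plan.
        from collections import Counter, defaultdict

        def mine_model(traces):
            """Estimate transition probabilities between components from traces."""
            counts = defaultdict(Counter)
            for trace in traces:
                for src, dst in zip(trace, trace[1:]):
                    counts[src][dst] += 1
            return {src: {dst: n / sum(c.values()) for dst, n in c.items()}
                    for src, c in counts.items()}

        def analyze_and_plan(model, failure_rates, threshold=0.2):
            """Flag components that fail too often, plus their upstream callers."""
            plan = []
            for component, rate in failure_rates.items():
                if rate > threshold:
                    callers = [src for src, dsts in model.items() if component in dsts]
                    plan.append(("replace", component, callers))
            return plan

        traces = [["ui", "auth", "db"], ["ui", "auth", "cache", "db"], ["ui", "db"]]
        model = mine_model(traces)       # learned at runtime, not defined a priori
        plan = analyze_and_plan(model, {"auth": 0.35, "db": 0.05})
        print(model["ui"], plan)         # ui -> auth ~0.67, db ~0.33; replace 'auth'
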
    Another well-known problem posed by the uncertainty that characterizes autonomous systems is that different stakeholders (e.g. end users, operators, and even developers) may not understand them due to their emergent behaviour. In other words, the running system may surprise its customers and/or developers [4]. The lack of support for explanation in these cases may compromise stakeholders' trust, and they may eventually stop using the system [12, 24]. I speculate that variability models can offer great support for (i) explaining the diversity of the causes and triggers of decisions during execution, and their corresponding effects, using traceability [5], and (ii) better understanding the behaviour of the system and its environment.
    Further, an extension, and potentially a reframing, of the techniques associated with variability management may be needed to help tame uncertainty and to support explanation and understanding of these systems. The use of new techniques such as machine learning exacerbates the current situation. At the same time, however, machine learning techniques can also help, for example, to explore the variability space [1]. What can the community do to face the associated challenges?
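    For instance (an assumed setup for illustration only, not the approach of the cited work [1]), a learning-flavoured exploration of the variability space might sample configurations and score each feature by the average quality of the variants it appears in:

        # Hypothetical sketch: sampling the variability space and learning which
        # features correlate with good variants. The objective is a stand-in
        # for a measured quality such as latency or energy use.
        import random

        FEATURES = ["cache", "compress", "encrypt", "prefetch"]

        def objective(config):
            # Stand-in measurement with noise; weights are invented.
            return sum({"cache": 3, "compress": 1, "encrypt": -1, "prefetch": 2}[f]
                       for f in config) + random.gauss(0, 0.5)

        def explore(samples=200):
            """Score each feature by the mean objective of configs containing it."""
            scores = {f: [] for f in FEATURES}
            for _ in range(samples):
                config = [f for f in FEATURES if random.random() < 0.5]
                y = objective(config)
                for f in config:
                    scores[f].append(y)
            return {f: sum(v) / len(v) for f, v in scores.items() if v}

        random.seed(0)
        print(explore())  # higher score -> feature tends to appear in good variants
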
    We need to meaningfully incorporate techniques from areas such as artificial intelligence, machine learning, optimization, planning, decision theory, and bio-inspired computing into our variability management techniques, to provide explanation and management of the diversity of decisions, their causes, and their associated effects. My own previous work [3, 5, 6, 8, 11, 12, 19, 21] has progressed along the lines discussed above.
    Original language: English
    Title of host publication: Proceedings - VaMoS 2020
    Subtitle of host publication: 14th International Working Conference on Variability Modelling of Software-Intensive Systems
    Editors: Maxime Cordy, Mathieu Acher, Danilo Beuche, Gunter Saake
    Publisher: ACM
    Pages: 1-2
    ISBN (Electronic): 978-1-4503-7501-6
    Publication status: Published - 5 Feb 2020
    Event: 14th International Working Conference on Variability Modelling of Software-Intensive Systems - Magdeburg, Germany
    Duration: 5 Feb 2020 - 7 Feb 2020

    Conference

    Conference: 14th International Working Conference on Variability Modelling of Software-Intensive Systems
    Country/Territory: Germany
    City: Magdeburg
    Period: 5/02/20 - 7/02/20

    Keywords

    • Autonomous systems
    • Dynamic software product lines
    • Dynamic variability
    • Machine learning
    • Uncertainty
    • Variability management
