Embedding transparency in artificial intelligence machine learning models: managerial implications on predicting and explaining employee turnover

Soumyadeb Chowdhury*, Sian Joel-Edgar, Prasanta Kumar Dey, Sudeshna Bhattacharya, Alexander Kharlamov

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Employee turnover (ET) is a major issue faced by firms in all business sectors. Artificial intelligence (AI) machine learning (ML) prediction models can help predict the likelihood of employees voluntarily departing from employment using historical employee datasets. However, the output responses generated by these AI-based ML models lack transparency and interpretability, making it difficult for HR managers to understand the rationale behind the AI predictions. If managers do not understand how and why a model generates its responses from the input data, the model is unlikely to augment data-driven decision-making or bring value to the organisation. The main purpose of this article is to demonstrate the capability of the Local Interpretable Model-Agnostic Explanations (LIME) technique to explain intuitively, to HR managers, the ET predictions generated by AI-based ML models for a given employee dataset. From a theoretical perspective, we contribute to the International Human Resource Management literature by presenting a conceptual review of AI algorithmic transparency and then discussing its significance for sustaining competitive advantage through the principles of resource-based view theory. We also offer a transparent AI implementation framework using LIME, which will provide a useful guide for HR managers to increase the explainability of AI-based ML models and thereby mitigate trust issues in data-driven decision-making.
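To make the LIME workflow the abstract refers to concrete, the sketch below shows how an individual turnover prediction from a generic classifier can be explained locally. It uses the open-source lime and scikit-learn packages; the feature names, toy data, and model are illustrative assumptions, not the dataset or pipeline from the paper.

```python
# A minimal sketch of explaining an employee-turnover prediction with LIME.
# The features, data, and model below are illustrative placeholders, not the
# dataset or pipeline used in the article.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Hypothetical employee features (tenure, satisfaction score, overtime hours)
# and a binary "left the firm" label generated by a toy rule.
rng = np.random.default_rng(0)
X_train = rng.random((500, 3))
y_train = (X_train[:, 1] < 0.3).astype(int)  # toy rule: low satisfaction -> leaves

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    training_data=X_train,
    feature_names=["tenure", "satisfaction", "overtime_hours"],
    class_names=["stays", "leaves"],
    mode="classification",
)

# Explain one employee's prediction: LIME perturbs the instance, queries the
# model, and fits a local linear surrogate whose weights rank the features.
employee = X_train[0]
explanation = explainer.explain_instance(
    employee, model.predict_proba, num_features=3
)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

The signed weights indicate how each feature pushed this particular prediction towards "leaves" or "stays", which is the kind of per-employee rationale the article argues HR managers need in order to trust model outputs.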
Original language: English
Pages (from-to): 2732-2764
Number of pages: 33
Journal: International Journal of Human Resource Management
Volume: 34
Issue number: 14
Early online date: 27 Apr 2022
Publication status: Published - 6 Aug 2023

Bibliographical note

Funding: The research reported in this manuscript has received funding from Aston Seed Grants 2020-21 (Aston Business School, College of Business and Social Science, Aston University) for the project 'Developing Artificial Intelligence Capacity, Capability and Strategy'.

Keywords

  • AI transparency
  • Artificial intelligence
  • employee turnover
  • human intelligence
  • local interpretation
  • machine learning
  • model explainability
