Distributed edge intelligence is a disruptive research area that enables the execution of machine learning and deep learning (ML/DL) algorithms close to where data are generated. Since edge devices are more resource-constrained and heterogeneous than typical cloud devices, many obstacles must be overcome to fully realize the potential benefits of such an approach (such as data-in-motion analytics). In this paper, we investigate the challenges of running ML/DL on edge devices in a distributed way, paying special attention to how techniques are adapted or designed to execute on these restricted devices. The techniques under discussion pervade the processes of caching, training, inference, and offloading on edge devices. We also explore the benefits and drawbacks of these strategies.
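As a concrete illustration of one of the processes the survey covers, the sketch below implements a simple latency-based offloading policy: an edge device estimates whether running an ML inference task locally or shipping its input to a nearby server better meets the task's latency budget. This is a minimal, hypothetical example; the class and function names, cost model, and all numeric parameters are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of a latency-based offloading decision for edge
# inference. All names and numbers here are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Task:
    input_bytes: int        # size of the data the model must process
    local_ms_per_kb: float  # estimated on-device compute cost per KB
    deadline_ms: float      # latency budget for this task


def estimate_local_ms(task: Task) -> float:
    """Estimated latency if inference runs on the edge device itself."""
    return (task.input_bytes / 1024) * task.local_ms_per_kb


def estimate_offload_ms(task: Task, uplink_kbps: float, server_ms: float) -> float:
    """Estimated latency if the input is shipped to a nearby server.

    uplink_kbps is in kbit/s, which equals bits per millisecond, so
    bits / uplink_kbps yields milliseconds of transfer time.
    """
    transfer_ms = task.input_bytes * 8 / uplink_kbps
    return transfer_ms + server_ms


def should_offload(task: Task, uplink_kbps: float, server_ms: float) -> bool:
    """Offload only when the remote path is faster AND meets the deadline."""
    local = estimate_local_ms(task)
    remote = estimate_offload_ms(task, uplink_kbps, server_ms)
    return remote < local and remote <= task.deadline_ms
```

A real edge offloading scheme would also have to account for energy consumption, fluctuating link quality, and server load, which is precisely the kind of adaptation to restricted devices that the survey discusses.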
Bibliographical note: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://
Funding: This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior—Brasil (CAPES)—Finance Code 001, and also by the Brazilian funding agencies FAPESP (grant number 2015/24144-7), FAPERJ, and CNPq. Prof. Chang's work is partly supported by VC
- artificial intelligence
- edge intelligence
- fog intelligence
- Internet of Things
- machine learning