Reservoir computing and extreme learning machines for non-linear time-series data analysis

J.B. Butcher, D. Verstraeten, B. Schrauwen, C.R. Day, P.W. Haycock

Research output: Contribution to journal › Article

Abstract

Random projection architectures such as Echo State Networks (ESNs) and Extreme Learning Machines (ELMs) use a network containing a randomly connected hidden layer and train only the output weights, overcoming the problems associated with the complex and computationally demanding training algorithms traditionally used to train neural networks, particularly recurrent neural networks. In this study, an ESN is shown to exhibit an antagonistic trade-off between the amount of non-linear mapping and the short-term memory it can provide when applied to highly non-linear time-series data. To overcome this trade-off, a new architecture, the Reservoir with Random Static Projections (R2SP), is investigated and shown to offer a significant improvement in performance. A similar approach using an ELM whose input is presented through a time delay (TD-ELM) is shown to enhance performance further: it significantly outperformed the ESN and R2SP, as well as other architectures, when applied to a novel task in which the short-term memory and non-linearity requirements can be varied. The hard-limiting memory of the TD-ELM appears to be best suited to the data investigated in this study, although ESN-based approaches may offer improved performance when processing data that require a longer fading memory.
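The abstract's core idea — a fixed random hidden layer with only the output weights trained, fed through a time-delay window as in the TD-ELM — can be sketched as follows. This is a minimal illustrative example assuming NumPy, a toy sinusoidal series, and hypothetical hyperparameters (100 hidden units, 5 lags, a small ridge term); it is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def time_delay_embed(x, n_lags):
    """Stack each sample with its n_lags predecessors (the TD input)."""
    X = np.column_stack([np.roll(x, k) for k in range(n_lags + 1)])
    return X[n_lags:]  # drop rows containing wrapped-around values

def train_elm(X, y, n_hidden=100, ridge=1e-6):
    """Random, fixed hidden layer; only the readout weights are solved for."""
    W_in = rng.normal(size=(X.shape[1], n_hidden))  # never trained
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W_in + b)  # hidden-layer activations
    # Ridge-regularised least squares for the output weights
    W_out = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ y)
    return W_in, b, W_out

# Toy non-linear series: predict x[t+1] from a short delay window
x = np.sin(np.linspace(0, 20 * np.pi, 2000)) ** 3
X = time_delay_embed(x[:-1], n_lags=5)
y = x[6:]  # targets aligned with the embedded rows
W_in, b, W_out = train_elm(X, y)
pred = np.tanh(X @ W_in + b) @ W_out
print(float(np.mean((pred - y) ** 2)))  # small training MSE
```

Because training reduces to one linear solve, there is no back-propagation through time; the delay window supplies the "hard-limiting" memory, in contrast to the fading memory of an ESN's recurrent reservoir.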
Original language: English
Pages (from-to): 76-89
Journal: Neural Networks
Volume: 38
DOI: 10.1016/j.neunet.2012.11.011
Publication status: Published - Feb 2013


Cite this

Butcher, J.B. ; Verstraeten, D. ; Schrauwen, B. ; Day, C.R. ; Haycock, P.W. / Reservoir computing and extreme learning machines for non-linear time-series data analysis. In: Neural Networks. 2013 ; Vol. 38. pp. 76-89.
@article{81c4140444084085a40df15a84734f99,
title = "Reservoir computing and extreme learning machines for non-linear time-series data analysis",
abstract = "Random projection architectures such as Echo State Networks (ESNs) and Extreme Learning Machines (ELMs) use a network containing a randomly connected hidden layer and train only the output weights, overcoming the problems associated with the complex and computationally demanding training algorithms traditionally used to train neural networks, particularly recurrent neural networks. In this study, an ESN is shown to exhibit an antagonistic trade-off between the amount of non-linear mapping and the short-term memory it can provide when applied to highly non-linear time-series data. To overcome this trade-off, a new architecture, the Reservoir with Random Static Projections (R2SP), is investigated and shown to offer a significant improvement in performance. A similar approach using an ELM whose input is presented through a time delay (TD-ELM) is shown to enhance performance further: it significantly outperformed the ESN and R2SP, as well as other architectures, when applied to a novel task in which the short-term memory and non-linearity requirements can be varied. The hard-limiting memory of the TD-ELM appears to be best suited to the data investigated in this study, although ESN-based approaches may offer improved performance when processing data that require a longer fading memory.",
author = "J.B. Butcher and D. Verstraeten and B. Schrauwen and C.R. Day and P.W. Haycock",
year = "2013",
month = feb,
doi = "10.1016/j.neunet.2012.11.011",
language = "English",
volume = "38",
pages = "76--89",
journal = "Neural Networks",
issn = "0893-6080",
publisher = "Elsevier",

}

Reservoir computing and extreme learning machines for non-linear time-series data analysis. / Butcher, J.B.; Verstraeten, D.; Schrauwen, B.; Day, C.R.; Haycock, P.W.

In: Neural Networks, Vol. 38, 02.2013, pp. 76-89.

Research output: Contribution to journal › Article

TY - JOUR

T1 - Reservoir computing and extreme learning machines for non-linear time-series data analysis

AU - Butcher, J.B.

AU - Verstraeten, D.

AU - Schrauwen, B.

AU - Day, C.R.

AU - Haycock, P.W.

PY - 2013/2

Y1 - 2013/2

N2 - Random projection architectures such as Echo State Networks (ESNs) and Extreme Learning Machines (ELMs) use a network containing a randomly connected hidden layer and train only the output weights, overcoming the problems associated with the complex and computationally demanding training algorithms traditionally used to train neural networks, particularly recurrent neural networks. In this study, an ESN is shown to exhibit an antagonistic trade-off between the amount of non-linear mapping and the short-term memory it can provide when applied to highly non-linear time-series data. To overcome this trade-off, a new architecture, the Reservoir with Random Static Projections (R2SP), is investigated and shown to offer a significant improvement in performance. A similar approach using an ELM whose input is presented through a time delay (TD-ELM) is shown to enhance performance further: it significantly outperformed the ESN and R2SP, as well as other architectures, when applied to a novel task in which the short-term memory and non-linearity requirements can be varied. The hard-limiting memory of the TD-ELM appears to be best suited to the data investigated in this study, although ESN-based approaches may offer improved performance when processing data that require a longer fading memory.

AB - Random projection architectures such as Echo State Networks (ESNs) and Extreme Learning Machines (ELMs) use a network containing a randomly connected hidden layer and train only the output weights, overcoming the problems associated with the complex and computationally demanding training algorithms traditionally used to train neural networks, particularly recurrent neural networks. In this study, an ESN is shown to exhibit an antagonistic trade-off between the amount of non-linear mapping and the short-term memory it can provide when applied to highly non-linear time-series data. To overcome this trade-off, a new architecture, the Reservoir with Random Static Projections (R2SP), is investigated and shown to offer a significant improvement in performance. A similar approach using an ELM whose input is presented through a time delay (TD-ELM) is shown to enhance performance further: it significantly outperformed the ESN and R2SP, as well as other architectures, when applied to a novel task in which the short-term memory and non-linearity requirements can be varied. The hard-limiting memory of the TD-ELM appears to be best suited to the data investigated in this study, although ESN-based approaches may offer improved performance when processing data that require a longer fading memory.

UR - http://www.scopus.com/inward/record.url?eid=2-s2.0-84871651156&partnerID=MN8TOARS

UR - https://www.sciencedirect.com/science/article/pii/S0893608012003085?via%3Dihub

U2 - 10.1016/j.neunet.2012.11.011

DO - 10.1016/j.neunet.2012.11.011

M3 - Article

VL - 38

SP - 76

EP - 89

JO - Neural Networks

JF - Neural Networks

SN - 0893-6080

ER -