Abstract
A simple method for training the dynamical behavior of a neural network is derived. It is applicable to any training problem in discrete-time networks with arbitrary feedback. The algorithm resembles back-propagation in that an error function is minimized using a gradient-based method, but the optimization is carried out in the hidden part of state space, either instead of, or in addition to, weight space. Computational results are presented for some simple dynamical training problems, one of which requires response to a signal 100 time steps in the past.
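One way to read the abstract's central idea is that the hidden activations at every time step are themselves treated as free "target" variables, and a quadratic mismatch between the one-step network dynamics and those targets is minimized by gradient descent over both the weights and the hidden targets. The sketch below is a minimal illustration of that reading, not the paper's algorithm: it assumes a discrete-time network of logistic units with external inputs concatenated onto the state, and all names (`error_and_grads`, `step`, the clamping of output rows) are illustrative.

```python
# Minimal sketch: gradient descent in hidden state space as well as weight space.
# T holds a "target" trajectory for every unit at every time step; output rows
# are clamped to the teacher signal, hidden rows are optimized together with W.
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def error_and_grads(W, T, inputs):
    """Quadratic mismatch between the one-step dynamics applied to the target
    trajectory and the trajectory itself (an assumed formulation)."""
    n_in, n_steps = inputs.shape
    E = 0.0
    dW = np.zeros_like(W)
    dT = np.zeros_like(T)
    for t in range(1, n_steps):
        prev = np.concatenate([inputs[:, t - 1], T[:, t - 1]])  # inputs + previous targets
        x = sigmoid(W @ prev)                                    # state produced by the dynamics
        diff = x - T[:, t]                                       # mismatch with the target state
        E += 0.5 * np.sum(diff ** 2)
        delta = diff * x * (1.0 - x)                             # logistic derivative
        dW += np.outer(delta, prev)                              # gradient in weight space
        dT[:, t] -= diff                                         # gradient w.r.t. target at t
        dT[:, t - 1] += W[:, n_in:].T @ delta                    # ...and w.r.t. target at t-1
    return E, dW, dT

def step(W, T, inputs, desired, out_idx, hid_idx, lr=0.1):
    """One joint gradient step on the weights and the hidden targets."""
    T[out_idx, :] = desired                                      # clamp outputs to the teacher
    E, dW, dT = error_and_grads(W, T, inputs)
    W -= lr * dW
    T[hid_idx, :] -= lr * dT[hid_idx, :]                         # descend in hidden state space
    return E
```

A design note on this reading: because the targets at every time step are free variables, credit assignment does not have to be propagated step by step backwards through time, which is presumably what makes a dependency on a signal 100 steps in the past tractable.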
Original language | English |
---|---|
Publication status | Unpublished - 1990 |
Event | Advances in Neural Information Processing Systems 1990 - Dublin, United Kingdom. Duration: 29 Aug 1990 → 31 Aug 1990 |
Other
Other | Advances in Neural Information Processing Systems 1990 |
---|---|
Country/Territory | United Kingdom |
City | Dublin |
Period | 29/08/90 → 31/08/90 |
Keywords
- dynamical behavior
- neural network
- error