Abstract
A simple method for training the dynamical behavior of a neural network is derived. It is applicable to any training problem in discrete-time networks with arbitrary feedback. The method resembles back-propagation in that it is a least-squares, gradient-based optimization method, but the optimization is carried out in the hidden part of state space rather than in weight space. A straightforward adaptation of the method to feedforward networks offers an alternative to conventional back-propagation training. Computational results are presented for simple dynamical training problems, with mixed success. The failures appear to arise when the method converges to a chaotic attractor; a patch-up for this problem is proposed, involving a technique for implementing inequality constraints that may be of interest in its own right.
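The central idea, as stated in the abstract, is to run a least-squares, gradient-based optimization over the hidden portion of the state trajectory rather than over the weights. The paper's actual formulation is not reproduced here; the following is a minimal illustrative sketch, assuming a discrete-time network x_{t+1} = tanh(W x_t) with the visible units clamped to a target trajectory, numerical gradient descent on the hidden states, and a least-squares refit of W from the state trajectory. All of these choices are assumptions made for illustration only.

```python
# Illustrative sketch (not the paper's method): gradient descent over the
# hidden part of a state trajectory for x_{t+1} = tanh(W x_t), with W refit
# from the trajectory by least squares.
import numpy as np

rng = np.random.default_rng(0)

n_vis, n_hid, T = 2, 3, 10          # visible units, hidden units, trajectory length

# Desired trajectory for the visible units (assumed target, for illustration).
target = 0.8 * np.sin(np.linspace(0, 2 * np.pi, T))[:, None] * np.ones((T, n_vis))

# The hidden part of state space is the free variable being optimized.
hidden = 0.1 * rng.standard_normal((T, n_hid))

def weights_from_states(states):
    """Least-squares fit of W so that tanh(W x_t) approximates x_{t+1}."""
    X = states[:-1]
    Y = np.arctanh(np.clip(states[1:], -0.999, 0.999))
    W_sol, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W_sol.T

def consistency_error(hidden):
    """Least-squares mismatch between the fitted dynamics and the trajectory."""
    states = np.concatenate([target, hidden], axis=1)
    W = weights_from_states(states)
    pred = np.tanh(states[:-1] @ W.T)
    return np.mean((pred - states[1:]) ** 2)

# Crude numerical gradient descent in hidden-state space (illustrative only).
lr, eps = 0.5, 1e-5
for step in range(200):
    base = consistency_error(hidden)
    grad = np.zeros_like(hidden)
    for idx in np.ndindex(hidden.shape):
        h = hidden.copy()
        h[idx] += eps
        grad[idx] = (consistency_error(h) - base) / eps
    hidden -= lr * grad

print("final least-squares consistency error:", consistency_error(hidden))
```

The sketch is meant only to contrast the two search spaces: conventional back-propagation would move W directly, whereas here the descent moves the hidden states and the weights are recovered afterwards.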
| Original language | English |
| --- | --- |
| Publication status | Unpublished - 1990 |
| Event | Distributed Adaptive Information Processing (DANIP) |
| Period | 1 Jan 1990 → 1 Jan 1990 |
Bibliographical note
Figures unavailable electronically
Keywords
- dynamical behavior
- neural network
- networks
- back-propagation