The 'moving targets' training algorithm

Richard Rohwer, J. Kindermann (Editor), A. Linden (Editor)

Research output: Contribution to conference › Unpublished Conference Paper › peer-review

Abstract

A simple method for training the dynamical behavior of a neural network is derived. It is applicable to any training problem in discrete-time networks with arbitrary feedback. The method resembles back-propagation in that it is a least-squares, gradient-based optimization method, but the optimization is carried out in the hidden part of state space instead of weight space. A straightforward adaptation of this method to feedforward networks offers an alternative to training by conventional back-propagation. Computational results are presented for simple dynamical training problems, with varied success. The failures appear to arise when the method converges to a chaotic attractor. A patch-up for this problem is proposed; it involves a technique for implementing inequality constraints which may be of interest in its own right.
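
The feedforward adaptation mentioned in the abstract lends itself to a compact illustration. Below is a minimal sketch, assuming a two-layer sigmoid network, a sum-of-squares error, and a toy XOR task: the hidden-layer target matrix H is treated as a free variable and optimized by gradient descent alongside the weights, rather than obtained by back-propagating the output error through the hidden layer. All names and hyperparameters (W1, W2, H, lr, the XOR data) are illustrative assumptions, not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(a):
        return 1.0 / (1.0 + np.exp(-a))

    # Toy task: XOR (purely illustrative, not from the paper).
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    Y = np.array([[0.], [1.], [1.], [0.]])

    n_hidden, lr = 3, 0.5
    W1 = rng.normal(0.0, 0.5, (2, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0.0, 0.5, (n_hidden, 1)); b2 = np.zeros(1)
    H = rng.uniform(0.1, 0.9, (4, n_hidden))  # "moving targets" for the hidden layer

    for step in range(30000):
        # Each layer is trained to hit its own targets: the hidden layer
        # should produce H from X, and the output layer should produce Y from H.
        H_hat = sigmoid(X @ W1 + b1)
        Y_hat = sigmoid(H @ W2 + b2)
        e1, e2 = H_hat - H, Y_hat - Y

        g1 = e1 * H_hat * (1.0 - H_hat)  # delta at the hidden layer
        g2 = e2 * Y_hat * (1.0 - Y_hat)  # delta at the output layer
        W1 -= lr * X.T @ g1; b1 -= lr * g1.sum(0)
        W2 -= lr * H.T @ g2; b2 -= lr * g2.sum(0)
        # The targets themselves move downhill on the same error: pulled toward
        # what the hidden layer actually produces (-e1) and toward values that
        # reduce the output error (g2 @ W2.T).
        H -= lr * (-e1 + g2 @ W2.T)

    print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))  # should approach Y

Note that both weight updates are single-layer least-squares steps; the coupling between layers happens only through the update of H, which is the sense in which the optimization is carried out in state space rather than weight space.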
Original language: English
Publication status: Unpublished - 1990
Event: Distributed Adaptive Information Processing (DANIP)
Duration: 1 Jan 1990 → 1 Jan 1990


Bibliographical note

Figures unavailable electronically

Keywords

  • dynamical behavior
  • neural network
  • networks
  • back-propagation

