Abstract
The performance of feed-forward neural networks in real applications can often be improved significantly if use is made of a priori information. For interpolation problems this prior knowledge frequently includes smoothness requirements on the network mapping, and can be imposed by the addition to the error function of suitable regularization terms. The new error function, however, now depends on the derivatives of the network mapping, and so the standard back-propagation algorithm cannot be applied. In this paper, we derive a computationally efficient learning algorithm, for a feed-forward network of arbitrary topology, which can be used to minimize the new error function. Networks having a single hidden layer, for which the learning algorithm simplifies, are treated as a special case.
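As a rough illustration of the idea, and not the algorithm derived in the paper (which obtains the required derivative terms through explicit back-propagation-style recursions rather than nested automatic differentiation), the following JAX sketch adds a curvature penalty, the sum of squared second derivatives of a single-hidden-layer network's output with respect to its input, to a sum-of-squares error. The network size, targets, and regularization coefficient are all illustrative assumptions.

```python
import jax
import jax.numpy as jnp

def net(params, x):
    """Single-hidden-layer network with tanh hidden units; x is a scalar input."""
    W1, b1, W2, b2 = params
    h = jnp.tanh(W1 * x + b1)      # hidden-unit activations
    return jnp.dot(W2, h) + b2     # scalar network output

def curvature(params, x):
    """Second derivative of the network output with respect to its input."""
    return jax.grad(jax.grad(net, argnums=1), argnums=1)(params, x)

def loss(params, xs, ts, lam):
    """Sum-of-squares error plus a curvature (smoothness) regularization term."""
    ys = jax.vmap(lambda x: net(params, x))(xs)
    err = jnp.sum((ys - ts) ** 2)
    reg = jnp.sum(jax.vmap(lambda x: curvature(params, x) ** 2)(xs))
    return err + lam * reg

# Toy usage: one gradient step on a 1-D interpolation problem.
key1, key2 = jax.random.split(jax.random.PRNGKey(0))
n_hidden = 5
params = (jax.random.normal(key1, (n_hidden,)),  # input-to-hidden weights W1
          jnp.zeros(n_hidden),                   # hidden biases b1
          jax.random.normal(key2, (n_hidden,)),  # hidden-to-output weights W2
          jnp.array(0.0))                        # output bias b2
xs = jnp.linspace(-1.0, 1.0, 20)
ts = jnp.sin(jnp.pi * xs)                        # illustrative targets
grads = jax.grad(loss)(params, xs, ts, 1e-3)     # gradient of the new error
params = jax.tree_util.tree_map(lambda p, g: p - 0.1 * g, params, grads)
```

Because the regularized error depends on derivatives of the network mapping, its gradient involves second- and third-order derivative terms; nested automatic differentiation handles these generically, whereas the paper's contribution is an efficient explicit scheme for computing them.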
| Original language | English |
| --- | --- |
| Pages (from-to) | 882-884 |
| Number of pages | 3 |
| Journal | IEEE Transactions on Neural Networks and Learning Systems |
| Volume | 4 |
| Issue number | 5 |
| DOIs | |
| Publication status | Published - Sept 1993 |
Bibliographical note
©1993 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

Keywords
- feed-forward neural networks
- real applications
- a-priori information
- interpolation
- network mapping
- error
- back-propagation
- algorithm
- arbitrary topology