Curvature-driven smoothing: a learning algorithm for feed-forward networks

Christopher M. Bishop

    Research output: Contribution to journal › Article

    Abstract

    The performance of feed-forward neural networks in real applications can often be improved significantly if use is made of a-priori information. For interpolation problems this prior knowledge frequently includes smoothness requirements on the network mapping, and can be imposed by the addition to the error function of suitable regularization terms. The new error function, however, now depends on the derivatives of the network mapping, and so the standard back-propagation algorithm cannot be applied. In this paper, we derive a computationally efficient learning algorithm, for a feed-forward network of arbitrary topology, which can be used to minimize the new error function. Networks having a single hidden layer, for which the learning algorithm simplifies, are treated as a special case.
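The augmented error function described in the abstract — a sum-of-squares data term plus a regularizer penalizing the curvature (second derivatives) of the network mapping — can be illustrated with a minimal sketch. This is not the paper's algorithm (which derives an efficient back-propagation-style scheme for the derivative terms); it simply evaluates the regularized error for a toy single-hidden-layer network, approximating the second derivative by central finite differences. The network shape, weights, regularization coefficient `lam`, and data are all illustrative assumptions.

```python
import numpy as np

# Toy 1-hidden-layer network: y(x) = w2 . tanh(W1 x + b1) + b2
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 1)), np.zeros(8)
w2, b2 = rng.normal(size=8), 0.0

def net(x):
    # Scalar input -> scalar output.
    return w2 @ np.tanh(W1[:, 0] * x + b1) + b2

def curvature_penalty(xs, h=1e-3):
    # Second derivative of the network mapping via central differences,
    # squared and averaged over the training inputs.
    d2 = [(net(x + h) - 2 * net(x) + net(x - h)) / h**2 for x in xs]
    return np.mean(np.square(d2))

def total_error(xs, ts, lam=0.1):
    # Sum-of-squares data term plus the curvature regularization term.
    data = np.mean([(net(x) - t) ** 2 for x, t in zip(xs, ts)])
    return data + lam * curvature_penalty(xs)

# Interpolation-style training data.
xs = np.linspace(-1.0, 1.0, 11)
ts = np.sin(np.pi * xs)
E = total_error(xs, ts)
```

Minimizing `total_error` by gradient descent would require derivatives of the penalty with respect to the weights — exactly the quantity for which the paper derives its efficient algorithm; the finite-difference evaluation above is only for illustration.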
    Original language: English
    Pages (from-to): 882-884
    Number of pages: 3
    Journal: IEEE Transactions on Neural Networks and Learning Systems
    Volume: 4
    Issue number: 5
    DOI: 10.1109/72.248466
    Publication status: Published - Sep 1993

    Fingerprint

    Learning algorithms
    Backpropagation algorithms
    Feedforward neural networks
    Interpolation
    Topology
    Derivatives

    Bibliographical note

    ©1993 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

    Keywords

    • feed-forward neural networks
    • real applications
    • a-priori information
    • interpolation
    • network mapping
    • error
    • back-propagation
    • algorithm
    • arbitrary topology

    Cite this

    @article{769322b42092490d86ddba52d289ad33,
    title = "Curvature-driven smoothing: a learning algorithm for feed-forward networks",
    abstract = "The performance of feed-forward neural networks in real applications can often be improved significantly if use is made of a-priori information. For interpolation problems this prior knowledge frequently includes smoothness requirements on the network mapping, and can be imposed by the addition to the error function of suitable regularization terms. The new error function, however, now depends on the derivatives of the network mapping, and so the standard back-propagation algorithm cannot be applied. In this paper, we derive a computationally efficient learning algorithm, for a feed-forward network of arbitrary topology, which can be used to minimize the new error function. Networks having a single hidden layer, for which the learning algorithm simplifies, are treated as a special case.",
    keywords = "feed-forward neural networks, real applications, a-priori information, interpolation, network mapping, error, back-propagation, algorithm, arbitrary topology",
    author = "Bishop, {Christopher M.}",
    note = "{\circledC}1993 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.",
    year = "1993",
    month = "9",
    doi = "10.1109/72.248466",
    language = "English",
    volume = "4",
    pages = "882--884",
    journal = "IEEE Transactions on Neural Networks and Learning Systems",
    issn = "1045-9227",
    publisher = "IEEE",
    number = "5",

    }

    Curvature-driven smoothing: a learning algorithm for feed-forward networks. / Bishop, Christopher M.

    In: IEEE Transactions on Neural Networks and Learning Systems, Vol. 4, No. 5, 09.1993, p. 882-884.

    Research output: Contribution to journal › Article

    TY - JOUR

    T1 - Curvature-driven smoothing: a learning algorithm for feed-forward networks

    AU - Bishop, Christopher M.

    N1 - ©1993 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

    PY - 1993/9

    Y1 - 1993/9

    N2 - The performance of feed-forward neural networks in real applications can often be improved significantly if use is made of a-priori information. For interpolation problems this prior knowledge frequently includes smoothness requirements on the network mapping, and can be imposed by the addition to the error function of suitable regularization terms. The new error function, however, now depends on the derivatives of the network mapping, and so the standard back-propagation algorithm cannot be applied. In this paper, we derive a computationally efficient learning algorithm, for a feed-forward network of arbitrary topology, which can be used to minimize the new error function. Networks having a single hidden layer, for which the learning algorithm simplifies, are treated as a special case.

    AB - The performance of feed-forward neural networks in real applications can often be improved significantly if use is made of a-priori information. For interpolation problems this prior knowledge frequently includes smoothness requirements on the network mapping, and can be imposed by the addition to the error function of suitable regularization terms. The new error function, however, now depends on the derivatives of the network mapping, and so the standard back-propagation algorithm cannot be applied. In this paper, we derive a computationally efficient learning algorithm, for a feed-forward network of arbitrary topology, which can be used to minimize the new error function. Networks having a single hidden layer, for which the learning algorithm simplifies, are treated as a special case.

    KW - feed-forward neural networks

    KW - real applications

    KW - a-priori information

    KW - interpolation

    KW - network mapping

    KW - error

    KW - back-propagation

    KW - algorithm

    KW - arbitrary topology

    UR - https://ieeexplore.ieee.org/document/248466

    U2 - 10.1109/72.248466

    DO - 10.1109/72.248466

    M3 - Article

    VL - 4

    SP - 882

    EP - 884

    JO - IEEE Transactions on Neural Networks and Learning Systems

    JF - IEEE Transactions on Neural Networks and Learning Systems

    SN - 1045-9227

    IS - 5

    ER -