Learning with regularizers in multilayer neural networks

David Saad, Magnus Rattray

    Research output: Contribution to journal › Article › peer-review

    Abstract

    We study the effect of regularization in an on-line gradient-descent learning scenario for a general two-layer student network with an arbitrary number of hidden units. Training examples are randomly drawn input vectors labelled by a two-layer teacher network, also with an arbitrary number of hidden units; the teacher's output may be corrupted by Gaussian noise. We examine the effect of weight-decay regularization on the dynamical evolution of the order parameters and the generalization error in the various phases of the learning process, in both noiseless and noisy scenarios.
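
    The scenario can be made concrete with a minimal sketch (not the authors' code): a soft-committee-machine student trained by on-line gradient descent with weight decay on examples labelled by a noisy teacher. The erf activation, the 1/N scalings, and all concrete parameter values (N, K, M, eta, lam, sigma) are standard assumptions for this class of analysis rather than details confirmed by the abstract.

        # Minimal sketch of the learning scenario; all concrete values are
        # illustrative assumptions, not taken from the paper.
        import numpy as np
        from scipy.special import erf

        rng = np.random.default_rng(0)

        N = 500        # input dimension (the paper works in the large-N limit)
        K, M = 3, 3    # hidden units in student and teacher (both arbitrary in the paper)
        eta = 0.5      # learning rate
        lam = 1e-4     # weight-decay strength
        sigma = 0.1    # std of the Gaussian output noise on the teacher label

        g = lambda u: erf(u / np.sqrt(2.0))                            # hidden-unit activation
        g_prime = lambda u: np.sqrt(2.0 / np.pi) * np.exp(-u**2 / 2.0)  # its derivative

        B = rng.standard_normal((M, N))         # fixed teacher weight vectors
        J = 0.01 * rng.standard_normal((K, N))  # small random student initialization

        for step in range(100_000):
            x = rng.standard_normal(N)          # fresh randomly drawn input vector
            y = g(B @ x / np.sqrt(N)).sum() + sigma * rng.standard_normal()  # noisy label
            h = J @ x / np.sqrt(N)              # student local fields
            err = g(h).sum() - y                # output error
            # On-line gradient step on the squared error, plus weight decay;
            # both contributions carry the 1/N scaling under which the
            # order-parameter dynamics remain O(1).
            J -= (eta / N) * (np.outer(err * g_prime(h), x) + lam * J)

        # The macroscopic state is summarised by order parameters of the kind the
        # paper tracks; the generalization error can be estimated by Monte Carlo
        # against clean (noise-free) teacher labels.
        Q = J @ J.T / N                         # student-student overlaps
        R = J @ B.T / N                         # student-teacher overlaps
        X = rng.standard_normal((2000, N))
        eg = 0.5 * np.mean((g(X @ J.T / np.sqrt(N)).sum(axis=1)
                            - g(X @ B.T / np.sqrt(N)).sum(axis=1)) ** 2)
        print(Q, R, eg)

    Note the design choice in the update: weight decay enters as an extra -(eta*lam/N)*J term in each on-line step, shrinking the student weights towards zero at the same 1/N rate as the gradient term.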
    Original language: English
    Pages (from-to): 2170-2176
    Number of pages: 7
    Journal: Physical Review E
    Volume: 57
    Issue number: 2
    Publication status: Published - Feb 1998

    Bibliographical note

    Copyright of the American Physical Society

    Keywords

    • on-line gradient-descent learning scenario
    • Gaussian output noise
    • weight decay
    • generalization error
