General Gaussian priors for improved generalisation

    Research output: Contribution to journal › Article › peer-review

    Abstract

    We explore the dependence of performance measures, such as the generalization error and generalization consistency, on the structure and the parameterization of the prior on `rules', instanced here by the noisy linear perceptron. Using a statistical mechanics framework, we show how one may assign values to the parameters of a model for a `rule' on the basis of data instancing the rule. Information about the data, such as the input distribution, noise distribution and other `rule' characteristics, may be embedded in the form of general Gaussian priors to improve network performance. We examine explicitly two types of general Gaussian priors which are useful in some simple cases. We calculate the optimal values for the parameters of these priors and show their effect in modifying the most probable (MAP) values for the rules.
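
    A minimal sketch (not from the paper itself) of the kind of MAP estimate the abstract describes: for a noisy linear perceptron y = w·x/√N with Gaussian output noise and a general (anisotropic) Gaussian prior w ~ N(0, Σ), the posterior mode is a generalised ridge solution. The dimensions, noise level and the diagonal Σ below are illustrative assumptions, not values taken from the paper.

        import numpy as np

        # Hypothetical setup: N-dimensional inputs, P noisy examples of a linear rule.
        rng = np.random.default_rng(0)
        N, P, noise_var = 20, 100, 0.25
        w_true = rng.normal(size=N)
        X = rng.normal(size=(P, N))
        y = X @ w_true / np.sqrt(N) + rng.normal(scale=np.sqrt(noise_var), size=P)

        # General Gaussian prior w ~ N(0, Sigma); a non-isotropic Sigma is how
        # knowledge about the rule (e.g. unequal weight scales) can be embedded.
        Sigma = np.diag(np.linspace(0.5, 2.0, N))

        # MAP estimate: minimise ||y - Xw/sqrt(N)||^2 / (2*noise_var) + w^T Sigma^{-1} w / 2.
        # Setting the gradient to zero gives a linear system in w.
        A = X.T @ X / (N * noise_var) + np.linalg.inv(Sigma)
        b = X.T @ y / (np.sqrt(N) * noise_var)
        w_map = np.linalg.solve(A, b)

        print("MAP weight error:", np.mean((w_map - w_true) ** 2))

    With Sigma proportional to the identity this reduces to ordinary weight decay (ridge regression); structuring Sigma to reflect the input and noise distributions shifts the MAP solution, which is how the abstract's general Gaussian priors can improve generalisation.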
    Original language: English
    Pages (from-to): 937-945
    Number of pages: 9
    Journal: Neural Networks
    Volume: 9
    Issue number: 6
    DOI: 10.1016/0893-6080(95)00133-6
    Publication status: Published - 1 Aug 1996

    Bibliographical note

    NOTICE: this is the author's version of a work that was accepted for publication in Neural Networks. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms, may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Saad, David (1996). General Gaussian priors for improved generalisation. Neural Networks, 9 (6), pp. 937-945. DOI: 10.1016/0893-6080(95)00133-6

    Keywords

    • learning and generalization
    • regularizers
    • priors
