General Gaussian priors for improved generalisation

Research output: Contribution to journal › Article

Abstract

We explore the dependence of performance measures, such as the generalization error and generalization consistency, on the structure and the parameterization of the prior on 'rules', instanced here by the noisy linear perceptron. Using a statistical mechanics framework, we show how one may assign values to the parameters of a model for a 'rule' on the basis of data instancing the rule. Information about the data, such as the input distribution, noise distribution, and other 'rule' characteristics, may be embedded in the form of general Gaussian priors to improve net performance. We examine explicitly two types of general Gaussian priors that are useful in some simple cases. We calculate the optimal values for the parameters of these priors and show their effect in modifying the most probable (MAP) values for the rules.
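To fix ideas, here is a minimal sketch of MAP estimation under a general Gaussian prior for a noisy linear perceptron; the notation (weight vector w, input matrix X, outputs y, output noise variance σ², prior mean w₀ and covariance Σ) is illustrative shorthand, not taken from the paper itself. With Gaussian output noise, the log-posterior over weights is, up to constants,

\log p(\mathbf{w} \mid X, \mathbf{y}) = -\frac{1}{2\sigma^{2}} \lVert \mathbf{y} - X\mathbf{w} \rVert^{2} - \frac{1}{2} (\mathbf{w} - \mathbf{w}_{0})^{\top} \Sigma^{-1} (\mathbf{w} - \mathbf{w}_{0}) + \mathrm{const},

and its maximizer, the most probable (MAP) weight vector, is

\mathbf{w}_{\mathrm{MAP}} = \left( X^{\top} X + \sigma^{2} \Sigma^{-1} \right)^{-1} \left( X^{\top} \mathbf{y} + \sigma^{2} \Sigma^{-1} \mathbf{w}_{0} \right).

The isotropic choice Σ = σ_w² I with w₀ = 0 recovers standard weight decay; allowing a general mean and covariance is what lets knowledge of the input distribution, noise distribution, and other rule characteristics reshape the regularizer, and hence the MAP solution, in the sense described in the abstract.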
Original language: English
Pages (from-to): 937-945
Number of pages: 9
Journal: Neural Networks
Volume: 9
Issue number: 6
DOIs: 10.1016/0893-6080(95)00133-6
Publication status: Published - 1 Aug 1996

Fingerprint

  • Statistical mechanics
  • Neural networks
  • Parameterization
  • Noise

Bibliographical note

NOTICE: this is the author's version of a work that was accepted for publication in Neural Networks. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms, may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Saad, David (1996). General Gaussian priors for improved generalisation. Neural Networks, 9 (6), pp. 937-945. DOI: 10.1016/0893-6080(95)00133-6

Keywords

  • learning and generalization
  • regularizers
  • priors

Cite this

Saad, D. (1996). General Gaussian priors for improved generalisation. Neural Networks, 9(6), 937-945. ISSN 0893-6080. DOI: 10.1016/0893-6080(95)00133-6

Links:
  • http://www.scopus.com/inward/record.url?scp=0030221481&partnerID=8YFLogxK
  • https://www.sciencedirect.com/science/article/pii/0893608095001336?via%3Dihub