Bayesian invariant measurements of generalisation for discrete distributions

Huaiyu Zhu, Richard Rohwer

    Research output: Preprint or Working paper › Technical report

    Abstract

    Neural network learning rules can be viewed as statistical estimators. They should be studied in a Bayesian framework even if they are not Bayesian estimators. Generalisation should be measured by the divergence between the true distribution and the estimated distribution. Information divergences are invariant measurements of the divergence between two distributions. The posterior average information divergence is used to measure the generalisation ability of a network. The optimal estimators for multinomial distributions with Dirichlet priors are studied in detail, confirming that the definition is compatible with intuition. The results also show that many commonly used methods can be placed within this unified framework by assuming special priors and special divergences.
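
    The setting described in the abstract can be illustrated with a minimal sketch (not taken from the report itself): for a multinomial distribution with a Dirichlet prior, the estimator that minimises the posterior expected Kullback-Leibler divergence is the posterior mean (Laplace's rule when all Dirichlet parameters equal 1), and the divergence between the true and estimated distributions serves as the generalisation measure. The function names and example data below are illustrative assumptions only.

    ```python
    import numpy as np

    def posterior_mean_estimator(counts, alpha):
        """Posterior-mean estimate of a multinomial distribution under a
        Dirichlet(alpha) prior. This is the estimator minimising the
        posterior expected KL divergence D(p_true || p_hat)."""
        counts = np.asarray(counts, dtype=float)
        alpha = np.asarray(alpha, dtype=float)
        return (counts + alpha) / (counts.sum() + alpha.sum())

    def kl_divergence(p, q):
        """KL divergence D(p || q) between two discrete distributions,
        one member of the family of information divergences."""
        p = np.asarray(p, dtype=float)
        q = np.asarray(q, dtype=float)
        mask = p > 0
        return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

    # Hypothetical example: 3-symbol alphabet, observed counts [5, 3, 2],
    # uniform Dirichlet prior (alpha = 1 for every symbol).
    counts = [5, 3, 2]
    alpha = [1.0, 1.0, 1.0]
    p_hat = posterior_mean_estimator(counts, alpha)
    true_p = [0.5, 0.3, 0.2]  # assumed "true" distribution for illustration
    print(p_hat, kl_divergence(true_p, p_hat))
    ```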
    Original language: English
    Place of Publication: Birmingham, UK
    Publisher: Aston University
    Number of pages: 23
    ISBN (Print): NCRG/4351
    Publication status: Unpublished - 31 Aug 1995

    Keywords

    • Neural network
    • learning rules
    • Bayesian framework
    • distribution
