Bayesian invariant measurements of generalisation for continuous distributions

Huaiyu Zhu, Richard Rohwer

    Research output: Preprint or Working paper › Technical report

    Abstract

    A family of measurements of generalisation is proposed for estimators of continuous distributions. In particular, they apply to neural network learning rules associated with continuous neural networks. The optimal estimators (learning rules) in this sense are Bayesian decision methods with information divergence as loss function. The Bayesian framework guarantees the internal coherence of such measurements, while the information-geometric loss function guarantees invariance. The theoretical solution for the optimal estimator is derived by a variational method. It is applied to the family of Gaussian distributions and the implications are discussed. This is one in a series of technical reports on this topic; it generalises the results of [Zhu95:prob.discrete] to continuous distributions and serves as a concrete example of the larger picture presented in [Zhu95:generalisation].
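    As a concrete illustration (not drawn from the report itself), the Kullback-Leibler divergence is a standard instance of information divergence, and on the univariate Gaussian family it admits a closed form; the parameters $\mu_0, \sigma_0, \mu_1, \sigma_1$ below are generic and introduced here purely for illustration:

    $$
    D_{\mathrm{KL}}\!\left(\mathcal{N}(\mu_1,\sigma_1^2)\,\big\|\,\mathcal{N}(\mu_0,\sigma_0^2)\right)
    = \ln\frac{\sigma_0}{\sigma_1} + \frac{\sigma_1^2 + (\mu_1-\mu_0)^2}{2\sigma_0^2} - \frac{1}{2}.
    $$

    In this setting an optimal Bayesian estimator would select the predictive distribution minimising the posterior expectation of such a divergence; the exact divergence family used and the variational derivation are given in the report itself.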
    Original language: English
    Place of Publication: Birmingham B4 7ET, UK
    Publisher: Aston University
    Number of pages: 26
    ISBN (Print): NCRG/95/004
    Publication status: Published - 1995

    Keywords

    • neural network
    • Bayesian decision method
    • geometric loss function
    • optimal estimator
    • Gaussian distributions
