Bayesian invariant measurements of generalisation for continuous distributions

Huaiyu Zhu, Richard Rohwer

    Research output: Working paper › Technical report

    Abstract

    A family of measurements of generalisation is proposed for estimators of continuous distributions. In particular, they apply to neural network learning rules associated with continuous neural networks. The optimal estimators (learning rules) in this sense are Bayesian decision methods with information divergence as loss function. The Bayesian framework guarantees internal coherence of such measurements, while the information geometric loss function guarantees invariance. The theoretical solution for the optimal estimator is derived by a variational method. It is applied to the family of Gaussian distributions and the implications are discussed. This is one in a series of technical reports on this topic; it generalises the results of \cite{Zhu95:prob.discrete} to continuous distributions and serves as a concrete example of a larger picture \cite{Zhu95:generalisation}.
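    The abstract's central claim — that the Bayes-optimal estimator under an information-divergence loss differs from a naive plug-in estimate — can be illustrated with a minimal sketch for the univariate Gaussian case with known variance. All numbers and variable names below are illustrative assumptions, not taken from the report; the closed-form KL divergence between Gaussians is standard, and minimising posterior-expected KL over a Gaussian estimate yields the posterior predictive distribution:

    ```python
    import numpy as np

    # Conjugate setup (illustrative numbers, not from the report):
    # data x_i ~ N(theta, sigma^2) with sigma known, prior theta ~ N(mu0, tau0^2).
    sigma = 1.0
    mu0, tau0 = 0.0, 2.0
    x = np.array([0.8, 1.1, 0.5, 1.4])
    n = len(x)

    # Standard Gaussian posterior: theta | x ~ N(mu_n, tau_n2).
    tau_n2 = 1.0 / (1.0 / tau0**2 + n / sigma**2)
    mu_n = tau_n2 * (mu0 / tau0**2 + x.sum() / sigma**2)

    def expected_kl(m, s2):
        """Posterior-expected KL( N(theta, sigma^2) || N(m, s2) ),
        averaged over theta ~ N(mu_n, tau_n2); closed form for Gaussians."""
        return (0.5 * np.log(s2 / sigma**2)
                + (sigma**2 + tau_n2 + (mu_n - m)**2) / (2 * s2)
                - 0.5)

    # Plug-in estimate: substitute the posterior mean, keep the data variance.
    plugin_loss = expected_kl(mu_n, sigma**2)

    # Bayes estimator under this KL loss: the posterior predictive
    # N(mu_n, sigma^2 + tau_n2), whose inflated variance accounts for
    # remaining uncertainty about theta.
    predictive_loss = expected_kl(mu_n, sigma**2 + tau_n2)

    assert predictive_loss < plugin_loss
    ```

    Analytically, the plug-in loss is tau_n2 / (2 sigma^2) while the predictive loss is 0.5 * log(1 + tau_n2 / sigma^2), which is strictly smaller — the variance-inflated predictive distribution always wins under this loss, in line with the report's theme that the loss function, not intuition, determines the optimal learning rule.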
    Original language: English
    Place of publication: Birmingham B4 7ET, UK
    Publisher: Aston University
    Number of pages: 26
    ISBN (Print): NCRG/95/004
    Publication status: Published - 1995


    Keywords

    • neural network
    • Bayesian decision method
    • geometric loss function
    • optimal estimator
    • Gaussian distributions

    Cite this

    Zhu, H., & Rohwer, R. (1995). Bayesian invariant measurements of generalisation for continuous distributions. Birmingham B4 7ET, UK: Aston University.