An upper bound on the Bayesian error bars for generalized linear regression

C Qazaz, Christopher K. I. Williams, Christopher M. Bishop

    Research output: Chapter in Book / Published conference output › Chapter

    Abstract

    In the Bayesian framework, predictions for a regression problem are expressed in terms of a distribution of output values. The mode of this distribution corresponds to the most probable output, while the uncertainty associated with the predictions can conveniently be expressed in terms of error bars. In this paper we consider the evaluation of error bars in the context of the class of generalized linear regression models. We provide insights into the dependence of the error bars on the location of the data points, and we derive an upper bound on the true error bars in terms of contributions from individual data points, which are themselves easily evaluated.
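    The exact Bayesian error bars for a generalized linear regression model (the quantity the paper's bound approximates from individual data points) can be sketched as follows. This is a minimal illustration of the standard predictive-variance computation, not the paper's bound; the basis functions, prior precision `alpha`, and noise precision `beta` are illustrative assumptions, not values taken from the paper.

    ```python
    import numpy as np

    # Sketch: Bayesian error bars for a generalized linear model y(x) = w^T phi(x),
    # assuming a Gaussian prior w ~ N(0, alpha^{-1} I) and Gaussian noise with
    # precision beta (alpha, beta, and the Gaussian basis are assumptions).

    def design_matrix(X, centres, width):
        # Gaussian basis functions phi_j(x) = exp(-(x - c_j)^2 / (2 width^2)),
        # plus a constant bias basis function in the first column.
        Phi = np.exp(-(X[:, None] - centres[None, :]) ** 2 / (2.0 * width ** 2))
        return np.hstack([np.ones((X.shape[0], 1)), Phi])

    def error_bars(X_train, X_test, alpha, beta, centres, width):
        Phi = design_matrix(X_train, centres, width)
        # Posterior precision matrix A = alpha I + beta Phi^T Phi: each training
        # point contributes a rank-one term beta phi(x_n) phi(x_n)^T.
        A = alpha * np.eye(Phi.shape[1]) + beta * Phi.T @ Phi
        Phi_test = design_matrix(X_test, centres, width)
        # Predictive variance sigma^2(x) = 1/beta + phi(x)^T A^{-1} phi(x):
        # intrinsic noise plus the posterior uncertainty in the weights, which
        # depends on where the training data lie relative to x.
        var = 1.0 / beta + np.einsum('ij,jk,ik->i',
                                     Phi_test, np.linalg.inv(A), Phi_test)
        return np.sqrt(var)  # error bars (predictive standard deviations)

    rng = np.random.default_rng(0)
    X_train = rng.uniform(-1.0, 1.0, size=20)
    centres = np.linspace(-1.0, 1.0, 9)
    bars = error_bars(X_train, np.array([0.0, 3.0]), alpha=1.0, beta=25.0,
                      centres=centres, width=0.3)
    # Since phi(x)^T A^{-1} phi(x) >= 0, the error bar never falls below the
    # noise floor sqrt(1/beta).
    ```

    Note that the second term requires inverting the full posterior precision matrix; the appeal of a bound built from individual data-point contributions is that each rank-one term is cheap to evaluate on its own.
    
    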
    Original language: English
    Title of host publication: Mathematics of neural networks
    Subtitle of host publication: Models, algorithms and applications
    Editors: Stephen W. Ellacott, John C. Mason, Iain J. Anderson
    Publisher: Kluwer
    Pages: 295-299
    Number of pages: 5
    ISBN (Print): 978-0-7923-9933-9
    Publication status: Published - 1997

    Publication series

    Name: Operations Research/Computer Science Interfaces Series
    Publisher: Kluwer (now part of Springer)
    Volume: 8

    Keywords

    • Bayesian
    • distribution
    • predictions
    • error bars
    • linear regression
