Free keywords: -
Abstract:
Bayesian nonparametric models are widely and successfully
used for statistical prediction. While posterior consistency properties are
well studied in quite general settings, results have been proved using abstract
concepts such as metric entropy, and they come with subtle conditions
which are hard to verify and unintuitive when applied to concrete
models. Furthermore, convergence rates are difficult to obtain.
By focusing on the concept of information consistency for Bayesian
Gaussian process (GP) models, consistency results and convergence rates
are obtained via a regret bound on cumulative log loss. These results
depend strongly on the covariance function of the prior process, thereby
giving a novel interpretation to penalization with reproducing kernel
Hilbert space norms and to commonly used covariance function classes
and their parameters. The proof of the main result employs only elementary
convexity arguments. A theorem of Widom is used to obtain
precise convergence rates for several covariance functions widely used in
practice.
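For orientation, here is a minimal sketch of the shape such a cumulative log-loss regret bound takes for GP regression; the notation is ours, not the paper's, and the assumed setting (i.i.d. Gaussian noise with variance \sigma^2, a comparator f in the RKHS of the covariance function) is an illustrative assumption, not the paper's exact statement. With inputs x_1, \dots, x_n and kernel matrix K_n = [K(x_i, x_j)]_{i,j},

\[
  \sum_{t=1}^{n} \log \frac{p(y_t \mid x_t, f)}{p_{\mathrm{GP}}(y_t \mid x_t, y_{<t})}
  \;\le\; \frac{1}{2}\,\|f\|_{\mathcal{H}_K}^2
  \;+\; \frac{1}{2}\,\log\det\!\bigl(I + \sigma^{-2} K_n\bigr).
\]

The RKHS-norm term is the penalization the abstract refers to, while the log-determinant term depends on the eigenvalue decay of the covariance function, which is plausibly where Widom-type results on kernel eigenvalues enter to yield explicit convergence rates.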