Bayesian Inference in Neural Networks
Approximate marginal Bayesian computation and inference are developed for neural network models. The marginal considerations include determination of approximate Bayes factors for model choice about the number of nonlinear sigmoid terms, approximate predictive density computation for a future observable and determination of approximate Bayes estimates for the nonlinear regression function. Standard conjugate analysis applied to the linear parameters leads to an explicit posterior on the nonlinear parameters. Further marginalisation is performed using Laplace approximations. The choice of prior and the use of an alternative sigmoid lead to posterior invariance in the nonlinear parameter which is discussed in connection with the lack of sigmoid identifiability. A principal finding is that parsimonious model choice is best determined from the list of modal estimates used in the Laplace approximation of the Bayes factors for various numbers of sigmoids. By comparison, the values of the various Bayes factors are of only secondary importance. The proposed methods are illustrated in the context of two nonlinear datasets that involve respectively univariate and multivariate nonlinear regression models.
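The core computational tool described above is the Laplace approximation, which replaces an intractable marginal integral with a Gaussian integral centred at the posterior mode. As a minimal sketch (not the authors' implementation; the function names and the numerical-Hessian step are illustrative assumptions), the log of an integral of the form ∫ exp{−g(θ)} dθ can be approximated from the mode and the curvature of g:

```python
import numpy as np
from scipy.optimize import minimize

def laplace_log_integral(neg_log_h, theta0):
    """Laplace approximation to log ∫ exp(-neg_log_h(theta)) d(theta).

    neg_log_h : callable, minus the log-integrand (e.g. a negative
                log marginal posterior of the nonlinear parameters).
    theta0    : starting point for the mode search.
    Returns (approximate log integral, modal estimate).
    """
    # Locate the mode of the integrand by minimising -log h.
    res = minimize(neg_log_h, theta0, method="BFGS")
    mode = res.x
    d = len(mode)

    # Central-difference Hessian of neg_log_h at the mode
    # (an analytic Hessian would normally be used instead).
    eps = 1e-5
    H = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            ei = np.zeros(d); ei[i] = eps
            ej = np.zeros(d); ej[j] = eps
            H[i, j] = (neg_log_h(mode + ei + ej) - neg_log_h(mode + ei - ej)
                       - neg_log_h(mode - ei + ej) + neg_log_h(mode - ei - ej)) / (4 * eps**2)

    # Laplace formula: h(mode) * (2*pi)^(d/2) * |H|^(-1/2), on the log scale.
    sign, logdet = np.linalg.slogdet(H)
    log_int = -neg_log_h(mode) + 0.5 * d * np.log(2 * np.pi) - 0.5 * logdet
    return log_int, mode
```

An approximate Bayes factor between two sigmoid counts is then the difference of two such log integrals, exponentiated; the modal estimates returned alongside are the quantities the paper argues carry the primary information for parsimonious model choice.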
R. Paige and R. W. Butler, "Bayesian Inference in Neural Networks," Biometrika, vol. 88, no. 3, Oxford University Press, 2001.
The definitive version is available at https://doi.org/10.1093/biomet/88.3.623
Mathematics and Statistics
Keywords and Phrases
Bayesian computation; Laplace approximation; model choice; neural network; predictive density
Article - Journal
© 2001 Oxford University Press, All rights reserved.