
How are the hyperpriors included? #800

Answered by Balandat
CooolLaserPlasma asked this question in Q&A

is it actually the marginal likelihood multiplied with the hyperprior that is optimized?

Yes. In all of the included examples we use a MAP estimate, so the quantity being optimized is the marginal likelihood multiplied by the hyperprior (i.e., the unnormalized hyperparameter posterior).
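Concretely, the MAP objective can be sketched as follows. This is a minimal NumPy toy, not BoTorch's or GPyTorch's actual API: a 1-D zero-mean GP with an RBF kernel, and a Gamma hyperprior on the lengthscale whose parameters (and the toy data) are made up for illustration. The MAP estimate maximizes log marginal likelihood plus log prior.

```python
import numpy as np

def rbf_kernel(X, lengthscale):
    # Squared-exponential kernel: k(x, x') = exp(-(x - x')^2 / (2 * ell^2)).
    d2 = (X[:, None] - X[None, :]) ** 2
    return np.exp(-0.5 * d2 / lengthscale**2)

def log_marginal_likelihood(X, y, lengthscale, noise=0.1):
    # log p(y | X, ell) for a zero-mean GP with iid Gaussian observation noise.
    K = rbf_kernel(X, lengthscale) + noise**2 * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha
            - np.log(np.diag(L)).sum()
            - 0.5 * len(X) * np.log(2 * np.pi))

def log_gamma_prior(lengthscale, concentration=3.0, rate=6.0):
    # Unnormalized log density of a Gamma(concentration, rate) hyperprior
    # on the lengthscale (hypothetical prior parameters, for illustration).
    return (concentration - 1) * np.log(lengthscale) - rate * lengthscale

# Toy data.
rng = np.random.default_rng(0)
X = np.linspace(0, 1, 20)
y = np.sin(6 * X) + 0.1 * rng.standard_normal(20)

# MAP objective = log marginal likelihood + log hyperprior;
# a grid search stands in for gradient-based optimization here.
grid = np.linspace(0.01, 2.0, 200)
objective = [log_marginal_likelihood(X, y, ell) + log_gamma_prior(ell)
             for ell in grid]
ell_map = grid[int(np.argmax(objective))]
print(f"MAP lengthscale: {ell_map:.3f}")
```

In GPyTorch the same thing happens implicitly: priors registered on a module contribute their log probability to the marginal log likelihood during fitting.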

You're correct about the fully Bayesian treatment. We have been doing some work that uses a fully Bayesian treatment via Pyro's NUTS MCMC sampler: essentially, we run MCMC inference to draw samples from the hyperparameter posterior (instead of finding a mode), load these samples into a batched GP model (as in this gpytorch tutorial), compute the acquisition function in a batched fashion, and then marginalize across the hyperparameter samples.
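The marginalization step can be sketched as follows. This is a toy Monte Carlo version, not the actual batched implementation: the hard-coded lengthscale samples stand in for NUTS draws from the hyperparameter posterior, and analytic Expected Improvement is computed under each sample's GP posterior and then averaged.

```python
import math
import numpy as np

def gp_posterior(X, y, x_star, lengthscale, noise=0.1):
    # Posterior mean and variance of a zero-mean RBF GP at one test point.
    k = lambda a, b: np.exp(-0.5 * (a - b) ** 2 / lengthscale**2)
    K = k(X[:, None], X[None, :]) + noise**2 * np.eye(len(X))
    k_star = k(X, x_star)
    mean = k_star @ np.linalg.solve(K, y)
    var = 1.0 - k_star @ np.linalg.solve(K, k_star)
    return mean, max(var, 1e-12)

def expected_improvement(mean, var, best):
    # Analytic EI (maximization) over the best observed value.
    sigma = math.sqrt(var)
    z = (mean - best) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    cdf = 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return (mean - best) * cdf + sigma * pdf

X = np.linspace(0, 1, 10)
y = np.sin(6 * X)
best = y.max()

# Stand-in for NUTS draws from the lengthscale posterior.
lengthscale_samples = [0.15, 0.25, 0.4]

# Marginalize: average the acquisition value across hyperparameter samples.
x_cand = 0.55
ei = np.mean([
    expected_improvement(*gp_posterior(X, y, x_cand, ell), best)
    for ell in lengthscale_samples
])
print(f"marginalized EI at x={x_cand}: {ei:.4f}")
```

In the batched approach described above, the per-sample posteriors are evaluated in one batched GP forward pass rather than in a Python loop, which is what makes this tractable for many MCMC samples.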

@sdaulton do we have any concrete plans for open-sourcing / making…

Replies: 1 comment 1 reply

Answer selected by CooolLaserPlasma