Explanation of SingleTaskGP hyperparameter priors #1261
Unanswered
nathanohara asked this question in Q&A
Replies: 0 comments
Hi all,
I wanted to ask if there are resources anywhere that explain the hyperparameter priors set on SingleTaskGP. I have seen threads mentioning that they're designed for robust performance when inputs are normalized and outputs are standardized, but I'm wondering if there is a thorough explanation anywhere of how the choice of priors improves the results (or if anyone wants to take a crack at it here :)). In particular, I'm interested in the purpose of the outputscale prior: how does scaling the kernel by a constant value affect the final model fit / inference, and what advantages does the Gamma prior specified by botorch provide?

Thanks for any insights!
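For context, here is a minimal, library-free sketch of what I understand the outputscale to do (this is my own illustration, not BoTorch code, and the RBF base kernel and the Gamma(2.0, 0.15) parameterization are my assumptions about the setup):

```python
import math

def rbf(x1, x2, lengthscale=1.0):
    """Unit-variance RBF base kernel: k_base(x, x) = 1 for any x."""
    return math.exp(-0.5 * ((x1 - x2) / lengthscale) ** 2)

def scaled_kernel(x1, x2, outputscale, lengthscale=1.0):
    """ScaleKernel-style kernel: outputscale * k_base, so the GP prior
    variance at every input is exactly the outputscale."""
    return outputscale * rbf(x1, x2, lengthscale)

# The prior variance at any point equals the outputscale:
print(scaled_kernel(0.3, 0.3, outputscale=4.0))  # 4.0

# A Gamma(concentration, rate) prior on the outputscale has
# mean = concentration / rate and mode = (concentration - 1) / rate
# (for concentration > 1). With concentration=2.0 and rate=0.15 (my
# reading of the default), that gives:
concentration, rate = 2.0, 0.15
print(concentration / rate)        # mean, ~13.33
print((concentration - 1) / rate)  # mode, ~6.67
```

So my naive reading is that the outputscale just sets the marginal prior variance of the GP, and I'm asking why this particular Gamma prior over it works well for standardized outputs.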