Constraints on model prediction? #2790
-
Is there a way to place constraints on the model's predictions? This would be helpful when I know the bound(s), or even the optimal value, of my fitness function, akin to the paper "Knowing The What But Not The Where in Bayesian Optimization". That paper formulates a new GP assuming the optimal value is known ahead of time, although I would have figured there is a simpler method of just clipping the model predictions when optimizing over the acquisition function. As a side note, I thought this would be addressed by Outcome Constraints, but the example looks like it places constraints on the inputs rather than the predictions.
-
BoTorch doesn't currently provide a way to do this out of the box. You could probably hack this into an `MCAcquisitionFunction` (e.g. `qLogExpectedImprovement`) by clipping the samples drawn from the model's `posterior`, by modifying the posterior's `rsample` method. I find that approach a little weird since you're modifying the model only for acquisition function optimization and not when fitting it, but it might work.

Another approach would be to give the model an `OutcomeTransform` that can only produce values in some range. For example, the `Log` outcome transform provided in BoTorch makes positive predictions. Or you could easily define a `log(y - f^* + small_number)` outcome transform if you know…

Yet another modeling approach would be to modify the likelihood, e.g. to a `BernoulliLikelihood` (if you knew outcomes were between 0 and 1), but then some nice analytic properties would be lost and model fitting would be more difficult.

Outcome constraints denote which outcomes are unacceptable rather than impossible.
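The sample-clipping idea can be illustrated outside of BoTorch with a small Monte Carlo sketch. Everything here is made up for illustration: `f_star`, `best_f`, and the Gaussian `samples` are placeholders, where in BoTorch the samples would actually come from `posterior.rsample` inside the acquisition function:

```python
import numpy as np

# Illustrative only: Monte Carlo expected improvement where posterior
# samples are clipped at a known optimal value f_star before computing
# the improvement. In BoTorch, `samples` would be draws from the model's
# posterior at a candidate point, not a fixed Gaussian.

rng = np.random.default_rng(0)
f_star = 1.0    # hypothetical known upper bound on the objective
best_f = 0.4    # hypothetical best value observed so far
samples = rng.normal(loc=0.8, scale=0.5, size=10_000)

clipped = np.minimum(samples, f_star)            # impose the known bound
mc_ei = np.mean(np.maximum(clipped - best_f, 0.0))

# Clipping can only shrink the improvement estimate, never grow it,
# since the improvement is capped at f_star - best_f per sample.
mc_ei_unclipped = np.mean(np.maximum(samples - best_f, 0.0))
assert 0.0 < mc_ei <= mc_ei_unclipped
```

This also shows why the approach only affects acquisition optimization: the model itself is untouched, and the bound enters purely through the sampled values.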
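The `log(y - f^* + small_number)` transform can be sketched in plain Python to show why untransformed predictions respect the bound. This is a minimal sketch assuming a minimization problem with a known optimum; `F_STAR`, `EPS`, and the function names are illustrative and not BoTorch's actual `OutcomeTransform` API:

```python
import math

# Hypothetical sketch: fit the model on z = log(y - F_STAR + EPS), where
# F_STAR is the known optimal (minimal) value, so every observed y
# satisfies y >= F_STAR. Untransforming any model prediction z then
# yields a value bounded below by F_STAR - EPS by construction.

F_STAR = -2.0   # illustrative known optimum
EPS = 1e-6      # small offset so the log is defined at y == F_STAR

def transform(y: float) -> float:
    return math.log(y - F_STAR + EPS)

def untransform(z: float) -> float:
    return math.exp(z) + F_STAR - EPS

# Round trip and bound check on a few values above the optimum.
for y in [-1.9, 0.0, 3.5]:
    z = transform(y)
    assert abs(untransform(z) - y) < 1e-9
    assert untransform(z) >= F_STAR - EPS

# Even an extreme prediction in transformed space maps back above the bound.
assert untransform(-50.0) >= F_STAR - EPS
```

The model never sees y directly, so no clipping is needed at acquisition time; the bound is baked into the untransform, which is what makes this more self-consistent than modifying `rsample`.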