Plotting the acquisition values for knowledge gradient #815
-
Hi, the one-shot knowledge gradient implementation appears to work very well. I'm migrating my research over to BoTorch, and as a sanity check I'd like to compare the results of my old KG code with the new one. To get insight into how the KG is working, I thought I could plot its values, but I'm hitting a few barriers. Assume my objective is defined over one input dimension, [0, 1],
and I train the model:
Can I get the KG value at some point X (the candidate), given some randomly sampled points (rand_points), like this:
If I plot this over a discretisation of my input domain, the peaks do not always align with the `optimize_acqf` results. I would expect this, since I'm not enforcing that both use the same random samples to generate the values. Is there a way to enforce this? And/or am I misusing or misunderstanding the code here? Thanks in advance.
Replies: 1 comment 10 replies
-
Hi @kentwar. You should check out the `evaluate` method of `qKnowledgeGradient`. `qKG` is very sensitive to the solutions of the inner optimization problems, for which you're passing `torch.random` here. This would undoubtedly lead to some plots that don't make much sense. When you use `optimize_acqf`, these get optimized along with the candidate point, but that doesn't happen when plotting. For this exact reason, we implemented the `evaluate` method, to which you only pass the candidate point. It will solve the inner optimization problems under the hood, leading to a much more reliable `qKG` estimate.

Side note: I see that you're using 512 fantasies. Particularly for low-dimensional problems, you can get away with far fewer fantasies, such as 16 or 64. Just noting this since it will help a lot with runtime and memory use.