BoTorch optimisation using GPUs: tuning the optimisation based on hardware #1704
-
I am currently studying how 5 parameters can be optimised within 100-200 optimisation iterations using Knowledge Gradient (qKG).
In short, I want to run a series of rigorous tests to quantify the growth in memory usage and computational work on an NVIDIA GPU for the qKG and qEI methods as I increase the number of iterations from 1 to 200.
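To make the measurement concrete, here is a minimal sketch of the per-iteration measurement loop I have in mind. The objective function, model choice, `num_fantasies`, and restart/raw-sample counts below are placeholders rather than my actual setup, and import paths may differ slightly across BoTorch versions.

```python
import time

import torch
from botorch.acquisition import qKnowledgeGradient
from botorch.fit import fit_gpytorch_mll
from botorch.models import SingleTaskGP
from botorch.models.transforms.outcome import Standardize
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

device, dtype = torch.device("cuda"), torch.double
bounds = torch.stack([torch.zeros(5), torch.ones(5)]).to(device=device, dtype=dtype)


def objective(x):
    # Placeholder 5-parameter objective on the unit cube.
    return -(x - 0.5).pow(2).sum(dim=-1, keepdim=True)


train_x = torch.rand(10, 5, device=device, dtype=dtype)
train_y = objective(train_x)

for it in range(1, 201):  # 1 ... 200 optimisation iterations
    model = SingleTaskGP(train_x, train_y, outcome_transform=Standardize(m=1))
    fit_gpytorch_mll(ExactMarginalLogLikelihood(model.likelihood, model))
    acqf = qKnowledgeGradient(model, num_fantasies=64)
    # (swap in qExpectedImprovement(model, best_f=train_y.max()) to compare qEI)

    torch.cuda.reset_peak_memory_stats()
    t0 = time.monotonic()
    candidate, _ = optimize_acqf(
        acqf, bounds=bounds, q=1, num_restarts=10, raw_samples=128
    )
    torch.cuda.synchronize()
    elapsed = time.monotonic() - t0
    peak_mib = torch.cuda.max_memory_allocated() / 2**20
    print(f"iter {it:3d}  time {elapsed:7.2f} s  peak GPU mem {peak_mib:9.1f} MiB")

    train_x = torch.cat([train_x, candidate])
    train_y = torch.cat([train_y, objective(candidate)])
```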
-
Hi @sambitmishra98. Sorry for the late response.
The deviations beyond the expected increasing pattern are mostly due to variations in when the optimizer terminates. By default, we use `scipy.minimize` to optimize the acquisition functions. It determines when to terminate optimization based on 2 main criteria: `maxiter`/`maxfun` (the number of optimizer iterations / function evaluations) and `ftol`/`gtol` (convergence tolerances on the objective value and the gradient, respectively).
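For example (a hedged sketch: these option names are forwarded by `optimize_acqf` to `scipy.optimize.minimize`, but the exact pass-through behaviour can vary across BoTorch versions, and `acqf` / `bounds` stand in for your own acquisition function and search bounds):

```python
from botorch.optim import optimize_acqf

candidate, acq_value = optimize_acqf(
    acq_function=acqf,  # e.g. the qKG or qEI instance being benchmarked
    bounds=bounds,
    q=1,
    num_restarts=10,
    raw_samples=128,
    options={
        "maxiter": 100,  # cap on L-BFGS-B iterations per restart
        "ftol": 1e-7,    # stop when the objective stops improving
        "gtol": 1e-5,    # stop when the projected gradient is small
    },
)
```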
BoTorch uses the GPU through PyTorch, so you can use PyTorch utilities to check the memory usage. You can expect a roughly quadratic (in the number of training inputs) increase in memory usage across iterations. To reduce the memory usage, you can limit batch evaluation within the optimizer by passing a smaller `batch_limit` via the `options` argument of `optimize_acqf`; see the sketch at the end of this reply.
This is partly answered in the previous point. Increasing each of these will lead to additional computation, and possibly additional memory requirements. For …
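Here is a hedged sketch of both suggestions above (the `batch_limit` / `init_batch_limit` values are illustrative, and `acqf` / `bounds` again stand in for your own objects):

```python
import torch
from botorch.optim import optimize_acqf

torch.cuda.reset_peak_memory_stats()
candidate, _ = optimize_acqf(
    acq_function=acqf,
    bounds=bounds,
    q=1,
    num_restarts=10,
    raw_samples=128,
    options={
        "batch_limit": 4,        # optimize at most 4 restarts jointly
        "init_batch_limit": 32,  # batch size for evaluating raw samples
    },
)
# PyTorch's own reporting utilities show what the optimization actually used.
print(f"peak allocated: {torch.cuda.max_memory_allocated() / 2**20:.1f} MiB")
print(torch.cuda.memory_summary(abbreviated=True))
```

Smaller batch limits reduce peak memory at the cost of more sequential forward/backward passes, so wall time per iteration typically goes up.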
-
I updated my BoTorch version and set the `batch_limit` option.
It didn't seem to work regardless of the number I set there. I get the following error, with a similar time-out on the order of 10^-3.