Replies: 2 comments
-
Interesting, this sounds like a cool application! Mind sharing in what context that comes up? If you do want to use MC sampling then, as you suggested, you need to compute the gradients on the sample level. I haven't thought much about this, but I do recall that getting sample-level gradients in torch can be a bit of a pain. Let me think some more about how this could work (efficiently). Do you have partial derivative observations? If so, an alternative that would circumvent the sample acrobatics would be to model the partial derivatives directly (since the gradient of a GP is a GP, the posteriors end up being jointly Gaussian), as in this gpytorch example (sketched below). The downside is that this will be much more computationally expensive.
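Roughly, the model in that example is set up like this (a sketch from memory, not the example verbatim; `d` is the input dimension, and the training targets stack the function value with its `d` partial derivatives):

```python
import gpytorch

class GPWithDerivatives(gpytorch.models.ExactGP):
    # Jointly models f and its d partial derivatives; train_y has shape
    # n x (1 + d): the function value followed by the d partials.
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMeanGrad()
        self.covar_module = gpytorch.kernels.ScaleKernel(
            gpytorch.kernels.RBFKernelGrad()
        )

    def forward(self, x):
        mean_x = self.mean_module(x)
        covar_x = self.covar_module(x)
        return gpytorch.distributions.MultitaskMultivariateNormal(mean_x, covar_x)

# The matching likelihood treats the value and partials as 1 + d tasks:
# likelihood = gpytorch.likelihoods.MultitaskGaussianLikelihood(num_tasks=1 + d)
```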
-
In hydraulic design optimization, various aspects of the hydraulic efficiency and the Net Positive Suction Head (NPSH) are usually of interest. The NPSH determines the cavitation properties of a propeller (or impeller). Low-fidelity fluid simulations cannot resolve the cavitation bubble cloud, so the true cavitation behavior has to be estimated from the pressure difference between the inlet of the hydraulic component and the pressure field on the blade surface. When the available total inlet pressure (NPSH_Available) is (artificially) lowered, the fraction of the blade surface area below vapor pressure increases monotonically; these area fractions are my observations (so no partial derivative information here).

The idea is to build a GP that models this behaviour, with NPSH_Available and the blade design variables as model inputs, and then do multi-objective optimization (MOO) with the hydraulic efficiency and the partial derivative of the "cavitating" blade area w.r.t. NPSH_Available as objectives, and see what happens. Another idea is to maximize the hydraulic efficiency while keeping the efficiency curve (efficiency as a function of the flow rate, i.e. the operating conditions) flat; that is, I would like to minimize its curvature at some pre-specified flow rate (simulations are usually performed at many flow rates, in which case the flow rate should be a model input as well).
I'd appreciate that :) You may close this issue for now if these investigations are low priority. Br, Jimmy
-
Hello,
Thanks for an awesome package! Been using it since the release.
Recently I've found applications where it would be interesting to do MOO in which at least one objective is a function of the partial derivatives of a (GPyTorch) model with respect to some of the input parameters.
I guess I could just use the posterior mean of the model and do something like the sketch below within a custom objective, and let the other objectives be handled in the usual BoTorch way with quasi-MC sampling.
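Something along these lines (a sketch, not tested; `model` is the fitted model and `X` the tensor of candidate points):

```python
import torch

def mean_gradient(X, model):
    # Gradient of the posterior mean w.r.t. the inputs. The sum() trick
    # works because each row of the mean depends only on its own input row.
    X = X.detach().requires_grad_(True)
    mean = model.posterior(X).mean
    (grads,) = torch.autograd.grad(mean.sum(), X)
    return grads  # same shape as X: one partial derivative per input
```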
But since I want to leverage quasi-MC sampling for the gradient objective as well, I wonder if there is any better way than repeating the inputs, calling `autograd.grad` on the posterior samples, and then reshaping the `gradient_samples`. Note: here `num_samples` is the number of samples used by the (quasi-)MC sampler `sampler`. I'm not even sure this is correct, but you get the idea: the inputs to `autograd.grad` need to be repeated so that I get gradient samples, not the sum of the gradient samples as would be the case if I had followed the above approach. Any ideas?
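For concreteness, a rough sketch of what I mean (assumptions: `model` is a fitted BoTorch model, `X` the candidate tensor, and `sampler` a (quasi-)MC sampler drawing `num_samples` posterior samples; this version differentiates each sample in a loop rather than repeating the inputs, which is simple but may be slow):

```python
import torch

def gradient_samples(X, model, sampler):
    # Differentiate each posterior sample w.r.t. the inputs, keeping the
    # sample dimension intact instead of summing over it.
    X = X.detach().requires_grad_(True)
    posterior = model.posterior(X)
    samples = sampler(posterior)  # num_samples x batch_shape x q x m

    # autograd.grad(samples.sum(), X) would return the *sum* of the
    # per-sample gradients; differentiating one sample at a time keeps
    # one gradient per (quasi-)MC sample.
    grads = [
        torch.autograd.grad(s.sum(), X, retain_graph=True)[0]
        for s in samples
    ]
    return torch.stack(grads)  # num_samples x (shape of X)
```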
Br,
Jimmy