Abstract:
This presentation discusses the problem of optimization under uncertainty in a high-dimensional, numerically expensive setting, and its solution with limited computational resources. To this end, we show how the problem of stochastic optimization can elegantly be rephrased as one of Bayesian inference, where the intractable posterior can be approximated by means of state-of-the-art techniques such as Variational Bayes or Sequential Monte Carlo. Having obtained a solution to our stochastic optimization problem by employing black-box variational inference and stochastic backpropagation, we then consider the question of probabilistic inference using approximate solvers. The motivation is that even sophisticated inference algorithms can quickly incur prohibitively large computational cost given a sufficiently complex forward model. To this end we propose a novel and very general framework based on the introduction of an extended probability space, which permits the consistent incorporation of the epistemic uncertainty introduced by approximate solvers into our posterior belief. We show how both Bayesian regression and probabilistic multi-fidelity models suitable for high-dimensional problems allow us to define a data-restricted posterior, which expresses the posterior belief given a limited number of forward-solver evaluations. Finally, we evaluate the data-restricted posterior and the probabilistic multi-fidelity approach by applying them to a now doubly-stochastic optimization problem, accounting not only for the inherent aleatory uncertainty but also for the epistemic uncertainty introduced by the approximate solver.
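To make the first step concrete, the following is a minimal, self-contained sketch of recasting a one-dimensional stochastic optimization problem as inference and solving it with black-box variational inference and reparameterization (stochastic backpropagation) gradients. It is an illustration under stated assumptions, not the talk's exact algorithm: the toy objective, the inverse temperature, and all hyperparameters are chosen purely for demonstration.

```python
# Illustrative sketch (not the presentation's method): a noisy objective
# cost(x, xi) with aleatory noise xi defines a target density
# p(x) ∝ exp(-beta * E_xi[cost(x, xi)]); a Gaussian variational family
# q(x) = N(mu, sigma^2) is fitted by maximizing a Monte Carlo ELBO.
import numpy as np

rng = np.random.default_rng(0)

def cost(x, xi):
    """Toy noisy objective: xi perturbs a quadratic whose optimum is near x = 2."""
    return (x - 2.0) ** 2 + 0.1 * xi * x

beta = 5.0                 # inverse temperature of the target density (assumption)
mu, log_sigma = 0.0, 0.0   # variational parameters of q(x) = N(mu, sigma^2)
lr = 0.05                  # learning rate (assumption)

for step in range(3000):
    sigma = np.exp(log_sigma)
    # Reparameterization: x = mu + sigma * eps gives low-variance gradients.
    eps = rng.standard_normal(64)
    x = mu + sigma * eps
    xi = rng.standard_normal(64)
    # Gradient of the per-sample cost with respect to x (for this toy objective).
    dcost_dx = 2.0 * (x - 2.0) + 0.1 * xi
    # Monte Carlo gradients of the ELBO: E_q[-beta * cost] + entropy of q.
    grad_mu = np.mean(-beta * dcost_dx)
    grad_log_sigma = np.mean(-beta * dcost_dx * sigma * eps) + 1.0  # +1 from entropy
    mu += lr * grad_mu
    log_sigma += lr * grad_log_sigma

print(round(mu, 2))  # the variational mean concentrates near the optimizer x* ≈ 2
```

The same pattern scales to the high-dimensional setting of the talk by replacing the scalar Gaussian with a richer variational family and the analytic cost gradient with automatic differentiation through the (expensive) forward model.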