Understanding High-Dimensional Bayesian Optimization
Best AI papers explained - A podcast by Enoch H. Kang

This paper investigates challenges in high-dimensional Bayesian optimization (HDBO), focusing on the vanishing gradients that undermine standard methods in high dimensions. It shows how initialization schemes and acquisition-function optimization contribute to the problem, and explains why recent methods succeed: they promote local search behavior and mitigate vanishing gradients. Building on these insights, the authors propose a simple maximum likelihood estimation (MLE) based approach called MSR, which achieves state-of-the-art performance on real-world benchmarks. Empirical evidence supports the findings on vanishing gradients and on the local search induced by techniques such as RAASP sampling in HDBO.
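The vanishing-gradient phenomenon discussed in the episode can be illustrated with a minimal sketch (not code from the paper; all function names and parameters below are illustrative). With a standard RBF kernel and a fixed lengthscale, a randomly initialized point in a high-dimensional cube lies far from every training point, so its covariance with the data — and hence the gradient of the GP posterior mean and of acquisition functions built on it — is numerically close to zero:

```python
import numpy as np

def max_rbf_kernel(d, n_train=50, lengthscale=1.0, seed=0):
    """Largest RBF covariance between a random init point and n_train
    random training inputs in [0, 1]^d (illustrative toy setup)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(size=(n_train, d))   # training inputs
    x0 = rng.uniform(size=d)             # fresh random initialization
    sq_dists = np.sum((X - x0) ** 2, axis=1)
    # RBF covariance k(x0, x_i) = exp(-||x0 - x_i||^2 / (2 * l^2))
    k = np.exp(-0.5 * sq_dists / lengthscale**2)
    return k.max()

for d in [2, 20, 200]:
    # As d grows, pairwise distances concentrate around sqrt(d/6),
    # so even the nearest training point contributes ~zero covariance.
    print(d, max_rbf_kernel(d))
```

Because the posterior-mean gradient at the init point is a weighted sum of these kernel values, an acquisition optimizer started there sees an essentially flat landscape — consistent with the paper's account of why local-search-style initialization (e.g., RAASP-like perturbations of incumbents) sidesteps the issue.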