Communications on Applied Mathematics and Computation, 2024, Vol. 6, Issue 2: 1241-1269. DOI: 10.1007/s42967-023-00324-3

• ORIGINAL PAPERS •

Adaptive State-Dependent Diffusion for Derivative-Free Optimization

Björn Engquist1, Kui Ren2, Yunan Yang3

  1. Department of Mathematics and the Oden Institute, The University of Texas, Austin, TX 78712, USA;
    2. Department of Applied Physics and Applied Mathematics, Columbia University, New York, NY 10027, USA;
    3. Department of Mathematics, Cornell University, Ithaca, NY 14853, USA
  • Received: 2023-02-09 Revised: 2023-09-14 Accepted: 2023-09-17 Online: 2024-01-09 Published: 2024-01-09
  • Contact: Yunan Yang, E-mail: yunan.yang@cornell.edu; Björn Engquist, E-mail: engquist@oden.utexas.edu; Kui Ren, E-mail: kr2002@columbia.edu
  • Supported by:
    This work is partially supported by the National Science Foundation through grants DMS-2208504 (BE), DMS-1913309 (KR), DMS-1937254 (KR), and DMS-1913129 (YY).

Abstract: This paper develops and analyzes a stochastic derivative-free optimization strategy. A key feature is the state-dependent adaptive variance. We prove global convergence in probability with an algebraic rate and give quantitative results in numerical examples. A striking fact is that convergence is achieved without explicit gradient information and even without comparing different objective function values, as is done in established methods such as the simplex method and simulated annealing. The method can instead be viewed as annealing with a state-dependent temperature.
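
To make the idea in the abstract concrete, the following is a minimal Python sketch, not the authors' exact scheme: an Euler-Maruyama discretization of a pure diffusion dX_t = sigma(f(X_t)) dW_t whose variance shrinks as the objective value decreases. The function name state_dependent_diffusion, the step size, and the illustrative choice sigma(x)^2 = 2 f(x) are assumptions made here for exposition, assuming a nonnegative objective whose global minimum value is 0.

    import numpy as np

    def state_dependent_diffusion(f, x0, n_steps=20000, dt=1e-3, seed=0):
        """Euler-Maruyama sketch of dX_t = sigma(f(X_t)) dW_t.

        No gradients and no comparisons of function values are used;
        the noise level simply adapts to the current objective value,
        so the iterate lingers where f is small.
        """
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        for _ in range(n_steps):
            sigma = np.sqrt(2.0 * f(x))   # illustrative: variance proportional to f(x)
            x = x + sigma * np.sqrt(dt) * rng.standard_normal(x.shape)
        return x

    if __name__ == "__main__":
        f = lambda x: float(np.sum(x**2))  # nonnegative objective, global minimum 0 at the origin
        print(state_dependent_diffusion(f, x0=[2.0, -1.5]))

Because the diffusion coefficient vanishes only where the objective is at its minimum, long runs of this kind of dynamics tend to concentrate near the global minimizer; the precise state-dependent variance used in the paper, and the convergence rate it yields, are given in the article itself.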

Key words: Derivative-free optimization, Global optimization, Adaptive diffusion, Stationary distribution, Fokker-Planck theory