Communications on Applied Mathematics and Computation ›› 2025, Vol. 7 ›› Issue (3): 827-864. doi: 10.1007/s42967-024-00398-7


Overview Frequency Principle/Spectral Bias in Deep Learning

Zhi-Qin John Xu1,2, Yaoyu Zhang1,2, Tao Luo1,2,3,4   

    1 Institute of Natural Sciences, MOE-LSC, Shanghai Jiao Tong University, Shanghai 200240, China;
    2 School of Mathematical Sciences, Shanghai Jiao Tong University, Shanghai 200240, China;
    3 CMA-Shanghai, Shanghai Jiao Tong University, Shanghai 200240, China;
    4 Shanghai Artificial Intelligence Laboratory, Shanghai 200232, China
  • Received: 2023-08-02  Revised: 2024-02-26  Accepted: 2024-03-04  Online: 2025-09-20  Published: 2025-05-23
  • Supported by:
    This work is sponsored by the National Key R&D Program of China Grant No. 2022YFA1008200 (Z. X., Y. Z., T. L.), the National Natural Science Foundation of China Grant Nos. 92270001 (Z. X.), 12371511 (Z. X.), 12101402 (Y. Z.), and 12101401 (T. L.), the Lingang Laboratory Grant No. LG-QS-202202-08 (Y. Z.), the Shanghai Municipal Science and Technology Key Project No. 22JC1401500 (T. L.), the Shanghai Municipal Science and Technology Major Project No. 2021SHZDZX0102, the HPC of the School of Mathematical Sciences and the Student Innovation Center, and the Siyuan-1 cluster supported by the Center for High Performance Computing at Shanghai Jiao Tong University.

Abstract: Understanding deep learning is increasingly important as it penetrates more and more into industry and science. In recent years, a research line rooted in Fourier analysis has shed light on this magical “black box” by revealing a Frequency Principle (F-Principle, or spectral bias) in the training behavior of deep neural networks (DNNs): DNNs often fit functions from low to high frequencies during training. The F-Principle was first demonstrated on one-dimensional (1D) synthetic data and subsequently verified on high-dimensional real datasets. A series of later works further substantiated the F-Principle. This low-frequency implicit bias reveals the strength of neural networks in learning low-frequency functions as well as their deficiency in learning high-frequency functions. Such understanding inspires the design of DNN-based algorithms for practical problems, explains experimental phenomena emerging in various scenarios, and further advances the study of deep learning from the frequency perspective. Although necessarily incomplete, we provide an overview of the F-Principle and propose some open problems for future research.
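The 1D synthetic setting mentioned in the abstract can be sketched in a few lines. The following is a hypothetical NumPy demo, not the authors' code: it fits y = sin(x) + sin(3x) with a small tanh network trained by full-batch gradient descent, and tracks how the low-frequency (k = 1) and high-frequency (k = 3) Fourier components of the residual shrink; the F-Principle predicts the k = 1 component decays faster. All sizes, learning rates, and step counts are illustrative choices.

```python
# Hypothetical 1D sketch of the F-Principle (illustrative, not the paper's code).
import numpy as np

rng = np.random.default_rng(0)
n = 128
x = np.linspace(-np.pi, np.pi, n, endpoint=False)
y = np.sin(x) + np.sin(3 * x)          # low (k=1) + high (k=3) frequency target

# one hidden layer with tanh activation
h = 50
W1 = rng.normal(0.0, 1.0, (h, 1)); b1 = np.zeros((h, 1))
W2 = rng.normal(0.0, 0.05, (1, h)); b2 = np.zeros((1, 1))
X = x[None, :]                          # shape (1, n)

def forward():
    A = np.tanh(W1 @ X + b1)            # hidden activations, shape (h, n)
    return A, (W2 @ A + b2).ravel()     # prediction, shape (n,)

def freq_err(res, k):
    # magnitude of the k-th DFT coefficient of the residual, normalized by n
    return np.abs(np.fft.rfft(res)[k]) / n

A, pred = forward()
loss0 = np.mean((pred - y) ** 2)
err0 = {k: freq_err(y - pred, k) for k in (1, 3)}

lr = 0.002
for step in range(4000):
    A, pred = forward()
    res = pred - y                      # residual, shape (n,)
    # backprop for the MSE loss (1/n) * sum(res^2)
    dpred = 2.0 * res[None, :] / n      # (1, n)
    dW2 = dpred @ A.T; db2 = dpred.sum(axis=1, keepdims=True)
    dA = W2.T @ dpred
    dZ = dA * (1.0 - A ** 2)            # tanh'(z) = 1 - tanh(z)^2
    dW1 = dZ @ X.T; db1 = dZ.sum(axis=1, keepdims=True)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

_, pred = forward()
loss1 = np.mean((pred - y) ** 2)
err1 = {k: freq_err(y - pred, k) for k in (1, 3)}
rel = {k: err1[k] / err0[k] for k in (1, 3)}
# the F-Principle predicts rel[1] < rel[3]: the k=1 residual shrinks faster
print(rel)
```

Plotting `pred` against `y` at intermediate checkpoints shows the same picture in real space: the network first captures the slow sin(x) envelope and only later the sin(3x) oscillation.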

Key words: Neural network, Frequency principle (F-Principle), Deep learning, Generalization, Training, Optimization
