[1] Abbott, B.P., et al.: Observation of gravitational waves from a binary black hole merger. Phys. Rev. Lett. 116, 061102 (2016)
[2] Auger, F., Flandrin, P., Lin, Y.-T., McLaughlin, S., Meignen, S., Oberlin, T., Wu, H.-T.: Time-frequency reassignment and synchrosqueezing: an overview. IEEE Sig. Process. Mag. 30(6), 32-41 (2013)
[3] Bach, F.: On the equivalence between kernel quadrature rules and random feature expansions. J. Mach. Learn. Res. 18(21), 1-38 (2017)
[4] Bertsimas, D., Van Parys, B.: Sparse high-dimensional regression: exact scalable algorithms and phase transitions. Ann. Statist. 48(1), 300-323 (2020)
[5] Block, H.-D.: The perceptron: a model for brain functioning. I. Rev. Mod. Phys. 34(1), 123 (1962)
[6] Cai, T.T., Xu, G., Zhang, J.: On recovery of sparse signals via l1 minimization. IEEE Trans. Inf. Theory 55(7), 3388-3397 (2009)
[7] Candès, E.J., Tao, T.: Near-optimal signal recovery from random projections: universal encoding strategies? IEEE Trans. Inf. Theory 52(12), 5406-5425 (2006)
[8] Carvalho, V.R., Moraes, M.F., Braga, A.P., Mendes, E.M.: Evaluating five different adaptive decomposition methods for EEG signal seizure detection and classification. Biomed. Sig. Process. Control 62, 102073 (2020)
[9] Chen, Z., Schaeffer, H.: Conditioning of random feature matrices: double descent and generalization error. arXiv:2110.11477 (2021)
[10] Daubechies, I., Lu, J., Wu, H.-T.: Synchrosqueezed wavelet transforms: an empirical mode decomposition-like tool. Appl. Comput. Harmon. Anal. 30(2), 243-261 (2011)
[11] Dragomiretskiy, K., Zosso, D.: Variational mode decomposition. IEEE Trans. Sig. Process. 62(3), 531-544 (2014)
[12] E, W., Ma, C., Wojtowytsch, S., Wu, L.: Towards a mathematical understanding of neural network-based machine learning: what we know and what we don't. arXiv:2009.10713 (2020)
[13] Flandrin, P., Rilling, G., Goncalves, P.: Empirical mode decomposition as a filter bank. IEEE Sig. Process. Lett. 11(2), 112-114 (2004)
[14] Foucart, S., Rauhut, H.: A Mathematical Introduction to Compressive Sensing. Springer, New York (2013)
[15] Frankle, J., Carbin, M.: The lottery ticket hypothesis: finding sparse, trainable neural networks. arXiv:1803.03635 (2018)
[16] Gilles, J.: Empirical wavelet transform. IEEE Trans. Sig. Process. 61(16), 3999-4010 (2013)
[17] Gilles, J., Heal, K.: A parameterless scale-space approach to find meaningful modes in histograms: application to image and spectrum segmentation. Int. J. Wavelets Multiresolution Inf. Process. 12(6), 2456-2464 (2014)
[18] Gilles, J., Tran, G., Osher, S.: 2D empirical transforms, wavelets, ridgelets, and curvelets revisited. SIAM J. Imaging Sci. 7(1), 157-186 (2014)
[19] Goldstein, T., Osher, S.: The split Bregman method for L1-regularized problems. SIAM J. Imaging Sci. 2(2), 323-343 (2009)
[20] Hashemi, A., Schaeffer, H., Shi, R., Topcu, U., Tran, G., Ward, R.: Generalization bounds for sparse random feature expansions. arXiv:2103.03191 (2021)
[21] Hastie, T., Tibshirani, R., Wainwright, M.: Statistical Learning with Sparsity: the Lasso and Generalizations. Chapman and Hall/CRC, USA (2019)
[22] Hazimeh, H., Mazumder, R.: Fast best subset selection: coordinate descent and local combinatorial optimization algorithms. Oper. Res. 68(5), 1517-1537 (2020)
[23] Hou, T.Y., Shi, Z.: Adaptive data analysis via sparse time-frequency representation. Adv. Adapt. Data Anal. 3(1/2), 1-28 (2011)
[24] Huang, N.E., Shen, Z., Long, S.R., Wu, M.C., Shih, H.H., Zheng, Q., Yen, N.C., Tung, C.C., Liu, H.H.: The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis. R. Soc. Lond. Proc. Ser. A Math. Phys. Eng. Sci. 454(1971), 903-995 (1998)
[25] Huang, Z., Zhang, J., Zhao, T., Sun, Y.: Synchrosqueezing S-transform and its application in seismic spectral decomposition. IEEE Trans. Geosci. Remote Sens. 54(2), 817-825 (2016)
[26] Li, Z., Ton, J.-F., Oglic, D., Sejdinovic, D.: Towards a unified analysis of random Fourier features. J. Mach. Learn. Res. 22(108), 1-51 (2021)
[27] Liu, W., Chen, W.: Recent advancements in empirical wavelet transform and its applications. IEEE Access 7, 103770-103780 (2019)
[28] Luedtke, J.: A branch-and-cut decomposition algorithm for solving chance-constrained mathematical programs with finite support. Math. Program. 146(1), 219-244 (2014)
[29] Maass, W., Markram, H.: On the computational power of circuits of spiking neurons. J. Comput. System Sci. 69(4), 593-616 (2004)
[30] Mazumder, R., Radchenko, P., Dedieu, A.: Subset selection with shrinkage: sparse linear modeling when the SNR is low. arXiv:1708.03288 (2017)
[31] Mei, S., Misiakiewicz, T., Montanari, A.: Generalization error of random features and kernel methods: hypercontractivity and kernel matrix concentration. arXiv:2101.10588 (2021)
[32] Moosmann, F., Triggs, B., Jurie, F.: Randomized clustering forests for building fast and discriminative visual vocabularies. In: NIPS (2006)
[33] Muradeli, J.: ssqueezepy. GitHub Repository. https://github.com/OverLordGoldDragon/ssqueezepy/ (2020)
[34] Pele, O., Werman, M.: A linear time histogram metric for improved SIFT matching. In: Forsyth, D., Torr, P., Zisserman, A. (eds) Computer Vision - ECCV 2008, vol. 5304, pp. 495-508. Springer, Berlin, Heidelberg (2008)
[35] Pele, O., Werman, M.: Fast and robust earth mover's distances. In: 2009 IEEE 12th International Conference on Computer Vision, pp. 460-467. IEEE (2009)
[36] Rahimi, A., Recht, B.: Random features for large-scale kernel machines. In: NIPS, vol. 20 (2007)
[37] Rahimi, A., Recht, B.: Uniform approximation of functions with random bases. In: 2008 46th Annual Allerton Conference on Communication, Control, and Computing, pp. 555-561. IEEE (2008)
[38] Rahimi, A., Recht, B.: Weighted sums of random kitchen sinks: replacing minimization with randomization in learning. Adv. Neural Inf. Process. Syst. 21, 1313-1320 (2008)
[39] Rudi, A., Rosasco, L.: Generalization properties of learning with random features. In: NIPS, pp. 3215-3225 (2017)
[40] Saha, E., Schaeffer, H., Tran, G.: HARFE: hard-ridge random feature expansion. arXiv:2202.02877 (2022)
[41] Sriperumbudur, B.K., Szabo, Z.: Optimal rates for random Fourier features. In: NIPS'15: Proceedings of the 28th International Conference on Neural Information Processing Systems, vol. 1, pp. 1144-1152. ACM (2015)
[42] Thakur, G., Brevdo, E., Fučkar, N.S., Wu, H.-T.: The synchrosqueezing algorithm for time-varying spectral analysis: robustness properties and new paleoclimate applications. Sig. Process. 93(5), 1079-1094 (2013)
[43] Thakur, G., Wu, H.-T.: Synchrosqueezing-based recovery of instantaneous frequency from nonuniform samples. SIAM J. Math. Anal. 43(5), 2078-2095 (2011)
[44] Tibshirani, R.: Regression shrinkage and selection via the lasso. J. Roy. Statist. Soc. Ser. B 58(1), 267-288 (1996)
[45] Torres, M.E., Colominas, M.A., Schlotthauer, G., Flandrin, P.: A complete ensemble empirical mode decomposition with adaptive noise. In: 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4144-4147. IEEE (2011)
[46] Wu, Z., Huang, N.E.: Ensemble empirical mode decomposition: a noise-assisted data analysis method. Adv. Adapt. Data Anal. 1(1), 1-41 (2009)
[47] Xie, W., Deng, X.: Scalable algorithms for the sparse ridge regression. SIAM J. Optimiz. 30(4), 3359-3386 (2020)
[48] Xie, Y., Shi, B., Schaeffer, H., Ward, R.: SHRIMP: sparser random feature models via iterative magnitude pruning. arXiv:2112.04002 (2021)
[49] Yang, H.: Synchrosqueezed wave packet transforms and diffeomorphism based spectral analysis for 1D general mode decompositions. Appl. Comput. Harmon. Anal. 39(1), 33-66 (2015)
[50] Yen, I.E.-H., Lin, T.-W., Lin, S.-D., Ravikumar, P.K., Dhillon, I.S.: Sparse random feature algorithm as coordinate descent in Hilbert space. Adv. Neural Inf. Process. Syst. 2, 2456-2464 (2014)