[1] Bai, Z.-Z., Pan, J.-Y.: Matrix Analysis and Computations. SIAM, Philadelphia (2021)
[2] Bai, Z.-Z., Wang, L., Wu, W.-T.: On convergence rate of the randomized Gauss-Seidel method. Linear Algebra Appl. 611, 237-252 (2021)
[3] Bai, Z.-Z., Wu, W.-T.: On convergence of the randomized Kaczmarz method. Linear Algebra Appl. 553, 252-269 (2018)
[4] Bai, Z.-Z., Wu, W.-T.: On greedy randomized coordinate descent methods for solving large linear least-squares problems. Numer. Linear Algebra Appl. 26, 1-15 (2019)
[5] Candès, E.J., Plan, Y.: Tight oracle inequalities for low-rank matrix recovery from a minimal number of noisy random measurements. IEEE Transactions on Information Theory 57, 2342-2359 (2011)
[6] Chen, E.-Y., Chen, R.: Modeling dynamic transport network with matrix factor models: with an application to international trade flow. arXiv:1901.00769 (2019)
[7] Chen, H., Raskutti, G., Yuan, M.: Non-convex projected gradient descent for generalized low-rank tensor regression. J. Mach. Learn. Res. 20, 1-37 (2019)
[8] Chen, Z., Jiang, H., Yu, G., Qi, L.: Low-rank tensor train decomposition using tensor sketch. arXiv:2309.08093 (2023)
[9] Cichocki, A., Mandic, D., De Lathauwer, L., Zhou, G.-X., Zhao, Q.-B., Caiafa, C., Phan, H.A.: Tensor decompositions for signal processing applications: from two-way to multiway component analysis. IEEE Signal Proc. Mag. 32, 145-163 (2015)
[10] Fama, E.F., French, K.R.: A five-factor asset pricing model. J. Financ. Econ. 116, 1-22 (2015)
[11] French, K.R.: Data library: U.S. research returns data. Available at http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library.html (2020)
[12] Gazagnadou, N., Ibrahim, M., Gower, R.M.: RidgeSketch: a fast sketching based solver for large scale ridge regression. SIAM J. Matrix Anal. Appl. 43, 1440-1468 (2022)
[13] Huang, H.-Y., Liu, Y.-P., Liu, J.-N., Zhu, C.: Provable tensor ring completion. Signal Process. 171, 107486 (2020)
[14] Huang, H.-Y., Liu, Y.-P., Long, Z., Zhu, C.: Robust low-rank tensor ring completion. IEEE Transactions on Computational Imaging 6, 1117-1126 (2020)
[15] Kong, D.-H., An, B.-G., Zhang, J.-W., Zhu, H.-T.: L2RM: low-rank linear regression models for high-dimensional matrix responses. J. Am. Stat. Assoc. 115(529), 403-424 (2020)
[16] Li, X.-S., Xu, D., Zhou, H., Li, L.-X.: Tucker tensor regression and neuroimaging analysis. Stat. Biosci. 10(3), 520-545 (2018)
[17] Liu, Y.-P.: Tensors for Data Processing: Theory, Methods and Applications. Academic Press, New York (2021)
[18] Liu, Y.-P., Liu, J.-N., Long, Z., Zhu, C.: Tensor Computation for Data Analysis. Springer, Berlin (2022)
[19] Liu, Y.-P., Liu, J.-N., Zhu, C.: Low-rank tensor train coefficient array estimation for tensor-on-tensor regression. IEEE Transactions on Neural Networks and Learning Systems 31(12), 5402-5411 (2020)
[20] Lock, E.F.: Tensor-on-tensor regression. J. Comput. Graph. Stat. 27(3), 638-647 (2018)
[21] Ma, L.-J., Solomonik, E.: Fast and accurate randomized algorithms for low-rank tensor decompositions. arXiv:2104.01101 (2021)
[22] Oseledets, I.V.: Tensor-train decomposition. SIAM J. Sci. Comput. 33, 2295-2317 (2011)
[23] Pagh, R.: Compressed matrix multiplication. ACM Transactions on Computation Theory 5, 1-17 (2013)
[24] Rigollet, P., Hütter, J.C.: High-dimensional statistics. Massachusetts Institute of Technology: MIT OpenCourseWare, https://ocw.mit.edu (2015)
[25] Rudelson, M., Vershynin, R.: Hanson-Wright inequality and sub-Gaussian concentration. Electron. Commun. Prob. 18, 1-9 (2013)
[26] Si, Y.-F., Zhang, Y.-Y., Li, G.-D.: An efficient tensor regression for high-dimensional data. arXiv:2205.13734 (2022)
[27] Tang, L., Yu, Y.-J., Zhang, Y.-J., Li, H.-Y.: Sketch-and-project methods for tensor linear systems. Numer. Linear Algebra Appl. 30(2), e2470 (2023)
[28] Virta, J., Li, B., Nordhausen, K., Oja, H.: Independent component analysis for tensor-valued data. J. Multivariate Anal. 162, 172-192 (2017)
[29] Wainwright, M.J.: High-Dimensional Statistics: a Non-asymptotic Viewpoint. Cambridge University Press, Cambridge (2019)
[30] Walden, A.T., Serroukh, A.: Wavelet analysis of matrix-valued time-series. Proc. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci. 458, 157-179 (2002)
[31] Wang, D., Zheng, Y., Li, G.-D.: High-dimensional low-rank tensor autoregressive time series modeling. J. Econometr. 238(1), 105544 (2024)
[32] Wang, D., Zheng, Y., Lian, H., Li, G.-D.: High-dimensional vector autoregressive time series modeling via tensor decomposition. J. Am. Stat. Assoc. 117, 1338-1356 (2022)
[33] Yu, D., Deng, L., Seide, F.: The deep tensor neural network with applications to large vocabulary speech recognition. IEEE Transactions on Audio, Speech, and Language Processing 21(2), 388-396 (2013)
[34] Yu, Y.-J., Li, H.-Y.: Practical sketching-based randomized tensor ring decomposition. arXiv:2209.05647 (2022)
[35] Zhang, A.-R., Xia, D.: Tensor SVD: statistical and computational limits. IEEE Transactions on Information Theory 64, 7311-7338 (2018)
[36] Zhao, Q.-B., Sugiyama, M., Yuan, L.-H., Cichocki, A.: Learning efficient tensor representations with ring structure networks. In: ICASSP, pp. 8608-8612 (2019)
[37] Zhao, Q.-B., Zhou, G.-X., Xie, S.-L., Zhang, L.-Q., Cichocki, A.: Tensor ring decomposition. arXiv:1606.05535 (2016)
[38] Zhou, H., Li, L.-X., Zhu, H.-T.: Tensor regression with applications in neuroimaging data analysis. J. Am. Stat. Assoc. 108, 540-552 (2013)