[1] Bai, Z.-Z., Liu, X.-G.: On the Meany inequality with applications to convergence analysis of several row-action iteration methods. Numer. Math. 124, 215-236 (2013)
[2] Bai, Z.-Z., Pan, J.-Y.: Matrix Analysis and Computations. SIAM, Philadelphia (2021)
[3] Bai, Z.-Z., Wang, L.: On convergence rates of Kaczmarz-type methods with different selection rules of working rows. Appl. Numer. Math. 186, 289-319 (2023)
[4] Bai, Z.-Z., Wang, L., Wu, W.-T.: On convergence rate of the randomized Gauss-Seidel method. Linear Algebra Appl. 611, 237-252 (2021)
[5] Bai, Z.-Z., Wu, W.-T.: On greedy randomized Kaczmarz method for solving large sparse linear systems. SIAM J. Sci. Comput. 40, A592-A606 (2018)
[6] Bai, Z.-Z., Wu, W.-T.: On relaxed greedy randomized Kaczmarz methods for solving large sparse linear systems. Appl. Math. Lett. 83, 21-26 (2018)
[7] Bai, Z.-Z., Wu, W.-T.: On greedy randomized coordinate descent methods for solving large linear least-squares problems. Numer. Linear Algebra Appl. 26(4), 1-15 (2019)
[8] Bai, Z.-Z., Wu, W.-T.: Randomized Kaczmarz iteration methods: algorithmic extensions and convergence theory. Jpn. J. Indust. Appl. Math. 40(3), 1421-1443 (2023)
[9] Björck, Å.: Numerical Methods for Least Squares Problems. SIAM, Philadelphia (1996)
[10] Bouman, C.A., Sauer, K.: A unified approach to statistical tomography using coordinate descent optimization. IEEE Trans. Image Process. 5(3), 480-492 (1996)
[11] Canutescu, A.A., Dunbrack, R.L.: Cyclic coordinate descent: a robotics algorithm for protein loop closure. Protein Sci. 12(5), 963-972 (2003)
[12] Chang, K.-W., Hsieh, C.-J., Lin, C.-J.: Coordinate descent method for large-scale L2-loss linear support vector machines. J. Mach. Learn. Res. 9, 1369-1398 (2008)
[13] Chen, J.-Q., Huang, Z.-D.: A fast block coordinate descent method for solving linear least squares problems. East Asian J. Appl. Math. 12, 406-420 (2022)
[14] Davis, T.A., Hu, Y.: The University of Florida sparse matrix collection. ACM Trans. Math. Softw. 38(1), 1-25 (2011)
[15] Demmel, J.W.: Applied Numerical Linear Algebra. Tsinghua University Press, Beijing (1997)
[16] Drineas, P., Mahoney, M.W., Muthukrishnan, S.: Relative-error CUR matrix decompositions. SIAM J. Matrix Anal. Appl. 30(2), 844-881 (2008)
[17] Du, K., Ruan, C.-C., Sun, X.-H.: On the convergence of a randomized block coordinate descent algorithm for a matrix least-squares problem. Appl. Math. Lett. 124, 107689 (2022)
[18] Duan, L.-X., Zhang, G.-F.: Variant of greedy randomized Gauss-Seidel method for ridge regression. Numer. Math. Theor. Meth. Appl. 14, 714-737 (2021)
[19] Golub, G.: Numerical methods for solving linear least-squares problems. Numer. Math. 7(3), 206-216 (1965)
[20] Griebel, M., Oswald, P.: Greedy and randomized versions of the multiplicative Schwarz method. Linear Algebra Appl. 437, 1596-1610 (2012)
[21] Jin, L.-L., Li, H.-B.: Greedy double subspaces coordinate descent method via orthogonalization. arXiv:2203.02153v2 (2022)
[22] Leventhal, D., Lewis, A.S.: Randomized methods for linear constraints: convergence rates and conditioning. Math. Oper. Res. 35, 641-654 (2010)
[23] Li, H.-Y., Zhang, Y.-J.: Greedy block Gauss-Seidel methods for solving large least-squares problem. arXiv:2004.02476v1 (2020)
[24] Lin, Q., Lu, Z., Xiao, L.: An accelerated randomized proximal coordinate gradient method and its application to regularized empirical risk minimization. SIAM J. Optim. 25, 2244-2273 (2015)
[25] Liu, Y., Jiang, X.-L., Gu, C.-Q.: On maximum residual block and two-step Gauss-Seidel algorithms for linear least-squares problems. Calcolo 58(2), 1-32 (2021)
[26] Ma, A., Needell, D., Ramdas, A.: Convergence properties of the randomized extended Gauss-Seidel and Kaczmarz methods. SIAM J. Matrix Anal. Appl. 36, 1590-1604 (2015)
[27] Niu, Y.-Q., Zheng, B.: A new randomized Gauss-Seidel method for solving linear least-squares problems. Appl. Math. Lett. 116, 107057 (2021)
[28] Nutini, J., Sepehry, B., Laradji, I., Virani, A., Schmidt, M., Koepke, H.: Convergence rates for greedy Kaczmarz algorithms, and faster randomized Kaczmarz rules using the orthogonality graph. arXiv:1612.07838 (2016)
[29] Quarteroni, A., Sacco, R., Saleri, F.: Numerical Mathematics. Springer-Verlag, New York (2002)
[30] Ruhe, A.: Numerical aspects of Gram-Schmidt orthogonalization of vectors. Linear Algebra Appl. 52(1), 591-601 (1983)
[31] Saad, Y.: Iterative Methods for Sparse Linear Systems, 2nd edn. SIAM, Philadelphia (2003)
[32] Sorensen, D.C., Embree, M.: A DEIM induced CUR factorization. SIAM J. Sci. Comput. 38(3), A1454-A1482 (2016)
[33] Thoppe, G., Borkar, V.S., Garg, D.: Greedy block coordinate descent (GBCD) method for high dimensional quadratic programs. arXiv:1404.6635v3 (2014)
[34] Wright, S.J.: Coordinate descent algorithms. Math. Program. 151(1), 3-34 (2015)
[35] Wu, W.-M.: Convergence of the randomized block Gauss-Seidel method. Claremont Colleges, Los Angeles (2018)
[36] Ye, J.C., Webb, K.J., Bouman, C.A., Millane, R.P.: Optical diffusion tomography by iterative-coordinate-descent optimization in a Bayesian framework. J. Opt. Soc. Am. A 16(10), 2400-2412 (1999)
[37] Zhang, J.-H., Guo, J.-H.: On relaxed greedy randomized coordinate descent methods for solving large linear least-squares problems. Appl. Numer. Math. 157, 372-384 (2020)
[38] Zhang, Y.-J., Li, H.-Y.: A novel greedy Gauss-Seidel method for solving large linear least-squares problem. arXiv:2004.03692v1 (2020)