Communications on Applied Mathematics and Computation, 2021, Vol. 3, Issue 2: 221-241. doi: 10.1007/s42967-020-00063-9

• ORIGINAL PAPER •

Parallel Active Subspace Decomposition for Tensor Robust Principal Component Analysis

Michael K. Ng1, Xue-Zhong Wang2   

  1 Department of Mathematics, The University of Hong Kong, Hong Kong, China;
    2 School of Mathematics and Statistics, Hexi University, 734000 Zhangye, Gansu Province, China
  • Received: 2019-08-20  Revised: 2020-01-01  Online: 2021-06-20  Published: 2021-05-26
  • Contact: Michael K. Ng, Xue-Zhong Wang  E-mail: mng@maths.hku.hk; xuezhongwang77@126.com
  • Supported by:
    Michael K. Ng: Research supported in part by the HKRGC GRF 12306616, 12200317, 12300218 and 12300519, and HKU Grant 104005583.

Abstract: Tensor robust principal component analysis has received substantial attention in various fields. Most existing methods rely on tensor nuclear norm minimization and incur a high computational cost because of the multiple singular value decompositions required at each iteration. To overcome this drawback, we propose a scalable and efficient method, named parallel active subspace decomposition, which factors the unfolding along each mode of the tensor, in parallel, into a columnwise orthonormal matrix (the active subspace) and another small matrix. This transformation leads to a nonconvex optimization problem in which the scale of the nuclear norm minimization is generally much smaller than in the original problem. We solve the optimization problem by an alternating direction method of multipliers and show that the iterates converge under the given stopping criterion and that the computed solution lies within a prescribed bound of the global optimum. Experimental results demonstrate that the proposed model outperforms state-of-the-art methods.

Key words: Principal component analysis, Low-rank tensors, Nuclear norm minimization, Active subspace decomposition, Matrix factorization
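The core idea described in the abstract, factoring each mode unfolding of a tensor into a columnwise orthonormal matrix (the active subspace) times a small matrix so that nuclear-norm computations act only on the small factor, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the function names, the truncated-SVD construction of the subspace, and the fixed rank are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's code) of an active subspace
# factorization of a mode-k unfolding: X_(k) ≈ U_k V_k, with U_k having
# orthonormal columns and V_k small, so nuclear-norm work shifts to V_k.
import numpy as np

def unfold(tensor, mode):
    """Mode-k unfolding: move axis `mode` to the front, flatten the rest."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def active_subspace(tensor, mode, rank):
    """Return (U, V) with U columnwise orthonormal and unfold(tensor, mode) ≈ U @ V."""
    X = unfold(tensor, mode)
    # A truncated SVD is one way to obtain a rank-`rank` active subspace U.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    U = U[:, :rank]
    V = s[:rank, None] * Vt[:rank]   # small (rank x prod of other dims) factor
    return U, V

rng = np.random.default_rng(0)
# Build an exactly rank-3 tensor (CP form) so the factorization is exact.
A = rng.standard_normal((10, 3))
B = rng.standard_normal((12, 3))
C = rng.standard_normal((8, 3))
X = np.einsum('ir,jr,kr->ijk', A, B, C)

for mode in range(3):                # the three unfoldings can run in parallel
    U, V = active_subspace(X, mode, rank=3)
    assert np.allclose(U.T @ U, np.eye(3))      # orthonormal columns
    assert np.allclose(U @ V, unfold(X, mode))  # exact reconstruction
```

For this exactly low-rank tensor the factorization is exact; in the paper's setting the rank is small relative to the unfolding, which is what makes the nuclear norm minimization on the small factor much cheaper than on the full unfolding.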

