Communications on Applied Mathematics and Computation ›› 2021, Vol. 3 ›› Issue (2): 257-279. DOI: 10.1007/s42967-020-00090-6

• ORIGINAL PAPER •

MERACLE: Constructive Layer-Wise Conversion of a Tensor Train into a MERA

Kim Batselier1, Andrzej Cichocki2, Ngai Wong3   

  1. Delft Center for Systems and Control, Delft University of Technology, Delft, the Netherlands;
  2. Skolkovo Institute of Science and Technology (SKOLTECH), 121205 Moscow, Russia;
  3. Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
  • Received: 2019-12-20; Revised: 2020-07-17; Online: 2021-06-20; Published: 2021-05-26
  • Contact: Kim Batselier, E-mail: k.batselier@tudelft.nl
  • Supported by:
    This research was partially supported by the Ministry of Education and Science of the Russian Federation (grant 14.756.31.0001).

Abstract: In this article, two new algorithms are presented that convert a given data tensor train into either a Tucker decomposition with orthogonal matrix factors or a multi-scale entanglement renormalization ansatz (MERA). The Tucker core tensor is never explicitly computed but stored as a tensor train instead, resulting in algorithms that are efficient in both computation and storage. Both the multilinear Tucker-ranks and the MERA-ranks are automatically determined by the algorithm for a given upper bound on the relative approximation error. In addition, an iterative algorithm with low computational complexity, based on solving an orthogonal Procrustes problem, is proposed for the first time to retrieve optimal rank-lowering disentangler tensors, which are a crucial component in the construction of a low-rank MERA. Numerical experiments demonstrate the effectiveness of the proposed algorithms together with the potential storage benefit of a low-rank MERA over a tensor train.
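The abstract's key subroutine, the orthogonal Procrustes problem, has a well-known closed-form solution via a single SVD: the orthogonal Q minimizing ||QA - B||_F is Q = UVᵀ, where UΣVᵀ is the SVD of BAᵀ. A minimal sketch of that classical step (illustrative names only, not the authors' implementation of the disentangler update):

```python
import numpy as np

def orthogonal_procrustes(A, B):
    """Return the orthogonal Q minimizing ||Q A - B||_F.

    Closed-form solution: with B A^T = U S V^T (SVD), Q = U V^T.
    """
    U, _, Vt = np.linalg.svd(B @ A.T)
    return U @ Vt

# Sanity check: rotate A by a random orthogonal matrix and recover it.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 6))
Q_true, _ = np.linalg.qr(rng.standard_normal((4, 4)))
B = Q_true @ A

Q = orthogonal_procrustes(A, B)
print(np.allclose(Q @ A, B))  # True: the rotation is recovered exactly
```

In the paper's setting this closed-form update is reported to be applied iteratively to find rank-lowering disentanglers; the sketch above shows only the single-SVD core of one such step.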

Key words: Tensors, Tensor train, Tucker decomposition, HOSVD, MERA, Disentangler
