Learning with tensors: a framework based on convex optimization and spectral regularization
From MaRDI portal
Publication: 2251466
DOI: 10.1007/s10994-013-5366-3
zbMath: 1319.68191
OpenAlex: W2030628896
MaRDI QID: Q2251466
Johan A. K. Suykens, Lieven De Lathauwer, Marco Signoretto, Quoc Tran Dinh
Publication date: 14 July 2014
Published in: Machine Learning
Full work available at URL: https://doi.org/10.1007/s10994-013-5366-3
Keywords: multi-task learning; Tucker decomposition; spectral regularization; multilinear rank; matrix and tensor completion; transductive and inductive learning
Mathematics Subject Classification:
- Classification and discrimination; cluster analysis (statistical aspects) (62H30)
- Convex programming (90C25)
- Learning and adaptive systems in artificial intelligence (68T05)
Related Items
- Variants of Alternating Least Squares Tensor Completion in the Tensor Train Format
- Minimum \(n\)-rank approximation via iterative hard thresholding
- Rank-1 Tensor Properties with Applications to a Class of Tensor Optimization Problems
- Quantum machine learning: a classical perspective
- Parallel active subspace decomposition for tensor robust principal component analysis
- Low Tucker rank tensor recovery via ADMM based on exact and inexact iteratively reweighted algorithms
- Low-rank tensor completion by Riemannian optimization
- Half-quadratic alternating direction method of multipliers for robust orthogonal tensor approximation
- Tensor completion by multi-rank via unitary transformation
- Low-rank tensor methods for partial differential equations
- Learning with optimal interpolation norms
- Auto-weighted robust low-rank tensor completion via tensor-train
- A Corrected Tensor Nuclear Norm Minimization Method for Noisy Low-Rank Tensor Completion
- Characterization of sampling patterns for low-tt-rank tensor retrieval
- A randomized singular value decomposition for third-order oriented tensors
- Iterative \(p\)-shrinkage thresholding algorithm for low Tucker rank tensor recovery
- Theoretical and Experimental Analyses of Tensor-Based Regression and Classification
- Nonconvex optimization for robust tensor completion from grossly sparse observations
- Geometric Methods on Low-Rank Matrix and Tensor Manifolds
- Convex Coupled Matrix and Tensor Completion
- Least square support tensor regression machine based on submatrix of the tensor
- Tensor completion based on triple tubal nuclear norm
- Fundamental conditions on the sampling pattern for union of low-rank subspaces retrieval
- Riemannian Optimization for High-Dimensional Tensor Completion
- Informative goodness-of-fit for multivariate distributions
- Incremental CP Tensor Decomposition by Alternating Minimization Method
- A Splitting Augmented Lagrangian Method for Low Multilinear-Rank Tensor Recovery
- Stable ALS approximation in the TT-format for rank-adaptive tensor completion
- Nonlocal robust tensor recovery with nonconvex regularization
- Tensor Manifold with Tucker Rank Constraints
Uses Software
Cites Work
- Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
- Tensor Decompositions and Applications
- Solving semidefinite-quadratic-linear programs using SDPT3
- An Analysis of the Total Least Squares Problem
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- A kernel-based framework to tensorial data analysis
- Fixed point and Bregman iterative methods for matrix rank minimization
- Partial inverse of a monotone operator
- Nuclear-norm penalization and optimal rates for noisy low-rank matrix completion
- Convex functions, monotone operators and differentiability.
- Problems of efficient application and development of a network under capital investments and credits
- Convex multi-task feature learning
- The composite absolute penalties family for grouped and hierarchical variable selection
- Monotone (nonlinear) operators in Hilbert space
- On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators
- On the optimality of the simple Bayesian classifier under zero-one loss
- Introductory lectures on convex optimization. A basic course.
- Templates for convex cone problems with applications to sparse signal recovery
- On the maximal monotonicity of subdifferential mappings
- Handling missing values in support vector machine classifiers
- Shannon sampling. II: Connections to learning theory
- Exact matrix completion via convex optimization
- Atomic Decomposition by Basis Pursuit
- Proximal Splitting Methods in Signal Processing
- Robust principal component analysis?
- A Singular Value Thresholding Algorithm for Matrix Completion
- Tensor completion and low-n-rank tensor recovery via convex optimization
- Tensor rank is NP-complete
- On the Numerical Solution of Heat Conduction Problems in Two and Three Space Variables
- A proximal decomposition method for solving convex variational inverse problems
- Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization
- Splitting Algorithms for the Sum of Two Nonlinear Operators
- On the Goldstein-Levitin-Polyak gradient projection method
- Monotone Operators and the Proximal Point Algorithm
- Using SeDuMi 1.02, A Matlab toolbox for optimization over symmetric cones
- A Multilinear Singular Value Decomposition
- Semidefinite Programming
- Most Tensor Problems Are NP-Hard
- Applied Multiway Data Analysis
- Ridge Regression: Biased Estimation for Nonorthogonal Problems
- Theory of Reproducing Kernels
- Convex analysis and monotone operator theory in Hilbert spaces
- Compressed sensing