Learning Markov Models Via Low-Rank Optimization
Publication:5106374
DOI: 10.1287/opre.2021.2115 · zbMath: 1500.90083 · arXiv: 1907.00113 · OpenAlex: W3217025848 · MaRDI QID: Q5106374
Anru Zhang, Mengdi Wang, Ziwei Zhu, Xudong Li
Publication date: 19 September 2022
Published in: Operations Research
Full work available at URL: https://arxiv.org/abs/1907.00113
Mathematics Subject Classification: Nonconvex programming, global optimization (90C26); Markov and semi-Markov decision processes (90C40)
Related Items (1)
Cites Work
- Fast Algorithms for Large-Scale Generalized Distance Weighted Discrimination
- Exact penalty and error bounds in DC programming
- A spectral algorithm for learning hidden Markov models
- A partial proximal point algorithm for nuclear norm regularized matrix least squares problems
- An efficient inexact symmetric Gauss-Seidel based majorized ADMM for high-dimensional convex composite conic programming
- Estimation of (near) low-rank matrices with noise and high-dimensional scaling
- Freedman's inequality for matrix martingales
- Nuclear-norm penalization and optimal rates for noisy low-rank matrix completion
- Eigenvalue bounds on convergence to stationarity for nonreversible Markov chains, with an application to the exclusion process
- On matrix approximation problems with Ky Fan \(k\) norms
- Convex analysis approach to d. c. programming: Theory, algorithms and applications
- A proximal difference-of-convex algorithm with extrapolation
- Semidefinite programming approach for the quadratic assignment problem with a sparse graph
- DC programming and DCA: thirty years of developments
- The DC (Difference of convex functions) programming and DCA revisited with DC models of real world nonconvex optimization problems
- Concentration inequalities for Markov chains by Marton couplings and spectral methods
- Concentration inequalities for dependent random variables via the martingale method
- Bounding \(\bar d\)-distance by informational divergence: A method to prove measure concentration
- A Majorized ADMM with Indefinite Proximal Terms for Linearly Constrained Convex Composite Optimization
- Tensor decompositions for learning latent variable models
- Minimax Estimation of Discrete Distributions Under \(\ell_1\) Loss
- Minimax Optimal Rates for Poisson Inverse Problems With Physical Constraints
- Diffusion Maps, Reduction Coordinates, and Low Dimensional Representation of Stochastic Systems
- The Problem of Estimation
- Markov Chains
- Exact and ordinary lumpability in finite Markov chains
- Another Look at Distance-Weighted Discrimination
- Poisson Matrix Recovery and Completion
- Spectral State Compression of Markov Processes
- Matrix Completion From a Few Entries
- Optimal Kullback-Leibler Aggregation via Spectral Theory of Markov Chains
- Regularized M-estimators with nonconvexity: Statistical and algorithmic theory for local optima
- The Spacey Random Walk: A Stochastic Process for Higher-Order Data
- Rank Centrality: Ranking from Pairwise Comparisons
- The direct extension of ADMM for multi-block convex minimization problems is not necessarily convergent
- A Schur complement based semi-proximal ADMM for convex quadratic conic programming and extensions