The following pages link to Yuejie Chi (Q1990965):
Displayed 47 items.
- Stable separation and super-resolution of mixture models (Q1990966) (← links)
- Subspace estimation from unbalanced and incomplete data matrices: \({\ell_{2,\infty}}\) statistical guarantees (Q2039795) (← links)
- Analytical convergence regions of accelerated gradient descent in nonconvex optimization under regularity condition (Q2173914) (← links)
- Implicit regularization in nonconvex statistical estimation: gradient descent converges linearly for phase retrieval, matrix completion, and blind deconvolution (Q2189396) (← links)
- Gradient descent with random initialization: fast global convergence for nonconvex phase retrieval (Q2425162) (← links)
- Exact and Stable Covariance Estimation From Quadratic Sampling via Convex Programming (Q2977389) (← links)
- Robust Spectral Compressed Sensing via Structured Matrix Completion (Q2986119) (← links)
- Median-Truncated Gradient Descent: A Robust and Scalable Nonconvex Approach for Signal Estimation (Q3296183) (← links)
- Compressed Sensing, Sparse Inversion, and Model Mismatch (Q3460830) (← links)
- Median-Truncated Nonconvex Approach for Phase Retrieval With Outliers (Q4562321) (← links)
- Sensitivity to Basis Mismatch in Compressed Sensing (Q4572926) (← links)
- PETRELS: Parallel Subspace Estimation and Tracking by Recursive Least Squares From Partial Observations (Q4578846) (← links)
- Compressive Two-Dimensional Harmonic Retrieval via Atomic Norm Minimization (Q4579754) (← links)
- Off-the-Grid Line Spectrum Denoising and Estimation With Multiple Measurement Vectors (Q4618225) (← links)
- Low-Rank Positive Semidefinite Matrix Recovery From Corrupted Rank-One Measurements (Q4620545) (← links)
- Stochastic Approximation and Memory-Limited Subspace Tracking for Poisson Streaming Data (Q4621617) (← links)
- Subspace Learning From Bits (Q4621823) (← links)
- Quantized Spectral Compressed Sensing: Cramér–Rao Bounds and Recovery Algorithms (Q4622216) (← links)
- (Q4637073) (← links)
- Nonconvex Matrix Factorization From Rank-One Measurements (Q5001494) (← links)
- Manifold Gradient Descent Solves Multi-Channel Sparse Blind Deconvolution Provably and Efficiently (Q5001827) (← links)
- Non-convex low-rank matrix recovery with arbitrary outliers via median-truncated gradient descent (Q5006521) (← links)
- Spectral Methods for Data Science: A Statistical Perspective (Q5015834) (← links)
- Sample Complexity of Asynchronous Q-Learning: Sharper Analysis and Variance Reduction (Q5030298) (← links)
- DESTRESS: Computation-Optimal and Communication-Efficient Decentralized Nonconvex Finite-Sum Optimization (Q5095229) (← links)
- Guaranteed Recovery of One-Hidden-Layer Neural Networks via Cross Entropy (Q5102859) (← links)
- Convergence of Distributed Stochastic Variance Reduced Methods Without Sampling Extra Data (Q5102939) (← links)
- Beyond Procrustes: Balancing-Free Gradient Descent for Asymmetric Low-Rank Matrix Sensing (Q5103351) (← links)
- Low-Rank Matrix Recovery With Scaled Subgradient Methods: Fast and Robust Convergence Without the Condition Number (Q5103515) (← links)
- Fast Global Convergence of Natural Policy Gradient Methods with Entropy Regularization (Q5106383) (← links)
- Noisy Matrix Completion: Understanding Statistical Guarantees for Convex Relaxation via Nonconvex Optimization (Q5131966) (← links)
- On the Stable Resolution Limit of Total Variation Regularization for Spike Deconvolution (Q5138889) (← links)
- (Q5149230) (← links)
- (Q5159422) (← links)
- Nonconvex Optimization Meets Low-Rank Matrix Factorization: An Overview (Q5240484) (← links)
- Breaking the sample complexity barrier to regret-optimal model-free reinforcement learning (Q6039766) (← links)
- Softmax policy gradient methods can take exponential time to converge (Q6110457) (← links)
- Policy Mirror Descent for Regularized Reinforcement Learning: A Generalized Framework with Linear Convergence (Q6161312) (← links)
- Settling the sample complexity of model-based offline reinforcement learning (Q6192326) (← links)
- Breaking the Sample Size Barrier in Model-Based Reinforcement Learning with a Generative Model (Q6198737) (← links)
- Is Q-Learning Minimax Optimal? A Tight Sample Complexity Analysis (Q6198738) (← links)
- Coherence-Based Performance Guarantees of Orthogonal Matching Pursuit (Q6235969) (← links)
- Convergence of Distributed Stochastic Variance Reduced Methods without Sampling Extra Data (Q6319612) (← links)
- Communication-Efficient Distributed Optimization in Networks with Gradient Tracking and Variance Reduction (Q6325192) (← links)
- Accelerating Ill-Conditioned Low-Rank Matrix Estimation via Scaled Gradient Descent (Q6340951) (← links)
- Low-Rank Matrix Recovery with Scaled Subgradient Methods: Fast and Robust Convergence Without the Condition Number (Q6352228) (← links)
- Fast and Provable Tensor Robust Principal Component Analysis via Scaled Gradient Descent (Q6402443) (← links)