High-dimensional model recovery from random sketched data by exploring intrinsic sparsity (Q782446)

Property / describes a project that uses: Saga
Property / MaRDI profile type: MaRDI publication profile
Property / full work available at URL: https://doi.org/10.1007/s10994-019-05865-4
Property / OpenAlex ID: W3000071498
Property / Wikidata QID: Q126399604
Property / cites work: Database-friendly random projections: Johnson-Lindenstrauss with binary coins
Property / cites work: Approximate nearest neighbors and the fast Johnson-Lindenstrauss transform
Property / cites work: The fast Johnson-Lindenstrauss transform and approximate nearest neighbors
Property / cites work: Algorithmic learning theory
Property / cites work: Simultaneous analysis of Lasso and Dantzig selector
Property / cites work: Improved matrix algorithms via the subsampled randomized Hadamard transform
Property / cites work: Q4633910
Property / cites work: Q4821526
Property / cites work: The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\) (with discussions and rejoinder)
Property / cites work: A sparse Johnson-Lindenstrauss transform
Property / cites work: An elementary proof of a theorem of Johnson and Lindenstrauss
Property / cites work: Sampling algorithms for \(l_2\) regression and applications
Property / cites work: Relative-error CUR matrix decompositions
Property / cites work: Faster least squares approximation
Property / cites work: Gene selection for cancer classification using support vector machines
Property / cites work: Finding structure with randomness: probabilistic algorithms for constructing approximate matrix decompositions
Property / cites work: Preconditioning the Lasso for sign consistency
Property / cites work: Extensions of Lipschitz mappings into a Hilbert space
Property / cites work: Sparser Johnson-Lindenstrauss transforms
Property / cites work: Oracle inequalities in empirical risk minimization and sparse recovery problems. École d'Été de Probabilités de Saint-Flour XXXVIII-2008
Property / cites work: Q5502137
Property / cites work: Randomized algorithms for matrices and data
Property / cites work: CUR matrix decompositions for improved data analysis
Property / cites work: Learning to decode cognitive states from brain images
Property / cites work: "Preconditioning" for feature selection and regression in high-dimensional problems
Property / cites work: Randomized sketches of convex programs with sharp guarantees
Property / cites work: Iterative Hessian sketch: fast and accurate solution approximation for constrained least-squares
Property / cites work: One-bit compressed sensing by linear programming
Property / cites work: Image classification with the Fisher vector: theory and practice
Property / cites work: Stochastic dual coordinate ascent methods for regularized loss minimization
Property / cites work: Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization
Property / cites work: Q2880988
Property / cites work: Q4864293
Property / cites work: Improved analysis of the subsampled randomized Hadamard transform
Property / cites work: Information-theoretic limits on sparsity recovery in the high-dimensional and noisy setting
Property / cites work: A proximal-gradient homotopy method for the sparse least-squares problem
Property / cites work: A proximal stochastic gradient method with progressive variance reduction
Property / cites work: Random projections for classification: a recovery approach
Property / cites work: Q3174050
Property / cites work: Regularization and variable selection via the elastic net

Language: English
Label: High-dimensional model recovery from random sketched data by exploring intrinsic sparsity
Description: scientific article

    Statements

    High-dimensional model recovery from random sketched data by exploring intrinsic sparsity (English)
    27 July 2020
    This paper proposes randomized reduction methods for solving large-scale, high-dimensional machine learning problems; these methods can greatly speed up the modeling process by reducing either the dimensionality or the scale of the data. The authors show theoretically that the developed methods recover, to high accuracy, the optimal models that would be built from the original data. This model recovery exploits the intrinsic sparsity of optimal solutions and does not rely on stringent assumptions. Empirical results supporting both the methods and the theory are also included.
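    To make the recipe above concrete, here is a minimal, self-contained sketch (an illustration, not the paper's algorithm): compress the data with a random Johnson-Lindenstrauss projection, then fit an \(\ell_1\)-regularized model on the compressed data to exploit the intrinsic sparsity of the optimal solution. All problem sizes, the regularization weight lam, and the iteration count are hypothetical choices made for this example.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic data whose optimal model is intrinsically sparse:
    # n samples, d features, only s nonzero true coefficients.
    n, d, s, m = 2000, 100, 5, 200
    X = rng.standard_normal((n, d))
    w_true = np.zeros(d)
    w_true[rng.choice(d, size=s, replace=False)] = 1.0
    y = X @ w_true + 0.1 * rng.standard_normal(n)

    # Randomized reduction of the data scale: a Gaussian JL sketch maps
    # the n samples down to m << n rows while approximately preserving
    # the least-squares objective.
    S = rng.standard_normal((m, n)) / np.sqrt(m)
    X_sk, y_sk = S @ X, S @ y

    # Recover a sparse model from the sketched data via l1-regularized
    # least squares, solved with proximal gradient descent (ISTA);
    # lam and the iteration count are hypothetical, not from the paper.
    lam = 0.05
    step = n / np.linalg.norm(X_sk, 2) ** 2  # 1 / Lipschitz constant of the smooth part
    w = np.zeros(d)
    for _ in range(1000):
        grad = X_sk.T @ (X_sk @ w - y_sk) / n
        z = w - step * grad
        w = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-thresholding

    print("recovered support:", np.sort(np.nonzero(w)[0]))
    print("true support:     ", np.sort(np.nonzero(w_true)[0]))

    A structured transform, such as the subsampled randomized Hadamard transform cited above, could replace the dense Gaussian S to reduce the cost of forming the sketch from O(mnd) to roughly O(nd log n).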
    classification
    regression
    high dimension
    sparsity
    randomized reduction
    JL-transform