Conducting sparse feature selection on arbitrarily long phrases in text corpora with a focus on interpretability
Publication: 4970223
DOI: 10.1002/sam.11323
OpenAlex: W2964089330
MaRDI QID: Q4970223
Robin Ackerman, Luke W. Miratrix
Publication date: 14 October 2020
Published in: Statistical Analysis and Data Mining: The ASA Data Science Journal
Full work available at URL: https://arxiv.org/abs/1511.06798
Keywords: text mining, text classification, Lasso, regularized regression, text summarization, high-dimensional analysis, sparse classification, L2 normalization, concise comparative summarization, key-phrase extraction
Related Items (1)
Uses Software
Cites Work
- The Adaptive Lasso and Its Oracle Properties
- Applied Bayesian and classical inference. The case of the Federalist papers. 2nd ed. of: Inference and disputed authorship: The Federalist
- Model selection and prediction: Normal regression
- On the convergence of the coordinate descent method for convex differentiable minimization
- BoosTexter: A boosting-based system for text categorization
- Least angle regression. (With discussion)
- Support-vector networks
- Concise comparative summaries (CCS) of large text corpora with a human experiment
- Coordinate descent algorithms for lasso penalized regression
- Latent Dirichlet allocation (DOI: 10.1162/jmlr.2003.3.4-5.993)
- DOI: 10.1162/153244303322753670
- Safe Feature Elimination in Sparse Supervised Learning
- Regularization and Variable Selection Via the Elastic Net
- Who wrote Ronald Reagan's radio addresses?
- Multinomial Inverse Regression for Text Analysis