Penalized Maximum Tangent Likelihood Estimation and Robust Variable Selection
Publication: 65848
DOI: 10.48550/ARXIV.1708.05439
arXiv: 1708.05439
MaRDI QID: Q65848
Yan Yu, Yichen Qin, Yang Li, Shaobo Li
Publication date: 17 August 2017
Abstract: We introduce a new class of mean regression estimators -- penalized maximum tangent likelihood estimation -- for high-dimensional regression estimation and variable selection. We first explain the motivations for the key ingredient, maximum tangent likelihood estimation (MTE), and establish its asymptotic properties. We further propose a penalized MTE for variable selection and show that it is $\sqrt{n}$-consistent and enjoys the oracle property. The proposed class of estimators includes penalized $L_2$ distance, penalized exponential squared loss, penalized least trimmed squares, and penalized least squares as special cases, and can be regarded as a mixture of minimum Kullback-Leibler distance estimation and minimum $L_2$ distance estimation. Furthermore, we consider the proposed class of estimators in the high-dimensional setting, where the number of variables $p$ can grow exponentially with the sample size $n$, and show that the entire class of estimators (including the aforementioned special cases) can achieve the optimal rate of convergence of order $\sqrt{\log p / n}$. Finally, simulation studies and real data analysis demonstrate the advantages of the penalized MTE.
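A minimal sketch of the estimator's general form as suggested by the abstract; the error density $\varphi$, tangent-likelihood threshold $t$, and penalty function $p_\lambda$ are assumed notation and are not given on this record page:
\[
\hat{\beta} \;=\; \arg\max_{\beta \in \mathbb{R}^p} \; \frac{1}{n}\sum_{i=1}^{n} \log_t\!\bigl\{\varphi(y_i - x_i^\top \beta)\bigr\} \;-\; \sum_{j=1}^{p} p_\lambda\bigl(|\beta_j|\bigr),
\]
where $\log_t(\cdot)$ agrees with the ordinary logarithm above the threshold $t$ and is replaced by its tangent-line approximation below $t$, so that observations with very small likelihood contributions (outliers) exert bounded influence; letting $t \to 0$ would recover ordinary penalized maximum likelihood (penalized least squares under a normal $\varphi$).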
Cited In (1)