A non-convex regularization approach for stable estimation of loss development factors
From MaRDI portal
Publication: 5014498
DOI: 10.1080/03461238.2021.1882550
zbMath: 1479.91329
arXiv: 2004.08032
OpenAlex: W3131433241
MaRDI QID: Q5014498
Hyunwoong Chang, Himchan Jeong, Emiliano A. Valdez
Publication date: 8 December 2021
Published in: Scandinavian Actuarial Journal
Full work available at URL: https://arxiv.org/abs/2004.08032
Keywords: variable selection; robust estimation; loss development; non-convex penalization; insurance reserving; log-adjusted absolute deviation (LAAD) penalty
Related Items
- Loss amount prediction from textual data using a double GLM with shrinkage and selection
- Mixture Composite Regression Models with Multi-type Feature Selection
Uses Software
- leaps

Cites Work
- Nearly unbiased variable selection under minimax concave penalty
- Estimation of the mean of a multivariate normal distribution
- On the convergence of the coordinate descent method for convex differentiable minimization
- Asymptotics for Lasso-type estimators.
- A dependent frequency-severity approach to modeling longitudinal insurance claims
- On the "degrees of freedom" of the lasso
- Penalized regression, standard errors, and Bayesian Lassos
- Sparse regression with multi-type regularized feature modeling
- The Standard Error of Chain Ladder Reserve Estimates: Recursive Calculation and Inclusion of a Tail Factor
- SparseNet: Coordinate Descent With Nonconvex Penalties
- From Sparse Solutions of Systems of Equations to Sparse Modeling of Signals and Images
- The Bayesian Lasso
- How Biased is the Apparent Error Rate of a Prediction Rule?
- Properties and modifications of Whittaker-Henderson graduation
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- EFFICIENT ESTIMATION OF ERLANG MIXTURES USING iSCAD PENALTY WITH INSURANCE APPLICATION
- Generalized double Pareto shrinkage
- An Introduction to Statistical Learning
- TESTING FOR RANDOM EFFECTS IN COMPOUND RISK MODELS VIA BREGMAN DIVERGENCE
- A Bayesian Log-Normal Model for Multivariate Loss Reserving
- Ridge Regression: Biased Estimation for Nonorthogonal Problems
- Convergence of a block coordinate descent method for nondifferentiable minimization