A new Bayesian approach to robustness against outliers in linear regression

From MaRDI portal
Publication:2226686

DOI: 10.1214/19-BA1157 · zbMATH Open: 1459.62133 · arXiv: 1612.06198 · OpenAlex: W2896249633 · Wikidata: Q127822604 · Scholia: Q127822604 · MaRDI QID: Q2226686 · FDO: Q2226686


Authors: Philippe Gagnon, Alain Desgagné, M. Bédard


Publication date: 9 February 2021

Published in: Bayesian Analysis

Abstract: Linear regression is ubiquitous in statistical analysis. It is well understood that conflicting sources of information may contaminate the inference when the classical normality of errors is assumed. The contamination caused by the light normal tails arises from an undesirable effect: the posterior concentrates in an area between the different sources, with a scaling large enough to incorporate them all. The theory of conflict resolution in Bayesian statistics (O'Hagan and Pericchi (2012)) recommends addressing this problem by limiting the impact of outliers to obtain conclusions consistent with the bulk of the data. In this paper, we propose a model with super heavy-tailed errors to achieve this. We prove that it is wholly robust, meaning that the impact of outliers gradually vanishes as they move further and further away from the general trend. The super heavy-tailed density is similar to the normal outside of the tails, which gives rise to an efficient estimation procedure. In addition, estimates are easily computed. This is highlighted via a detailed user guide, where all steps are explained through a simulated case study. The performance is shown using simulations. All required code is given.
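The abstract's central idea — replacing the normal error density with a heavy-tailed one so that outliers lose their influence on the fit — can be illustrated with a minimal sketch. The paper's specific super heavy-tailed density is not reproduced here; a Student-t likelihood serves as a generic heavy-tailed stand-in, and the data, degrees of freedom, and starting values below are invented for the demonstration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm, t as student_t

# Toy data: y = 2 + 0.5 x plus noise, with one gross outlier appended.
rng = np.random.default_rng(0)
n = 50
x = np.linspace(0.0, 10.0, n)
y = 2.0 + 0.5 * x + rng.normal(scale=0.5, size=n)
y[-1] += 30.0  # single outlier far from the general trend

def neg_loglik(params, logpdf):
    """Negative log-likelihood for y = a + b x with errors ~ s * density."""
    a, b, log_s = params
    resid = y - (a + b * x)
    s = np.exp(log_s)  # log-parameterised scale keeps s > 0
    return -np.sum(logpdf(resid / s)) + n * log_s

# Normal errors: equivalent to least squares, pulled by the outlier.
fit_norm = minimize(neg_loglik, x0=[0.0, 0.0, 0.0], args=(norm.logpdf,))

# Heavy-tailed errors (Student-t, df=2 chosen for illustration):
# large residuals are downweighted, so the outlier barely matters.
fit_t = minimize(neg_loglik, x0=[0.0, 0.0, 0.0],
                 args=(lambda r: student_t.logpdf(r, df=2),))

print("normal-error slope:", fit_norm.x[1])
print("heavy-tail slope:  ", fit_t.x[1])
```

On this toy data the normal-error slope is noticeably pulled toward the outlier, while the heavy-tailed fit stays close to the true slope of 0.5 — the qualitative behaviour (not the exact robustness guarantee) that the paper establishes for its super heavy-tailed model.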


Full work available at URL: https://arxiv.org/abs/1612.06198










Cited In (16)





This page was built for publication: A new Bayesian approach to robustness against outliers in linear regression
