Multiple linear regressions by maximizing the likelihood under assumption of generalized Gauss-Laplace distribution of the error (Q2013958)
From MaRDI portal
Cited works:
- Inside of the linear relation between dependent and independent variables
- Gauss and the invention of least squares
- On the existence and uniqueness of the maximum likelihood estimate of a vector-valued parameter in fixed-size samples
- Characterization of the \(p\)-generalized normal distribution
- Q3412466
- Efficient estimation of Banach parameters in semiparametric models
Revision as of 05:28, 14 July 2024
Language | Label | Description | Also known as
---|---|---|---
English | Multiple linear regressions by maximizing the likelihood under assumption of generalized Gauss-Laplace distribution of the error | scientific article |
Statements
Multiple linear regressions by maximizing the likelihood under assumption of generalized Gauss-Laplace distribution of the error (English)
10 August 2017
Summary: Multiple linear regression analysis is widely used to link an outcome with predictors for a better understanding of the behaviour of the outcome of interest. Usually, under the assumption that the errors follow a normal distribution, the coefficients of the model are estimated by minimizing the sum of squared deviations. A new approach based on maximum likelihood estimation is proposed for finding the coefficients of linear models with two predictors without any restrictive assumptions on the distribution of the errors. The algorithm was developed, implemented, and tested as proof of concept on fourteen sets of compounds by investigating the link between activity/property (as outcome) and structural feature information incorporated in molecular descriptors (as predictors). The results on real data demonstrated that, in all investigated cases, the power of the error is significantly different from the conventional value of two when the Gauss-Laplace distribution is used to relax the restrictive assumption of normally distributed errors. Therefore, the Gauss-Laplace distribution of the error could not be rejected, while the hypothesis that the power of the error from the Gauss-Laplace distribution is normally distributed also failed to be rejected.
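The estimation approach described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the generalized Gauss-Laplace (exponential power) density \(f(e) = \frac{p}{2\alpha\,\Gamma(1/p)} \exp\big(-(|e|/\alpha)^p\big)\) and jointly maximizes the likelihood over the regression coefficients, the scale \(\alpha\), and the power \(p\); the simulated data, starting values, and optimizer choice are all assumptions made for the sketch.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

# Simulated example: two predictors plus intercept, Laplace (p = 1) errors,
# so the fitted power should land near 1 rather than the Gaussian value 2.
rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + rng.laplace(scale=0.3, size=n)

def neg_log_lik(theta, X, y):
    """Negative log-likelihood under the generalized Gauss-Laplace density.

    theta packs the coefficients plus log(alpha) and log(p); the log
    parametrization keeps scale and power positive during optimization.
    """
    beta, log_alpha, log_p = theta[:-2], theta[-2], theta[-1]
    alpha, p = np.exp(log_alpha), np.exp(log_p)
    r = y - X @ beta
    # log f(e) = -log(2*alpha) - gammaln(1 + 1/p) - (|e|/alpha)^p,
    # using Gamma(1 + 1/p) = Gamma(1/p)/p for the normalizing constant.
    return (len(y) * (np.log(2.0 * alpha) + gammaln(1.0 + 1.0 / p))
            + np.sum((np.abs(r) / alpha) ** p))

# Start from the ordinary least-squares fit with p = 2 (the Gaussian case).
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
theta0 = np.concatenate([beta_ols, [0.0, np.log(2.0)]])

res = minimize(neg_log_lik, theta0, args=(X, y), method="Nelder-Mead",
               options={"maxiter": 10000, "xatol": 1e-8, "fatol": 1e-8})
beta_hat = res.x[:3]
alpha_hat, p_hat = np.exp(res.x[-2]), np.exp(res.x[-1])
print("coefficients:", beta_hat)
print("estimated power p:", p_hat)
```

Because the simulated errors are Laplace, the estimated power should come out well below two, illustrating how the fitted power itself flags a departure from the normal-error assumption.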
multiple linear regression
Gauss-Laplace distribution