Exploring the impact of post-training rounding in regression models.
From MaRDI portal
Publication: 6584366
DOI: 10.21136/AM.2024.0090-23
MaRDI QID: Q6584366
FDO: Q6584366
Authors: Jan Kalina
Publication date: 7 August 2024
Published in: Applications of Mathematics
Recommendations
- Universal sieve-based strategies for efficient estimation using machine learning tools
- Analysis of rounded data from dependent sequences
- Topological Regularization via Persistence-Sensitive Optimization
- Inverse problem approach to regularized regression models with application to predicting recovery after stroke
- Post-training Quantization for Neural Networks with Provable Guarantees
MSC classifications
- Estimation in multivariate analysis (62H12)
- Neural nets and related approaches to inference from stochastic processes (62M45)
- Probability in computer science (algorithm analysis, random structures, phase transitions, etc.) (68Q87)
Cites Work
- Robust nonlinear regression: with applications using R
- Foundations of linear and generalized linear models
- Measurement Error in Nonlinear Models
- Title not available
- System identification using kernel-based regularization: new insights on stability and consistency issues
- R-estimation of the parameters of a multiple regression model with measurement errors
- Consistency of the least weighted squares under heteroscedasticity
- Robust and sparse estimators for linear regression models
- Title not available
- Multidimensional sum-up rounding for integer programming in optimal experimental design
- Computational Methods for Deep Learning