Exploring the impact of post-training rounding in regression models.
Publication:6584366
Recommendations
- Universal sieve-based strategies for efficient estimation using machine learning tools
- Analysis of rounded data from dependent sequences
- Topological Regularization via Persistence-Sensitive Optimization
- Inverse problem approach to regularized regression models with application to predicting recovery after stroke
- Post-training Quantization for Neural Networks with Provable Guarantees
Cites work
- scientific article; zbMATH DE number 7626756 (title unavailable)
- scientific article; zbMATH DE number 6438182 (title unavailable)
- Computational Methods for Deep Learning
- Consistency of the least weighted squares under heteroscedasticity
- Foundations of linear and generalized linear models
- Measurement Error in Nonlinear Models
- Multidimensional sum-up rounding for integer programming in optimal experimental design
- R-estimation of the parameters of a multiple regression model with measurement errors
- Robust and sparse estimators for linear regression models
- Robust nonlinear regression: with applications using R
- System identification using kernel-based regularization: new insights on stability and consistency issues
This page was built for publication: Exploring the impact of post-training rounding in regression models.