Pages that link to "Item:Q5218544"
From MaRDI portal
The following pages link to Reconciling modern machine-learning practice and the classical bias–variance trade-off (Q5218544):
Displaying 50 items.
- Machine learning from a continuous viewpoint. I (Q829085)
- Over-parametrized deep neural networks minimizing the empirical risk do not generalize well (Q1983625)
- A generic physics-informed neural network-based constitutive model for soft biological tissues (Q2021025)
- A selective overview of deep learning (Q2038303)
- Linearized two-layers neural networks in high dimension (Q2039801)
- High-dimensional dynamics of generalization error in neural networks (Q2057778)
- Dimension independent excess risk by stochastic gradient descent (Q2084455)
- Precise statistical analysis of classification accuracies for adversarial training (Q2091832)
- On the robustness of minimum norm interpolators and regularized empirical risk minimizers (Q2091842)
- AdaBoost and robust one-bit compressed sensing (Q2102435)
- Bayesian learning via neural Schrödinger-Föllmer flows (Q2104005)
- Understanding neural networks with reproducing kernel Banach spaces (Q2105111)
- The interpolation phase transition in neural networks: memorization and generalization under lazy training (Q2105197)
- A sieve stochastic gradient descent estimator for online nonparametric regression in Sobolev ellipsoids (Q2105198)
- Deep learning for inverse problems. Abstracts from the workshop held March 7--13, 2021 (hybrid meeting) (Q2131206)
- Surprises in high-dimensional ridgeless least squares interpolation (Q2131262)
- Counterfactual inference with latent variable and its application in mental health care (Q2134063)
- Generalization error of random feature and kernel methods: hypercontractivity and kernel matrix concentration (Q2134105)
- Loss landscapes and optimization in over-parameterized non-linear systems and neural networks (Q2134108)
- Neural network training using \(\ell_1\)-regularization and bi-fidelity data (Q2138992)
- A precise high-dimensional asymptotic theory for boosting and minimum-\(\ell_1\)-norm interpolated classifiers (Q2148995)
- Scientific machine learning through physics-informed neural networks: where we are and what's next (Q2162315)
- Discussion of: ``Nonparametric regression using deep neural networks with ReLU activation function'' (Q2215716)
- Optimization for deep learning: an overview (Q2218095)
- Landscape and training regimes in deep learning (Q2231925)
- A statistician teaches deep learning (Q2241468)
- Free dynamics of feature learning processes (Q2679634)
- Learning algebraic models of quantum entanglement (Q2681640)
- On the influence of over-parameterization in manifold based surrogates and deep neural operators (Q2687573)
- On the properties of bias-variance decomposition for kNN regression (Q2700517)
- The Random Feature Model for Input-Output Maps between Banach Spaces (Q3382802)
- (Q4999109)
- Generalization Error of Minimum Weighted Norm and Kernel Interpolation (Q4999364)
- Implicit Regularization and Momentum Algorithms in Nonlinearly Parameterized Adaptive Control and Prediction (Q5004318)
- (Q5011560)
- A random matrix analysis of random Fourier features: beyond the Gaussian kernel, a precise phase transition, and the corresponding double descent* (Q5020045)
- Generalisation error in learning with random features and the hidden manifold model* (Q5020057)
- Two Models of Double Descent for Weak Features (Q5027013)
- (Q5053192)
- (Q5053228)
- (Q5053282)
- (Q5054595)
- Learning curves of generic features maps for realistic datasets with a teacher-student model* (Q5055409)
- Deep networks on toroids: removing symmetries reveals the structure of flat regions in the landscape geometry* (Q5055419)
- Dimensionality Reduction, Regularization, and Generalization in Overparameterized Regressions (Q5065466)
- Binary Classification of Gaussian Mixtures: Abundance of Support Vectors, Benign Overfitting, and Regularization (Q5065474)
- Prevalence of neural collapse during the terminal phase of deep learning training (Q5073172)
- Overparameterized neural networks implement associative memory (Q5073192)
- Benign overfitting in linear regression (Q5073215)
- The inverse variance–flatness relation in stochastic gradient descent is critical for finding flat minima (Q5073270)