Pages that link to "Item:Q4689165"
From MaRDI portal
The following pages link to Optimal Adaptation for Early Stopping in Statistical Inverse Problems (Q4689165):
Displaying 16 items.
- Beyond the Bakushinskii veto: regularising linear inverse problems without knowing the noise distribution (Q777510)
- Early stopping for statistical inverse problems via truncated SVD estimation (Q1616307)
- Towards adaptivity via a new discrepancy principle for Poisson inverse problems (Q2044369)
- Nonparametric estimation of accelerated failure-time models with unobservable confounders and random censoring (Q2074294)
- A probabilistic oracle inequality and quantification of uncertainty of a modified discrepancy principle for statistical inverse problems (Q2153955)
- Empirical risk minimization as parameter choice rule for general linear regularization methods (Q2179243)
- Smoothed residual stopping for statistical inverse problems via truncated SVD estimation (Q2209816)
- Adaptivity and Oracle Inequalities in Linear Statistical Inverse Problems: A (Numerical) Survey (Q4554185)
- (Q4998979)
- Optimal Convergence of the Discrepancy Principle for Polynomially and Exponentially Ill-Posed Operators under White Noise (Q5073867)
- A note on confidence intervals for deblurred images (Q5106723)
- Regularization parameter selection in indirect regression by residual based bootstrap (Q5134476)
- On the Asymptotical Regularization for Linear Inverse Problems in Presence of White Noise (Q5149777)
- Mini-workshop: Mathematical foundations of robust and generalizable learning. Abstracts from the mini-workshop held October 2--8, 2022 (Q6095403)
- Weighted discrepancy principle and optimal adaptivity in Poisson inverse problems (Q6634800)
- Statistics and learning theory in the era of artificial intelligence. Abstracts from the workshop held June 23--28, 2024 (Q6671627)