Pages that link to "Item:Q1884602"
From MaRDI portal
The following pages link to "On the Bayes-risk consistency of regularized boosting methods" (Q1884602):
Displaying 50 items.
- Universally consistent vertex classification for latent positions graphs (Q366983) (← links)
- Approximation by multivariate Bernstein-Durrmeyer operators and learning rates of least-squares regularized regression with multivariate polynomial kernels (Q390534) (← links)
- A simple extension of boosting for asymmetric mislabeled data (Q419240) (← links)
- Further results on the margin explanation of boosting: new algorithm and experiments (Q439827) (← links)
- Boosting algorithms: regularization, prediction and model fitting (Q449780) (← links)
- Cox process functional learning (Q500875) (← links)
- Learning rates for multi-kernel linear programming classifiers (Q537615) (← links)
- Density estimation by the penalized combinatorial method (Q558002) (← links)
- Component-wisely sparse boosting (Q743775) (← links)
- On the accuracy of cross-validation in the classification problem (Q832978) (← links)
- Best subset selection, persistence in high-dimensional statistical learning and optimization under \(l_1\) constraint (Q869974) (← links)
- Multi-kernel regularized classifiers (Q870343) (← links)
- Regularization in statistics (Q882931) (← links)
- Boosting and instability for regression trees (Q959181) (← links)
- New multicategory boosting algorithms based on multicategory Fisher-consistent losses (Q999662) (← links)
- On surrogate loss functions and \(f\)-divergences (Q1020983) (← links)
- Convergence rates of generalization errors for margin-based classification (Q1021988) (← links)
- On boosting kernel regression (Q1031760) (← links)
- Two-step sparse boosting for high-dimensional longitudinal data with varying coefficients (Q1615281) (← links)
- Bootstrap -- an exploration (Q1731214) (← links)
- Deformation of log-likelihood loss function for multiclass boosting (Q1784701) (← links)
- Population theory for boosting ensembles. (Q1884600) (← links)
- Statistical behavior and consistency of classification methods based on convex risk minimization. (Q1884603) (← links)
- Calibrated asymmetric surrogate losses (Q1950846) (← links)
- Classification with minimax fast rates for classes of Bayes rules with sparse representation (Q1951772) (← links)
- Random classification noise defeats all convex potential boosters (Q1959553) (← links)
- Multiclass classification, information, divergence and surrogate risk (Q1990579) (← links)
- SVM-boosting based on Markov resampling: theory and algorithm (Q2057733) (← links)
- Nonregular and minimax estimation of individualized thresholds in high dimension with binary responses (Q2091840) (← links)
- AdaBoost and robust one-bit compressed sensing (Q2102435) (← links)
- A precise high-dimensional asymptotic theory for boosting and minimum-\(\ell_1\)-norm interpolated classifiers (Q2148995) (← links)
- Aggregation of estimators and stochastic optimization (Q2197367) (← links)
- Double machine learning with gradient boosting and its application to the Big \(N\) audit quality effect (Q2305992) (← links)
- Local Rademacher complexities and oracle inequalities in risk minimization. (2004 IMS Medallion Lecture). (With discussions and rejoinder) (Q2373576) (← links)
- Fully online classification by regularization (Q2381648) (← links)
- Accelerated gradient boosting (Q2425242) (← links)
- Ranking and empirical minimization of \(U\)-statistics (Q2426626) (← links)
- Recursive aggregation of estimators by the mirror descent algorithm with averaging (Q2432961) (← links)
- Simultaneous adaptation to the margin and to complexity in classification (Q2456017) (← links)
- Optimal rates of aggregation in classification under low noise assumption (Q2469663) (← links)
- On the rate of convergence for multi-category classification based on convex losses (Q2475308) (← links)
- Learning gradients by a gradient descent algorithm (Q2480334) (← links)
- Boosting for high-dimensional linear models (Q2497175) (← links)
- A boosting method with asymmetric mislabeling probabilities which depend on covariates (Q2512782) (← links)
- Complexities of convex combinations and bounding the generalization error in classification (Q2583410) (← links)
- Boosting with early stopping: convergence and consistency (Q2583412) (← links)
- On the Optimality of Sample-Based Estimates of the Expectation of the Empirical Minimizer (Q3085585) (← links)
- Theory of Classification: a Survey of Some Recent Advances (Q3373749) (← links)
- (Q5214207) (← links)
- Optimization by Gradient Boosting (Q5870986) (← links)