Pages that link to "Item:Q5754926"
From MaRDI portal
The following pages link to Convexity, Classification, and Risk Bounds (Q5754926):
Displaying 50 items.
- Regularization of case-specific parameters for robustness and efficiency (Q252778)
- Tikhonov, Ivanov and Morozov regularization for support vector machine learning (Q285946)
- Classification with asymmetric label noise: consistency and maximal denoising (Q315419)
- Supervised classification and mathematical optimization (Q339559)
- Universally consistent vertex classification for latent positions graphs (Q366983)
- Rejoinder of "Dynamic treatment regimes: technical challenges and applications" (Q405352)
- Learning noisy linear classifiers via adaptive and selective sampling (Q413852)
- Linear classifiers are nearly optimal when hidden variables have diverse effects (Q420914)
- Statistical analysis of kernel-based least-squares density-ratio estimation (Q420923)
- Oracle properties of SCAD-penalized support vector machine (Q433741)
- Further results on the margin explanation of boosting: new algorithm and experiments (Q439827)
- Mirror averaging with sparsity priors (Q442083)
- Boosting algorithms: regularization, prediction and model fitting (Q449780)
- Margin-adaptive model selection in statistical learning (Q453298)
- Generalization ability of fractional polynomial models (Q461189)
- Learning rate of support vector machine for ranking (Q468458)
- Does modeling lead to more accurate classification? A study of relative efficiency in linear classification (Q476236)
- Optimal exponential bounds on the accuracy of classification (Q485316)
- Comment on "Hypothesis testing by convex optimization" (Q491382)
- Cox process functional learning (Q500875)
- Unregularized online learning algorithms with general loss functions (Q504379)
- Support vector machines based on convex risk functions and general norms (Q513637)
- Logistic classification with varying Gaussians (Q534984)
- Learning rates for multi-kernel linear programming classifiers (Q537615)
- On qualitative robustness of support vector machines (Q538179)
- Node harvest (Q542973)
- Classification with Gaussians and convex loss. II: Improving error bounds by noise conditions (Q547325)
- Performance guarantees for individualized treatment rules (Q548554)
- Approximation of frame based missing data recovery (Q550489)
- Estimating conditional quantiles with the help of the pinball loss (Q637098)
- Classification with non-i.i.d. sampling (Q652859)
- The risk of trivial solutions in bipartite top ranking (Q669314)
- The structured elastic net for quantile regression and support vector classification (Q746191)
- Soft-max boosting (Q747255)
- Angle-based cost-sensitive multicategory classification (Q830425)
- Aggregation via empirical risk minimization (Q842390)
- Tutorial series on brain-inspired computing. VI: Geometrical structure of boosting algorithm (Q857990)
- On regularization algorithms in learning theory (Q870339)
- Multi-kernel regularized classifiers (Q870343)
- A Fisher consistent multiclass loss function with variable margin on positive examples (Q887268)
- The C-loss function for pattern classification (Q898303)
- Multiway spectral clustering: a margin-based perspective (Q908150)
- Kernel methods in machine learning (Q930651)
- Parzen windows for multi-class classification (Q958247)
- Learning from dependent observations (Q958916)
- Regularized margin-based conditional log-likelihood loss for prototype learning (Q969081)
- Statistical inference of minimum BD estimators and classifiers for varying-dimensional models (Q972890)
- Fast rates for support vector machines using Gaussian kernels (Q995417)
- Fast learning rates for plug-in classifiers (Q995418)
- Robust learning from bites for data mining (Q1020821)