A decision-theoretic generalization of on-line learning and an application to boosting

From MaRDI portal
Publication:1370863

DOI: 10.1006/jcss.1997.1504
zbMath: 0880.68103
OpenAlex: W1988790447
Wikidata: Q56386811 (Scholia: Q56386811)
MaRDI QID: Q1370863

Yoav Freund, Robert E. Schapire

Publication date: 16 February 1998

Published in: Journal of Computer and System Sciences

Full work available at URL: https://semanticscholar.org/paper/4ba566223e426677d12a9a18418c023a4deec77e



Related Items

Comments on: ``A random forest guided tour'', Interpreting uninterpretable predictors: kernel methods, Shtarkov solutions, and random forests, AdaBoost Semiparametric Model Averaging Prediction for Multiple Categories, What are the Most Important Statistical Ideas of the Past 50 Years?, Deep learning: a statistical viewpoint, Fit without fear: remarkable mathematical phenomena of deep learning through the prism of interpolation, Robust ranking by ensembling of diverse models and assessment metrics, Transforming examples for multiclass boosting, A Tree-Based Semi-Varying Coefficient Model for the COM-Poisson Distribution, Properties of Bagged Nearest Neighbour Classifiers, CatBoost — An Ensemble Machine Learning Model for Prediction and Classification of Student Academic Performance, Contrast trees and distribution boosting, A new boosting-based software reliability growth model, Online Learning with (Multiple) Kernels: A Review, Feel-Good Thompson Sampling for Contextual Bandits and Reinforcement Learning, Online Prediction with History-Dependent Experts: The General Case, Improved algorithms for bandit with graph feedback via regret decomposition, Estimating propensity scores using neural networks and traditional methods: a comparative simulation study, Boosted-oriented probabilistic smoothing-spline clustering of series, Fully corrective gradient boosting with squared hinge: fast learning rates and early stopping, Online Metric Algorithms with Untrusted Predictions, Estimating Tree-Based Dynamic Treatment Regimes Using Observational Data with Restricted Treatment Sequences, The improved AdaBoost algorithms for imbalanced data classification, Explainable subgradient tree boosting for prescriptive analytics in operations management, Dynamic Resource Allocation in the Cloud with Near-Optimal Efficiency, Random forest pruning techniques: a recent review, A systematic literature review on the use of machine learning 
in code clone research, Is there a role for statistics in artificial intelligence?, Machine truth serum: a surprisingly popular approach to improving ensemble methods, Tests and classification methods in adaptive designs with applications, Re-sampling of multi-class imbalanced data using belief function theory and ensemble learning, Model averaging for support vector classifier by cross-validation, Adaptive robust adaboost-based twin support vector machine with universum data, Two efficient selection methods for high-dimensional CD-CAT utilizing max-marginals factor from MAP query and ensemble learning approach, FAC-fed: federated adaptation for fairness and concept drift aware stream classification, Universal regression with adversarial responses, A new taxonomy of global optimization algorithms, Geometry of EM and related iterative algorithms, AdaBoost-based transfer learning with privileged information, Merging components in linear Gaussian cluster-weighted models, Estimation and inference of treatment effects with \(L_2\)-boosting in high-dimensional settings, Model averaging prediction by \(K\)-fold cross-validation, Data-driven decision model based on local two-stage weighted ensemble learning, An instance-dependent simulation framework for learning with label noise, Robust estimation in regression and classification methods for large dimensional data, Cost-sensitive thresholding over a two-dimensional decision region for fraud detection, Ensemble learning for the partial label ranking problem, Relaxing the i.i.d. 
assumption: adaptively minimax optimal regret via root-entropic regularization, An instance-oriented performance measure for classification, Optimal Exploration–Exploitation in a Multi-armed Bandit Problem with Non-stationary Rewards, Empirical likelihood ratio tests for non-nested model selection based on predictive losses, An empirical bias–variance analysis of DECORATE ensemble method at different training sample sizes, Bandits with Global Convex Constraints and Objective, Semi-supervised learning using autodidactic interpolation on sparse representation-based multiple one-dimensional embedding, AN EFFECTIVE BIAS-CORRECTED BAGGING METHOD FOR THE VALUATION OF LARGE VARIABLE ANNUITY PORTFOLIOS, The synergy between PAV and AdaBoost, Integration of gene functional diversity for effective cancer detection, FE-CIDIM: fast ensemble of CIDIM classifiers, Predicting nearly as well as the best pruning of a decision tree through dynamic programming scheme, Selection of Binary Variables and Classification by Boosting, Robust Loss Functions for Boosting, Cost-Minimising Strategies for Data Labelling: Optimal Stopping and Active Learning, Boosting over non-deterministic ZDDs, Looking for lumps: boosting and bagging for density estimation., Improving nonparametric regression methods by bagging and boosting., Listwise approaches based on feature ranking discovery, Stochastic boosting algorithms, Recent developments in bootstrap methodology, Nonparametric multiple expectile regression via ER-Boost, A Learning Algorithm to Select Consistent Reactions to Human Movements, Experiments with AdaBoost.RT, an Improved Boosting Scheme for Regression, l1-Penalised Ordinal Polytomous Regression Estimators with Application to Gene Expression Studies, Boosting in the presence of noise, The AdaBoost Flow, Veridical data science, Large scale analysis of generalization error in learning using 
margin based classification methods, A Markov-modulated tree-based gradient boosting model for auto-insurance risk premium pricing, Appropriate machine learning techniques for credit scoring and bankruptcy prediction in banking and finance: A comparative study, Large dimensional analysis of general margin based classification methods, A boosting inspired personalized threshold method for sepsis screening, Small-Loss Bounds for Online Learning with Partial Information, Learning quantum models from quantum or classical data, Optimization by Gradient Boosting, On the Effect and Remedies of Shrinkage on Classification Probability Estimation, Bounding the generalization error of convex combinations of classifiers: Balancing the dimensionality and the margins., Generalization error of combined classifiers., Least angle regression. (With discussion), Generalization bounds for averaged classifiers, On approximating weighted sums with exponentially many terms, Random average shifted histograms, Accurate ensemble pruning with PL-bagging, Kernel-based nonlinear discriminant analysis for face recognition, Process consistency for AdaBoost., On the Bayes-risk consistency of regularized boosting methods., Statistical behavior and consistency of classification methods based on convex risk minimization., On domain-partitioning induction criteria: worst-case bounds for the worst-case based, Online learning in online auctions, Stable feature selection for biomarker discovery, Noise peeling methods to improve boosting algorithms, On minimaxity of follow the leader strategy in the stochastic setting, Online multikernel learning based on a triple-norm regularizer for semantic image classification, Automatic emergence detection in complex systems, Probability estimation for multi-class classification using adaboost, An efficient membership-query algorithm for learning DNF with respect to the uniform distribution, Learning rotations with little regret, Context-based unsupervised ensemble 
learning and feature ranking, Analysis of web visit histories. II: Predicting navigation by nested STUMP regression trees, Growing support vector classifiers with controlled complexity., Robust regression using biased objectives, Online multiple kernel classification, Relational networks of conditional preferences, Sex with no regrets: how sexual reproduction uses a no regret learning algorithm for evolutionary advantage, Scale-free online learning, Hierarchical design of fast minimum disagreement algorithms, LPiTrack: eye movement pattern recognition algorithm and application to biometric identification, Multiple-view multiple-learner active learning, Random classification noise defeats all convex potential boosters, Extracting certainty from uncertainty: regret bounded by variation in costs, gBoost: a mathematical programming approach to graph classification and regression, Adaptive linear and normalized combination of radial basis function networks for function approximation and regression, Vote counting measures for ensemble classifiers., Constructing support vector machine ensemble., On learning multicategory classification with sample queries., A conversation with Leo Breiman., A concrete statistical realization of Kleinberg's stochastic discrimination for pattern recognition. 
I: Two-class classification, Algorithms for drug sensitivity prediction, Learning customized and optimized lists of rules with mathematical programming, Categorization of text documents taking into account some structural features, Predicting the effective mechanical property of heterogeneous materials by image based modeling and deep learning, Assessing robustness of classification using an angular breakdown point, Robust multicategory support vector machines using difference convex algorithm, A comparative study of the leading machine learning techniques and two new optimization algorithms, Membership-margin based feature selection for mixed type and high-dimensional data: theory and applications, Deep neural networks, gradient-boosted trees, random forests: statistical arbitrage on the S\&P 500, Multi-vehicle detection algorithm through combining Harr and HOG features, Boosting-based sequential output prediction, Surrogate losses in passive and active learning, Nonlinear models for ground-level ozone forecasting, The reliability of classification of terminal nodes in GUIDE decision tree to predict the nonalcoholic fatty liver disease, Variable selection and updating in model-based discriminant analysis for high dimensional data with food authenticity applications, Rotation Forests for regression, Learning with continuous experts using drifting games, Boosting GARCH and neural networks for the prediction of heteroskedastic time series, Cost-sensitive learning and decision making revisited, Learning causal effect using machine learning with application to China's typhoon, Analyzing cognitive processes from complex neuro-physiologically based data: some lessons, Nonparametric bootstrap prediction, Deformation of log-likelihood loss function for multiclass boosting, SVM-FuzCoC: A novel SVM-based feature selection method using a fuzzy complementary criterion, Learn\(^{++}\).MF: A random subspace approach for the missing feature problem, Information theoretic 
combination of pattern classifiers, Generalized re-weighting local sampling mean discriminant analysis, Forecasting corporate failure using ensemble of self-organizing neural networks, Cost-sensitive boosting for classification of imbalanced data, Sharpness estimation of combinatorial generalization ability bounds for threshold decision rules, A novel margin-based measure for directed hill climbing ensemble pruning, New multicategory boosting algorithms based on multicategory Fisher-consistent losses, Isotonic boosting classification rules, Batch mode active learning framework and its application on valuing large variable annuity portfolios, Stochastic approximation: from statistical origin to big-data, multidisciplinary applications, Prediction of Alzheimer's diagnosis using semi-supervised distance metric learning with label propagation, View independent face detection based on horizontal rectangular features and accuracy improvement using combination kernel of various sizes, Exact bootstrap \(k\)-nearest neighbor learners, Surrogate maximization/minimization algorithms and extensions, Modeling churn using customer lifetime value, Arcing classifiers. 
(With discussion), Boosting the margin: a new explanation for the effectiveness of voting methods, Parallelizing AdaBoost by weights dynamics, Classification by ensembles from random partitions of high-dimensional data, A stochastic approximation view of boosting, Efficient exploration of unknown indoor environments using a team of mobile robots, A local boosting algorithm for solving classification problems, Negative correlation in incremental learning, A \(\mathbb R\)eal generalization of discrete AdaBoost, Risk-sensitive loss functions for sparse multi-category classification problems, On generalization performance and non-convex optimization of extended \(\nu \)-support vector machine, Some challenges for statistics, A reference model for customer-centric data mining with support vector machines, The composite absolute penalties family for grouped and hierarchical variable selection, Additive logistic regression: a statistical view of boosting. (With discussion and a rejoinder by the authors), On weak base hypotheses and their implications for boosting regression and classification, A geometric approach to leveraging weak learners, Drifting games and Brownian motion, Top-down decision tree learning as information based boosting, Evidential calibration of binary SVM classifiers, Mathematical optimization in classification and regression trees, Inducing wavelets into random fields via generative boosting, Angle-based cost-sensitive multicategory classification, Cost-sensitive ensemble learning: a unifying framework, Classification in general finite dimensional spaces with the \(k\)-nearest neighbor rule, Information-theoretic bounded rationality and \(\epsilon\)-optimality, Representation in the (artificial) immune system, Finding causative genes from high-dimensional data: an appraisal of statistical and machine learning approaches, Feature selection filter for classification of power system operating states, An empirical comparison of classification algorithms 
for mortgage default prediction: evidence from a distressed mortgage market, Identifying the interacting positions of a protein using Boolean learning and support vector machines, Cost-sensitive boosting algorithms: do we really need them?, Analysis of web visit histories. I: Distance-based visualization of sequence rules, PBoostGA: pseudo-boosting genetic algorithm for variable ranking and selection, Tutorial series on brain-inspired computing. VI: Geometrical structure of boosting algorithm, Supervised classification and mathematical optimization, Mean and quantile boosting for partially linear additive models, Empirical models based on features ranking techniques for corporate financial distress prediction, Boosting of granular models, Improved customer choice predictions using ensemble methods, Breast cancer prediction using the isotonic separation technique, Conditional validity of inductive conformal predictors, \(L_{2}\) boosting in kernel regression, Knee joint vibration signal analysis with matching pursuit decomposition and dynamic weighted classifier fusion, ML-KNN: A lazy learning approach to multi-label learning, Face detection with boosted Gaussian features, Knowledge acquisition and development of accurate rules for predicting protein stability changes, Self-improved gaps almost everywhere for the agnostic approximation of monomials, Combining initial segments of lists, A Fisher consistent multiclass loss function with variable margin on positive examples, Small area estimation of the homeless in Los Angeles: an application of cost-sensitive stochastic gradient boosting, Performance improvement of classifier fusion for batch samples based on upper integral, Probabilistic combination of classification rules and its application to medical diagnosis, Survey on speech emotion recognition: features, classification schemes, and databases, Multi-label classification and extracting predicted class hierarchies, Blasso for object categorization and retrieval: 
towards interpretable visual models, Predicate logic based image grammars for complex pattern recognition, A time-series modeling method based on the boosting gradient-descent theory, Ternary Bradley-Terry model-based decoding for multi-class classification and its extensions, A simple extension of boosting for asymmetric mislabeled data, Online variance minimization, Online learning from local features for video-based face recognition, Variable selection for nonparametric Gaussian process priors: Models and computational strategies, Sparse weighted voting classifier selection and its linear programming relaxations, Risk bounds for CART classifiers under a margin condition, Further results on the margin explanation of boosting: new algorithm and experiments, Survival ensembles by the sum of pairwise differences with application to lung cancer microarray studies, Comment on: ``Support vector machines with applications, Boosting algorithms: regularization, prediction and model fitting, Comment on: Boosting algorithms: regularization, prediction and model fitting, A boosting method for maximization of the area under the ROC curve, Representing and recognizing objects with massive local image patches, A boosting approach for supervised Mahalanobis distance metric learning, Functional gradient ascent for probit regression, A noise-detection based AdaBoost algorithm for mislabeled data, Practical speech emotion recognition based on online learning: from acted data to elicited data, Nonstochastic bandits: Countable decision set, unbounded costs and reactive environments, Boosting multi-features with prior knowledge for mini unmanned helicopter landmark detection, The value of agreement a new boosting algorithm, Method for quickly inferring the mechanisms of large-scale complex networks based on the census of subgraph concentrations, Weight-selected attribute bagging for credit scoring, Does modeling lead to more accurate classification? 
A study of relative efficiency in linear classification, Sample-weighted clustering methods, Unsupervised weight parameter estimation method for ensemble learning, A testing based extraction algorithm for identifying significant communities in networks, A lazy bagging approach to classification, Concept drift detection via competence models, Online aggregation of unbounded losses using shifting experts with confidence, Cox process functional learning, Boosting conditional probability estimators, Support vector machines based on convex risk functions and general norms, An extensive comparison of recent classification tools applied to microarray data, Boosting and instability for regression trees, Multiple kernel boosting framework based on information measure for classification, Boosting additive models using component-wise P-splines, Using boosting to prune double-bagging ensembles, Estimating classification error rate: repeated cross-validation, repeated hold-out and bootstrap, The Bayesian additive classification tree applied to credit risk modelling, BART: Bayesian additive regression trees, Non-crossing large-margin probability estimation and its application to robust SVM via preconditioning, A study on iris localization and recognition on mobile phones, Heterogeneous stacking for classification-driven watershed segmentation, Least angle and \(\ell _{1}\) penalized regression: a review, From cluster ensemble to structure ensemble, Tree models for difference and change detection in a complex environment, Subsemble: an ensemble method for combining subset-specific algorithm fits, Reducing forgeries in writer-independent off-line signature verification through ensemble of classifiers, Projective morphologies and their application in structural analysis of digital images, Machine learning approaches for discrimination of extracellular matrix proteins using hybrid feature space, Optimal learning for sequential sampling with non-parametric beliefs, 
Component-wisely sparse boosting, Machine learning feature selection methods for landslide susceptibility mapping, Fast pedestrian detection system with a two layer cascade of classifiers, Soft-max boosting, AdaBoost.MH, Hybrid cluster ensemble framework based on the random combination of data transformation operators, Variable selection using penalized empirical likelihood, Regularization of case-specific parameters for robustness and efficiency, Accurate tree-based missing data imputation and data fusion within the statistical learning paradigm, An empirical study of on-line models for relational data streams, A review on instance ranking problems in statistical learning, Recovering the time-dependent volatility in jump-diffusion models from nonlocal price observations, A comparative study of machine learning models for predicting the state of reactive mixing, Mathematical foundations of machine learning. Abstracts from the workshop held March 21--27, 2021 (hybrid meeting), Classification by evolutionary ensembles, Inference in Bayesian additive vector autoregressive tree models, Multi-class boosting with asymmetric binary weak-learners, Utilizing adaptive boosting to detect quantum steerability, On PAC learning algorithms for rich Boolean function classes, Improved second-order bounds for prediction with expert advice, Multi-class learning by smoothed boosting, Quadratic boosting, Joint face and head tracking inside multi-camera smart rooms, Hedge algorithm and dual averaging schemes, Polymorphic uncertainty quantification for engineering structures via a hyperplane modelling technique, A viral protein identifying framework based on temporal convolutional network, Invariant pattern recognition using contourlets and adaboost, A data mining approach to face detection, A novel margin based algorithm for feature extraction, Online learning for min-max discrete problems, Asymptotically optimal strategies for online prediction with history-dependent experts, 
Multi-label optimal margin distribution machine, Classification optimization for training a large dataset with naïve Bayes, Machine learning applied to asteroid dynamics, Hierarchical mixing linear support vector machines for nonlinear classification, Data science applications to string theory, Goal scoring, coherent loss and applications to machine learning, Quantitative convergence analysis of kernel based large-margin unified machines, Accelerated gradient boosting, Using LogitBoost classifier to predict protein structural classes, On a robust gradient boosting scheme based on aggregation functions insensitive to outliers, A robust approach to model-based classification based on trimming and constraints. Semi-supervised learning in presence of outliers and label noise, Propositionalization and embeddings: two sides of the same coin, Double random forest, Dynamic recursive tree-based partitioning for malignant melanoma identification in skin lesion dermoscopic images, Step decision rules for multistage stochastic programming: a heuristic approach, A model-free Bayesian classifier, Fast construction of correcting ensembles for legacy artificial intelligence systems: algorithms and a case study, Fast greedy \(\mathcal{C} \)-bound minimization with guarantees, On nearly assumption-free tests of nominal confidence interval coverage for causal parameters estimated by machine learning, A review on distance based time series classification, C443: a methodology to see a forest for the trees, Assessing the stability of classification trees using Florida birth data, Simultaneous adaptation to the margin and to complexity in classification, Enhancing techniques for learning decision trees from imbalanced data, Online linear optimization and adaptive routing, Optimal rates of aggregation in classification under low noise assumption, An empirical study of using Rotation Forest to improve regressors, Analysis of boosting algorithms using the smooth margin function, Maximizing 
the area under the ROC curve by pairwise feature combination, Tune and mix: learning to rank using ensembles of calibrated multi-class classifiers, BoostingTree: parallel selection of weak learners in boosting, with application to ranking, Multi-group support vector machines with measurement costs: A biobjective approach, An efficient modified boosting method for solving classification problems, Ensemble Gaussian mixture models for probability density estimation, A combination selection algorithm on forecasting, Canonical forest, A multi-loss super regression learner (MSRL) with application to survival prediction using proteomics, Classification of pulmonary nodules by using hybrid features, Modular learning models in forecasting natural phenomena., Feature representation and discrimination based on Gaussian mixture model probability densities -- practices and algorithms, Coronal loop detection from solar images, Supervised projection approach for boosting classifiers, Induction of multiclass multifeature split decision trees from distributed data, Using a VOM model for reconstructing potential coding regions in EST sequences, ADtreesLogit model for customer churn prediction, Diversification for better classification trees, Boosted multi-class semi-supervised learning for human action recognition, A probabilistic model of classifier competence for dynamic ensemble selection, CORES: fusion of supervised and unsupervised training methods for a multi-class classification problem, Bagging of density estimators, Multicategory large margin classification methods: hinge losses vs. 
coherence functions, Forecasting financial and macroeconomic variables using data reduction methods: new empirical evidence, A boosting method with asymmetric mislabeling probabilities which depend on covariates, Efficient temporal pattern recognition by means of dissimilarity space embedding with discriminative prototypes, Ensemble quantile classifier, Robustness of learning algorithms using hinge loss with outlier indicators, Boosting high dimensional predictive regressions with time varying parameters, A hierarchy of sum-product networks using robustness, A survey on semi-supervised learning, The \(\delta \)-machine: classification based on distances towards prototypes, SVM-boosting based on Markov resampling: theory and algorithm, Price probabilities: a class of Bayesian and non-Bayesian prediction rules, Adaboost-based ensemble of polynomial chaos expansion with adaptive sampling, Nonparametric screening under conditional strictly convex loss for ultrahigh dimensional sparse data, Optimal classification scores based on multivariate marker transformations, Interpretable machine learning: fundamental principles and 10 grand challenges, EMD and GNN-adaboost fault diagnosis for urban rail train rolling bearings, Achieving fairness with a simple ridge penalty, Boosting as a kernel-based method, Prediction of presynaptic and postsynaptic neurotoxins based on feature extraction, Regression trees and forests for non-homogeneous Poisson processes, AdaBoost and robust one-bit compressed sensing, Boosting with early stopping: convergence and consistency, A new accelerated proximal boosting machine with convergence rate \(O(1/t^2)\), Boosting for quantum weak learners, Cross-conformal predictors, Multilabel classification through random graph ensembles, Order scoring, bandit learning and order cancellations, Improving random forest algorithm by Lasso method, Diverse classifier ensemble creation based on heuristic dataset modification, The relative performance of ensemble 
methods with deep convolutional neural networks for image classification, A novel bagging approach for variable ranking and selection via a mixed importance measure, A new approach of subgroup identification for high-dimensional longitudinal data, Conditional sparse boosting for high-dimensional instrumental variable estimation, No Regret Learning in Oligopolies: Cournot vs. Bertrand, Tweedie gradient boosting for extremely unbalanced zero-inflated data, Software Defect Prediction by Strong Machine Learning Classifier, Learning to Recognize Three-Dimensional Objects, A New Feature Selection Method for Text Categorization of Customer Reviews, Boosting the partial least square algorithm for regression modelling, Toward Efficient Ensemble Learning with Structure Constraints: Convergent Algorithms and Applications, Hierarchical Total Variations and Doubly Penalized ANOVA Modeling for Multivariate Nonparametric Regression, Evolution of the Viola-Jones Object Detection Method: A Survey, Learning Where to Attend with Deep Architectures for Image Tracking, Robustifying AdaBoost by Adding the Naive Error Rate, Boosting with Noisy Data: Some Views from Statistical Theory, FUSION OF EXTREME LEARNING MACHINE WITH FUZZY INTEGRAL, Combining biomarkers to optimize patient treatment recommendations, Competitive On-line Statistics, MetaBayes: Bayesian Meta-Interpretative Learning Using Higher-Order Stochastic Refinement, Intelligent Object Detection Using Trees, A Multiclass Classification Method Based on Decoding of Binary Classifiers, From information scaling of natural images to regimes of statistical models, Discriminative Reranking for Natural Language Parsing, FEATURE SELECTION VIA LEAST SQUARES SUPPORT FEATURE MACHINE, Robust Boosting Algorithm Against Mislabeling in Multiclass Problems, Nonstochastic Multi-Armed Bandits with Graph-Structured Feedback, Chasing Ghosts: Competing with Stateful Policies, 
Following the Perturbed Leader to Gamble at Multi-armed Bandits, Multiclass Boosting Algorithms for Shrinkage Estimators of Class Probability, Uncertainty and forecasts of U.S. recessions, On Linguistic Variables and Sparse Representations, Quantum AdaBoost algorithm via cluster state, A comparison of classification models to identify the Fragile X Syndrome, Can a corporate network and news sentiment improve portfolio optimization using the Black–Litterman model?, Boosting Method for Local Learning in Statistical Pattern Recognition, Locating Infinite Discontinuities in Computer Experiments, Analysis of regression in game theory approach, Boosting method for nonlinear transformation models with censored survival data, Boosting with missing predictors, Automatic appearance-based loop detection from three-dimensional laser data using the normal distributions transform, Linear Programming in the Semi-streaming Model with Application to the Maximum Matching Problem, Automated trading with boosting and expert weighting, A Bregman extension of quasi-Newton updates I: an information geometrical framework, Efficient Algorithms for Discovering Frequent and Maximal Substructures from Large Semistructured Data, Aggregating classifiers with ordinal response structure, EFFICIENT UNSUPERVISED MINING FROM NOISY CO-OCCURRENCE DATA, Delta Boosting Machine with Application to General Insurance, UNSUPERVISED LEARNING BASED DISTRIBUTED DETECTION OF GLOBAL ANOMALIES, Prototype Classification: Insights from Machine Learning, ONE-TO-MANY NODE MATCHING BETWEEN COMPLEX NETWORKS, New Bootstrap Applications in Supervised Learning, Unified Algorithms for Online Learning and Competitive Analysis, Theory of Classification: a Survey of Some Recent Advances, Boosting in the Presence of Outliers: Adaptive Classification With Nonconvex Loss Functions, On Reject and Refine Options in Multicategory Classification, Incremental Hybrid Intrusion Detection Using Ensemble of Weak 
Classifiers, Topological Descriptors for 3D Surface Analysis, An Improved Branch-and-Bound Method for Maximum Monomial Agreement, A variance reduction framework for stable feature selection, Randomized Gradient Boosting Machine, Subject-specific Bradley–Terry–Luce models with implicit variable selection, FAST RATES FOR ESTIMATION ERROR AND ORACLE INEQUALITIES FOR MODEL SELECTION, Supervised Deep Learning in High Energy Phenomenology: a Mini Review, New aspects of Bregman divergence in regression and classification with parametric and nonparametric estimation, Variable Selection in Nonparametric Classification Via Measurement Error Model Selection Likelihoods, A new adaptive multiple modelling approach for non-linear and non-stationary systems, Heteroscedastic BART via Multiplicative Regression Trees, Ensemble Learning for Multi-source Information Fusion, Hybrid classification algorithms based on boosting and support vector machines, Structural Online Learning, Learning Volatility of Discrete Time Series Using Prediction with Expert Advice, Another Look at Distance-Weighted Discrimination, A Combinatorial Metrical Task System Problem Under the Uniform Metric, Hierarchical Design of Fast Minimum Disagreement Algorithms, Confidence sets with expected sizes for Multiclass Classification, Online Learning over a Finite Action Set with Limited Switching, An Extension of the Receiver Operating Characteristic Curve and AUC-Optimal Classification, Detection of differential item functioning in Rasch models by boosting techniques, Supervised t-Distributed Stochastic Neighbor Embedding for Data Visualization and Classification, BOOSTING-BASED FRAMEWORK FOR PORTFOLIO STRATEGY DISCOVERY AND OPTIMIZATION, A tale of three probabilistic families: Discriminative, descriptive, and generative models, THEORETICAL FOUNDATIONS AND EXPERIMENTAL RESULTS FOR A HIERARCHICAL CLASSIFIER WITH OVERLAPPING CLUSTERS, Sure Independence Screening for Ultrahigh Dimensional Feature Space, Reinforcement Learning Based Interactive Agent for Personalized Mathematical Skill Enhancement, Hybrid Classification of High-Dimensional Biomedical Tumour Datasets, AN EMPIRICAL STUDY OF BOOSTED NEURAL NETWORK FOR PARTICLE CLASSIFICATION IN HIGH ENERGY COLLISIONS, COST-SENSITIVE MULTI-CLASS ADABOOST FOR UNDERSTANDING DRIVING BEHAVIOR BASED ON TELEMATICS, Study of Multi-Class Classification Algorithms’ Performance on Highly Imbalanced Network Intrusion Datasets, Superlinear Integrality Gaps for the Minimum Majority Problem, IRUSRT: A Novel Imbalanced Learning Technique by Combining Inverse Random Under Sampling and Random Tree


