Greedy function approximation: A gradient boosting machine.
Publication: 127532
DOI: 10.1214/aos/1013203451
zbMath: 1043.62034
OpenAlex: W1678356000
Wikidata: Q57532752
Scholia: Q57532752
MaRDI QID: Q127532
Jerome H. Friedman
Publication date: 1 October 2001
Published in: The Annals of Statistics
Full work available at URL: https://doi.org/10.1214/aos/1013203451
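The paper introduces gradient boosting: an additive model is built stagewise by repeatedly fitting a base learner (typically a regression tree) to the negative gradient of the loss evaluated at the current fit, then adding a shrunken multiple of that learner to the model. As a minimal illustrative sketch (not the paper's reference implementation), the following Python code carries out this recipe for squared-error loss, assuming numpy arrays as input and scikit-learn regression trees as base learners; the names fit_gbm and predict_gbm and the default parameter values are illustrative choices.

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    def fit_gbm(X, y, n_stages=100, shrinkage=0.1, max_depth=3):
        # F_0(x): the best constant fit under squared-error loss is the mean of y.
        F0 = float(np.mean(y))
        F = np.full(len(y), F0)
        trees = []
        for _ in range(n_stages):
            residuals = y - F                    # negative gradient of 0.5*(y - F)^2
            tree = DecisionTreeRegressor(max_depth=max_depth)
            tree.fit(X, residuals)               # fit base learner to pseudo-residuals
            F = F + shrinkage * tree.predict(X)  # shrunken stagewise update
            trees.append(tree)
        return F0, shrinkage, trees

    def predict_gbm(model, X):
        F0, shrinkage, trees = model
        return F0 + shrinkage * sum(t.predict(X) for t in trees)

With a small shrinkage (learning rate) and enough stages, this stagewise procedure approximates steepest descent in function space, which is the idea formalized in the paper.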
Related Items
AdaBoost Semiparametric Model Averaging Prediction for Multiple Categories, Improving random forest algorithm by Lasso method, What are the Most Important Statistical Ideas of the Past 50 Years?, The relative performance of ensemble methods with deep convolutional neural networks for image classification, A Statistical Approach to Crime Linkage, Identification of biomarker‐by‐treatment interactions in randomized clinical trials with survival outcomes and high‐dimensional spaces, Deep learning: a statistical viewpoint, Conditional sparse boosting for high-dimensional instrumental variable estimation, Tweedie gradient boosting for extremely unbalanced zero-inflated data, Inference on moderation effect with third-variable effect analysis – application to explore the trend of racial disparity in oncotype dx test for breast cancer treatment, TREE-BASED MACHINE LEARNING METHODS FOR MODELING AND FORECASTING MORTALITY, Linear Aggregation in Tree-Based Estimators, Survival Regression with Accelerated Failure Time Model in XGBoost, Variable selection by ensembles for the Cox model, Toward Efficient Ensemble Learning with Structure Constraints: Convergent Algorithms and Applications, Boosting Random Forests to Reduce Bias; One-Step Boosted Forest and Its Variance Estimate, Local Linear Forests, Model Interpretation Through Lower-Dimensional Posterior Summarization, Evolution of the Viola-Jones Object Detection Method: A Survey, A Tree-Based Semi-Varying Coefficient Model for the COM-Poisson Distribution, DISCRIMINATION-FREE INSURANCE PRICING, Contrast trees and distribution boosting, Machine learning based on extended generalized linear model applied in mixture experiments, Mixed-Integer Convex Nonlinear Optimization with Gradient-Boosted Trees Embedded, Iterative Prediction-and-Optimization for E-Logistics Distribution Network Design, Estimating the Size of Branch-and-Bound Trees, Ensemble of fast learning stochastic gradient boosting, Signal approximations based on nonlinear and optimal piecewise affine functions, A machine learning approach to construct quarterly data on intangible investment for Eurozone, Nonparametric variable importance assessment using machine learning techniques, Iteratively reweighted least square for kernel expectile regression with random features, A hybrid ensemble method with negative correlation learning for regression, Adaptive Bayesian Sum of Trees Model for Covariate-Dependent Spectral Analysis, Local interpretation of supervised learning models based on high dimensional model representation, Local bias adjustment, duration-weighted probabilities, and automatic construction of tariff cells, Hierarchical fuzzy regression tree: a new gradient boosting approach to design a TSK fuzzy model, Estimating propensity scores using neural networks and traditional methods: a comparative simulation study, Lasso regularization within the LocalGLMnet architecture, Margin optimal classification trees, Supervised Machine Learning Techniques: An Overview with Applications to Banking, Interpretable machine learning for imbalanced credit scoring datasets, GP-BART: a novel Bayesian additive regression trees approach using Gaussian processes, Fully corrective gradient boosting with squared hinge: fast learning rates and early stopping, A General Framework for Inference on Algorithm-Agnostic Variable Importance, Boosting Distributional Copula Regression, Stochastic local search and parameters recommendation: a case study on flowshop problems, Explainable subgradient tree boosting for 
prescriptive analytics in operations management, Accelerated Componentwise Gradient Boosting Using Efficient Data Representation and Momentum-Based Optimization, A sequential modeling approach for predicting clinical outcomes with repeated measures, Gradient boosting with extreme-value theory for wildfire prediction, Heterogeneities among credit risk parameter distributions: the modality defines the best estimation method, Comparison of various machine learning algorithms for estimating generalized propensity score, Stochastic Tree Ensembles for Regularized Nonlinear Regression, Tests and classification methods in adaptive designs with applications, Adaptive solution prediction for combinatorial optimization, Day-ahead aircraft routing with data-driven primary delay predictions, \(R^{\ast}\): a robust MCMC convergence diagnostic with uncertainty using decision tree classifiers, Infinitesimal gradient boosting, Causal inference in data analysis with applications to fairness and explanations, SETAR-Tree: a novel and accurate tree algorithm for global time series forecasting, Mixture of inhomogeneous matrix models for species‐rich ecosystems, Gradient boosting for extreme quantile regression, Response versus gradient boosting trees, GLMs and neural networks under Tweedie loss and log-link, Weighted bagging: a modification of AdaBoost from the perspective of importance sampling, Large-Scale Linear RankSVM, RandGA: injecting randomness into parallel genetic algorithm for variable selection, Performance Comparison of Machine Learning Platforms, AN EFFECTIVE BIAS-CORRECTED BAGGING METHOD FOR THE VALUATION OF LARGE VARIABLE ANNUITY PORTFOLIOS, ADDRESSING IMBALANCED INSURANCE DATA THROUGH ZERO-INFLATED POISSON REGRESSION WITH BOOSTING, A Data-Driven Random Subfeature Ensemble Learning Algorithm for Weather Forecasting, Boosting Insights in Insurance Tariff Plans with Tree-Based Machine Learning Methods, The residual‐based predictiveness curve: A visual tool to assess the performance of prediction models, Interactive Slice Visualization for Exploring Machine Learning Models, Stochastic gradient boosting., Looking for lumps: boosting and bagging for density estimation., Improving nonparametric regression methods by bagging and boosting., Boosted Regression Trees with Errors in Variables, A CLASS OF MIXTURE OF EXPERTS MODELS FOR GENERAL INSURANCE: APPLICATION TO CORRELATED CLAIM FREQUENCIES, Generalized Sobol sensitivity indices for dependent variables: numerical methods, Nonparametric multiple expectile regression via ER-Boost, Random gradient boosting for predicting conditional quantiles, Dimension reduction boosting, Experiments with AdaBoost.RT, an Improved Boosting Scheme for Regression, Online Adaptive Decision Trees: Pattern Classification and Function Approximation, Log-Linear Bayesian Additive Regression Trees for Multinomial Logistic and Count Regression Models, The AdaBoost Flow, Optimal Individualized Decision Rules Using Instrumental Variable Methods, A Markov-modulated tree-based gradient boosting model for auto-insurance risk premium pricing, A multivariate multiple third-variable effect analysis with an application to explore racial and ethnic disparities in obesity, Survey-Based Forecasting: To Average or Not to Average, APPLYING ECONOMIC MEASURES TO LAPSE RISK MANAGEMENT WITH MACHINE LEARNING APPROACHES, Study of Multi-Class Classification Algorithms’ Performance on Highly Imbalanced Network Intrusion Datasets, General Sparse 
Boosting: Improving Feature Selection of L2Boosting by Correlation-Based Penalty Family, Optimization by Gradient Boosting, LocalGLMnet: interpretable deep learning for tabular data, Two-step sparse boosting for high-dimensional longitudinal data with varying coefficients, Power system parameters forecasting using Hilbert-Huang transform and machine learning, Cultural consensus theory for the evaluation of patients' mental health scores in forensic psychiatric hospitals, Mathematical optimization in classification and regression trees, A convex version of multivariate adaptive regression splines, Deep distribution regression, Inducing wavelets into random fields via generative boosting, Angle-based cost-sensitive multicategory classification, Embedding and learning with signatures, The Delaunay triangulation learner and its ensembles, Cost-sensitive ensemble learning: a unifying framework, Representation in the (artificial) immune system, Boosting techniques for nonlinear time series models, Finding causative genes from high-dimensional data: an appraisal of statistical and machine learning approaches, Using social media for classifying actionable insights in disaster scenario, An empirical study on classification methods for alarms from a bug-finding static C analyzer, CRM in social media: predicting increases in Facebook usage frequency, NucPosPred: predicting species-specific genomic nucleosome positioning via four different modes of general PseKNC, An empirical comparison of classification algorithms for mortgage default prediction: evidence from a distressed mortgage market, Factorizing LambdaMART for cold start recommendations, PBoostGA: pseudo-boosting genetic algorithm for variable ranking and selection, Mean and quantile boosting for partially linear additive models, Logitboost autoregressive networks, Gradient boosting for high-dimensional prediction of rare events, Improved customer choice predictions using ensemble methods, Conditional validity of inductive conformal predictors, Exploiting symmetries for scaling loopy belief propagation and relational training, \(L_{2}\) boosting in kernel regression, Conditional estimation for dependent functional data, Small area estimation of the homeless in Los Angeles: an application of cost-sensitive stochastic gradient boosting, Boosting kernel-based dimension reduction for jointly propagating spatial variability and parameter uncertainty in long-running flow simulators, Variable selection for generalized linear mixed models by \(L_1\)-penalized estimation, Gradient-based boosting for statistical relational learning: the Markov logic network and missing data cases, A time-series modeling method based on the boosting gradient-descent theory, Approximation of centroid end-points and switch points for replacing type reduction algorithms, A simple extension of boosting for asymmetric mislabeled data, Ensemble classification of paired data, A review of boosting methods for imbalanced data classification, General formulation of HDMR component functions with independent and correlated variables, Boosting local quasi-likelihood estimators, Bootstrap model selection for possibly dependent and heterogeneous data, Marginal integration for nonparametric causal inference, Gradient-based boosting for statistical relational learning: the relational dependency network case, Survival ensembles by the sum of pairwise differences with application to lung cancer microarray studies, Characterizing \(L_{2}\)Boosting, Boosting algorithms: regularization, 
prediction and model fitting, Comment on: Boosting algorithms: regularization, prediction and model fitting, Rejoinder: Boosting algorithms: regularization, prediction and model fitting, Functional gradient ascent for probit regression, Fully corrective boosting with arbitrary loss and regularization, On a method for constructing ensembles of regression models, Multiple instance learning via Gaussian processes, Generalized random forests, A dynamic ensemble approach to robust classification in the presence of missing data, Inverse boosting for monotone regression functions, Predictive learning via rule ensembles, Boosting and instability for regression trees, Boosting additive models using component-wise P-splines, Standard errors for bagged and random forest estimators, Using boosting to prune double-bagging ensembles, The Bayesian additive classification tree applied to credit risk modelling, Boosted coefficient models, BART: Bayesian additive regression trees, MARS: selecting basis functions and knots with an empirical Bayes method, Model-based boosting in R: a hands-on tutorial using the R package mboost, Remembrance of Leo Breiman, Quadratic Majorization for Nonconvex Loss with Applications to the Boosting Algorithm, Robust boosting with truncated loss functions, Navigating random forests and related advances in algorithmic modeling, Boosting GARCH and neural networks for the prediction of heteroskedastic time series, Boosted multivariate trees for longitudinal data, On the choice and influence of the number of boosting steps for high-dimensional linear Cox-models, Improving corporate bond recovery rate prediction using multi-factor support vector regressions, Functional dissipation microarrays for classification, New multicategory boosting algorithms based on multicategory Fisher-consistent losses, Robust boosting for regression problems, RRBoost, Tree-structured modelling of categorical predictors in generalized additive regression, Boosting in Cox regression: a comparison between the likelihood-based and the model-based approaches with focus on the R-packages \textit{CoxBoost} and \textit{mboost}, xgb2sql, Sparse recovery via differential inclusions, A dynamic model of expected bond returns: A functional gradient descent approach, Component-wisely sparse boosting, Machine learning feature selection methods for landslide susceptibility mapping, Iterative bias reduction: a comparative study, Poisson dependency networks: gradient boosted models for multivariate count data, Knot selection by boosting techniques, Boosting ridge regression, A stochastic approximation view of boosting, Robust learning from bites for data mining, A local boosting algorithm for solving classification problems, Logitboost with errors-in-variables, Robustified \(L_2\) boosting, flashlight, On boosting kernel regression, Locally linear ensemble for regression, A conversation with Jerry Friedman, Evaluating the impact of a grouping variable on job satisfaction drivers, Wavelet-based gradient boosting, Significant vector learning to construct sparse kernel regression models, Splines for Financial Volatility, Boosting with Noisy Data: Some Views from Statistical Theory, An integrated approach of data envelopment analysis and boosted generalized linear mixed models for efficiency assessment, A survey of deep network techniques all classifiers can adopt, Validating game-theoretic models of terrorism: insights from machine learning, Artificial intelligence in healthcare operations to 
enhance treatment outcomes: a framework to predict lung cancer prognosis, Surface warping incorporating machine learning assisted domain likelihood estimation: a new paradigm in mine geology modeling and automation, Power comparison for propensity score methods, Model transparency and interpretability: survey and application to the insurance industry, Rationalizing predictions by adversarial information calibration, Accelerated gradient boosting, Growing axons: greedy learning of neural networks with application to function approximation, On a robust gradient boosting scheme based on aggregation functions insensitive to outliers, Efficient homomorphic comparison methods with optimal complexity, Uncertainty and forecasts of U.S. recessions, Evolution of high-frequency systematic trading: a performance-driven gradient boosting model, Early stopping in \(L_{2}\)Boosting, Invariance, causality and robustness, A probabilistic classifier ensemble weighting scheme based on cross-validated accuracy estimates, A recommendation system for car insurance, Boosting iterative stochastic ensemble method for nonlinear calibration of subsurface flow models, Jobs runtime forecast for JSCC RAS supercomputers using machine learning methods, Application of “Aggregated Classifiers” in Survival Time Studies, Neural network ensembles: evaluation of aggregation algorithms, On the Convergence of a Greedy Algorithm for the Solution of the Problem for the Construction of Monotone Regression, Adaptive index models for marker-based risk stratification, Prediction and Inference With Missing Data in Patient Alert Systems, Boosting method for nonlinear transformation models with censored survival data, Boosting with missing predictors, Structure learning for relational logistic regression: an ensemble approach, Forecasting bankruptcy using biclustering and neural network-based ensembles, A deep multitask learning approach for air quality prediction, To imprison or not to imprison: an analytics model for drug courts, An adaptive sampling scheme guided by BART—with an application to predict processor performance, Instance-dependent cost-sensitive learning for detecting transfer fraud, An empirical study of using Rotation Forest to improve regressors, Additive stacking for disaggregate electricity demand forecasting, Machine learning approach for higher-order interactions detection to ecological communities management, Predicting the Geographic Distribution of a Species from Presence‐Only Data Subject to Detection Errors, Extending models via gradient boosting: an application to Mendelian models, DD-Classifier: Nonparametric Classification Procedure Based onDD-Plot, Semiparametric estimation of a class of generalized linear models without smoothing, An efficient modified boosting method for solving classification problems, Variable selection and model choice in structured survival models, Detecting the impact area of BP deepwater horizon oil discharge: an analysis by time varying coefficient logistic models and boosted trees, Prediction and classification in nonlinear data analysis: something old, something new, something borrowed, something blue, Embedding black-box regression techniques into hierarchical Bayesian models, Constructing a speculative kernel machine for pattern classification, TESTS OF THE MARTINGALE DIFFERENCE HYPOTHESIS USING BOOSTING AND RBF NEURAL NETWORK APPROXIMATIONS, Nonparametric Modeling of Neural Point Processes via Stochastic Gradient Boosting Regression, Aggregating classifiers with ordinal 
response structure, Boosting for high-dimensional linear models, Forward regression for Cox models with high-dimensional covariates, Oblique random survival forests, Predicting missing values: a comparative study on non-parametric approaches for imputation, Delta Boosting Machine with Application to General Insurance, Seeing Inside the Black Box: Using Diffusion Index Methodology to Construct Factor Proxies in Large Scale Macroeconomic Time Series Environments, Predictive analytics of insurance claims using multivariate decision trees, Forecasting financial and macroeconomic variables using data reduction methods: new empirical evidence, A boosting method with asymmetric mislabeling probabilities which depend on covariates, On the differences between \(L_2\) boosting and the Lasso, An improved multiclass LogitBoost using adaptive-one-vs-one, QoRank: A query-dependent ranking model using LSE-based weighted multiple hyperplanes aggregation for information retrieval, Pitfalls of hypothesis tests and model selection on bootstrap samples: Causes and consequences in biometrical applications, Boosting in the Presence of Outliers: Adaptive Classification With Nonconvex Loss Functions, Prediction of transfers to tertiary care and hospital mortality: A gradient boosting decision tree approach, Model guided adaptive design and analysis in computer experiment, Boosted sparse nonlinear distance metric learning, Skills in demand for ICT and statistical occupations: Evidence from web‐based job vacancies, Inference for \(L_2\)-boosting, Sharpening P-spline signal regression, Randomized Gradient Boosting Machine, The functional linear array model, Subject-specific Bradley–Terry–Luce models with implicit variable selection, The \(\delta \)-machine: classification based on distances towards prototypes, Double machine learning with gradient boosting and its application to the Big \(N\) audit quality effect, Presence‐Only Data and the EM Algorithm, Variable Selection and Model Choice in Geoadditive Regression Models, Novel Aggregate Deletion/Substitution/Addition Learning Algorithms for Recursive Partitioning, Measuring the Stability of Results From Supervised Statistical Learning, Heteroscedastic BART via Multiplicative Regression Trees, Modelling Price Paths in On-Line Auctions: Smoothing Sparse and Unevenly Sampled Curves by Using Semiparametric Mixed Models, Non-parametric learning of lifted restricted Boltzmann machines, Modeling binary time series using Gaussian processes with application to predicting sleep states, Nonlinear predictive models for multiple mediation analysis: with an application to explore ethnic disparities in anxiety and depression among cancer survivors, Sparse kernel deep stacking networks, Prediction of aptamer-protein interacting pairs based on sparse autoencoder feature extraction and an ensemble classifier, Duality gap estimates for weak Chebyshev greedy algorithms in Banach spaces, Regression trees and forests for non-homogeneous Poisson processes, Boosting with early stopping: convergence and consistency, A Bayesian regression tree approach to identify the effect of nanoparticles' properties on toxicity profiles, Cross-conformal predictors, Large Scale Prediction with Decision Trees, Assessing the communication gap between AI models and healthcare professionals: explainability, utility and trust in AI-driven clinical decision-making, On the robustness of sparse counterfactual explanations to adverse perturbations, Handling 
high-dimensional data with missing values by modern machine learning techniques, Downscaling shallow water simulations using artificial neural networks and boosted trees, Gradient boosting for convex cone predict and optimize problems, A comparison of two dissimilarity functions for mixed-type predictor variables in the \(\delta\)-machine, Estimating global and country-specific excess mortality during the COVID-19 pandemic, Estimation and inference of treatment effects with \(L_2\)-boosting in high-dimensional settings, 2-step gradient boosting approach to selectivity bias correction in tax audit: an application to the VAT gap in Italy, The fraud loss for selecting the model complexity in fraud detection, Reconstructing production networks using machine learning, Model-Assisted Estimation Through Random Forests in Finite Population Sampling, CatBoost-Based Framework for Intelligent Prediction and Reaction Condition Analysis of Coupling Reaction, Spatial performance analysis in basketball with CART, random forest and extremely randomized trees, Logistic regression model with TreeNet and association rules analysis: applications with medical datasets, Different Types of Constitutive Parameters Red Blood Cell Membrane Based on Machine Learning and FEM, Bridging the gap between pricing and reserving with an occurrence and development model for non-life insurance claims, Tail index partition-based rules extraction with application to tornado damage insurance, Data-driven state-of-charge prediction of a storage cell using ABC/GBRT, ABC/MLP and Lasso machine learning techniques, Considerations when learning additive explanations for black-box models, Interpreting machine-learning models in transformed feature space with an application to remote-sensing classification, Data-adaptive discriminative feature localization with statistically guaranteed interpretation, Gibbs Priors for Bayesian Nonparametric Variable Selection with Weak Learners, Meta Clustering for Collaborative Learning, Interpretable Architecture Neural Networks for Function Visualization, Robust estimation of heterogeneous treatment effects: an algorithm-based approach, Unbiased Boosting Estimation for Censored Survival Data, Modeling Postoperative Mortality in Older Patients by Boosting Discrete-Time Competing Risks Models, Bayesian projection pursuit regression, Climate-dependent effectiveness of nonpharmaceutical interventions on COVID-19 mitigation, Empirical likelihood ratio tests for non-nested model selection based on predictive losses, Causal survival analysis under competing risks using longitudinal modified treatment policies, InfoGram and admissible machine learning, A review on instance ranking problems in statistical learning, Assessing the value of data for prediction policies: the case of antibiotic prescribing, Least angle regression. 
(With discussion), A hypothesis-free bridging of disease dynamics and non-pharmaceutical policies, Recovering the time-dependent volatility in jump-diffusion models from nonlocal price observations, Mutual information for explainable deep learning of multiscale systems, Inference in Bayesian additive vector autoregressive tree models, Post-model-selection inference in linear regression models: an integrated review, A hierarchical reserving model for reported non-life insurance claims, Modelling and forecasting based on recursive incomplete pseudoinverse matrices, Explainable models of credit losses, A precise high-dimensional asymptotic theory for boosting and minimum-\(\ell_1\)-norm interpolated classifiers, The added value of dynamically updating motor insurance prices with telematics collected driving behavior data, Uniform approximation rates and metric entropy of shallow neural networks, Variational inference with NoFAS: normalizing flow with adaptive surrogate for computationally expensive models, One-stage tree: end-to-end tree builder and pruner, Techniques to improve ecological interpretability of black-box machine learning models. Case study on biological health of streams in the United States with gradient boosted trees, Noise peeling methods to improve boosting algorithms, A spatial-temporal-semantic neural network algorithm for location prediction on moving objects, An update on statistical boosting in biomedicine, Nonconvex regularization for sparse neural networks, A unified definition of mutual information with applications in machine learning, Random forest with adaptive local template for pedestrian detection, Global sensitivity analysis in epidemiological modeling, Actuarial intelligence in auto insurance: claim frequency modeling with driving behavior features and improved boosted trees, Interpreting deep learning models with marginal attribution by conditioning on quantiles, Grouped feature importance and combined features effect plot, Conclusive local interpretation rules for random forests, Learned-loss boosting, Nonlinear multi-output regression on unknown input manifold, Banzhaf random forests: cooperative game theory based random forests with consistency, Optimizing predictive precision in imbalanced datasets for actionable revenue change prediction, Optimal nonlinear signal approximations based on piecewise constant functions, Forecasting with many predictors: is boosting a viable alternative?, Transfer learning by mapping and revising boosted relational dependency networks, Random forest with acceptance-rejection trees, Boosting flexible functional regression models with a high number of functional historical effects, Bayesian additive regression trees using Bayesian model averaging, Pathway-based kernel boosting for the analysis of genome-wide association studies, LPiTrack: eye movement pattern recognition algorithm and application to biometric identification, \textsc{Treant}: training evasion-aware decision trees, Interpretable regularized class association rules algorithm for classification in a categorical data space, Double-slicing assisted sufficient dimension reduction for high-dimensional censored data, Fast greedy \(\mathcal{C} \)-bound minimization with guarantees, Regression with stagewise minimization on risk function, Tree ensembles with rule structured horseshoe regularization, Improving the prediction performance of the Lasso by subtracting the additive structural noises, Covariate balancing propensity score by tailored loss functions, Bootstrap 
-- an exploration, Credit spread approximation and improvement using random forest regression, Machine learning based classification of normal, slow and fast walking by extracting multimodal features from stride interval time series, Copula theory and probabilistic sensitivity analysis: is there a connection?, Universal sieve-based strategies for efficient estimation using machine learning tools, Student and school performance across countries: a machine learning approach, A nearest neighbour extension to project duration forecasting with artificial intelligence, KLERC: kernel Lagrangian expectile regression calculator, Analysis of a two-layer neural network via displacement convexity, Utilizing data mining techniques to predict expected freeway travel time from experienced travel time, A data-driven newsvendor problem: from data to decision, Evaluating the impact of a HIV low-risk express care task-shifting program: a case study of the targeted learning roadmap, Direct cellularity estimation on breast cancer histopathology images using transfer learning, Using machine learning algorithms to predict hepatitis B surface antigen seroclearance, Learning causal effect using machine learning with application to China's typhoon, Analytics for labor planning in systems with load-dependent service times, Retail sales forecasting with meta-learning, Regularizing axis-aligned ensembles via data rotations that favor simpler learners, Predicting mortgage early delinquency with machine learning methods, Machine learning based multiscale calibration of mesoscopic constitutive models for composite materials: application to brain white matter, Stochastic approximation: from statistical origin to big-data, multidisciplinary applications, Boosting high dimensional predictive regressions with time varying parameters, Estimation of a density using an improved surrogate model, Consistent regression using data-dependent coverings, Adaptive covariate acquisition for minimizing total cost of classification, Boosted nonparametric hazards with time-dependent covariates, A deeper look at machine learning-based cryptanalysis, Bayesian additive regression trees with model trees, Temporal mixture ensemble models for probabilistic forecasting of intraday cryptocurrency volume, Unrestricted permutation forces extrapolation: variable importance requires at least one more model, or there is no free variable importance, Toward an explainable machine learning model for claim frequency: a use case in car insurance pricing with telematics data, RADE: resource-efficient supervised anomaly detection using decision tree-based ensemble methods, Multi-fidelity regression using artificial neural networks: efficient approximation of parameter-dependent output quantities, Classifying sleep states using persistent homology and Markov chains: a pilot study, Two-level monotonic multistage recommender systems, Interpretable machine learning: fundamental principles and 10 grand challenges, A likelihood-based boosting algorithm for factor analysis models with binary data, Inventory -- forecasting: mind the gap, Screening: from tornado diagrams to effective dimensions, iEnhancer-MFGBDT: Identifying enhancers and their strength by fusing multiple features and gradient boosting decision tree, Predicting S-nitrosylation proteins and sites by fusing multiple features, Local greedy approximation for nonlinear regression and neural network training., Adaptive step-length selection in gradient boosting for Gaussian location and scale models, TCMI: 
a non-parametric mutual-dependence estimator for multivariate continuous distributions, Optimal policy trees, Wasserstein-based fairness interpretability framework for machine learning models, Machine learning for corporate default risk: multi-period prediction, frailty correlation, loan portfolios, and tail probabilities, A new accelerated proximal boosting machine with convergence rate \(O(1/t^2)\), A unified neural network framework for extended redundancy analysis, A multi-element non-intrusive polynomial chaos method using agglomerative clustering based on the derivatives to study irregular and discontinuous quantities of interest, Duality gap estimates for a class of greedy optimization algorithms in Banach spaces
Uses Software
Cites Work
- Multivariate adaptive regression splines
- A geometric approach to leveraging weak learners
- Additive logistic regression: a statistical view of boosting. (With discussion and a rejoinder by the authors)
- Improved boosting algorithms using confidence-rated predictions
- Matching pursuits with time-frequency dictionaries
- Learning representations by back-propagating errors
- Robust Estimation of a Location Parameter
- Soft margins for AdaBoost