The following pages link to Nathan Srebro (Q245508):
Displayed 27 items.
- (Q589502) (redirect page)
- Pegasos: primal estimated sub-gradient solver for SVM (Q633112)
- A theory of learning with similarity functions (Q1009272)
- Sketching meets random projection in the dual: a provable recovery algorithm for big and high-dimensional data (Q1688973)
- Maximum likelihood bounded tree-width Markov networks (Q1853683)
- (Q2768324)
- (Q2896159)
- Tight Sample Complexity of Large-Margin Learning (Q2933877)
- Clustering, Hamming Embedding, Generalized LSH and the Max Norm (Q2938741)
- Trading Accuracy for Sparsity in Optimization Problems with Sparsity Constraints (Q3083309)
- (Q4410168)
- Data-Dependent Convergence for Consensus Stochastic Optimization (Q4566830)
- (Q4614113)
- (Q4617641)
- (Q5214264)
- Learning Bounds for Support Vector Machines with Learned Kernels (Q5307567)
- How Good Is a Kernel When Used as a Similarity Measure? (Q5434059)
- ℓ1 Regularization in Infinite Dimensional Feature Spaces (Q5434074)
- Are There Local Maxima in the Infinite-Sample Likelihood of Gaussian Mixture Estimation? (Q5434083)
- Learning Theory (Q5473636)
- (Q5744804)
- An accelerated communication-efficient primal-dual optimization framework for structured machine learning (Q5859008)
- Stochastic gradient descent, weighted sampling, and the randomized Kaczmarz algorithm (Q5962728)
- Lower bounds for non-convex stochastic optimization (Q6038643)
- Fast-rate and optimistic-rate error bounds for L1-regularized regression (Q6226883)
- On Data Dependence in Distributed Stochastic Optimization (Q6271452)
- Lower Bound for Randomized First Order Convex Optimization (Q6291154)