Online local learning via semidefinite programming
Publication:5259582
DOI: 10.1145/2591796.2591880 · zbMATH Open: 1315.68205 · arXiv: 1403.5287 · OpenAlex: W1977223703 · MaRDI QID: Q5259582 · FDO: Q5259582
Publication date: 26 June 2015
Published in: Proceedings of the forty-sixth annual ACM symposium on Theory of computing (STOC 2014)
Abstract: In many online learning problems we are interested in predicting local information about some universe of items. For example, we may want to know whether two items are in the same cluster rather than computing an assignment of items to clusters; we may want to know which of two teams will win a game rather than computing a ranking of teams. Although finding the optimal clustering or ranking is typically intractable, it may be possible to predict the relationships between items as accurately as if one could solve the global optimization problem exactly. Formally, we consider an online learning problem in which a learner repeatedly guesses a pair of labels (l(x), l(y)) and receives an adversarial payoff depending on those labels. The learner's goal is to receive a payoff nearly as good as the best fixed labeling of the items. We show that a simple algorithm based on semidefinite programming can obtain asymptotically optimal regret in the case where the number of possible labels is O(1), resolving an open problem posed by Hazan, Kale, and Shalev-Shwartz. Our main technical contribution is a novel use and analysis of the log determinant regularizer, exploiting the observation that log det(A + I) upper bounds the entropy of any distribution with covariance matrix A.
Full work available at URL: https://arxiv.org/abs/1403.5287
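The abstract's key observation — that log det(A + I) upper-bounds the entropy (in nats) of any distribution with covariance matrix A — can be checked numerically on small toy distributions. The sketch below is not from the paper; the two example distributions are hypothetical illustrations chosen so that one attains the bound with equality and the other strictly:

```python
# Numeric sketch (illustrative, not the paper's algorithm): verify that
# log det(A + I) upper-bounds the Shannon entropy (in nats) of a discrete
# distribution over vectors whose covariance matrix is A.
import numpy as np

def entropy_nats(probs):
    """Shannon entropy in nats of a discrete distribution."""
    p = np.asarray(probs, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def logdet_bound(vectors, probs):
    """log det(A + I), where A is the covariance of the given distribution."""
    X = np.asarray(vectors, dtype=float)
    p = np.asarray(probs, dtype=float)
    mean = p @ X
    centered = X - mean
    A = centered.T @ (centered * p[:, None])  # covariance matrix
    _, logdet = np.linalg.slogdet(A + np.eye(A.shape[1]))
    return float(logdet)

# Uniform over all sign vectors in {-1,+1}^3: covariance A = I, so the
# bound log det(2I) = 3 ln 2 matches the entropy exactly.
cube = [[(-1) ** ((i >> k) & 1) for k in range(3)] for i in range(8)]
uniform = [1 / 8] * 8
print(entropy_nats(uniform), logdet_bound(cube, uniform))  # both ~2.079

# Perfectly correlated coordinates: entropy ln 2, bound ln 3 (strict).
pair = [[1, 1], [-1, -1]]
print(entropy_nats([0.5, 0.5]), logdet_bound(pair, [0.5, 0.5]))
```

In the equality case the bound is tight because the covariance of independent uniform signs is the identity; correlation shrinks the entropy faster than it shrinks the determinant, which is what makes the log-det regularizer informative.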
- Learning and adaptive systems in artificial intelligence (68T05)
- Online algorithms; streaming algorithms (68W27)
- Semidefinite programming (90C22)
Cites Work
- Theory of Cryptography
- The geometry of differential privacy: the small database and approximate cases
- On the geometry of differential privacy
- Interactive privacy via the median mechanism
- The price of privately releasing contingency tables and the spectra of random matrices with correlated rows
- Lower Bounds in Differential Privacy
- Iterative Constructions and Private Data Release
- Differential Privacy and the Fat-Shattering Dimension of Linear Queries
- Our Data, Ourselves: Privacy Via Distributed Noise Generation
- On the complexity of differentially private data release
- Collusion-secure fingerprinting for digital data
- Advances in Cryptology - CRYPTO 2004
- New Efficient Attacks on Statistical Disclosure Control Mechanisms
- Bounds on the Sample Complexity for Private Learning and Private Data Release
- Answering n^{2+o(1)} counting queries with differential privacy is hard
- Characterizing the sample complexity of private learners
- Faster Algorithms for Privately Releasing Marginals
- Faster private release of marginals on small databases
- Private Learning and Sanitization: Pure vs. Approximate Differential Privacy
- Privately releasing conjunctions and the statistical query barrier
- Efficient algorithms for privately releasing marginals via convex relaxations
Recommendations
- Online learning over a decentralized network through ADMM
- Online Learning and Online Convex Optimization
- Online Learning as Stochastic Approximation of Regularization Paths: Optimality and Almost-Sure Convergence
- Online Learning With Inexact Proximal Online Gradient Descent Algorithms
- Online Semidefinite Programming
- One-pass online learning: a local approach
- Online pairwise learning algorithms with convex loss functions
- Online learning for supervised dimension reduction
This page was built for publication: Online local learning via semidefinite programming