The following pages link to Philip M. Long (Q202116):
Displaying 50 items.
- Linear classifiers are nearly optimal when hidden variables have diverse effects (Q420914)
- On-line learning of smooth functions of a single variable (Q672387)
- Using the doubling dimension to analyze the generalization of learning algorithms (Q923877)
- An upper bound on the sample complexity of PAC-learning halfspaces with respect to the uniform distribution (Q1014429)
- Prediction, learning, uniform convergence, and scale-sensitive dimensions (Q1271550)
- Approximating hyper-rectangles: Learning and pseudorandom sets (Q1278043)
- Tracking drifting concepts by minimizing disagreements (Q1314504)
- Composite geometric concepts and polynomial predictability (Q1333260)
- Halfspace learning, linear programming, and nonmalicious distributions (Q1336756)
- On the complexity of learning from drifting distributions (Q1376422)
- PAC learning axis-aligned rectangles with respect to product distributions from multiple-instance examples (Q1383190)
- A theoretical analysis of query selection for collaborative filtering (Q1394791)
- On the difficulty of approximately maximizing agreements (Q1401958)
- Boosting and microarray data (Q1402254)
- (Q1575455) (redirect page)
- Improved bounds about on-line learning of smooth functions of a single variable (Q1575456)
- Reinforcement learning with immediate rewards and linear hypotheses (Q1762980)
- Performance guarantees for hierarchical clustering (Q1780451)
- On-line learning of linear functions (Q1842773)
- Apple tasting (Q1854360)
- On-line learning with linear loss constraints (Q1854361)
- Efficient algorithms for learning functions with bounded variation (Q1887165)
- Characterizations of learnability for classes of \(\{0,\dots,n\}\)-valued functions (Q1892207)
- A generalization of Sauer's lemma (Q1899068)
- On the complexity of function learning (Q1900975)
- Fat-shattering and the learnability of real-valued functions (Q1924381)
- Random classification noise defeats all convex potential boosters (Q1959553)
- Structural results about on-line learning models with and without queries (Q1961317)
- The complexity of learning according to two models of a drifting environment (Q1969322)
- Oracle lower bounds for stochastic gradient sampling algorithms (Q2137007)
- New bounds on the price of bandit feedback for mistake-bounded online multiclass learning (Q2290693)
- Discriminative learning can succeed where generative learning fails (Q2379956)
- On the Inductive Bias of Dropout (Q2788414)
- Simulating access to hidden information while learning (Q2817617)
- (Q2880992)
- (Q2933930)
- On the Weight of Halfspaces over Hamming Balls (Q2935257)
- Low-weight halfspaces for sparse boolean vectors (Q2986855)
- (Q3148811)
- (Q3148823)
- (Q3148824)
- (Q3174156)
- The Power of Localization for Efficiently Learning Linear Separators with Noise (Q3177877)
- Improved bounds about on-line learning of smooth functions of a single variable (Q3556977)
- Learning Halfspaces with Malicious Noise (Q3638067)
- Baum’s Algorithm Learns Intersections of Halfspaces with Respect to Log-Concave Distributions (Q3638906)
- (Q4230374)
- (Q4526996)
- The one-inclusion graph algorithm is near-optimal for the prediction model of learning (Q4544566)
- Surprising properties of dropout in deep networks (Q4558527)