26th COLT 2013: Princeton University, NJ, USA
- Shai Shalev-Shwartz, Ingo Steinwart:
COLT 2013 - The 26th Annual Conference on Learning Theory, June 12-14, 2013, Princeton University, NJ, USA. JMLR Workshop and Conference Proceedings 30, JMLR.org 2013
Preface
- Preface. 1-2
Regular Papers
- Ohad Shamir:
On the Complexity of Bandit and Derivative-Free Stochastic Convex Optimization. 3-24
- Yining Wang, Liwei Wang, Yuanzhi Li, Di He, Tie-Yan Liu:
A Theoretical Analysis of NDCG Type Ranking Measures. 25-54
- Massimiliano Pontil, Andreas Maurer:
Excess risk bounds for multitask learning with trace norm regularization. 55-76
- Roi Livni, Pierre Simon:
Honest Compressions and Their Application to Compression Schemes. 77-92
- Amit Daniely, Tom Helbertal:
The price of bandit information in multiclass online classification. 93-104
- Stanislav Minsker:
Estimation of Extreme Values and Associated Level Sets of a Regression Function via Selective Sampling. 105-121
- Sébastien Bubeck, Vianney Perchet, Philippe Rigollet:
Bounded regret in stochastic multi-armed bandits. 122-134
- Lijun Zhang, Mehrdad Mahdavi, Rong Jin, Tianbao Yang, Shenghuo Zhu:
Recovering the Optimal Solution by Dual Random Projection. 135-157
- Andrey Bernstein, Shie Mannor, Nahum Shimkin:
Opportunistic Strategies for Generalized No-Regret Problems. 158-171
- Francis R. Bach:
Sharp analysis of low-rank kernel matrix approximations. 185-209
- Chao-Kai Chiang, Chia-Jung Lee, Chi-Jen Lu:
Beating Bandits in Gradually Evolving Worlds. 210-227
- Emilie Kaufmann, Shivaram Kalyanakrishnan:
Information Complexity in Bandit Subset Selection. 228-251
- Mehrdad Mahdavi, Rong Jin:
Passive Learning with Target Risk. 252-269
- Mikhail Belkin, Luis Rademacher, James R. Voss:
Blind Signal Separation in the Presence of Gaussian Noise. 270-287
- Maria-Florina Balcan, Philip M. Long:
Active and passive learning of linear separators under log-concave distributions. 288-316
- Sanjoy Dasgupta, Kaushik Sinha:
Randomized partition trees for exact nearest neighbor search. 317-337
- Shivani Agarwal:
Surrogate Regret Bounds for the Area Under the ROC Curve via Strongly Proper Losses. 338-353
- Moritz Hardt, Ankur Moitra:
Algorithms and Hardness for Robust Subspace Recovery. 354-375
- Ruth Urner, Sharon Wulff, Shai Ben-David:
PLAL: Cluster-based active learning. 376-397
- Pranjal Awasthi, Vitaly Feldman, Varun Kanade:
Learning Using Local Membership Queries. 398-431
- Marcus Hutter:
Sparse Adaptive Dirichlet-Multinomial-like Processes. 432-459
- Luc Devroye, Gábor Lugosi, Gergely Neu:
Prediction by random-walk perturbation. 460-473
- Vianney Perchet, Shie Mannor:
Approachability, fast and slow. 474-488
- Clayton Scott, Gilles Blanchard, Gregory Handy:
Classification with Asymmetric Label Noise: Consistency and Maximal Denoising. 489-511
- Cheng Li, Wenxin Jiang, Martin A. Tanner:
General Oracle Inequalities for Gibbs Posterior with Application to Ranking. 512-521
- Daniel M. Kane, Adam R. Klivans, Raghu Meka:
Learning Halfspaces Under Log-Concave Densities: Polynomial Approximations and Moment Matching. 522-545
- David P. Woodruff, Qin Zhang:
Subspace Embeddings and \(\ell_p\)-Regression Using Exponential Random Variables. 546-567
- Robert A. Vandermeulen, Clayton D. Scott:
Consistency of Robust Kernel Density Estimators. 568-591
- Yuchen Zhang, John C. Duchi, Martin J. Wainwright:
Divide and Conquer Kernel Ridge Regression. 592-617
- Eyal Gofer, Nicolò Cesa-Bianchi, Claudio Gentile, Yishay Mansour:
Regret Minimization for Branching Experts. 618-638
- Peter L. Bartlett, Peter Grünwald, Peter Harremoës, Fares Hedayati, Wojciech Kotlowski:
Horizon-Independent Optimal Prediction with Log-Loss in Exponential Families. 639-661
- Claudio Gentile, Mark Herbster, Stephen Pasteris:
Online Similarity Prediction of Networked Data from Known and Unknown Graphs. 662-695
- Gábor Bartók:
A near-optimal algorithm for finite partial-monitoring games against adversarial opponents. 696-710
- Vitaly Feldman, Pravesh Kothari, Jan Vondrák:
Representation, Approximation and Learning of Submodular Functions Using Low-rank Decision Trees. 711-740
- Lachlan L. H. Andrew, Siddharth Barman, Katrina Ligett, Minghong Lin, Adam Meyerson, Alan Roytman, Adam Wierman:
A Tale of Two Metrics: Simultaneous Bounds on Competitiveness and Regret. 741-763
- Jayadev Acharya, Ashkan Jafarpour, Alon Orlitsky, Ananda Theertha Suresh:
Optimal Probability Estimation with Applications to Prediction and Classification. 764-796
- Sung-Soon Choi:
Polynomial Time Optimal Query Algorithms for Finding Graphs with Arbitrary Real Weights. 797-818
- Abhradeep Thakurta, Adam D. Smith:
Differentially Private Feature Selection via Stability Arguments, and the Robustness of the Lasso. 819-850
- Wouter M. Koolen, Jiazhong Nie, Manfred K. Warmuth:
Learning a set of directions. 851-866
- Animashree Anandkumar, Rong Ge, Daniel J. Hsu, Sham M. Kakade:
A Tensor Spectral Approach to Learning Mixed Membership Community Models. 867-881
- Ittai Abraham, Omar Alonso, Vasilis Kandylas, Aleksandrs Slivkins:
Adaptive Crowdsourcing Algorithms for the Bandit Survey Problem. 882-910
- Matus Telgarsky:
Boosting with the Logistic Loss is Consistent. 911-965
- Wei Han, Alexander Rakhlin, Karthik Sridharan:
Competing With Strategies. 966-992
- Alexander Rakhlin, Karthik Sridharan:
Online Learning with Predictable Sequences. 993-1019
- Joseph Anderson, Navin Goyal, Luis Rademacher:
Efficient Learning of Simplices. 1020-1045
- Quentin Berthet, Philippe Rigollet:
Complexity Theoretic Lower Bounds for Sparse Principal Component Detection. 1046-1066
Open Problems
- Yevgeny Seldin, Koby Crammer, Peter L. Bartlett:
Open Problem: Adversarial Multiarmed Bandits with Limited Advice. 1067-1072
- Tomer Koren:
Open Problem: Fast Stochastic Exp-Concave Optimization. 1073-1075
- Jiazhong Nie, Manfred K. Warmuth, S. V. N. Vishwanathan, Xinhua Zhang:
Open Problem: Lower bounds for Boosting with Hadamard Matrices. 1076-1079