Daniel Hsu 0001
Daniel J. Hsu
Person information
- affiliation: Columbia University, Department of Computer Science, NY, USA
- affiliation (former): Rutgers University, NJ, USA
- affiliation (PhD 2010): University of California, San Diego, CA, USA
Other persons with the same name
- Daniel Hsu 0002 — Shanghai University, China
- Daniel Hsu 0003 — Georgia Institute of Technology, Department of Electrical and Computer Engineering, Atlanta, GA, USA
2020 – today
- 2024
- [c80]Daniel Hsu, Arya Mazumdar:
On the sample complexity of parameter estimation in logistic regression with normal design. COLT 2024: 2418-2437
- [c79]Daniel Hsu, Jizhou Huang, Brendan Juba:
Distribution-Specific Auditing for Subgroup Fairness. FORC 2024: 5:1-5:20
- [c78]Samuel Deng, Daniel Hsu:
Multi-group Learning for Hierarchical Groups. ICML 2024
- [c77]Clayton Sanford, Daniel Hsu, Matus Telgarsky:
Transformers, parallel computation, and logarithmic depth. ICML 2024
- [c76]Zixuan Wang, Stanley Wei, Daniel Hsu, Jason D. Lee:
Transformers Provably Learn Sparse Token Selection While Fully-Connected Nets Cannot. ICML 2024
- [e2]Claire Vernade, Daniel Hsu:
International Conference on Algorithmic Learning Theory, 25-28 February 2024, La Jolla, California, USA. Proceedings of Machine Learning Research 237, PMLR 2024 [contents]
- [i89]Daniel Hsu, Jizhou Huang, Brendan Juba:
Polynomial time auditing of statistical subgroup fairness for Gaussian data. CoRR abs/2401.16439 (2024)
- [i88]Samuel Deng, Daniel Hsu:
Multi-group Learning for Hierarchical Groups. CoRR abs/2402.00258 (2024)
- [i87]Clayton Sanford, Daniel Hsu, Matus Telgarsky:
Transformers, parallel computation, and logarithmic depth. CoRR abs/2402.09268 (2024)
- [i86]Eden Shaveet, Crystal Su, Daniel Hsu, Luis Gravano:
Seasonality Patterns in 311-Reported Foodborne Illness Cases and Machine Learning-Identified Indications of Foodborne Illnesses from Yelp Reviews, New York City, 2022-2023. CoRR abs/2405.06138 (2024)
- [i85]Samuel Deng, Daniel Hsu, Jingwen Liu:
Group-wise oracle-efficient algorithms for online multi-group learning. CoRR abs/2406.05287 (2024)
- [i84]Zixuan Wang, Stanley Wei, Daniel Hsu, Jason D. Lee:
Transformers Provably Learn Sparse Token Selection While Fully-Connected Nets Cannot. CoRR abs/2406.06893 (2024)
- [i83]Clayton Sanford, Daniel Hsu, Matus Telgarsky:
One-layer transformers fail to solve the induction heads task. CoRR abs/2408.14332 (2024)
- 2023
- [c75]Navid Ardeshir, Daniel J. Hsu, Clayton Hendrick Sanford:
Intrinsic dimensionality and generalization properties of the R-norm inductive bias. COLT 2023: 3264-3303
- [c74]Clayton Sanford, Daniel J. Hsu, Matus Telgarsky:
Representational Strengths and Limitations of Transformers. NeurIPS 2023
- [i82]Clayton Sanford, Daniel Hsu, Matus Telgarsky:
Representational Strengths and Limitations of Transformers. CoRR abs/2306.02896 (2023)
- [i81]Daniel Hsu, Arya Mazumdar:
On the sample complexity of estimation in logistic regression. CoRR abs/2307.04191 (2023)
- [i80]Gan Yuan, Mingyue Xu, Samory Kpotufe, Daniel Hsu:
Efficient Estimation of the Central Mean Subspace via Smoothed Gradient Outer Products. CoRR abs/2312.15469 (2023)
- 2022
- [j24]Michal Derezinski, Manfred K. Warmuth, Daniel Hsu:
Unbiased estimators for random design regression. J. Mach. Learn. Res. 23: 167:1-167:46 (2022)
- [c73]Samuel Deng, Yilin Guo, Daniel Hsu, Debmalya Mandal:
Learning Tensor Representations for Meta-Learning. AISTATS 2022: 11550-11580
- [c72]Daniel J. Hsu, Clayton Hendrick Sanford, Rocco A. Servedio, Emmanouil-Vasileios Vlatakis-Gkaragkounis:
Near-Optimal Statistical Query Lower Bounds for Agnostically Learning Intersections of Halfspaces with Gaussian Marginals. COLT 2022: 283-312
- [c71]Christopher J. Tosh, Daniel Hsu:
Simple and near-optimal algorithms for hidden stratification and multi-group learning. ICML 2022: 21633-21657
- [c70]Bingbin Liu, Daniel J. Hsu, Pradeep Ravikumar, Andrej Risteski:
Masked Prediction: A Parameter Identifiability View. NeurIPS 2022
- [i79]Samuel Deng, Yilin Guo, Daniel Hsu, Debmalya Mandal:
Learning Tensor Representations for Meta-Learning. CoRR abs/2201.07348 (2022)
- [i78]Daniel Hsu, Clayton Sanford, Rocco A. Servedio, Emmanouil V. Vlatakis-Gkaragkounis:
Near-Optimal Statistical Query Lower Bounds for Agnostically Learning Intersections of Halfspaces with Gaussian Marginals. CoRR abs/2202.05096 (2022)
- [i77]Bingbin Liu, Daniel Hsu, Pradeep Ravikumar, Andrej Risteski:
Masked prediction tasks: a parameter identifiability view. CoRR abs/2202.09305 (2022)
- [i76]Rishabh Dudeja, Daniel Hsu:
Statistical-Computational Trade-offs in Tensor PCA and Related Problems via Communication Complexity. CoRR abs/2204.07526 (2022)
- [i75]Clayton Sanford, Navid Ardeshir, Daniel Hsu:
Intrinsic dimensionality and generalization properties of the R-norm inductive bias. CoRR abs/2206.05317 (2022)
- 2021
- [j23]Rishabh Dudeja, Daniel Hsu:
Statistical Query Lower Bounds for Tensor PCA. J. Mach. Learn. Res. 22: 83:1-83:51 (2021)
- [j22]Vidya Muthukumar, Adhyyan Narang, Vignesh Subramanian, Mikhail Belkin, Daniel Hsu, Anant Sahai:
Classification vs regression in overparameterized regimes: Does the loss function matter? J. Mach. Learn. Res. 22: 222:1-222:69 (2021)
- [j21]Christopher Tosh, Akshay Krishnamurthy, Daniel Hsu:
Contrastive Estimation Reveals Topic Posterior Information to Linear Models. J. Mach. Learn. Res. 22: 281:1-281:31 (2021)
- [j20]Ji Xu, Arian Maleki, Kamiar Rahnama Rad, Daniel Hsu:
Consistent Risk Estimation in Moderately High-Dimensional Linear Regression. IEEE Trans. Inf. Theory 67(9): 5997-6030 (2021)
- [c69]Ivy Cao, Zizhou Liu, Giannis Karamanolakis, Daniel Hsu, Luis Gravano:
Quantifying the Effects of COVID-19 on Restaurant Reviews. SocialNLP@NAACL 2021: 36-60
- [c68]Daniel Hsu, Vidya Muthukumar, Ji Xu:
On the proliferation of support vectors in high dimensions. AISTATS 2021: 91-99
- [c67]Christopher Tosh, Akshay Krishnamurthy, Daniel Hsu:
Contrastive learning, multi-view redundancy, and linear models. ALT 2021: 1179-1206
- [c66]Daniel Hsu, Clayton Sanford, Rocco A. Servedio, Emmanouil V. Vlatakis-Gkaragkounis:
On the Approximation Power of Two-Layer Networks of Random ReLUs. COLT 2021: 2423-2461
- [c65]Daniel Hsu, Ziwei Ji, Matus Telgarsky, Lan Wang:
Generalization bounds via distillation. ICLR 2021
- [c64]Navid Ardeshir, Clayton Sanford, Daniel J. Hsu:
Support vector machines and linear regression coincide with very high-dimensional features. NeurIPS 2021: 4907-4918
- [c63]Max Simchowitz, Christopher Tosh, Akshay Krishnamurthy, Daniel J. Hsu, Thodoris Lykouris, Miroslav Dudík, Robert E. Schapire:
Bayesian decision-making under misspecified priors with applications to meta-learning. NeurIPS 2021: 26382-26394
- [i74]Daniel Hsu, Clayton Sanford, Rocco A. Servedio, Emmanouil V. Vlatakis-Gkaragkounis:
On the Approximation Power of Two-Layer Networks of Random ReLUs. CoRR abs/2102.02336 (2021)
- [i73]Daniel Hsu, Ziwei Ji, Matus Telgarsky, Lan Wang:
Generalization bounds via distillation. CoRR abs/2104.05641 (2021)
- [i72]Navid Ardeshir, Clayton Sanford, Daniel Hsu:
Support vector machines and linear regression coincide with very high-dimensional features. CoRR abs/2105.14084 (2021)
- [i71]Max Simchowitz, Christopher Tosh, Akshay Krishnamurthy, Daniel Hsu, Thodoris Lykouris, Miroslav Dudík, Robert E. Schapire:
Bayesian decision-making under misspecified priors with applications to meta-learning. CoRR abs/2107.01509 (2021)
- [i70]Christopher Tosh, Daniel Hsu:
Simple and near-optimal algorithms for hidden stratification and multi-group learning. CoRR abs/2112.12181 (2021)
- 2020
- [j19]Mikhail Belkin, Daniel Hsu, Ji Xu:
Two Models of Double Descent for Weak Features. SIAM J. Math. Data Sci. 2(4): 1167-1180 (2020)
- [j18]Arushi Gupta, Daniel Hsu:
Parameter identification in Markov chain choice models. Theor. Comput. Sci. 808: 99-107 (2020)
- [c62]Ziyi Liu, Giannis Karamanolakis, Daniel Hsu, Luis Gravano:
Detecting Foodborne Illness Complaints in Multiple Languages Using English Annotations Only. LOUHI@EMNLP 2020: 138-146
- [c61]Christopher Tosh, Daniel Hsu:
Diameter-based Interactive Structure Discovery. AISTATS 2020: 580-590
- [c60]Giannis Karamanolakis, Daniel Hsu, Luis Gravano:
Cross-Lingual Text Classification with Minimal Resources by Transferring a Sparse Teacher. EMNLP (Findings) 2020: 3604-3622
- [c59]Debmalya Mandal, Samuel Deng, Suman Jana, Jeannette M. Wing, Daniel J. Hsu:
Ensuring Fairness Beyond the Training Data. NeurIPS 2020
- [c58]Bo Cowgill, Fabrizio Dell'Acqua, Samuel Deng, Daniel Hsu, Nakul Verma, Augustin Chaintreau:
Biased Programmers? Or Biased Data? A Field Experiment in Operationalizing AI Ethics. EC 2020: 679-681
- [i69]Christopher Tosh, Akshay Krishnamurthy, Daniel Hsu:
Contrastive estimation reveals topic posterior information to linear models. CoRR abs/2003.02234 (2020)
- [i68]Vidya Muthukumar, Adhyyan Narang, Vignesh Subramanian, Mikhail Belkin, Daniel J. Hsu, Anant Sahai:
Classification vs regression in overparameterized regimes: Does the loss function matter? CoRR abs/2005.08054 (2020)
- [i67]Debmalya Mandal, Samuel Deng, Suman Jana, Jeannette M. Wing, Daniel Hsu:
Ensuring Fairness Beyond the Training Data. CoRR abs/2007.06029 (2020)
- [i66]Christopher Tosh, Akshay Krishnamurthy, Daniel Hsu:
Contrastive learning, multi-view redundancy, and linear models. CoRR abs/2008.10150 (2020)
- [i65]Daniel Hsu, Vidya Muthukumar, Ji Xu:
On the proliferation of support vectors in high dimensions. CoRR abs/2009.10670 (2020)
- [i64]Giannis Karamanolakis, Daniel Hsu, Luis Gravano:
Cross-Lingual Text Classification with Minimal Resources by Transferring a Sparse Teacher. CoRR abs/2010.02562 (2020)
- [i63]Ziyi Liu, Giannis Karamanolakis, Daniel Hsu, Luis Gravano:
Detecting Foodborne Illness Complaints in Multiple Languages Using English Annotations Only. CoRR abs/2010.05194 (2020)
- [i62]Bo Cowgill, Fabrizio Dell'Acqua, Samuel Deng, Daniel Hsu, Nakul Verma, Augustin Chaintreau:
Biased Programmers? Or Biased Data? A Field Experiment in Operationalizing AI Ethics. CoRR abs/2012.02394 (2020)
2010 – 2019
- 2019
- [j17]Avner May, Alireza Bagheri Garakani, Zhiyun Lu, Dong Guo, Kuan Liu, Aurélien Bellet, Linxi Fan, Michael Collins, Daniel Hsu, Brian Kingsbury, Michael Picheny, Fei Sha:
Kernel Approximation Methods for Speech Recognition. J. Mach. Learn. Res. 20: 59:1-59:36 (2019)
- [j16]Mathias Lécuyer, Riley Spahn, Kiran Vodrahalli, Roxana Geambasu, Daniel Hsu:
Privacy Accounting and Quality Control in the Sage Differentially Private ML Platform. ACM SIGOPS Oper. Syst. Rev. 53(1): 75-84 (2019)
- [c57]Giannis Karamanolakis, Daniel Hsu, Luis Gravano:
Weakly Supervised Attention Networks for Fine-Grained Opinion Mining and Public Health. W-NUT@EMNLP 2019: 1-10
- [c56]Michal Derezinski, Manfred K. Warmuth, Daniel Hsu:
Correcting the bias in least squares regression with volume-rescaled sampling. AISTATS 2019: 944-953
- [c55]Alexandr Andoni, Rishabh Dudeja, Daniel Hsu, Kiran Vodrahalli:
Attribute-efficient learning of monomials over highly-correlated variables. ALT 2019: 127-161
- [c54]Giannis Karamanolakis, Daniel Hsu, Luis Gravano:
Leveraging Just a Few Keywords for Fine-Grained Aspect Detection Through Weakly Supervised Co-Training. EMNLP/IJCNLP (1) 2019: 4610-4620
- [c53]Yucheng Chen, Matus Telgarsky, Chao Zhang, Bolton Bailey, Daniel Hsu, Jian Peng:
A Gradual, Semi-Discrete Approach to Generative Network Training via Explicit Wasserstein Minimization. ICML 2019: 1071-1080
- [c52]Sanjoy Dasgupta, Daniel Hsu, Stefanos Poulis, Xiaojin Zhu:
Teaching a black-box learner. ICML 2019: 1547-1555
- [c51]Ji Xu, Daniel J. Hsu:
On the number of variables to use in principal component regression. NeurIPS 2019: 5095-5104
- [c50]Mathias Lécuyer, Riley Spahn, Kiran Vodrahalli, Roxana Geambasu, Daniel Hsu:
Privacy accounting and quality control in the sage differentially private ML platform. SOSP 2019: 181-195
- [c49]Mathias Lécuyer, Vaggelis Atlidakis, Roxana Geambasu, Daniel Hsu, Suman Jana:
Certified Robustness to Adversarial Examples with Differential Privacy. IEEE Symposium on Security and Privacy 2019: 656-672
- [e1]Alina Beygelzimer, Daniel Hsu:
Conference on Learning Theory, COLT 2019, 25-28 June 2019, Phoenix, AZ, USA. Proceedings of Machine Learning Research 99, PMLR 2019 [contents]
- [i61]Mikhail Belkin, Daniel Hsu, Ji Xu:
Two models of double descent for weak features. CoRR abs/1903.07571 (2019)
- [i60]Ji Xu, Daniel Hsu:
How many variables should be entered in a principal component regression equation? CoRR abs/1906.01139 (2019)
- [i59]Christopher Tosh, Daniel Hsu:
Diameter-based Interactive Structure Search. CoRR abs/1906.02101 (2019)
- [i58]Kevin Shi, Daniel Hsu, Allison Bishop:
A cryptographic approach to black box adversarial machine learning. CoRR abs/1906.03231 (2019)
- [i57]Yucheng Chen, Matus Telgarsky, Chao Zhang, Bolton Bailey, Daniel Hsu, Jian Peng:
A gradual, semi-discrete approach to generative network training via explicit Wasserstein minimization. CoRR abs/1906.03471 (2019)
- [i56]Michal Derezinski, Manfred K. Warmuth, Daniel Hsu:
Unbiased estimators for random design regression. CoRR abs/1907.03411 (2019)
- [i55]Giannis Karamanolakis, Daniel Hsu, Luis Gravano:
Leveraging Just a Few Keywords for Fine-Grained Aspect Detection Through Weakly Supervised Co-Training. CoRR abs/1909.00415 (2019)
- [i54]Mathias Lécuyer, Riley Spahn, Kiran Vodrahalli, Roxana Geambasu, Daniel Hsu:
Privacy Accounting and Quality Control in the Sage Differentially Private ML Platform. CoRR abs/1909.01502 (2019)
- [i53]Giannis Karamanolakis, Daniel Hsu, Luis Gravano:
Weakly Supervised Attention Networks for Fine-Grained Opinion Mining and Public Health. CoRR abs/1910.00054 (2019)
- 2018
- [j15]Thomas Effland, Anna Lawson, Sharon Balter, Katelynn Devinney, Vasudha Reddy, HaeNa Waechter, Luis Gravano, Daniel Hsu:
Discovering foodborne illness in online restaurant reviews. J. Am. Medical Informatics Assoc. 25(12): 1586-1592 (2018)
- [c48]Rishabh Dudeja, Daniel Hsu:
Learning Single-Index Models in Gaussian Space. COLT 2018: 1887-1930
- [c47]Mikhail Belkin, Daniel J. Hsu, Partha Mitra:
Overfitting or perfect fitting? Risk bounds for classification and regression rules that interpolate. NeurIPS 2018: 2306-2317
- [c46]Michal Derezinski, Manfred K. Warmuth, Daniel J. Hsu:
Leveraged volume sampling for linear regression. NeurIPS 2018: 2510-2519
- [c45]Ji Xu, Daniel J. Hsu, Arian Maleki:
Benefits of over-parameterization with EM. NeurIPS 2018: 10685-10695
- [i52]Arushi Gupta, José Manuel Zorrilla Matilla, Daniel Hsu, Zoltán Haiman:
Non-Gaussian information from weak lensing data via deep learning. CoRR abs/1802.01212 (2018)
- [i51]Mathias Lécuyer, Vaggelis Atlidakis, Roxana Geambasu, Daniel Hsu, Suman Jana:
On the Connection between Differential Privacy and Adversarial Robustness in Machine Learning. CoRR abs/1802.03471 (2018)
- [i50]Michal Derezinski, Manfred K. Warmuth, Daniel Hsu:
Tail bounds for volume sampled linear regression. CoRR abs/1802.06749 (2018)
- [i49]Mikhail Belkin, Daniel Hsu, Partha Mitra:
Overfitting or perfect fitting? Risk bounds for classification and regression rules that interpolate. CoRR abs/1806.05161 (2018)
- [i48]Michal Derezinski, Manfred K. Warmuth, Daniel Hsu:
Correcting the bias in least squares regression with volume-rescaled sampling. CoRR abs/1810.02453 (2018)
- [i47]Ji Xu, Daniel Hsu, Arian Maleki:
Benefits of over-parameterization with EM. CoRR abs/1810.11344 (2018)
- [i46]Mikhail Belkin, Daniel Hsu, Siyuan Ma, Soumik Mandal:
Reconciling modern machine learning and the bias-variance trade-off. CoRR abs/1812.11118 (2018)
- 2017
- [j14]Cun Mu, Daniel J. Hsu, Donald Goldfarb:
Greedy Approaches to Symmetric Orthogonal Tensor Decomposition. SIAM J. Matrix Anal. Appl. 38(4): 1210-1226 (2017)
- [c44]Arushi Gupta, Daniel Hsu:
Parameter identification in Markov chain choice models. ALT 2017: 330-340
- [c43]Alexandr Andoni, Daniel J. Hsu, Kevin Shi, Xiaorui Sun:
Correspondence retrieval. COLT 2017: 105-126
- [c42]Florian Tramèr, Vaggelis Atlidakis, Roxana Geambasu, Daniel J. Hsu, Jean-Pierre Hubaux, Mathias Humbert, Ari Juels, Huang Lin:
FairTest: Discovering Unwarranted Associations in Data-Driven Applications. EuroS&P 2017: 401-416
- [c41]Daniel J. Hsu, Kevin Shi, Xiaorui Sun:
Linear regression without correspondence. NIPS 2017: 1531-1540
- [i45]Avner May, Alireza Bagheri Garakani, Zhiyun Lu, Dong Guo, Kuan Liu, Aurélien Bellet, Linxi Fan, Michael Collins, Daniel J. Hsu, Brian Kingsbury, Michael Picheny, Fei Sha:
Kernel Approximation Methods for Speech Recognition. CoRR abs/1701.03577 (2017)
- [i44]Daniel J. Hsu, Kevin Shi, Xiaorui Sun:
Linear regression without correspondence. CoRR abs/1705.07048 (2017)
- [i43]Cun Mu, Daniel J. Hsu, Donald Goldfarb:
Successive Rank-One Approximations for Nearly Orthogonally Decomposable Symmetric Tensors. CoRR abs/1705.10404 (2017)
- [i42]Arushi Gupta, Daniel Hsu:
Parameter identification in Markov chain choice models. CoRR abs/1706.00729 (2017)
- [i41]Cun Mu, Daniel J. Hsu, Donald Goldfarb:
Greedy Approaches to Symmetric Orthogonal Tensor Decomposition. CoRR abs/1706.01169 (2017)
- [i40]Alexandr Andoni, Javad Ghaderi, Daniel J. Hsu, Dan Rubenstein, Omri Weinstein:
Coding with asymmetric prior knowledge. CoRR abs/1707.04875 (2017)
- [i39]Daniel J. Hsu, Aryeh Kontorovich, David A. Levin, Yuval Peres, Csaba Szepesvári:
Mixing time estimation in reversible Markov chains from a single sample path. CoRR abs/1708.07367 (2017)
- 2016
- [j13]Haris Aziz, Elias Bareinboim, Yejin Choi, Daniel J. Hsu, Shivaram Kalyanakrishnan, Reshef Meir, Suchi Saria, Gerardo I. Simari, Lirong Xia, William Yeoh:
AI's 10 to Watch. IEEE Intell. Syst. 31(1): 56-66 (2016)
- [j12]Daniel J. Hsu, Sivan Sabato:
Loss Minimization and Parameter Estimation with Heavy Tails. J. Mach. Learn. Res. 17: 18:1-18:40 (2016)
- [j11]Karl Stratos, Michael Collins, Daniel J. Hsu:
Unsupervised Part-Of-Speech Tagging with Anchor Hidden Markov Models. Trans. Assoc. Comput. Linguistics 4: 245-257 (2016)
- [c40]Avner May, Michael Collins, Daniel J. Hsu, Brian Kingsbury:
Compact kernel models for acoustic modeling via random feature selection. ICASSP 2016: 2424-2428
- [c39]Ji Xu, Daniel J. Hsu, Arian Maleki:
Global Analysis of Expectation Maximization for Mixtures of Two Gaussians. NIPS 2016: 2676-2684
- [c38]Alina Beygelzimer, Daniel J. Hsu, John Langford, Chicheng Zhang:
Search Improves Label for Active Learning. NIPS 2016: 3342-3350
- [i38]Alina Beygelzimer, Daniel J. Hsu, John Langford, Chicheng Zhang:
Search Improves Label for Active Learning. CoRR abs/1602.07265 (2016)
- [i37]Daniel J. Hsu, Matus Telgarsky:
Greedy bi-criteria approximations for k-medians and k-means. CoRR abs/1607.06203 (2016)
- [i36]Ji Xu, Daniel J. Hsu, Arian Maleki:
Global analysis of Expectation Maximization for mixtures of two Gaussians. CoRR abs/1608.07630 (2016)
- 2015
- [j10]Anima Anandkumar, Dean P. Foster, Daniel J. Hsu, Sham M. Kakade, Yi-Kai Liu:
A Spectral Algorithm for Latent Dirichlet Allocation. Algorithmica 72(1): 193-214 (2015)
- [j9]Sivan Sabato, Shai Shalev-Shwartz, Nathan Srebro, Daniel J. Hsu, Tong Zhang:
Learning sparse low-threshold linear classifiers. J. Mach. Learn. Res. 16: 1275-1304 (2015)
- [j8]Animashree Anandkumar, Daniel J. Hsu, Majid Janzamin, Sham M. Kakade:
When are overcomplete topic models identifiable? Uniqueness of tensor Tucker decompositions with structured sparsity. J. Mach. Learn. Res. 16: 2643-2694 (2015)
- [j7]Cun Mu, Daniel J. Hsu, Donald Goldfarb:
Successive Rank-One Approximations for Nearly Orthogonally Decomposable Symmetric Tensors. SIAM J. Matrix Anal. Appl. 36(4): 1638-1659 (2015)
- [c37]Karl Stratos, Michael Collins, Daniel J. Hsu:
Model-based Word Embeddings from Decompositions of Count Matrices. ACL (1) 2015: 1282-1291
- [c36]Anima Anandkumar, Rong Ge, Daniel J. Hsu, Sham M. Kakade, Matus Telgarsky:
Tensor Decompositions for Learning Latent Variable Models (A Survey for ALT). ALT 2015: 19-38
- [c35]Mathias Lécuyer, Riley Spahn, Yannis Spiliopolous, Augustin Chaintreau, Roxana Geambasu, Daniel J. Hsu:
Sunlight: Fine-grained Targeting Detection at Scale with Statistical Confidence. CCS 2015: 554-566
- [c34]Daniel J. Hsu, Aryeh Kontorovich, Csaba Szepesvári:
Mixing Time Estimation in Reversible Markov Chains from a Single Sample Path. NIPS 2015: 1459-1467
- [c33]Tzu-Kuo Huang, Alekh Agarwal, Daniel J. Hsu, John Langford, Robert E. Schapire:
Efficient and Parsimonious Agnostic Active Learning. NIPS 2015: 2755-2763
- [c32]Y. Cem Sübakan, Johannes Traa, Paris Smaragdis, Daniel J. Hsu:
Method of moments learning for left-to-right Hidden Markov models. WASPAA 2015: 1-5
- [i35]Daniel J. Hsu, Aryeh Kontorovich, Csaba Szepesvári:
Mixing Time Estimation in Reversible Markov Chains from a Single Sample Path. CoRR abs/1506.02903 (2015)
- [i34]Tzu-Kuo Huang, Alekh Agarwal, Daniel J. Hsu, John Langford, Robert E. Schapire:
Efficient and Parsimonious Agnostic Active Learning. CoRR abs/1506.08669 (2015)
- [i33]Florian Tramèr, Vaggelis Atlidakis, Roxana Geambasu, Daniel J. Hsu, Jean-Pierre Hubaux, Mathias Humbert, Ari Juels, Huang Lin:
Discovering Unwarranted Associations in Data-Driven Applications with the FairTest Testing Toolkit. CoRR abs/1510.02377 (2015)
- 2014
- [j6]