Percy Liang
Person information

- affiliation: Stanford University, Computer Science Department
- award (2019): Presidential Early Career Award for Scientists and Engineers
2020 – today
- 2023
- [c182]Nelson F. Liu, Ananya Kumar, Percy Liang, Robin Jia:
Are Sample-Efficient NLP Models More Robust? ACL (2) 2023: 1689-1709 - [c181]Yuhui Zhang, Michihiro Yasunaga, Zhengping Zhou, Jeff Z. HaoChen, James Zou, Percy Liang, Serena Yeung:
Beyond Positive Scaling: How Negation Impacts Scaling Trends of Language Models. ACL (Findings) 2023: 7479-7498 - [c180]John Hewitt, John Thickstun, Christopher D. Manning, Percy Liang:
Backpack Language Models. ACL (1) 2023: 9103-9125 - [c179]Xiang Lisa Li, Ari Holtzman, Daniel Fried, Percy Liang, Jason Eisner, Tatsunori Hashimoto, Luke Zettlemoyer, Mike Lewis:
Contrastive Decoding: Open-ended Text Generation as Optimization. ACL (1) 2023: 12286-12312 - [c178]Nelson F. Liu, Tony Lee, Robin Jia, Percy Liang:
Do Question Answering Modeling Improvements Hold Across Benchmarks? ACL (1) 2023: 13186-13218 - [c177]Yuchen Cui, Siddharth Karamcheti, Raj Palleti, Nidhya Shivakumar, Percy Liang, Dorsa Sadigh:
No, to the Right: Online Language Corrections for Robotic Manipulation via Shared Autonomy. HRI 2023: 93-101 - [c176]Yoonho Lee, Annie S. Chen, Fahim Tajwar, Ananya Kumar, Huaxiu Yao, Percy Liang, Chelsea Finn:
Surgical Fine-Tuning Improves Adaptation to Distribution Shifts. ICLR 2023 - [c175]Shibani Santurkar, Yann Dubois, Rohan Taori, Percy Liang, Tatsunori Hashimoto:
Is a Caption Worth a Thousand Images? A Study on Representation Learning. ICLR 2023 - [c174]Steven Cao, Percy Liang, Gregory Valiant:
One-sided Matrix Completion from Two Observations Per Row. ICML 2023: 3599-3624 - [c173]Yann Dubois, Tatsunori Hashimoto, Percy Liang:
Evaluating Self-Supervised Learning via Risk Decomposition. ICML 2023: 8779-8820 - [c172]Irena Gao, Shiori Sagawa, Pang Wei Koh, Tatsunori Hashimoto, Percy Liang:
Out-of-Domain Robustness via Targeted Augmentations. ICML 2023: 10800-10834 - [c171]Shibani Santurkar, Esin Durmus, Faisal Ladhak, Cinoo Lee, Percy Liang, Tatsunori Hashimoto:
Whose Opinions Do Language Models Reflect? ICML 2023: 29971-30004 - [c170]Ying Sheng, Lianmin Zheng, Binhang Yuan, Zhuohan Li, Max Ryabinin, Beidi Chen, Percy Liang, Christopher Ré, Ion Stoica, Ce Zhang:
FlexGen: High-Throughput Generative Inference of Large Language Models with a Single GPU. ICML 2023: 31094-31116 - [c169]Jue Wang, Yucheng Lu, Binhang Yuan, Beidi Chen, Percy Liang, Christopher De Sa, Christopher Ré, Ce Zhang:
CocktailSGD: Fine-tuning Foundation Models over 500Mbps Networks. ICML 2023: 36058-36076 - [c168]Michihiro Yasunaga, Armen Aghajanyan, Weijia Shi, Richard James, Jure Leskovec, Percy Liang, Mike Lewis, Luke Zettlemoyer, Wen-Tau Yih:
Retrieval-Augmented Multimodal Language Modeling. ICML 2023: 39755-39769 - [c167]Siddharth Karamcheti, Suraj Nair, Annie S. Chen, Thomas Kollar, Chelsea Finn, Dorsa Sadigh, Percy Liang:
Language-Driven Representation Learning for Robotics. Robotics: Science and Systems 2023 - [c166]Joon Sung Park, Joseph C. O'Brien, Carrie Jun Cai, Meredith Ringel Morris, Percy Liang, Michael S. Bernstein:
Generative Agents: Interactive Simulacra of Human Behavior. UIST 2023: 2:1-2:22 - [i184]Yuchen Cui, Siddharth Karamcheti, Raj Palleti, Nidhya Shivakumar, Percy Liang, Dorsa Sadigh:
"No, to the Right" - Online Language Corrections for Robotic Manipulation via Shared Autonomy. CoRR abs/2301.02555 (2023) - [i183]Tianyi Zhang, Faisal Ladhak, Esin Durmus, Percy Liang, Kathleen R. McKeown, Tatsunori B. Hashimoto:
Benchmarking Large Language Models for News Summarization. CoRR abs/2301.13848 (2023) - [i182]Yann Dubois, Tatsunori Hashimoto, Percy Liang:
Evaluating Self-Supervised Learning via Risk Decomposition. CoRR abs/2302.03068 (2023) - [i181]Sang Michael Xie, Shibani Santurkar, Tengyu Ma, Percy Liang:
Data Selection for Language Models via Importance Resampling. CoRR abs/2302.03169 (2023) - [i180]Irena Gao, Shiori Sagawa, Pang Wei Koh, Tatsunori Hashimoto, Percy Liang:
Out-of-Domain Robustness via Targeted Augmentations. CoRR abs/2302.11861 (2023) - [i179]Siddharth Karamcheti, Suraj Nair, Annie S. Chen, Thomas Kollar, Chelsea Finn, Dorsa Sadigh, Percy Liang:
Language-Driven Representation Learning for Robotics. CoRR abs/2302.12766 (2023) - [i178]Michael Sun, Ananya Kumar, Divyam Madaan, Percy Liang:
Improving Representational Continuity via Continued Pretraining. CoRR abs/2302.13289 (2023) - [i177]Ying Sheng, Lianmin Zheng, Binhang Yuan, Zhuohan Li, Max Ryabinin, Daniel Y. Fu, Zhiqiang Xie, Beidi Chen, Clark W. Barrett, Joseph E. Gonzalez, Percy Liang, Christopher Ré, Ion Stoica, Ce Zhang:
High-throughput Generative Inference of Large Language Models with a Single GPU. CoRR abs/2303.06865 (2023) - [i176]Peter Henderson, Xuechen Li, Dan Jurafsky, Tatsunori Hashimoto, Mark A. Lemley, Percy Liang:
Foundation Models and Fair Use. CoRR abs/2303.15715 (2023) - [i175]Rishi Bommasani, Dilara Soylu, Thomas I. Liao, Kathleen A. Creel, Percy Liang:
Ecosystem Graphs: The Social Footprint of Foundation Models. CoRR abs/2303.15772 (2023) - [i174]Shibani Santurkar, Esin Durmus, Faisal Ladhak, Cinoo Lee, Percy Liang, Tatsunori Hashimoto:
Whose Opinions Do Language Models Reflect? CoRR abs/2303.17548 (2023) - [i173]Joon Sung Park, Joseph C. O'Brien, Carrie J. Cai, Meredith Ringel Morris, Percy Liang, Michael S. Bernstein:
Generative Agents: Interactive Simulacra of Human Behavior. CoRR abs/2304.03442 (2023) - [i172]Nelson F. Liu, Tianyi Zhang, Percy Liang:
Evaluating Verifiability in Generative Search Engines. CoRR abs/2304.09848 (2023) - [i171]Deepak Narayanan, Keshav Santhanam, Peter Henderson, Rishi Bommasani, Tony Lee, Percy Liang:
Cheaply Evaluating Inference Efficiency Metrics for Autoregressive Transformer APIs. CoRR abs/2305.02440 (2023) - [i170]Sang Michael Xie, Hieu Pham, Xuanyi Dong, Nan Du, Hanxiao Liu, Yifeng Lu, Percy Liang, Quoc V. Le, Tengyu Ma, Adams Wei Yu:
DoReMi: Optimizing Data Mixtures Speeds Up Language Model Pretraining. CoRR abs/2305.10429 (2023) - [i169]Qian Huang, Hongyu Ren, Peng Chen, Gregor Krzmanc, Daniel Zeng, Percy Liang, Jure Leskovec:
PRODIGY: Enabling In-context Learning Over Graphs. CoRR abs/2305.12600 (2023) - [i168]Hong Liu, Zhiyuan Li, David Hall, Percy Liang, Tengyu Ma:
Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training. CoRR abs/2305.14342 (2023) - [i167]Yann Dubois, Xuechen Li, Rohan Taori, Tianyi Zhang, Ishaan Gulrajani, Jimmy Ba, Carlos Guestrin, Percy Liang, Tatsunori B. Hashimoto:
AlpacaFarm: A Simulation Framework for Methods that Learn from Human Feedback. CoRR abs/2305.14387 (2023) - [i166]Qian Huang, Eric Zelikman, Sarah Li Chen, Yuhuai Wu, Gregory Valiant, Percy Liang:
Lexinvariant Language Models. CoRR abs/2305.16349 (2023) - [i165]John Hewitt, John Thickstun, Christopher D. Manning, Percy Liang:
Backpack Language Models. CoRR abs/2305.16765 (2023) - [i164]Yuhui Zhang, Michihiro Yasunaga, Zhengping Zhou, Jeff Z. HaoChen, James Zou, Percy Liang, Serena Yeung:
Beyond Positive Scaling: How Negation Impacts Scaling Trends of Language Models. CoRR abs/2305.17311 (2023) - [i163]Alina Beygelzimer, Yann N. Dauphin, Percy Liang, Jennifer Wortman Vaughan:
Has the Machine Learning Review Process Become More Arbitrary as the Field Has Grown? The NeurIPS 2021 Consistency Experiment. CoRR abs/2306.03262 (2023) - [i162]Steven Cao, Percy Liang, Gregory Valiant:
One-sided Matrix Completion from Two Observations Per Row. CoRR abs/2306.04049 (2023) - [i161]John Thickstun, David Hall, Chris Donahue, Percy Liang:
Anticipatory Music Transformer. CoRR abs/2306.08620 (2023) - [i160]Eric Zelikman, Qian Huang, Percy Liang, Nick Haber, Noah D. Goodman:
Just One Byte (per gradient): A Note on Low-Bandwidth Decentralized Language Model Finetuning Using Shared Randomness. CoRR abs/2306.10015 (2023) - [i159]Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang:
Lost in the Middle: How Language Models Use Long Contexts. CoRR abs/2307.03172 (2023) - [i158]Connor Toups, Rishi Bommasani, Kathleen A. Creel, Sarah H. Bana, Dan Jurafsky, Percy Liang:
Ecosystem-level Analysis of Deployed Machine Learning Reveals Homogeneous Outcomes. CoRR abs/2307.05862 (2023) - [i157]Rohith Kuditipudi, John Thickstun, Tatsunori Hashimoto, Percy Liang:
Robust Distortion-free Watermarks for Language Models. CoRR abs/2307.15593 (2023) - [i156]Scott L. Fleming, Alejandro Lozano, William J. Haberkorn, Jenelle A. Jindal, Eduardo Pontes Reis, Rahul Thapa, Louis Blankemeier, Julian Z. Genkins, Ethan Steinberg, Ashwin Nayak, Birju S. Patel, Chia-Chun Chiang, Alison Callahan, Zepeng Huo, Sergios Gatidis, Scott J. Adams, Oluseyi Fayanju, Shreya J. Shah, Thomas Savage, Ethan Goh, Akshay S. Chaudhari, Nima Aghaeepour, Christopher D. Sharp, Michael A. Pfeffer, Percy Liang, Jonathan H. Chen, Keith E. Morse, Emma P. Brunskill, Jason A. Fries, Nigam H. Shah:
MedAlign: A Clinician-Generated Dataset for Instruction Following with Electronic Medical Records. CoRR abs/2308.14089 (2023) - [i155]Michihiro Yasunaga, Xinyun Chen, Yujia Li, Panupong Pasupat, Jure Leskovec, Percy Liang, Ed H. Chi, Denny Zhou:
Large Language Models as Analogical Reasoners. CoRR abs/2310.01714 (2023) - [i154]Xiang Lisa Li, Vaishnavi Shrivastava, Siyan Li, Tatsunori Hashimoto, Percy Liang:
Benchmarking and Improving Generator-Validator Consistency of Language Models. CoRR abs/2310.01846 (2023) - [i153]Qian Huang, Jian Vora, Percy Liang, Jure Leskovec:
Benchmarking Large Language Models As AI Research Agents. CoRR abs/2310.03302 (2023) - [i152]Rishi Bommasani, Kevin Klyman, Shayne Longpre, Sayash Kapoor, Nestor Maslej, Betty Xiong, Daniel Zhang, Percy Liang:
The Foundation Model Transparency Index. CoRR abs/2310.12941 (2023) - [i151]Tony Lee, Michihiro Yasunaga, Chenlin Meng, Yifan Mai, Joon Sung Park, Agrim Gupta, Yunzhi Zhang, Deepak Narayanan, Hannah Benita Teufel, Marco Bellagente, Minguk Kang, Taesung Park, Jure Leskovec, Jun-Yan Zhu, Li Fei-Fei, Jiajun Wu, Stefano Ermon, Percy Liang:
Holistic Evaluation of Text-To-Image Models. CoRR abs/2311.04287 (2023) - [i150]Vaishnavi Shrivastava, Percy Liang, Ananya Kumar:
Llamas Know What GPTs Don't Show: Surrogate Models for Confidence Estimation. CoRR abs/2311.08877 (2023)
- 2022
- [j10]Pang Wei Koh, Jacob Steinhardt, Percy Liang:
Stronger data poisoning attacks break data sanitization defenses. Mach. Learn. 111(1): 1-47 (2022) - [j9]Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, William Fedus:
Emergent Abilities of Large Language Models. Trans. Mach. Learn. Res. 2022 (2022) - [c165]Michihiro Yasunaga, Jure Leskovec, Percy Liang:
LinkBERT: Pretraining Language Models with Document Links. ACL (1) 2022: 8003-8016 - [c164]Mina Lee, Percy Liang, Qian Yang:
CoAuthor: Designing a Human-AI Collaborative Writing Dataset for Exploring Language Model Capabilities. CHI 2022: 388:1-388:19 - [c163]John Hewitt, Christopher D. Manning, Percy Liang:
Truncation Sampling as Language Model Desmoothing. EMNLP (Findings) 2022: 3414-3427 - [c162]Xikun Zhang, Antoine Bosselut, Michihiro Yasunaga, Hongyu Ren, Percy Liang, Christopher D. Manning, Jure Leskovec:
GreaseLM: Graph REASoning Enhanced Language Models. ICLR 2022 - [c161]Ananya Kumar, Aditi Raghunathan, Robbie Matthew Jones, Tengyu Ma, Percy Liang:
Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution. ICLR 2022 - [c160]Xuechen Li, Florian Tramèr, Percy Liang, Tatsunori Hashimoto:
Large Language Models Can Be Strong Differentially Private Learners. ICLR 2022 - [c159]Shiori Sagawa, Pang Wei Koh, Tony Lee, Irena Gao, Sang Michael Xie, Kendrick Shen, Ananya Kumar, Weihua Hu, Michihiro Yasunaga, Henrik Marklund, Sara Beery, Etienne David, Ian Stavness, Wei Guo, Jure Leskovec, Kate Saenko, Tatsunori Hashimoto, Sergey Levine, Chelsea Finn, Percy Liang:
Extending the WILDS Benchmark for Unsupervised Adaptation. ICLR 2022 - [c158]Sang Michael Xie, Aditi Raghunathan, Percy Liang, Tengyu Ma:
An Explanation of In-context Learning as Implicit Bayesian Inference. ICLR 2022 - [c157]Kendrick Shen, Robbie M. Jones, Ananya Kumar, Sang Michael Xie, Jeff Z. HaoChen, Tengyu Ma, Percy Liang:
Connect, Not Collapse: Explaining Contrastive Learning for Unsupervised Domain Adaptation. ICML 2022: 19847-19878 - [c156]Chris Donahue, John Thickstun, Percy Liang:
Melody transcription via generative pre-training. ISMIR 2022: 485-492 - [c155]Shivam Garg, Dimitris Tsipras, Percy Liang, Gregory Valiant:
What Can Transformers Learn In-Context? A Case Study of Simple Function Classes. NeurIPS 2022 - [c154]Rishi Bommasani, Kathleen A. Creel, Ananya Kumar, Dan Jurafsky, Percy Liang:
Picking on the Same Person: Does Algorithmic Monoculture lead to Outcome Homogenization? NeurIPS 2022 - [c153]Yann Dubois, Stefano Ermon, Tatsunori B. Hashimoto, Percy Liang:
Improving Self-Supervised Learning by Characterizing Idealized Representations. NeurIPS 2022 - [c152]Xiang Li, John Thickstun, Ishaan Gulrajani, Percy Liang, Tatsunori B. Hashimoto:
Diffusion-LM Improves Controllable Text Generation. NeurIPS 2022 - [c151]Yuhuai Wu, Felix Li, Percy Liang:
Insights into Pre-training via Simpler Synthetic Tasks. NeurIPS 2022 - [c150]Michihiro Yasunaga, Antoine Bosselut, Hongyu Ren, Xikun Zhang, Christopher D. Manning, Percy Liang, Jure Leskovec:
Deep Bidirectional Language-Knowledge Graph Pretraining. NeurIPS 2022 - [c149]Binhang Yuan, Yongjun He, Jared Davis, Tianyi Zhang, Tri Dao, Beidi Chen, Percy Liang, Christopher Ré, Ce Zhang:
Decentralized Training of Foundation Models in Heterogeneous Environments. NeurIPS 2022 - [c148]Ananya Kumar, Tengyu Ma, Percy Liang, Aditi Raghunathan:
Calibrated ensembles can mitigate accuracy tradeoffs under distribution shift. UAI 2022: 1041-1051 - [c147]Joon Sung Park, Lindsay Popowski, Carrie J. Cai, Meredith Ringel Morris, Percy Liang, Michael S. Bernstein:
Social Simulacra: Creating Populated Prototypes for Social Computing Systems. UIST 2022: 74:1-74:18 - [i149]Mina Lee, Percy Liang, Qian Yang:
CoAuthor: Designing a Human-AI Collaborative Writing Dataset for Exploring Language Model Capabilities. CoRR abs/2201.06796 (2022) - [i148]Xikun Zhang, Antoine Bosselut, Michihiro Yasunaga, Hongyu Ren, Percy Liang, Christopher D. Manning, Jure Leskovec:
GreaseLM: Graph REASoning Enhanced Language Models for Question Answering. CoRR abs/2201.08860 (2022) - [i147]Ananya Kumar, Aditi Raghunathan, Robbie Jones, Tengyu Ma, Percy Liang:
Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution. CoRR abs/2202.10054 (2022) - [i146]Michihiro Yasunaga, Jure Leskovec, Percy Liang:
LinkBERT: Pretraining Language Models with Document Links. CoRR abs/2203.15827 (2022) - [i145]Kendrick Shen, Robbie Jones, Ananya Kumar, Sang Michael Xie, Jeff Z. HaoChen, Tengyu Ma, Percy Liang:
Connect, Not Collapse: Explaining Contrastive Learning for Unsupervised Domain Adaptation. CoRR abs/2204.00570 (2022) - [i144]Xiang Lisa Li, John Thickstun, Ishaan Gulrajani, Percy Liang, Tatsunori B. Hashimoto:
Diffusion-LM Improves Controllable Text Generation. CoRR abs/2205.14217 (2022) - [i143]Binhang Yuan, Yongjun He, Jared Quincy Davis, Tianyi Zhang, Tri Dao, Beidi Chen, Percy Liang, Christopher Ré, Ce Zhang:
Decentralized Training of Foundation Models in Heterogeneous Environments. CoRR abs/2206.01288 (2022) - [i142]Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, William Fedus:
Emergent Abilities of Large Language Models. CoRR abs/2206.07682 (2022) - [i141]Yuhuai Wu, Felix Li, Percy Liang:
Insights into Pre-training via Simpler Synthetic Tasks. CoRR abs/2206.10139 (2022) - [i140]Shibani Santurkar, Yann Dubois, Rohan Taori, Percy Liang, Tatsunori Hashimoto:
Is a Caption Worth a Thousand Images? A Controlled Study for Representation Learning. CoRR abs/2207.07635 (2022) - [i139]Ananya Kumar, Tengyu Ma, Percy Liang, Aditi Raghunathan:
Calibrated ensembles can mitigate accuracy tradeoffs under distribution shift. CoRR abs/2207.08977 (2022) - [i138]Shivam Garg, Dimitris Tsipras, Percy Liang, Gregory Valiant:
What Can Transformers Learn In-Context? A Case Study of Simple Function Classes. CoRR abs/2208.01066 (2022) - [i137]Joon Sung Park, Lindsay Popowski, Carrie J. Cai, Meredith Ringel Morris, Percy Liang, Michael S. Bernstein:
Social Simulacra: Creating Populated Prototypes for Social Computing Systems. CoRR abs/2208.04024 (2022) - [i136]Yann Dubois, Tatsunori Hashimoto, Stefano Ermon, Percy Liang:
Improving Self-Supervised Learning by Characterizing Idealized Representations. CoRR abs/2209.06235 (2022) - [i135]Nelson F. Liu, Ananya Kumar, Percy Liang, Robin Jia:
Are Sample-Efficient NLP Models More Robust? CoRR abs/2210.06456 (2022) - [i134]Michihiro Yasunaga, Antoine Bosselut, Hongyu Ren, Xikun Zhang, Christopher D. Manning, Percy Liang, Jure Leskovec:
Deep Bidirectional Language-Knowledge Graph Pretraining. CoRR abs/2210.09338 (2022) - [i133]Yoonho Lee, Annie S. Chen, Fahim Tajwar, Ananya Kumar, Huaxiu Yao, Percy Liang, Chelsea Finn:
Surgical Fine-Tuning Improves Adaptation to Distribution Shifts. CoRR abs/2210.11466 (2022) - [i132]Xiang Lisa Li, Ari Holtzman, Daniel Fried, Percy Liang, Jason Eisner, Tatsunori Hashimoto, Luke Zettlemoyer, Mike Lewis:
Contrastive Decoding: Open-ended Text Generation as Optimization. CoRR abs/2210.15097 (2022) - [i131]John Hewitt, Christopher D. Manning, Percy Liang:
Truncation Sampling as Language Model Desmoothing. CoRR abs/2210.15191 (2022) - [i130]Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, Benjamin Newman, Binhang Yuan, Bobby Yan, Ce Zhang, Christian Cosgrove, Christopher D. Manning, Christopher Ré, Diana Acosta-Navas, Drew A. Hudson, Eric Zelikman, Esin Durmus, Faisal Ladhak, Frieda Rong, Hongyu Ren, Huaxiu Yao, Jue Wang, Keshav Santhanam, Laurel J. Orr, Lucia Zheng, Mert Yüksekgönül
, Mirac Suzgun, Nathan Kim, Neel Guha, Niladri S. Chatterji, Omar Khattab, Peter Henderson, Qian Huang, Ryan Chi, Sang Michael Xie, Shibani Santurkar, Surya Ganguli, Tatsunori Hashimoto, Thomas Icard, Tianyi Zhang, Vishrav Chaudhary, William Wang
, Xuechen Li, Yifan Mai, Yuhui Zhang, Yuta Koreeda:
Holistic Evaluation of Language Models. CoRR abs/2211.09110 (2022) - [i129]Michihiro Yasunaga, Armen Aghajanyan, Weijia Shi, Rich James, Jure Leskovec, Percy Liang, Mike Lewis, Luke Zettlemoyer, Wen-tau Yih:
Retrieval-Augmented Multimodal Language Modeling. CoRR abs/2211.12561 (2022) - [i128]Charvi Rastogi, Ivan Stelmakh, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, Jennifer Wortman Vaughan, Zhenyu Xue, Hal Daumé III, Emma Pierson, Nihar B. Shah:
How do Authors' Perceptions of their Papers Compare with Co-authors' Perceptions and Peer-review Decisions? CoRR abs/2211.12966 (2022) - [i127]Rishi Bommasani, Kathleen A. Creel, Ananya Kumar, Dan Jurafsky, Percy Liang:
Picking on the Same Person: Does Algorithmic Monoculture lead to Outcome Homogenization? CoRR abs/2211.13972 (2022) - [i126]Chris Donahue, John Thickstun, Percy Liang:
Melody transcription via generative pre-training. CoRR abs/2212.01884 (2022) - [i125]Mina Lee, Megha Srivastava, Amelia Hardy, John Thickstun, Esin Durmus, Ashwin Paranjape, Ines Gerard-Ursin, Xiang Lisa Li, Faisal Ladhak, Frieda Rong, Rose E. Wang, Minae Kwon, Joon Sung Park, Hancheng Cao, Tony Lee, Rishi Bommasani, Michael S. Bernstein, Percy Liang:
Evaluating Human-Language Model Interaction. CoRR abs/2212.09746 (2022) - [i124]Rishi Bommasani, Percy Liang:
Trustworthy Social Bias Measurement. CoRR abs/2212.11672 (2022) - [i123]Omar Khattab, Keshav Santhanam, Xiang Lisa Li, David Hall, Percy Liang, Christopher Potts, Matei Zaharia:
Demonstrate-Search-Predict: Composing retrieval and language models for knowledge-intensive NLP. CoRR abs/2212.14024 (2022)
- 2021
- [c146]Xiang Lisa Li, Percy Liang:
Prefix-Tuning: Optimizing Continuous Prompts for Generation. ACL/IJCNLP (1) 2021: 4582-4597 - [c145]Siddharth Karamcheti, Megha Srivastava, Percy Liang, Dorsa Sadigh:
LILA: Language-Informed Latent Actions. CoRL 2021: 1379-1390 - [c144]John Hewitt, Kawin Ethayarajh, Percy Liang, Christopher D. Manning:
Conditional probing: measuring usable information beyond a baseline. EMNLP (1) 2021: 1626-1639 - [c143]Michihiro Yasunaga, Jure Leskovec, Percy Liang:
LM-Critic: Language Models for Unsupervised Grammatical Error Correction. EMNLP (1) 2021: 7752-7763 - [c142]Fereshte Khani, Percy Liang:
Removing Spurious Features can Hurt Accuracy and Affect Groups Disproportionately. FAccT 2021: 196-205 - [c141]Erik Jones, Shiori Sagawa, Pang Wei Koh, Ananya Kumar, Percy Liang:
Selective Classification Can Magnify Disparities Across Groups. ICLR 2021 - [c140]Sang Michael Xie, Ananya Kumar, Robbie Jones, Fereshte Khani, Tengyu Ma, Percy Liang:
In-N-Out: Pre-Training and Self-Training using Auxiliary Information for Out-of-Distribution Robustness. ICLR 2021 - [c139]Jared Quincy Davis, Albert Gu, Krzysztof Choromanski, Tri Dao, Christopher Ré, Chelsea Finn, Percy Liang:
Catformer: Designing Stable Transformers via Sensitivity Analysis. ICML 2021: 2489-2499 - [c138]Pang Wei Koh, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang, Akshay Balsubramani, Weihua Hu, Michihiro Yasunaga, Richard Lanas Phillips, Irena Gao, Tony Lee, Etienne David, Ian Stavness, Wei Guo, Berton Earnshaw, Imran S. Haque, Sara M. Beery, Jure Leskovec, Anshul Kundaje, Emma Pierson, Sergey Levine, Chelsea Finn, Percy Liang:
WILDS: A Benchmark of in-the-Wild Distribution Shifts. ICML 2021: 5637-5664 - [c137]Evan Zheran Liu, Behzad Haghgoo, Annie S. Chen, Aditi Raghunathan, Pang Wei Koh, Shiori Sagawa, Percy Liang, Chelsea Finn:
Just Train Twice: Improving Group Robustness without Training Group Information. ICML 2021: 6781-6792 - [c136]Evan Zheran Liu, Aditi Raghunathan, Percy Liang, Chelsea Finn:
Decoupling Exploration and Exploitation for Meta-Reinforcement Learning without Sacrifices. ICML 2021: 6925-6935 - [c135]John Miller, Rohan Taori, Aditi Raghunathan, Shiori Sagawa, Pang Wei Koh, Vaishaal Shankar, Percy Liang, Yair Carmon, Ludwig Schmidt:
Accuracy on the Line: on the Strong Correlation Between Out-of-Distribution and In-Distribution Generalization. ICML 2021: 7721-7735 - [c134]Sang Michael Xie, Tengyu Ma, Percy Liang:
Composed Fine-Tuning: Freezing Pre-Trained Denoising Autoencoders for Improved Generalization. ICML 2021: 11424-11435 - [c133]Michihiro Yasunaga, Percy Liang:
Break-It-Fix-It: Unsupervised Learning for Program Repair. ICML 2021: 11941-11952 - [c132]Rodrigo Castellon, Chris Donahue, Percy Liang:
Codified audio language modeling learns useful representations for music information retrieval. ISMIR 2021: 88-96 - [c131]