


Luke Zettlemoyer
(also published as: Luke S. Zettlemoyer)

Person information

- affiliation: University of Washington, School of Computer Science & Engineering, Seattle, WA, USA
- award (2016): Presidential Early Career Award for Scientists and Engineers
2020 – today
- 2023
- [i162] Hu Xu, Saining Xie, Po-Yao Huang, Licheng Yu, Russell Howes, Gargi Ghosh, Luke Zettlemoyer, Christoph Feichtenhofer: CiT: Curation in Training for Effective Vision-Language Data. CoRR abs/2301.02241 (2023)
- [i161] Armen Aghajanyan, Lili Yu, Alexis Conneau, Wei-Ning Hsu, Karen Hambardzumyan, Susan Zhang, Stephen Roller, Naman Goyal, Omer Levy, Luke Zettlemoyer: Scaling Laws for Generative Mixed-Modal Language Models. CoRR abs/2301.03728 (2023)
- [i160] Davis Liang, Hila Gonen, Yuning Mao, Rui Hou, Naman Goyal, Marjan Ghazvininejad, Luke Zettlemoyer, Madian Khabsa: XLM-V: Overcoming the Vocabulary Bottleneck in Multilingual Masked Language Models. CoRR abs/2301.10472 (2023)
- [i159] Weijia Shi, Sewon Min, Michihiro Yasunaga, Minjoon Seo, Rich James, Mike Lewis, Luke Zettlemoyer, Wen-tau Yih: REPLUG: Retrieval-Augmented Black-Box Language Models. CoRR abs/2301.12652 (2023)
- [i158] Yu Meng, Jitin Krishnan, Sinong Wang, Qifan Wang, Yuning Mao, Han Fang, Marjan Ghazvininejad, Jiawei Han, Luke Zettlemoyer: Representation Deficiency in Masked Language Modeling. CoRR abs/2302.02060 (2023)
- [i157] Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, Thomas Scialom: Toolformer: Language Models Can Teach Themselves to Use Tools. CoRR abs/2302.04761 (2023)
- [i156] Marjan Ghazvininejad, Hila Gonen, Luke Zettlemoyer: Dictionary-based Phrase-level Prompting of Large Language Models for Machine Translation. CoRR abs/2302.07856 (2023)
- [i155] Bhargavi Paranjape, Scott M. Lundberg, Sameer Singh, Hannaneh Hajishirzi, Luke Zettlemoyer, Marco Túlio Ribeiro: ART: Automatic multi-step reasoning and tool-use for large language models. CoRR abs/2303.09014 (2023)
- [i154] Suchin Gururangan, Margaret Li, Mike Lewis, Weijia Shi, Tim Althoff, Noah A. Smith, Luke Zettlemoyer: Scaling Expert Language Models with Unsupervised Domain Discovery. CoRR abs/2303.14177 (2023)
- [i153] Mitchell Wortsman, Tim Dettmers, Luke Zettlemoyer, Ari Morcos, Ali Farhadi, Ludwig Schmidt: Stable and low-precision training for large-scale vision-language models. CoRR abs/2304.13013 (2023)
- [i152] Haoqiang Kang, Terra Blevins, Luke Zettlemoyer: Translate to Disambiguate: Zero-shot Multilingual Word Sense Disambiguation with Pretrained Language Models. CoRR abs/2304.13803 (2023)
- [i151] Lili Yu, Daniel Simig, Colin Flaherty, Armen Aghajanyan, Luke Zettlemoyer, Mike Lewis: MEGABYTE: Predicting Million-byte Sequences with Multiscale Transformers. CoRR abs/2305.07185 (2023)
- [i150] Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, Susan Zhang, Gargi Ghosh, Mike Lewis, Luke Zettlemoyer, Omer Levy: LIMA: Less Is More for Alignment. CoRR abs/2305.11206 (2023)
- 2022
- [j10] Nicola De Cao, Ledell Wu, Kashyap Popat, Mikel Artetxe, Naman Goyal, Mikhail Plekhanov, Luke Zettlemoyer, Nicola Cancedda, Sebastian Riedel, Fabio Petroni: Multilingual Autoregressive Entity Linking. Trans. Assoc. Comput. Linguistics 10: 274-290 (2022)
- [c169] Robin Jia, Mike Lewis, Luke Zettlemoyer: Question Answering Infused Pre-training of General-Purpose Contextualized Representations. ACL (Findings) 2022: 711-728
- [c168] Rabeeh Karimi Mahabadi, Luke Zettlemoyer, James Henderson, Lambert Mathias, Marzieh Saeidi, Veselin Stoyanov, Majid Yazdani: Prompt-free and Efficient Few-shot Learning with Language Models. ACL (1) 2022: 3638-3652
- [c167] Jungsoo Park, Sewon Min, Jaewoo Kang, Luke Zettlemoyer, Hannaneh Hajishirzi: FaVIQ: FAct Verification from Information-seeking Questions. ACL (1) 2022: 5154-5166
- [c166] Sewon Min, Mike Lewis, Hannaneh Hajishirzi, Luke Zettlemoyer: Noisy Channel Language Model Prompting for Few-Shot Text Classification. ACL (1) 2022: 5316-5330
- [c165] Tianbao Xie, Chen Henry Wu, Peng Shi, Ruiqi Zhong, Torsten Scholak, Michihiro Yasunaga, Chien-Sheng Wu, Ming Zhong, Pengcheng Yin, Sida I. Wang, Victor Zhong, Bailin Wang, Chengzu Li, Connor Boyle, Ansong Ni, Ziyu Yao, Dragomir Radev, Caiming Xiong, Lingpeng Kong, Rui Zhang, Noah A. Smith, Luke Zettlemoyer, Tao Yu: UnifiedSKG: Unifying and Multi-Tasking Structured Knowledge Grounding with Text-to-Text Language Models. EMNLP 2022: 602-631
- [c164] Machel Reid, Victor Zhong, Suchin Gururangan, Luke Zettlemoyer: M2D2: A Massively Multi-Domain Language Modeling Dataset. EMNLP 2022: 964-975
- [c163] Suchin Gururangan, Dallas Card, Sarah K. Dreier, Emily K. Gade, Leroy Z. Wang, Zeyu Wang, Luke Zettlemoyer, Noah A. Smith: Whose Language Counts as High Quality? Measuring Language Ideologies in Text Data Selection. EMNLP 2022: 2562-2580
- [c162] Tanay Dixit, Bhargavi Paranjape, Hannaneh Hajishirzi, Luke Zettlemoyer: CORE: A Retrieve-then-Edit Framework for Counterfactual Data Generation. EMNLP (Findings) 2022: 2964-2984
- [c161] Weijia Shi, Julian Michael, Suchin Gururangan, Luke Zettlemoyer: Nearest Neighbor Zero-Shot Inference. EMNLP 2022: 3254-3265
- [c160] Freda Shi, Daniel Fried, Marjan Ghazvininejad, Luke Zettlemoyer, Sida I. Wang: Natural Language to Code Translation with Execution. EMNLP 2022: 3533-3546
- [c159] Terra Blevins, Luke Zettlemoyer: Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models. EMNLP 2022: 3563-3574
- [c158] Terra Blevins, Hila Gonen, Luke Zettlemoyer: Analyzing the Mono- and Cross-Lingual Pretraining Dynamics of Multilingual Language Models. EMNLP 2022: 3575-3590
- [c157] Devendra Singh Sachan, Mike Lewis, Mandar Joshi, Armen Aghajanyan, Wen-tau Yih, Joelle Pineau, Luke Zettlemoyer: Improving Passage Retrieval with Zero-Shot Question Generation. EMNLP 2022: 3781-3797
- [c156] Mikel Artetxe, Jingfei Du, Naman Goyal, Luke Zettlemoyer, Veselin Stoyanov: On the Role of Bidirectionality in Language Model Pre-Training. EMNLP (Findings) 2022: 3973-3985
- [c155] Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona T. Diab, Veselin Stoyanov, Xian Li: Few-shot Learning with Multilingual Generative Language Models. EMNLP 2022: 9019-9052
- [c154] Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, Luke Zettlemoyer: Rethinking the Role of Demonstrations: What Makes In-Context Learning Work? EMNLP 2022: 11048-11064
- [c153] Mikel Artetxe, Shruti Bhosale, Naman Goyal, Todor Mihaylov, Myle Ott, Sam Shleifer, Xi Victoria Lin, Jingfei Du, Srinivasan Iyer, Ramakanth Pasunuru, Giridharan Anantharaman, Xian Li, Shuohui Chen, Halil Akin, Mandeep Baines, Louis Martin, Xing Zhou, Punit Singh Koura, Brian O'Horo, Jeffrey Wang, Luke Zettlemoyer, Mona T. Diab, Zornitsa Kozareva, Veselin Stoyanov: Efficient Large Scale Language Modeling with Mixtures of Experts. EMNLP 2022: 11699-11732
- [c152] Armen Aghajanyan, Dmytro Okhonko, Mike Lewis, Mandar Joshi, Hu Xu, Gargi Ghosh, Luke Zettlemoyer: HTLM: Hyper-Text Pre-Training and Prompting of Language Models. ICLR 2022
- [c151] Tim Dettmers, Mike Lewis, Sam Shleifer, Luke Zettlemoyer: 8-bit Optimizers via Block-wise Quantization. ICLR 2022
- [c150] Eleftheria Briakou, Sida I. Wang, Luke Zettlemoyer, Marjan Ghazvininejad: BitextEdit: Automatic Bitext Editing for Improved Low-Resource Machine Translation. NAACL-HLT (Findings) 2022: 1469-1485
- [c149] Sewon Min, Mike Lewis, Luke Zettlemoyer, Hannaneh Hajishirzi: MetaICL: Learning to Learn In Context. NAACL-HLT 2022: 2791-2809
- [c148] Belinda Z. Li, Jane A. Yu, Madian Khabsa, Luke Zettlemoyer, Alon Y. Halevy, Jacob Andreas: Quantifying Adaptability in Pre-trained Language Models with 500 Tasks. NAACL-HLT 2022: 4696-4715
- [c147] Suchin Gururangan, Mike Lewis, Ari Holtzman, Noah A. Smith, Luke Zettlemoyer: DEMix Layers: Disentangling Domains for Modular Language Modeling. NAACL-HLT 2022: 5557-5576
- [c146] Tim Dettmers, Mike Lewis, Younes Belkada, Luke Zettlemoyer: GPT3.int8(): 8-bit Matrix Multiplication for Transformers at Scale. NeurIPS 2022
- [c145] Kushal Tirumala, Aram H. Markosyan, Luke Zettlemoyer, Armen Aghajanyan: Memorization Without Overfitting: Analyzing the Training Dynamics of Large Language Models. NeurIPS 2022
- [c144] Victor Zhong, Jesse Mu, Luke Zettlemoyer, Edward Grefenstette, Tim Rocktäschel: Improving Policy Learning via Language Dynamics Distillation. NeurIPS 2022
- [c143] Paden Tomasello, Akshat Shrivastava, Daniel Lazar, Po-Chun Hsu, Duc Le, Adithya Sagar, Ali Elkahky, Jade Copet, Wei-Ning Hsu, Yossi Adi, Robin Algayres, Tu Anh Nguyen, Emmanuel Dupoux, Luke Zettlemoyer, Abdelrahman Mohamed: Stop: A Dataset for Spoken Task Oriented Semantic Parsing. SLT 2022: 991-998
- [i149] Tianbao Xie, Chen Henry Wu, Peng Shi, Ruiqi Zhong, Torsten Scholak, Michihiro Yasunaga, Chien-Sheng Wu, Ming Zhong, Pengcheng Yin, Sida I. Wang, Victor Zhong, Bailin Wang, Chengzu Li, Connor Boyle, Ansong Ni, Ziyu Yao, Dragomir R. Radev, Caiming Xiong, Lingpeng Kong, Rui Zhang, Noah A. Smith, Luke Zettlemoyer, Tao Yu: UnifiedSKG: Unifying and Multi-Tasking Structured Knowledge Grounding with Text-to-Text Language Models. CoRR abs/2201.05966 (2022)
- [i148] Armen Aghajanyan, Bernie Huang, Candace Ross, Vladimir Karpukhin, Hu Xu, Naman Goyal, Dmytro Okhonko, Mandar Joshi, Gargi Ghosh, Mike Lewis, Luke Zettlemoyer: CM3: A Causal Masked Multimodal Model of the Internet. CoRR abs/2201.07520 (2022)
- [i147] Suchin Gururangan, Dallas Card, Sarah K. Dreier, Emily K. Gade, Leroy Z. Wang, Zeyu Wang, Luke Zettlemoyer, Noah A. Smith: Whose Language Counts as High Quality? Measuring Language Ideologies in Text Data Selection. CoRR abs/2201.10474 (2022)
- [i146] Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, Luke Zettlemoyer: Rethinking the Role of Demonstrations: What Makes In-Context Learning Work? CoRR abs/2202.12837 (2022)
- [i145] Rabeeh Karimi Mahabadi, Luke Zettlemoyer, James Henderson, Marzieh Saeidi, Lambert Mathias, Veselin Stoyanov, Majid Yazdani: PERFECT: Prompt-free and Efficient Few-shot Learning with Language Models. CoRR abs/2204.01172 (2022)
- [i144] Daniel Fried, Armen Aghajanyan, Jessy Lin, Sida Wang, Eric Wallace, Freda Shi, Ruiqi Zhong, Wen-tau Yih, Luke Zettlemoyer, Mike Lewis: InCoder: A Generative Model for Code Infilling and Synthesis. CoRR abs/2204.05999 (2022)
- [i143] Devendra Singh Sachan, Mike Lewis, Mandar Joshi, Armen Aghajanyan, Wen-tau Yih, Joelle Pineau, Luke Zettlemoyer: Improving Passage Retrieval with Zero-Shot Question Generation. CoRR abs/2204.07496 (2022)
- [i142] Terra Blevins, Luke Zettlemoyer: Language Contamination Explains the Cross-lingual Capabilities of English Pretrained Models. CoRR abs/2204.08110 (2022)
- [i141] Freda Shi, Daniel Fried, Marjan Ghazvininejad, Luke Zettlemoyer, Sida I. Wang: Natural Language to Code Translation with Execution. CoRR abs/2204.11454 (2022)
- [i140] Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona T. Diab, Xian Li, Xi Victoria Lin, Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, Luke Zettlemoyer: OPT: Open Pre-trained Transformer Language Models. CoRR abs/2205.01068 (2022)
- [i139] Mandar Joshi, Terra Blevins, Mike Lewis, Daniel S. Weld, Luke Zettlemoyer: Few-shot Mining of Naturally Occurring Inputs and Outputs. CoRR abs/2205.04050 (2022)
- [i138] Kushal Tirumala, Aram H. Markosyan, Luke Zettlemoyer, Armen Aghajanyan: Memorization Without Overfitting: Analyzing the Training Dynamics of Large Language Models. CoRR abs/2205.10770 (2022)
- [i137] Mikel Artetxe, Jingfei Du, Naman Goyal, Luke Zettlemoyer, Ves Stoyanov: On the Role of Bidirectionality in Language Model Pre-Training. CoRR abs/2205.11726 (2022)
- [i136] Terra Blevins, Hila Gonen, Luke Zettlemoyer: Analyzing the Mono- and Cross-Lingual Pretraining Dynamics of Multilingual Language Models. CoRR abs/2205.11758 (2022)
- [i135] Suzanna Sia, Anton Belyy, Amjad Almahairi, Madian Khabsa, Luke Zettlemoyer, Lambert Mathias: Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI. CoRR abs/2205.12469 (2022)
- [i134] Weijia Shi, Julian Michael, Suchin Gururangan, Luke Zettlemoyer: Nearest Neighbor Zero-Shot Inference. CoRR abs/2205.13792 (2022)
- [i133] Siddharth Dalmia, Dmytro Okhonko, Mike Lewis, Sergey Edunov, Shinji Watanabe, Florian Metze, Luke Zettlemoyer, Abdelrahman Mohamed: LegoNN: Building Modular Encoder-Decoder Models. CoRR abs/2206.03318 (2022)
- [i132] Devendra Singh Sachan, Mike Lewis, Dani Yogatama, Luke Zettlemoyer, Joelle Pineau, Manzil Zaheer: Questions Are All You Need to Train a Dense Passage Retriever. CoRR abs/2206.10658 (2022)
- [i131] Paden Tomasello, Akshat Shrivastava, Daniel Lazar, Po-Chun Hsu, Duc Le, Adithya Sagar, Ali Elkahky, Jade Copet, Wei-Ning Hsu, Yossef Mordechay, Robin Algayres, Tu Anh Nguyen, Emmanuel Dupoux, Luke Zettlemoyer, Abdelrahman Mohamed: STOP: A dataset for Spoken Task Oriented Semantic Parsing. CoRR abs/2207.10643 (2022)
- [i130] Margaret Li, Suchin Gururangan, Tim Dettmers, Mike Lewis, Tim Althoff, Noah A. Smith, Luke Zettlemoyer: Branch-Train-Merge: Embarrassingly Parallel Training of Expert Language Models. CoRR abs/2208.03306 (2022)
- [i129] Tim Dettmers, Mike Lewis, Younes Belkada, Luke Zettlemoyer: LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale. CoRR abs/2208.07339 (2022)
- [i128] Hongjin Su, Jungo Kasai, Chen Henry Wu, Weijia Shi, Tianlu Wang, Jiayi Xin, Rui Zhang, Mari Ostendorf, Luke Zettlemoyer, Noah A. Smith, Tao Yu: Selective Annotation Makes Language Models Better Few-Shot Learners. CoRR abs/2209.01975 (2022)
- [i127] Xuezhe Ma, Chunting Zhou, Xiang Kong, Junxian He, Liangke Gui, Graham Neubig, Jonathan May, Luke Zettlemoyer: Mega: Moving Average Equipped Gated Attention. CoRR abs/2209.10655 (2022)
- [i126] Victor Zhong, Jesse Mu, Luke Zettlemoyer, Edward Grefenstette, Tim Rocktäschel: Improving Policy Learning via Language Dynamics Distillation. CoRR abs/2210.00066 (2022)
- [i125] Zhoujun Cheng, Tianbao Xie, Peng Shi, Chengzu Li, Rahul Nadkarni, Yushi Hu, Caiming Xiong, Dragomir Radev, Mari Ostendorf, Luke Zettlemoyer, Noah A. Smith, Tao Yu: Binding Language Models in Symbolic Languages. CoRR abs/2210.02875 (2022)
- [i124] Tanay Dixit, Bhargavi Paranjape, Hannaneh Hajishirzi, Luke Zettlemoyer: CORE: A Retrieve-then-Edit Framework for Counterfactual Data Generation. CoRR abs/2210.04873 (2022)
- [i123] Machel Reid, Victor Zhong, Suchin Gururangan, Luke Zettlemoyer: M2D2: A Massively Multi-domain Language Modeling Dataset. CoRR abs/2210.07370 (2022)
- [i122] Victor Zhong, Weijia Shi, Wen-tau Yih, Luke Zettlemoyer: RoMQA: A Benchmark for Robust, Multi-evidence, Multi-answer Question Answering. CoRR abs/2210.14353 (2022)
- [i121] Xiang Lisa Li, Ari Holtzman, Daniel Fried, Percy Liang, Jason Eisner, Tatsunori Hashimoto, Luke Zettlemoyer, Mike Lewis: Contrastive Decoding: Open-ended Text Generation as Optimization. CoRR abs/2210.15097 (2022)
- [i120] Terra Blevins, Hila Gonen, Luke Zettlemoyer: Prompting Language Models for Linguistic Structure. CoRR abs/2211.07830 (2022)
- [i119] Yuhang Lai, Chengxi Li, Yiming Wang, Tianyi Zhang, Ruiqi Zhong, Luke Zettlemoyer, Scott Wen-tau Yih, Daniel Fried, Sida I. Wang, Tao Yu: DS-1000: A Natural and Reliable Benchmark for Data Science Code Generation. CoRR abs/2211.11501 (2022)
- [i118] Michihiro Yasunaga, Armen Aghajanyan, Weijia Shi, Rich James, Jure Leskovec, Percy Liang, Mike Lewis, Luke Zettlemoyer, Wen-tau Yih: Retrieval-Augmented Multimodal Language Modeling. CoRR abs/2211.12561 (2022)
- [i117] Xinyan Velocity Yu, Sewon Min, Luke Zettlemoyer, Hannaneh Hajishirzi: CREPE: Open-Domain Question Answering with False Presuppositions. CoRR abs/2211.17257 (2022)
- [i116] Bhargavi Paranjape, Pradeep Dasigi, Vivek Srikumar, Luke Zettlemoyer, Hannaneh Hajishirzi: AGRO: Adversarial Discovery of Error-prone groups for Robust Optimization. CoRR abs/2212.00921 (2022)
- [i115] Sewon Min, Weijia Shi, Mike Lewis, Xilun Chen, Wen-tau Yih, Hannaneh Hajishirzi, Luke Zettlemoyer: Nonparametric Masked Language Modeling. CoRR abs/2212.01349 (2022)
- [i114] Sweta Agrawal, Chunting Zhou, Mike Lewis, Luke Zettlemoyer, Marjan Ghazvininejad: In-context Examples Selection for Machine Translation. CoRR abs/2212.02437 (2022)
- [i113] Hila Gonen, Srini Iyer, Terra Blevins, Noah A. Smith, Luke Zettlemoyer: Demystifying Prompts in Language Models via Perplexity Estimation. CoRR abs/2212.04037 (2022)
- [i112] Olga Golovneva, Moya Chen, Spencer Poff, Martin Corredor, Luke Zettlemoyer, Maryam Fazel-Zarandi, Asli Celikyilmaz: ROSCOE: A Suite of Metrics for Scoring Step-by-Step Reasoning. CoRR abs/2212.07919 (2022)
- [i111] Tim Dettmers, Luke Zettlemoyer: The case for 4-bit precision: k-bit Inference Scaling Laws. CoRR abs/2212.09720 (2022)
- [i110] Hongjin Su, Weijia Shi, Jungo Kasai, Yizhong Wang, Yushi Hu, Mari Ostendorf, Wen-tau Yih, Noah A. Smith, Luke Zettlemoyer, Tao Yu: One Embedder, Any Task: Instruction-Finetuned Text Embeddings. CoRR abs/2212.09741 (2022)
- [i109] Mengzhou Xia, Mikel Artetxe, Chunting Zhou, Xi Victoria Lin, Ramakanth Pasunuru, Danqi Chen, Luke Zettlemoyer, Ves Stoyanov: Training Trajectories of Language Models Across Scales. CoRR abs/2212.09803 (2022)
- [i108] Xinxi Lyu, Sewon Min, Iz Beltagy, Luke Zettlemoyer, Hannaneh Hajishirzi: Z-ICL: Zero-Shot In-Context Learning with Pseudo-Demonstrations. CoRR abs/2212.09865 (2022)
- [i107] Boshi Wang, Sewon Min, Xiang Deng, Jiaming Shen, You Wu, Luke Zettlemoyer, Huan Sun: Towards Understanding Chain-of-Thought Prompting: An Empirical Study of What Matters. CoRR abs/2212.10001 (2022)
- [i106] Weijia Shi, Xiaochuang Han, Hila Gonen, Ari Holtzman, Yulia Tsvetkov, Luke Zettlemoyer: Toward Human Readable Prompt Tuning: Kubrick's The Shining is a good movie, and a good prompt too? CoRR abs/2212.10539 (2022)
- [i105] Srinivasan Iyer, Xi Victoria Lin, Ramakanth Pasunuru, Todor Mihaylov, Daniel Simig, Ping Yu, Kurt Shuster, Tianlu Wang, Qing Liu, Punit Singh Koura, Xian Li, Brian O'Horo, Gabriel Pereyra, Jeff Wang, Christopher Dewan, Asli Celikyilmaz, Luke Zettlemoyer, Ves Stoyanov: OPT-IML: Scaling Language Model Instruction Meta Learning through the Lens of Generalization. CoRR abs/2212.12017 (2022)
- 2021
- [c142] Weijia Shi, Mandar Joshi, Luke Zettlemoyer: DESCGEN: A Distantly Supervised Dataset for Generating Entity Descriptions. ACL/IJCNLP (1) 2021: 415-427
- [c141] Haoyue Shi, Luke Zettlemoyer, Sida I. Wang: Bilingual Lexicon Induction via Unsupervised Bitext Construction and Word Alignment. ACL/IJCNLP (1) 2021: 813-826
- [c140] Chunting Zhou, Graham Neubig, Jiatao Gu, Mona T. Diab, Francisco Guzmán, Luke Zettlemoyer, Marjan Ghazvininejad: Detecting Hallucinated Content in Conditional Neural Sequence Generation. ACL/IJCNLP (Findings) 2021: 1393-1404
- [c139] Bhargavi Paranjape, Julian Michael, Marjan Ghazvininejad, Hannaneh Hajishirzi, Luke Zettlemoyer: Prompting Contrastive Explanations for Commonsense Reasoning Tasks. ACL/IJCNLP (Findings) 2021: 4179-4192
- [c138] Hu Xu, Gargi Ghosh, Po-Yao Huang, Prahal Arora, Masoumeh Aminzadeh, Christoph Feichtenhofer, Florian Metze, Luke Zettlemoyer: VLM: Task-agnostic Video-Language Model Pre-training for Video Understanding. ACL/IJCNLP (Findings) 2021: 4227-4239
- [c137] Julian Michael, Luke Zettlemoyer: Inducing Semantic Roles Without Syntax. ACL/IJCNLP (Findings) 2021: 4427-4442
- [c136] Armen Aghajanyan, Sonal Gupta, Luke Zettlemoyer: Intrinsic Dimensionality Explains the Effectiveness of Language Model Fine-Tuning. ACL/IJCNLP (1) 2021: 7319-7328
- [c135] Jesse Thomason, Mohit Shridhar, Yonatan Bisk, Chris Paxton, Luke Zettlemoyer: Language Grounding with 3D Objects. CoRL 2021: 1691-1701
- [c134] Terra Blevins, Mandar Joshi, Luke Zettlemoyer: FEWS: Large-Scale, Low-Shot Word Sense Disambiguation with the Dictionary. EACL 2021: 455-465
- [c133] Armen Aghajanyan, Anchit Gupta, Akshat Shrivastava, Xilun Chen, Luke Zettlemoyer, Sonal Gupta: Muppet: Massive Multi-task Representations with Pre-Finetuning. EMNLP (1) 2021: 5799-5811
- [c132] Hu Xu, Gargi Ghosh, Po-Yao Huang, Dmytro Okhonko, Armen Aghajanyan, Florian Metze, Luke Zettlemoyer, Christoph Feichtenhofer: VideoCLIP: Contrastive Pre-training for Zero-shot Video-Text Understanding. EMNLP (1) 2021: 6787-6800
- [c131] Ari Holtzman, Peter West, Vered Shwartz, Yejin Choi, Luke Zettlemoyer: Surface Form Competition: Why the Highest Probability Answer Isn't Always Right. EMNLP (1) 2021: 7038-7051
- [c130] Armen Aghajanyan, Akshat Shrivastava, Anchit Gupta, Naman Goyal, Luke Zettlemoyer, Sonal Gupta: Better Fine-Tuning by Reducing Representational Collapse. ICLR 2021
- [c129] Asish Ghoshal, Xilun Chen, Sonal Gupta, Luke Zettlemoyer, Yashar Mehdad: Learning Better Structured Representations Using Low-rank Adaptive Label Smoothing. ICLR 2021
- [c128] Urvashi Khandelwal, Angela Fan, Dan Jurafsky, Luke Zettlemoyer, Mike Lewis: Nearest Neighbor Machine Translation. ICLR 2021
- [c127] Sachin Mehta, Marjan Ghazvininejad, Srinivasan Iyer, Luke Zettlemoyer, Hannaneh Hajishirzi: DeLighT: Deep and Light-weight Transformer. ICLR 2021
- [c126] Mike Lewis, Shruti Bhosale, Tim Dettmers, Naman Goyal, Luke Zettlemoyer: BASE Layers: Simplifying Training of Large, Sparse Models. ICML 2021: 6265-6274
- [c125] Xuezhe Ma, Xiang Kong, Sinong Wang, Chunting Zhou, Jonathan May, Hao Ma, Luke Zettlemoyer: Luna: Linear Unified Nested Attention. NeurIPS 2021: 2441-2453
- [c124] Victor Zhong, Austin W. Hanjie, Sida I. Wang, Karthik Narasimhan, Luke Zettlemoyer: SILG: The Multi-domain Symbolic Interactive Language Grounding Benchmark. NeurIPS 2021: 21505-21519
- [e1] Kristina Toutanova, Anna Rumshisky, Luke Zettlemoyer, Dilek Hakkani-Tür, Iz Beltagy, Steven Bethard, Ryan Cotterell, Tanmoy Chakraborty, Yichao Zhou: Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021. Association for Computational Linguistics 2021, ISBN 978-1-954085-46-6 [contents]
- [i104] Haoyue Shi, Luke Zettlemoyer, Sida I. Wang: Bilingual Lexicon Induction via Unsupervised Bitext Construction and Word Alignment. CoRR abs/2101.00148 (2021)
- [i103] Armen Aghajanyan, Anchit Gupta, Akshat Shrivastava, Xilun Chen, Luke Zettlemoyer, Sonal Gupta: Muppet: Massive Multi-task Representations with Pre-Finetuning. CoRR abs/2101.11038 (2021)
- [i102] Terra Blevins, Mandar Joshi, Luke Zettlemoyer: FEWS: Large-Scale, Low-Shot Word Sense Disambiguation with the Dictionary. CoRR abs/2102.07983 (2021)
- [i101] Nicola De Cao, Ledell Wu, Kashyap Popat, Mikel Artetxe, Naman Goyal, Mikhail Plekhanov, Luke Zettlemoyer, Nicola Cancedda, Sebastian Riedel, Fabio Petroni: Multilingual Autoregressive Entity Linking. CoRR abs/2103.12528 (2021)
- [i100] Mike Lewis, Shruti Bhosale, Tim Dettmers, Naman Goyal, Luke Zettlemoyer: BASE Layers: Simplifying Training of Large, Sparse Models. CoRR abs/2103.16716 (2021)
- [i99] Ari Holtzman, Peter West, Vered Shwartz, Yejin Choi, Luke Zettlemoyer: Surface Form Competition: Why the Highest Probability Answer Isn't Always Right. CoRR abs/2104.08315 (2021)
- [i98] Hu Xu, Gargi Ghosh, Po-Yao Huang, Prahal Arora, Masoumeh Aminzadeh, Christoph Feichtenhofer, Florian Metze, Luke Zettlemoyer: VLM: Task-agnostic Video-Language Model Pre-training for Video Understanding. CoRR abs/2105.09996 (2021)
- [i97] Xuezhe Ma, Xiang Kong, Sinong Wang, Chunting Zhou, Jonathan May, Hao Ma, Luke Zettlemoyer: Luna: Linear Unified Nested Attention. CoRR abs/2106.01540 (2021)
- [i96] Weijia Shi, Mandar Joshi, Luke Zettlemoyer: DESCGEN: A Distantly Supervised Dataset for Generating Abstractive Entity Descriptions. CoRR abs/2106.05365 (2021)
- [i95] Bhargavi Paranjape, Julian Michael, Marjan Ghazvininejad, Luke Zettlemoyer, Hannaneh Hajishirzi: Prompting Contrastive Explanations for Commonsense Reasoning Tasks. CoRR abs/2106.06823 (2021)
- [i94] Robin Jia, Mike Lewis, Luke Zettlemoyer: Question Answering Infused Pre-training of General-Purpose Contextualized Representations. CoRR abs/2106.08190 (2021)
- [i93]