


Luke Zettlemoyer (also published as Luke S. Zettlemoyer)
Person information

- affiliation: University of Washington, School of Computer Science & Engineering, Seattle, WA, USA
- award (2016): Presidential Early Career Award for Scientists and Engineers
2020 – today
- 2022
- [c146] Robin Jia, Mike Lewis, Luke Zettlemoyer: Question Answering Infused Pre-training of General-Purpose Contextualized Representations. ACL (Findings) 2022: 711-728
- [c145] Rabeeh Karimi Mahabadi, Luke Zettlemoyer, James Henderson, Lambert Mathias, Marzieh Saeidi, Veselin Stoyanov, Majid Yazdani: Prompt-free and Efficient Few-shot Learning with Language Models. ACL (1) 2022: 3638-3652
- [c144] Jungsoo Park, Sewon Min, Jaewoo Kang, Luke Zettlemoyer, Hannaneh Hajishirzi: FaVIQ: FAct Verification from Information-seeking Questions. ACL (1) 2022: 5154-5166
- [c143] Sewon Min, Mike Lewis, Hannaneh Hajishirzi, Luke Zettlemoyer: Noisy Channel Language Model Prompting for Few-Shot Text Classification. ACL (1) 2022: 5316-5330
- [i115] Tianbao Xie, Chen Henry Wu, Peng Shi, Ruiqi Zhong, Torsten Scholak, Michihiro Yasunaga, Chien-Sheng Wu, Ming Zhong, Pengcheng Yin, Sida I. Wang, Victor Zhong, Bailin Wang, Chengzu Li, Connor Boyle, Ansong Ni, Ziyu Yao, Dragomir R. Radev, Caiming Xiong, Lingpeng Kong, Rui Zhang, Noah A. Smith, Luke Zettlemoyer, Tao Yu: UnifiedSKG: Unifying and Multi-Tasking Structured Knowledge Grounding with Text-to-Text Language Models. CoRR abs/2201.05966 (2022)
- [i114] Armen Aghajanyan, Bernie Huang, Candace Ross, Vladimir Karpukhin, Hu Xu, Naman Goyal, Dmytro Okhonko, Mandar Joshi, Gargi Ghosh, Mike Lewis, Luke Zettlemoyer: CM3: A Causal Masked Multimodal Model of the Internet. CoRR abs/2201.07520 (2022)
- [i113] Suchin Gururangan, Dallas Card, Sarah K. Dreier, Emily K. Gade, Leroy Z. Wang, Zeyu Wang, Luke Zettlemoyer, Noah A. Smith: Whose Language Counts as High Quality? Measuring Language Ideologies in Text Data Selection. CoRR abs/2201.10474 (2022)
- [i112] Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, Luke Zettlemoyer: Rethinking the Role of Demonstrations: What Makes In-Context Learning Work? CoRR abs/2202.12837 (2022)
- [i111] Rabeeh Karimi Mahabadi, Luke Zettlemoyer, James Henderson, Marzieh Saeidi, Lambert Mathias, Veselin Stoyanov, Majid Yazdani: PERFECT: Prompt-free and Efficient Few-shot Learning with Language Models. CoRR abs/2204.01172 (2022)
- [i110] Daniel Fried, Armen Aghajanyan, Jessy Lin, Sida Wang, Eric Wallace, Freda Shi, Ruiqi Zhong, Wen-tau Yih, Luke Zettlemoyer, Mike Lewis: InCoder: A Generative Model for Code Infilling and Synthesis. CoRR abs/2204.05999 (2022)
- [i109] Devendra Singh Sachan, Mike Lewis, Mandar Joshi, Armen Aghajanyan, Wen-tau Yih, Joelle Pineau, Luke Zettlemoyer: Improving Passage Retrieval with Zero-Shot Question Generation. CoRR abs/2204.07496 (2022)
- [i108] Terra Blevins, Luke Zettlemoyer: Language Contamination Explains the Cross-lingual Capabilities of English Pretrained Models. CoRR abs/2204.08110 (2022)
- [i107] Freda Shi, Daniel Fried, Marjan Ghazvininejad, Luke Zettlemoyer, Sida I. Wang: Natural Language to Code Translation with Execution. CoRR abs/2204.11454 (2022)
- [i106] Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, Luke Zettlemoyer: OPT: Open Pre-trained Transformer Language Models. CoRR abs/2205.01068 (2022)
- [i105] Mandar Joshi, Terra Blevins, Mike Lewis, Daniel S. Weld, Luke Zettlemoyer: Few-shot Mining of Naturally Occurring Inputs and Outputs. CoRR abs/2205.04050 (2022)
- 2021
- [c142] Weijia Shi, Mandar Joshi, Luke Zettlemoyer: DESCGEN: A Distantly Supervised Dataset for Generating Entity Descriptions. ACL/IJCNLP (1) 2021: 415-427
- [c141] Haoyue Shi, Luke Zettlemoyer, Sida I. Wang: Bilingual Lexicon Induction via Unsupervised Bitext Construction and Word Alignment. ACL/IJCNLP (1) 2021: 813-826
- [c140] Chunting Zhou, Graham Neubig, Jiatao Gu, Mona Diab, Francisco Guzmán, Luke Zettlemoyer, Marjan Ghazvininejad: Detecting Hallucinated Content in Conditional Neural Sequence Generation. ACL/IJCNLP (Findings) 2021: 1393-1404
- [c139] Bhargavi Paranjape, Julian Michael, Marjan Ghazvininejad, Hannaneh Hajishirzi, Luke Zettlemoyer: Prompting Contrastive Explanations for Commonsense Reasoning Tasks. ACL/IJCNLP (Findings) 2021: 4179-4192
- [c138] Hu Xu, Gargi Ghosh, Po-Yao Huang, Prahal Arora, Masoumeh Aminzadeh, Christoph Feichtenhofer, Florian Metze, Luke Zettlemoyer: VLM: Task-agnostic Video-Language Model Pre-training for Video Understanding. ACL/IJCNLP (Findings) 2021: 4227-4239
- [c137] Julian Michael, Luke Zettlemoyer: Inducing Semantic Roles Without Syntax. ACL/IJCNLP (Findings) 2021: 4427-4442
- [c136] Armen Aghajanyan, Sonal Gupta, Luke Zettlemoyer: Intrinsic Dimensionality Explains the Effectiveness of Language Model Fine-Tuning. ACL/IJCNLP (1) 2021: 7319-7328
- [c135] Jesse Thomason, Mohit Shridhar, Yonatan Bisk, Chris Paxton, Luke Zettlemoyer: Language Grounding with 3D Objects. CoRL 2021: 1691-1701
- [c134] Terra Blevins, Mandar Joshi, Luke Zettlemoyer: FEWS: Large-Scale, Low-Shot Word Sense Disambiguation with the Dictionary. EACL 2021: 455-465
- [c133] Armen Aghajanyan, Anchit Gupta, Akshat Shrivastava, Xilun Chen, Luke Zettlemoyer, Sonal Gupta: Muppet: Massive Multi-task Representations with Pre-Finetuning. EMNLP (1) 2021: 5799-5811
- [c132] Hu Xu, Gargi Ghosh, Po-Yao Huang, Dmytro Okhonko, Armen Aghajanyan, Florian Metze, Luke Zettlemoyer, Christoph Feichtenhofer: VideoCLIP: Contrastive Pre-training for Zero-shot Video-Text Understanding. EMNLP (1) 2021: 6787-6800
- [c131] Ari Holtzman, Peter West, Vered Shwartz, Yejin Choi, Luke Zettlemoyer: Surface Form Competition: Why the Highest Probability Answer Isn't Always Right. EMNLP (1) 2021: 7038-7051
- [c130] Armen Aghajanyan, Akshat Shrivastava, Anchit Gupta, Naman Goyal, Luke Zettlemoyer, Sonal Gupta: Better Fine-Tuning by Reducing Representational Collapse. ICLR 2021
- [c129] Asish Ghoshal, Xilun Chen, Sonal Gupta, Luke Zettlemoyer, Yashar Mehdad: Learning Better Structured Representations Using Low-rank Adaptive Label Smoothing. ICLR 2021
- [c128] Urvashi Khandelwal, Angela Fan, Dan Jurafsky, Luke Zettlemoyer, Mike Lewis: Nearest Neighbor Machine Translation. ICLR 2021
- [c127] Sachin Mehta, Marjan Ghazvininejad, Srinivasan Iyer, Luke Zettlemoyer, Hannaneh Hajishirzi: DeLighT: Deep and Light-weight Transformer. ICLR 2021
- [c126] Mike Lewis, Shruti Bhosale, Tim Dettmers, Naman Goyal, Luke Zettlemoyer: BASE Layers: Simplifying Training of Large, Sparse Models. ICML 2021: 6265-6274
- [c125] Xuezhe Ma, Xiang Kong, Sinong Wang, Chunting Zhou, Jonathan May, Hao Ma, Luke Zettlemoyer: Luna: Linear Unified Nested Attention. NeurIPS 2021: 2441-2453
- [c124] Victor Zhong, Austin W. Hanjie, Sida I. Wang, Karthik Narasimhan, Luke Zettlemoyer: SILG: The Multi-domain Symbolic Interactive Language Grounding Benchmark. NeurIPS 2021: 21505-21519
- [e1] Kristina Toutanova, Anna Rumshisky, Luke Zettlemoyer, Dilek Hakkani-Tür, Iz Beltagy, Steven Bethard, Ryan Cotterell, Tanmoy Chakraborty, Yichao Zhou: Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021. Association for Computational Linguistics 2021, ISBN 978-1-954085-46-6 [contents]
- [i104] Haoyue Shi, Luke Zettlemoyer, Sida I. Wang: Bilingual Lexicon Induction via Unsupervised Bitext Construction and Word Alignment. CoRR abs/2101.00148 (2021)
- [i103] Armen Aghajanyan, Anchit Gupta, Akshat Shrivastava, Xilun Chen, Luke Zettlemoyer, Sonal Gupta: Muppet: Massive Multi-task Representations with Pre-Finetuning. CoRR abs/2101.11038 (2021)
- [i102] Terra Blevins, Mandar Joshi, Luke Zettlemoyer: FEWS: Large-Scale, Low-Shot Word Sense Disambiguation with the Dictionary. CoRR abs/2102.07983 (2021)
- [i101] Nicola De Cao, Ledell Wu, Kashyap Popat, Mikel Artetxe, Naman Goyal, Mikhail Plekhanov, Luke Zettlemoyer, Nicola Cancedda, Sebastian Riedel, Fabio Petroni: Multilingual Autoregressive Entity Linking. CoRR abs/2103.12528 (2021)
- [i100] Mike Lewis, Shruti Bhosale, Tim Dettmers, Naman Goyal, Luke Zettlemoyer: BASE Layers: Simplifying Training of Large, Sparse Models. CoRR abs/2103.16716 (2021)
- [i99] Ari Holtzman, Peter West, Vered Shwartz, Yejin Choi, Luke Zettlemoyer: Surface Form Competition: Why the Highest Probability Answer Isn't Always Right. CoRR abs/2104.08315 (2021)
- [i98] Hu Xu, Gargi Ghosh, Po-Yao Huang, Prahal Arora, Masoumeh Aminzadeh, Christoph Feichtenhofer, Florian Metze, Luke Zettlemoyer: VLM: Task-agnostic Video-Language Model Pre-training for Video Understanding. CoRR abs/2105.09996 (2021)
- [i97] Xuezhe Ma, Xiang Kong, Sinong Wang, Chunting Zhou, Jonathan May, Hao Ma, Luke Zettlemoyer: Luna: Linear Unified Nested Attention. CoRR abs/2106.01540 (2021)
- [i96] Weijia Shi, Mandar Joshi, Luke Zettlemoyer: DESCGEN: A Distantly Supervised Dataset for Generating Abstractive Entity Descriptions. CoRR abs/2106.05365 (2021)
- [i95] Bhargavi Paranjape, Julian Michael, Marjan Ghazvininejad, Luke Zettlemoyer, Hannaneh Hajishirzi: Prompting Contrastive Explanations for Commonsense Reasoning Tasks. CoRR abs/2106.06823 (2021)
- [i94] Robin Jia, Mike Lewis, Luke Zettlemoyer: Question Answering Infused Pre-training of General-Purpose Contextualized Representations. CoRR abs/2106.08190 (2021)
- [i93] Jungsoo Park, Sewon Min, Jaewoo Kang, Luke Zettlemoyer, Hannaneh Hajishirzi: FaVIQ: FAct Verification from Information-seeking Questions. CoRR abs/2107.02153 (2021)
- [i92] Armen Aghajanyan, Dmytro Okhonko, Mike Lewis, Mandar Joshi, Hu Xu, Gargi Ghosh, Luke Zettlemoyer: HTLM: Hyper-Text Pre-Training and Prompting of Language Models. CoRR abs/2107.06955 (2021)
- [i91] Jesse Thomason, Mohit Shridhar, Yonatan Bisk, Chris Paxton, Luke Zettlemoyer: Language Grounding with 3D Objects. CoRR abs/2107.12514 (2021)
- [i90] Sewon Min, Mike Lewis, Hannaneh Hajishirzi, Luke Zettlemoyer: Noisy Channel Language Model Prompting for Few-Shot Text Classification. CoRR abs/2108.04106 (2021)
- [i89] Suchin Gururangan, Mike Lewis, Ari Holtzman, Noah A. Smith, Luke Zettlemoyer: DEMix Layers: Disentangling Domains for Modular Language Modeling. CoRR abs/2108.05036 (2021)
- [i88] Hu Xu, Gargi Ghosh, Po-Yao Huang, Dmytro Okhonko, Armen Aghajanyan, Florian Metze, Luke Zettlemoyer, Christoph Feichtenhofer: VideoCLIP: Contrastive Pre-training for Zero-shot Video-Text Understanding. CoRR abs/2109.14084 (2021)
- [i87] Tim Dettmers, Mike Lewis, Sam Shleifer, Luke Zettlemoyer: 8-bit Optimizers via Block-wise Quantization. CoRR abs/2110.02861 (2021)
- [i86] Victor Zhong, Austin W. Hanjie, Sida I. Wang, Karthik Narasimhan, Luke Zettlemoyer: SILG: The Multi-environment Symbolic Interactive Language Grounding Benchmark. CoRR abs/2110.10661 (2021)
- [i85] Sewon Min, Mike Lewis, Luke Zettlemoyer, Hannaneh Hajishirzi: MetaICL: Learning to Learn In Context. CoRR abs/2110.15943 (2021)
- [i84] Eleftheria Briakou, Sida I. Wang, Luke Zettlemoyer, Marjan Ghazvininejad: BitextEdit: Automatic Bitext Editing for Improved Low-Resource Machine Translation. CoRR abs/2111.06787 (2021)
- [i83] Belinda Z. Li, Jane A. Yu, Madian Khabsa, Luke Zettlemoyer, Alon Y. Halevy, Jacob Andreas: Quantifying Adaptability in Pre-trained Language Models with 500 Tasks. CoRR abs/2112.03204 (2021)
- [i82] Darsh J. Shah, Sinong Wang, Han Fang, Hao Ma, Luke Zettlemoyer: Reducing Target Group Bias in Hate Speech Detectors. CoRR abs/2112.03858 (2021)
- [i81] Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona T. Diab, Veselin Stoyanov, Xian Li: Few-shot Learning with Multilingual Language Models. CoRR abs/2112.10668 (2021)
- [i80] Mikel Artetxe, Shruti Bhosale, Naman Goyal, Todor Mihaylov, Myle Ott, Sam Shleifer, Xi Victoria Lin, Jingfei Du, Srinivasan Iyer, Ramakanth Pasunuru, Giri Anantharaman, Xian Li, Shuohui Chen, Halil Akin, Mandeep Baines, Louis Martin, Xing Zhou, Punit Singh Koura, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Mona T. Diab, Zornitsa Kozareva, Ves Stoyanov: Efficient Large Scale Language Modeling with Mixtures of Experts. CoRR abs/2112.10684 (2021)
- 2020
- [j9] Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, Omer Levy: SpanBERT: Improving Pre-training by Representing and Predicting Spans. Trans. Assoc. Comput. Linguistics 8: 64-77 (2020)
- [j8] Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer: Multilingual Denoising Pre-training for Neural Machine Translation. Trans. Assoc. Comput. Linguistics 8: 726-742 (2020)
- [c123] Terra Blevins, Luke Zettlemoyer: Moving Down the Long Tail of Word Sense Disambiguation with Gloss Informed Bi-encoders. ACL 2020: 1006-1017
- [c122] Nabil Hossain, Marjan Ghazvininejad, Luke Zettlemoyer: Simple and Effective Retrieve-Edit-Rerank Text Generation. ACL 2020: 2532-2538
- [c121] Alexis Conneau, Shijie Wu, Haoran Li, Luke Zettlemoyer, Veselin Stoyanov: Emerging Cross-lingual Structure in Pretrained Language Models. ACL 2020: 6022-6034
- [c120] Paul Roit, Ayal Klein, Daniela Stepanov, Jonathan Mamou, Julian Michael, Gabriel Stanovsky, Luke Zettlemoyer, Ido Dagan: Controlled Crowdsourcing for High-Quality QA-SRL Annotation. ACL 2020: 7008-7013
- [c119] Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, Luke Zettlemoyer: BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension. ACL 2020: 7871-7880
- [c118] Belinda Z. Li, Gabriel Stanovsky, Luke Zettlemoyer: Active Learning for Coreference Resolution using Discrete Annotation. ACL 2020: 8320-8331
- [c117] Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, Veselin Stoyanov: Unsupervised Cross-lingual Representation Learning at Scale. ACL 2020: 8440-8451
- [c116] Ayal Klein, Jonathan Mamou, Valentina Pyatkin, Daniela Stepanov, Hangfeng He, Dan Roth, Luke Zettlemoyer, Ido Dagan: QANom: Question-Answer driven SRL for Nominalizations. COLING 2020: 3069-3083
- [c115] Mohit Shridhar, Jesse Thomason, Daniel Gordon, Yonatan Bisk, Winson Han, Roozbeh Mottaghi, Luke Zettlemoyer, Dieter Fox: ALFRED: A Benchmark for Interpreting Grounded Instructions for Everyday Tasks. CVPR 2020: 10737-10746
- [c114] Bhargavi Paranjape, Mandar Joshi, John Thickstun, Hannaneh Hajishirzi, Luke Zettlemoyer: An Information Bottleneck Approach for Controlling Conciseness in Rationale Extraction. EMNLP (1) 2020: 1938-1952
- [c113] Christopher Clark, Mark Yatskar, Luke Zettlemoyer: Learning to Model and Ignore Dataset Bias with Mixed Capacity Ensembles. EMNLP (Findings) 2020: 3031-3045
- [c112] Xilun Chen, Asish Ghoshal, Yashar Mehdad, Luke Zettlemoyer, Sonal Gupta: Low-Resource Domain Adaptation for Compositional Task-Oriented Semantic Parsing. EMNLP (1) 2020: 5090-5100
- [c111] Sewon Min, Julian Michael, Hannaneh Hajishirzi, Luke Zettlemoyer: AmbigQA: Answering Ambiguous Open-domain Questions. EMNLP (1) 2020: 5783-5797
- [c110] Ledell Wu, Fabio Petroni, Martin Josifoski, Sebastian Riedel, Luke Zettlemoyer: Scalable Zero-shot Entity Linking with Dense Entity Retrieval. EMNLP (1) 2020: 6397-6407
- [c109] Victor Zhong, Mike Lewis, Sida I. Wang, Luke Zettlemoyer: Grounded Adaptation for Zero-shot Executable Semantic Parsing. EMNLP (1) 2020: 6869-6882
- [c108] Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, Mike Lewis: Generalization through Memorization: Nearest Neighbor Language Models. ICLR 2020
- [c107] Marjan Ghazvininejad, Vladimir Karpukhin, Luke Zettlemoyer, Omer Levy: Aligned Cross Entropy for Non-Autoregressive Machine Translation. ICML 2020: 3515-3523
- [c106] Mike Lewis, Marjan Ghazvininejad, Gargi Ghosh, Armen Aghajanyan, Sida Wang, Luke Zettlemoyer: Pre-training via Paraphrasing. NeurIPS 2020
- [i79] Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer: Multilingual Denoising Pre-training for Neural Machine Translation. CoRR abs/2001.08210 (2020)
- [i78] Marjan Ghazvininejad, Omer Levy, Luke Zettlemoyer: Semi-Autoregressive Training Improves Mask-Predict Decoding. CoRR abs/2001.08785 (2020)
- [i77] Marjan Ghazvininejad, Vladimir Karpukhin, Luke Zettlemoyer, Omer Levy: Aligned Cross Entropy for Non-Autoregressive Machine Translation. CoRR abs/2004.01655 (2020)
- [i76] Sewon Min, Julian Michael, Hannaneh Hajishirzi, Luke Zettlemoyer: AmbigQA: Answering Ambiguous Open-domain Questions. CoRR abs/2004.10645 (2020)
- [i75] Belinda Z. Li, Gabriel Stanovsky, Luke Zettlemoyer: Active Learning for Coreference Resolution using Discrete Annotation. CoRR abs/2004.13671 (2020)
- [i74] Bhargavi Paranjape, Mandar Joshi, John Thickstun, Hannaneh Hajishirzi, Luke Zettlemoyer: An Information Bottleneck Approach for Controlling Conciseness in Rationale Extraction. CoRR abs/2005.00652 (2020)
- [i73] Terra Blevins, Luke Zettlemoyer: Moving Down the Long Tail of Word Sense Disambiguation with Gloss-Informed Biencoders. CoRR abs/2005.02590 (2020)
- [i72] Mike Lewis, Marjan Ghazvininejad, Gargi Ghosh, Armen Aghajanyan, Sida I. Wang, Luke Zettlemoyer: Pre-training via Paraphrasing. CoRR abs/2006.15020 (2020)
- [i71] Sachin Mehta, Marjan Ghazvininejad, Srinivasan Iyer, Luke Zettlemoyer, Hannaneh Hajishirzi: DeLighT: Very Deep and Light-weight Transformer. CoRR abs/2008.00623 (2020)
- [i70] Armen Aghajanyan, Akshat Shrivastava, Anchit Gupta, Naman Goyal, Luke Zettlemoyer, Sonal Gupta: Better Fine-Tuning by Reducing Representational Collapse. CoRR abs/2008.03156 (2020)
- [i69] Victor Zhong, Mike Lewis, Sida I. Wang, Luke Zettlemoyer: Grounded Adaptation for Zero-shot Executable Semantic Parsing. CoRR abs/2009.07396 (2020)
- [i68] Urvashi Khandelwal, Angela Fan, Dan Jurafsky, Luke Zettlemoyer, Mike Lewis: Nearest Neighbor Machine Translation. CoRR abs/2010.00710 (2020)
- [i67] Xilun Chen, Asish Ghoshal, Yashar Mehdad, Luke Zettlemoyer, Sonal Gupta: Low-Resource Domain Adaptation for Compositional Task-Oriented Semantic Parsing. CoRR abs/2010.03546 (2020)
- [i66] Chunting Zhou, Jiatao Gu, Mona T. Diab, Paco Guzman, Luke Zettlemoyer, Marjan Ghazvininejad: Detecting Hallucinated Content in Conditional Neural Sequence Generation. CoRR abs/2011.02593 (2020)
- [i65] Christopher Clark, Mark Yatskar, Luke Zettlemoyer: Learning to Model and Ignore Dataset Bias with Mixed Capacity Ensembles. CoRR abs/2011.03856 (2020)
- [i64] Armen Aghajanyan, Luke Zettlemoyer, Sonal Gupta: Intrinsic Dimensionality Explains the Effectiveness of Language Model Fine-Tuning. CoRR abs/2012.13255 (2020)
2010 – 2019
- 2019
- [c105] Terra Blevins, Luke Zettlemoyer: Better Character Language Modeling through Morphology. ACL (1) 2019: 1606-1613
- [c104] Gabriel Stanovsky, Noah A. Smith, Luke Zettlemoyer: Evaluating Gender Bias in Machine Translation. ACL (1) 2019: 1679-1684
- [c103] Victor Zhong, Luke Zettlemoyer: E3: Entailment-driven Extracting and Editing for Conversational Machine Reading. ACL (1) 2019: 2310-2320
- [c102] Sewon Min, Eric Wallace, Sameer Singh, Matt Gardner, Hannaneh Hajishirzi, Luke Zettlemoyer: Compositional Questions Do Not Necessitate Multi-hop Reasoning. ACL (1) 2019: 4249-4257
- [c101] Fei Liu, Luke Zettlemoyer, Jacob Eisenstein: The Referential Reader: A Recurrent Entity Network for Anaphora Resolution. ACL (1) 2019: 5918-5925
- [c100] Sewon Min, Victor Zhong, Luke Zettlemoyer, Hannaneh Hajishirzi: Multi-hop Reading Comprehension through Question Decomposition and Rescoring. ACL (1) 2019: 6097-6109
- [c99] Jesse Thomason, Michael Murray, Maya Cakmak, Luke Zettlemoyer: Vision-and-Dialog Navigation. CoRL 2019: 394-406
- [c98] Panupong Pasupat, Sonal Gupta, Karishma Mandyam, Rushin Shah, Mike Lewis, Luke Zettlemoyer: Span-based Hierarchical Semantic Parsing for Task-Oriented Dialog. EMNLP/IJCNLP (1) 2019: 1520-1526
- [c97] Sewon Min, Danqi Chen, Hannaneh Hajishirzi, Luke Zettlemoyer: A Discrete Hard EM Approach for Weakly Supervised Question Answering. EMNLP/IJCNLP (1) 2019: 2851-2864
- [c96] Christopher Clark, Mark Yatskar, Luke Zettlemoyer: Don't Take the Easy Way Out: Ensemble Based Methods for Avoiding Known Dataset Biases. EMNLP/IJCNLP (1) 2019: 4067-4080
- [c95] Alexei Baevski, Sergey Edunov, Yinhan Liu, Luke Zettlemoyer, Michael Auli: Cloze-driven Pretraining of Self-attention Networks. EMNLP/IJCNLP (1) 2019: 5359-5368
- [c94] Srinivasan Iyer, Alvin Cheung, Luke Zettlemoyer: Learning Programmatic Idioms for Scalable Semantic Parsing. EMNLP/IJCNLP (1) 2019: 5425-5434
- [c93] Rajas Agashe, Srinivasan Iyer, Luke Zettlemoyer: JuICe: A Large Scale Distantly Supervised Dataset for Open Domain Context-based Code Generation. EMNLP/IJCNLP (1) 2019: 5435-5445
- [c92] Mandar Joshi, Omer Levy, Luke Zettlemoyer, Daniel S. Weld: BERT for Coreference Resolution: Baselines and Analysis. EMNLP/IJCNLP (1) 2019: 5802-5807
- [c91] Marjan Ghazvininejad, Omer Levy, Yinhan Liu, Luke Zettlemoyer: Mask-Predict: Parallel Decoding of Conditional Masked Language Models. EMNLP/IJCNLP (1) 2019: 6111-6120
- [c90] Pradeep Dasigi, Matt Gardner, Shikhar Murty, Luke Zettlemoyer, Eduard H. Hovy: Iterative Search for Weakly Supervised Semantic Parsing. NAACL-HLT (1) 2019: 2669-2680
- [c89] Mandar Joshi, Eunsol Choi, Omer Levy, Daniel S. Weld, Luke Zettlemoyer: pair2vec: Compositional Word-Pair Embeddings for Cross-Sentence Inference. NAACL-HLT (1) 2019: 3597-3608
- [i63] Fei Liu, Luke Zettlemoyer, Jacob Eisenstein: The Referential Reader: A Recurrent Entity Network for Anaphora Resolution. CoRR abs/1902.01541 (2019)
- [i62] Arash Einolghozati, Panupong Pasupat, Sonal Gupta, Rushin Shah, Mrinal Mohit, Mike Lewis, Luke Zettlemoyer: Improving Semantic Parsing for Task Oriented Dialog. CoRR abs/1902.06000 (2019)
- [i61] Alexei Baevski, Sergey Edunov, Yinhan Liu, Luke Zettlemoyer, Michael Auli: Cloze-driven Pretraining of Self-attention Networks. CoRR abs/1903.07785 (2019)
- [i60] Srinivasan Iyer, Alvin Cheung, Luke Zettlemoyer: Learning Programmatic Idioms for Scalable Semantic Parsing. CoRR abs/1904.09086 (2019)
- [i59] Marjan Ghazvininejad, Omer Levy, Yinhan Liu, Luke Zettlemoyer: Constant-Time Machine Translation with Conditional Masked Language Models. CoRR abs/1904.09324 (2019)
- [i58] Abdelrahman Mohamed, Dmytro Okhonko, Luke Zettlemoyer: Transformers with convolutional context for ASR. CoRR abs/1904.11660 (2019)
- [i57] Gabriel Stanovsky, Noah A. Smith, Luke Zettlemoyer: Evaluating Gender Bias in Machine Translation. CoRR abs/1906.00591 (2019)
- [i56] Terra Blevins, Luke Zettlemoyer: Better Character Language Modeling Through Morphology. CoRR abs/1906.01037 (2019)
- [i55] Sewon Min, Eric Wallace, Sameer Singh, Matt Gardner, Hannaneh Hajishirzi, Luke Zettlemoyer: Compositional Questions Do Not Necessitate Multi-hop Reasoning. CoRR abs/1906.02900 (2019)
- [i54] Sewon Min, Victor Zhong, Luke Zettlemoyer, Hannaneh Hajishirzi: Multi-hop Reading Comprehension through Question Decomposition and Rescoring. CoRR abs/1906.02916 (2019)
- [i53] Victor Zhong, Luke Zettlemoyer: E3: Entailment-driven Extracting and Editing for Conversational Machine Reading. CoRR abs/1906.05373 (2019)
- [i52] Tim Dettmers, Luke Zettlemoyer: Sparse Networks from Scratch: Faster Training without Losing Performance. CoRR abs/1907.04840 (2019)
- [i51]