Mike Lewis
2020 – today
- 2023
- [i59] Anastasia Razdaibiedina, Yuning Mao, Rui Hou, Madian Khabsa, Mike Lewis, Amjad Almahairi: Progressive Prompts: Continual Learning for Language Models. CoRR abs/2301.12314 (2023)
- [i58] Weijia Shi, Sewon Min, Michihiro Yasunaga, Minjoon Seo, Rich James, Mike Lewis, Luke Zettlemoyer, Wen-tau Yih: REPLUG: Retrieval-Augmented Black-Box Language Models. CoRR abs/2301.12652 (2023)
- [i57] Suchin Gururangan, Margaret Li, Mike Lewis, Weijia Shi, Tim Althoff, Noah A. Smith, Luke Zettlemoyer: Scaling Expert Language Models with Unsupervised Domain Discovery. CoRR abs/2303.14177 (2023)
- [i56] Anastasia Razdaibiedina, Yuning Mao, Rui Hou, Madian Khabsa, Mike Lewis, Jimmy Ba, Amjad Almahairi: Residual Prompt Tuning: Improving Prompt Tuning with Residual Reparameterization. CoRR abs/2305.03937 (2023)
- [i55] Lili Yu, Daniel Simig, Colin Flaherty, Armen Aghajanyan, Luke Zettlemoyer, Mike Lewis: MEGABYTE: Predicting Million-byte Sequences with Multiscale Transformers. CoRR abs/2305.07185 (2023)
- [i54] Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, Susan Zhang, Gargi Ghosh, Mike Lewis, Luke Zettlemoyer, Omer Levy: LIMA: Less Is More for Alignment. CoRR abs/2305.11206 (2023)
- 2022
- [c50] Robin Jia, Mike Lewis, Luke Zettlemoyer: Question Answering Infused Pre-training of General-Purpose Contextualized Representations. ACL (Findings) 2022: 711-728
- [c49] Sewon Min, Mike Lewis, Hannaneh Hajishirzi, Luke Zettlemoyer: Noisy Channel Language Model Prompting for Few-Shot Text Classification. ACL (1) 2022: 5316-5330
- [c48] Devendra Singh Sachan, Mike Lewis, Mandar Joshi, Armen Aghajanyan, Wen-tau Yih, Joelle Pineau, Luke Zettlemoyer: Improving Passage Retrieval with Zero-Shot Question Generation. EMNLP 2022: 3781-3797
- [c47] Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, Luke Zettlemoyer: Rethinking the Role of Demonstrations: What Makes In-Context Learning Work? EMNLP 2022: 11048-11064
- [c46] Armen Aghajanyan, Dmytro Okhonko, Mike Lewis, Mandar Joshi, Hu Xu, Gargi Ghosh, Luke Zettlemoyer: HTLM: Hyper-Text Pre-Training and Prompting of Language Models. ICLR 2022
- [c45] Tim Dettmers, Mike Lewis, Sam Shleifer, Luke Zettlemoyer: 8-bit Optimizers via Block-wise Quantization. ICLR 2022
- [c44] Ofir Press, Noah A. Smith, Mike Lewis: Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation. ICLR 2022
- [c43] Qinyuan Ye, Madian Khabsa, Mike Lewis, Sinong Wang, Xiang Ren, Aaron Jaech: Sparse Distillation: Speeding Up Text Classification by Using Bigger Student Models. NAACL-HLT 2022: 2361-2375
- [c42] Sewon Min, Mike Lewis, Luke Zettlemoyer, Hannaneh Hajishirzi: MetaICL: Learning to Learn In Context. NAACL-HLT 2022: 2791-2809
- [c41] Dheeru Dua, Shruti Bhosale, Vedanuj Goswami, James Cross, Mike Lewis, Angela Fan: Tricks for Training Sparse Translation Models. NAACL-HLT 2022: 3340-3345
- [c40] Suchin Gururangan, Mike Lewis, Ari Holtzman, Noah A. Smith, Luke Zettlemoyer: DEMix Layers: Disentangling Domains for Modular Language Modeling. NAACL-HLT 2022: 5557-5576
- [c39] Tim Dettmers, Mike Lewis, Younes Belkada, Luke Zettlemoyer: GPT3.int8(): 8-bit Matrix Multiplication for Transformers at Scale. NeurIPS 2022
- [i53] Armen Aghajanyan, Bernie Huang, Candace Ross, Vladimir Karpukhin, Hu Xu, Naman Goyal, Dmytro Okhonko, Mandar Joshi, Gargi Ghosh, Mike Lewis, Luke Zettlemoyer: CM3: A Causal Masked Multimodal Model of the Internet. CoRR abs/2201.07520 (2022)
- [i52] Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, Luke Zettlemoyer: Rethinking the Role of Demonstrations: What Makes In-Context Learning Work? CoRR abs/2202.12837 (2022)
- [i51] Daniel Fried, Armen Aghajanyan, Jessy Lin, Sida Wang, Eric Wallace, Freda Shi, Ruiqi Zhong, Wen-tau Yih, Luke Zettlemoyer, Mike Lewis: InCoder: A Generative Model for Code Infilling and Synthesis. CoRR abs/2204.05999 (2022)
- [i50] Devendra Singh Sachan, Mike Lewis, Mandar Joshi, Armen Aghajanyan, Wen-tau Yih, Joelle Pineau, Luke Zettlemoyer: Improving Passage Retrieval with Zero-Shot Question Generation. CoRR abs/2204.07496 (2022)
- [i49] Mandar Joshi, Terra Blevins, Mike Lewis, Daniel S. Weld, Luke Zettlemoyer: Few-shot Mining of Naturally Occurring Inputs and Outputs. CoRR abs/2205.04050 (2022)
- [i48] Siddharth Dalmia, Dmytro Okhonko, Mike Lewis, Sergey Edunov, Shinji Watanabe, Florian Metze, Luke Zettlemoyer, Abdelrahman Mohamed: LegoNN: Building Modular Encoder-Decoder Models. CoRR abs/2206.03318 (2022)
- [i47] Devendra Singh Sachan, Mike Lewis, Dani Yogatama, Luke Zettlemoyer, Joelle Pineau, Manzil Zaheer: Questions Are All You Need to Train a Dense Passage Retriever. CoRR abs/2206.10658 (2022)
- [i46] Margaret Li, Suchin Gururangan, Tim Dettmers, Mike Lewis, Tim Althoff, Noah A. Smith, Luke Zettlemoyer: Branch-Train-Merge: Embarrassingly Parallel Training of Expert Language Models. CoRR abs/2208.03306 (2022)
- [i45] Tim Dettmers, Mike Lewis, Younes Belkada, Luke Zettlemoyer: LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale. CoRR abs/2208.07339 (2022)
- [i44] Ofir Press, Muru Zhang, Sewon Min, Ludwig Schmidt, Noah A. Smith, Mike Lewis: Measuring and Narrowing the Compositionality Gap in Language Models. CoRR abs/2210.03350 (2022)
- [i43] Xiang Lisa Li, Ari Holtzman, Daniel Fried, Percy Liang, Jason Eisner, Tatsunori Hashimoto, Luke Zettlemoyer, Mike Lewis: Contrastive Decoding: Open-ended Text Generation as Optimization. CoRR abs/2210.15097 (2022)
- [i42] Michihiro Yasunaga, Armen Aghajanyan, Weijia Shi, Rich James, Jure Leskovec, Percy Liang, Mike Lewis, Luke Zettlemoyer, Wen-tau Yih: Retrieval-Augmented Multimodal Language Modeling. CoRR abs/2211.12561 (2022)
- [i41] Weiyan Shi, Emily Dinan, Adi Renduchintala, Daniel Fried, Athul Paul Jacob, Zhou Yu, Mike Lewis: AutoReply: Detecting Nonsense in Dialogue Introspectively with Discriminative Replies. CoRR abs/2211.12615 (2022)
- [i40] Tianyi Zhang, Tao Yu, Tatsunori B. Hashimoto, Mike Lewis, Wen-tau Yih, Daniel Fried, Sida I. Wang: Coder Reviewer Reranking for Code Generation. CoRR abs/2211.16490 (2022)
- [i39] Sewon Min, Weijia Shi, Mike Lewis, Xilun Chen, Wen-tau Yih, Hannaneh Hajishirzi, Luke Zettlemoyer: Nonparametric Masked Language Modeling. CoRR abs/2212.01349 (2022)
- [i38] Sweta Agrawal, Chunting Zhou, Mike Lewis, Luke Zettlemoyer, Marjan Ghazvininejad: In-context Examples Selection for Machine Translation. CoRR abs/2212.02437 (2022)
- [i37] Andrew Lee, David Wu, Emily Dinan, Mike Lewis: Improving Chess Commentaries by Combining Language Models with Symbolic Reasoning Engines. CoRR abs/2212.08195 (2022)
- 2021
- [c38] Ofir Press, Noah A. Smith, Mike Lewis: Shortformer: Better Language Modeling using Shorter Inputs. ACL/IJCNLP (1) 2021: 5493-5505
- [c37] Michael Sejr Schlichtkrull, Vladimir Karpukhin, Barlas Oguz, Mike Lewis, Wen-tau Yih, Sebastian Riedel: Joint Verification and Reranking for Open Fact Checking Over Tables. ACL/IJCNLP (1) 2021: 6787-6799
- [c36] Urvashi Khandelwal, Angela Fan, Dan Jurafsky, Luke Zettlemoyer, Mike Lewis: Nearest Neighbor Machine Translation. ICLR 2021
- [c35] Mike Lewis, Shruti Bhosale, Tim Dettmers, Naman Goyal, Luke Zettlemoyer: BASE Layers: Simplifying Training of Large, Sparse Models. ICML 2021: 6265-6274
- [c34] Athul Paul Jacob, Mike Lewis, Jacob Andreas: Multitasking Inhibits Semantic Drift. NAACL-HLT 2021: 5351-5366
- [i36] Mike Lewis, Shruti Bhosale, Tim Dettmers, Naman Goyal, Luke Zettlemoyer: BASE Layers: Simplifying Training of Large, Sparse Models. CoRR abs/2103.16716 (2021)
- [i35] Athul Paul Jacob, Mike Lewis, Jacob Andreas: Multitasking Inhibits Semantic Drift. CoRR abs/2104.07219 (2021)
- [i34] Robin Jia, Mike Lewis, Luke Zettlemoyer: Question Answering Infused Pre-training of General-Purpose Contextualized Representations. CoRR abs/2106.08190 (2021)
- [i33] Armen Aghajanyan, Dmytro Okhonko, Mike Lewis, Mandar Joshi, Hu Xu, Gargi Ghosh, Luke Zettlemoyer: HTLM: Hyper-Text Pre-Training and Prompting of Language Models. CoRR abs/2107.06955 (2021)
- [i32] Sewon Min, Mike Lewis, Hannaneh Hajishirzi, Luke Zettlemoyer: Noisy Channel Language Model Prompting for Few-Shot Text Classification. CoRR abs/2108.04106 (2021)
- [i31] Suchin Gururangan, Mike Lewis, Ari Holtzman, Noah A. Smith, Luke Zettlemoyer: DEMix Layers: Disentangling Domains for Modular Language Modeling. CoRR abs/2108.05036 (2021)
- [i30] Ofir Press, Noah A. Smith, Mike Lewis: Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation. CoRR abs/2108.12409 (2021)
- [i29] Tim Dettmers, Mike Lewis, Sam Shleifer, Luke Zettlemoyer: 8-bit Optimizers via Block-wise Quantization. CoRR abs/2110.02861 (2021)
- [i28] Dheeru Dua, Shruti Bhosale, Vedanuj Goswami, James Cross, Mike Lewis, Angela Fan: Tricks for Training Sparse Translation Models. CoRR abs/2110.08246 (2021)
- [i27] Qinyuan Ye, Madian Khabsa, Mike Lewis, Sinong Wang, Xiang Ren, Aaron Jaech: Sparse Distillation: Speeding Up Text Classification by Using Bigger Models. CoRR abs/2110.08536 (2021)
- [i26] Sewon Min, Mike Lewis, Luke Zettlemoyer, Hannaneh Hajishirzi: MetaICL: Learning to Learn In Context. CoRR abs/2110.15943 (2021)
- 2020
- [j5] Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer: Multilingual Denoising Pre-training for Neural Machine Translation. Trans. Assoc. Comput. Linguistics 8: 726-742 (2020)
- [c33] Alex Wang, Kyunghyun Cho, Mike Lewis: Asking and Answering Questions to Evaluate the Factual Consistency of Summaries. ACL 2020: 5008-5020
- [c32] Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, Luke Zettlemoyer: BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension. ACL 2020: 7871-7880
- [c31] Armen Aghajanyan, Jean Maillard, Akshat Shrivastava, Keith Diedrick, Michael Haeger, Haoran Li, Yashar Mehdad, Veselin Stoyanov, Anuj Kumar, Mike Lewis, Sonal Gupta: Conversational Semantic Parsing. EMNLP (1) 2020: 5026-5035
- [c30] Victor Zhong, Mike Lewis, Sida I. Wang, Luke Zettlemoyer: Grounded Adaptation for Zero-shot Executable Semantic Parsing. EMNLP (1) 2020: 6869-6882
- [c29] Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, Mike Lewis: Generalization through Memorization: Nearest Neighbor Language Models. ICLR 2020
- [c28] Mike Lewis, Marjan Ghazvininejad, Gargi Ghosh, Armen Aghajanyan, Sida Wang, Luke Zettlemoyer: Pre-training via Paraphrasing. NeurIPS 2020
- [c27] Patrick S. H. Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela: Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks. NeurIPS 2020
- [i25] Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer: Multilingual Denoising Pre-training for Neural Machine Translation. CoRR abs/2001.08210 (2020)
- [i24] Alex Wang, Kyunghyun Cho, Mike Lewis: Asking and Answering Questions to Evaluate the Factual Consistency of Summaries. CoRR abs/2004.04228 (2020)
- [i23] Patrick S. H. Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela: Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks. CoRR abs/2005.11401 (2020)
- [i22] Mike Lewis, Marjan Ghazvininejad, Gargi Ghosh, Armen Aghajanyan, Sida I. Wang, Luke Zettlemoyer: Pre-training via Paraphrasing. CoRR abs/2006.15020 (2020)
- [i21] Victor Zhong, Mike Lewis, Sida I. Wang, Luke Zettlemoyer: Grounded Adaptation for Zero-shot Executable Semantic Parsing. CoRR abs/2009.07396 (2020)
- [i20] Armen Aghajanyan, Jean Maillard, Akshat Shrivastava, Keith Diedrick, Mike Haeger, Haoran Li, Yashar Mehdad, Ves Stoyanov, Anuj Kumar, Mike Lewis, Sonal Gupta: Conversational Semantic Parsing. CoRR abs/2009.13655 (2020)
- [i19] Urvashi Khandelwal, Angela Fan, Dan Jurafsky, Luke Zettlemoyer, Mike Lewis: Nearest Neighbor Machine Translation. CoRR abs/2010.00710 (2020)
- [i18] Michael Sejr Schlichtkrull, Vladimir Karpukhin, Barlas Oguz, Mike Lewis, Wen-tau Yih, Sebastian Riedel: Joint Verification and Reranking for Open Fact Checking Over Tables. CoRR abs/2012.15115 (2020)
- [i17] Ofir Press, Noah A. Smith, Mike Lewis: Shortformer: Better Language Modeling using Shorter Inputs. CoRR abs/2012.15832 (2020)
2010 – 2019
- 2019
- [c26] Angela Fan, Mike Lewis, Yann N. Dauphin: Strategies for Structuring Story Generation. ACL (1) 2019: 2650-2660
- [c25] Akshat Agarwal, Swaminathan Gurumurthy, Vasu Sharma, Mike Lewis, Katia P. Sycara: Community Regularization of Visually-Grounded Dialog. AAMAS 2019: 1042-1050
- [c24] Panupong Pasupat, Sonal Gupta, Karishma Mandyam, Rushin Shah, Mike Lewis, Luke Zettlemoyer: Span-based Hierarchical Semantic Parsing for Task-Oriented Dialog. EMNLP/IJCNLP (1) 2019: 1520-1526
- [c23] Mike Lewis, Angela Fan: Generative Question Answering: Learning to Answer the Whole Question. ICLR (Poster) 2019
- [c22] Sebastian Schuster, Sonal Gupta, Rushin Shah, Mike Lewis: Cross-lingual Transfer Learning for Multilingual Task Oriented Dialog. NAACL-HLT (1) 2019: 3795-3805
- [c21] Hengyuan Hu, Denis Yarats, Qucheng Gong, Yuandong Tian, Mike Lewis: Hierarchical Decision Making by Generating and Following Natural Language Instructions. NeurIPS 2019: 10025-10034
- [i16] Angela Fan, Mike Lewis, Yann N. Dauphin: Strategies for Structuring Story Generation. CoRR abs/1902.01109 (2019)
- [i15] Arash Einolghozati, Panupong Pasupat, Sonal Gupta, Rushin Shah, Mrinal Mohit, Mike Lewis, Luke Zettlemoyer: Improving Semantic Parsing for Task Oriented Dialog. CoRR abs/1902.06000 (2019)
- [i14] Hengyuan Hu, Denis Yarats, Qucheng Gong, Yuandong Tian, Mike Lewis: Hierarchical Decision Making by Generating and Following Natural Language Instructions. CoRR abs/1906.00744 (2019)
- [i13] Sean Vasquez, Mike Lewis: MelNet: A Generative Model for Audio in the Frequency Domain. CoRR abs/1906.01083 (2019)
- [i12] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov: RoBERTa: A Robustly Optimized BERT Pretraining Approach. CoRR abs/1907.11692 (2019)
- [i11] Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, Luke Zettlemoyer: BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension. CoRR abs/1910.13461 (2019)
- [i10] Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, Mike Lewis: Generalization through Memorization: Nearest Neighbor Language Models. CoRR abs/1911.00172 (2019)
- [i9] Siddharth Dalmia, Abdelrahman Mohamed, Mike Lewis, Florian Metze, Luke Zettlemoyer: Enforcing Encoder-Decoder Modularity in Sequence-to-Sequence Models. CoRR abs/1911.03782 (2019)
- 2018
- [j4] Alane Suhr, Mike Lewis, James Yeh, Yoav Artzi: Evaluating Visual Reasoning through Grounded Language Understanding. AI Mag. 39(2): 45-52 (2018)
- [c20] Angela Fan, Mike Lewis, Yann N. Dauphin: Hierarchical Neural Story Generation. ACL (1) 2018: 889-898
- [c19] Spandana Gella, Mike Lewis, Marcus Rohrbach: A Dataset for Telling the Stories of Social Media Videos. EMNLP 2018: 968-974
- [c18] Nitish Gupta, Mike Lewis: Neural Compositional Denotational Semantics for Question Answering. EMNLP 2018: 2152-2161
- [c17] Sonal Gupta, Rushin Shah, Mrinal Mohit, Anuj Kumar, Mike Lewis: Semantic Parsing for Task Oriented Dialog using Hierarchical Representations. EMNLP 2018: 2787-2792
- [c16] Denis Yarats, Mike Lewis: Hierarchical Text Generation and Planning for Strategic Dialogue. ICML 2018: 5587-5595
- [c15] Paul C. Hershey, Mike Sica, Mike Lewis: Common ground control system (CGCS) to support autonomous object observation, collection, and response in multi-domain environments. SysCon 2018: 1-6
- [i8] Angela Fan, Mike Lewis, Yann N. Dauphin: Hierarchical Neural Story Generation. CoRR abs/1805.04833 (2018)
- [i7] Nitish Gupta, Mike Lewis: Neural Compositional Denotational Semantics for Question Answering. CoRR abs/1808.09942 (2018)
- [i6] Sonal Gupta, Rushin Shah, Mrinal Mohit, Anuj Kumar, Mike Lewis: Semantic Parsing for Task Oriented Dialog using Hierarchical Representations. CoRR abs/1810.07942 (2018)
- [i5] Sebastian Schuster, Sonal Gupta, Rushin Shah, Mike Lewis: Cross-lingual Transfer Learning for Multilingual Task Oriented Dialog. CoRR abs/1810.13327 (2018)
- 2017
- [c14] Alane Suhr, Mike Lewis, James Yeh, Yoav Artzi: A Corpus of Natural Language for Visual Reasoning. ACL (2) 2017: 217-223
- [c13] Luheng He, Kenton Lee, Mike Lewis, Luke Zettlemoyer: Deep Semantic Role Labeling: What Works and What's Next. ACL (1) 2017: 473-483
- [c12] Kenton Lee, Luheng He, Mike Lewis, Luke Zettlemoyer: End-to-end Neural Coreference Resolution. EMNLP 2017: 188-197
- [c11] Mike Lewis, Denis Yarats, Yann N. Dauphin, Devi Parikh, Dhruv Batra: Deal or No Deal? End-to-End Learning of Negotiation Dialogues. EMNLP 2017: 2443-2453
- [i4] Mike Lewis, Denis Yarats, Yann N. Dauphin, Devi Parikh, Dhruv Batra: Deal or No Deal? End-to-End Learning for Negotiation Dialogues. CoRR abs/1706.05125 (2017)
- [i3] Kenton Lee, Luheng He, Mike Lewis, Luke Zettlemoyer: End-to-end Neural Coreference Resolution. CoRR abs/1707.07045 (2017)
- [i2] Denis Yarats, Mike Lewis: Hierarchical Text Generation and Planning for Strategic Dialogue. CoRR abs/1712.05846 (2017)
- 2016
- [c10] Luheng He, Julian Michael, Mike Lewis, Luke Zettlemoyer: Human-in-the-Loop Parsing. EMNLP 2016: 2337-2342
- [c9] Kenton Lee, Mike Lewis, Luke Zettlemoyer: Global Neural CCG Parsing with Optimality Guarantees. EMNLP 2016: 2366-2376
- [c8] Mike Lewis, Kenton Lee, Luke Zettlemoyer: LSTM CCG Parsing. HLT-NAACL 2016: 221-231
- [i1] Kenton Lee, Mike Lewis, Luke Zettlemoyer: Global Neural CCG Parsing with Optimality Guarantees. CoRR abs/1607.01432 (2016)
- 2015
- [c7] Luheng He, Mike Lewis, Luke Zettlemoyer: Question-Answer Driven Semantic Role Labeling: Using Natural Language to Annotate Natural Language. EMNLP 2015: 643-653
- [c6] Mike Lewis, Luheng He, Luke Zettlemoyer: Joint A* CCG Parsing and Semantic Role Labelling. EMNLP 2015: 1444-1454
- 2014
- [j3] Mike Lewis, Mark Steedman: Improved CCG Parsing with Semi-supervised Supertagging. Trans. Assoc. Comput. Linguistics 2: 327-338 (2014)
- [c5] Mike Lewis, Mark Steedman: A* CCG Parsing with a Supertag-factored Model. EMNLP 2014: 990-1000
- [c4] Peter Kaiser, Mike Lewis, Ronald P. A. Petrick, Tamim Asfour, Mark Steedman: Extracting common sense knowledge from text for robot planning. ICRA 2014: 3749-3756
- 2013
- [j2] Mike Lewis, Mark Steedman: Combined Distributional and Logical Semantics. Trans. Assoc. Comput. Linguistics 1: 179-192 (2013)
- [c3] Mike Lewis, Mark Steedman: Unsupervised Induction of Cross-Lingual Semantic Relations. EMNLP 2013: 681-692
- [c2] Kai Welke, Peter Kaiser, Alexey Kozlov, Nils Adermann, Tamim Asfour, Mike Lewis, Mark Steedman: Grounded spatial symbols for task planning based on experience. Humanoids 2013: 484-491
- [c1] Imran Khan Azeemi, Mike Lewis, Theo Tryfonas: Migrating To The Cloud: Lessons And Limitations Of 'Traditional' IS Success Models. CSER 2013: 737-746
2000 – 2009
- 2007
- [j1] David S. Wishart, Dan Tzur, Craig Knox, Roman Eisner, Anchi Guo, Nelson Young, Dean Cheng, Kevin Jewell, David Arndt, Summit Sawhney, Chris Fung, Lisa Nikolai, Mike Lewis, Marie-Aude Coutouly, Ian J. Forsythe, Peter Tang, Savita Shrivastava, Kevin Jeroncic, Paul Stothard, Godwin Amegbey, David Block, David D. Hau, James Wagner, Jessica Miniaci, Melisa Clements, Mulu Gebremedhin, Natalie Guo, Ying Zhang, Gavin E. Duggan, Glen D. MacInnis, Alim M. Weljie, Reza Dowlatabadi, Fiona Bamforth, Derrick Clive, Russell Greiner, Liang Li, Tom Marrie, Brian D. Sykes, Hans J. Vogel, Lori Querengesser: HMDB: the Human Metabolome Database. Nucleic Acids Res. 35(Database-Issue): 521-526 (2007)