Jacob Andreas
2020 – today
- 2024
- [j2] Jessy Lin, Nicholas Tomlin, Jacob Andreas, Jason Eisner: Decision-Oriented Dialogue for Human-AI Collaboration. Trans. Assoc. Comput. Linguistics 12: 892-911 (2024)
- [c96] Chengxu Zhuang, Evelina Fedorenko, Jacob Andreas: Lexicon-Level Contrastive Visual-Grounding Improves Language Modeling. ACL (Findings) 2024: 231-247
- [c95] Afra Feyza Akyürek, Ekin Akyürek, Leshem Choshen, Derry Wijaya, Jacob Andreas: Deductive Closure Training of Language Models for Coherence, Accuracy, and Updatability. ACL (Findings) 2024: 9802-9818
- [c94] Alexis Ross, Jacob Andreas: Toward In-Context Teaching: Adapting Examples to Students' Misconceptions. ACL (1) 2024: 13283-13310
- [c93] Kaj Bostrom, Harsh Jhamtani, Hao Fang, Sam Thomson, Richard Shin, Patrick Xia, Benjamin Van Durme, Jason Eisner, Jacob Andreas: Language-to-Code Translation with a Single Labeled Example. EMNLP 2024: 8101-8112
- [c92] Saadia Gabriel, Liang Lyu, James Siderius, Marzyeh Ghassemi, Jacob Andreas, Asuman E. Ozdaglar: MisinfoEval: Generative AI in the Era of "Alternative Facts". EMNLP 2024: 8566-8578
- [c91] Gabriel Grand, Lionel Wong, Matthew Bowers, Theo X. Olausson, Muxin Liu, Joshua B. Tenenbaum, Jacob Andreas: LILO: Learning Interpretable Libraries by Compressing and Documenting Code. ICLR 2024
- [c90] Evan Hernandez, Arnab Sen Sharma, Tal Haklay, Kevin Meng, Martin Wattenberg, Jacob Andreas, Yonatan Belinkov, David Bau: Linearity of Relation Decoding in Transformer Language Models. ICLR 2024
- [c89] Athul Paul Jacob, Abhishek Gupta, Jacob Andreas: Modeling Boundedly Rational Agents with Latent Inference Budgets. ICLR 2024
- [c88] Athul Paul Jacob, Yikang Shen, Gabriele Farina, Jacob Andreas: The Consensus Game: Language Model Generation via Equilibrium Search. ICLR 2024
- [c87] Andi Peng, Ilia Sucholutsky, Belinda Z. Li, Theodore R. Sumers, Thomas L. Griffiths, Jacob Andreas, Julie Shah: Learning with Language-Guided State Abstractions. ICLR 2024
- [c86] Lionel Wong, Jiayuan Mao, Pratyusha Sharma, Zachary S. Siegel, Jiahai Feng, Noa Korneev, Joshua B. Tenenbaum, Jacob Andreas: Learning Grounded Action Abstractions from Language. ICLR 2024
- [c85] Ekin Akyürek, Bailin Wang, Yoon Kim, Jacob Andreas: In-Context Language Learning: Architectures and Algorithms. ICML 2024
- [c84] Bairu Hou, Yujian Liu, Kaizhi Qian, Jacob Andreas, Shiyu Chang, Yang Zhang: Decomposing Uncertainty for Large Language Models through Input Clarification Ensembling. ICML 2024
- [c83] Tamar Rott Shaham, Sarah Schwettmann, Franklin Wang, Achyuta Rajaram, Evan Hernandez, Jacob Andreas, Antonio Torralba: A Multimodal Automated Interpretability Agent. ICML 2024
- [c82] Harsh Jhamtani, Hao Fang, Patrick Xia, Eran Levy, Jacob Andreas, Benjamin Van Durme: Natural Language Decomposition and Interpretation of Complex Utterances. IJCAI 2024: 6306-6314
- [c81] Chengxu Zhuang, Evelina Fedorenko, Jacob Andreas: Visual Grounding Helps Learn Word Meanings in Low-Data Regimes. NAACL-HLT 2024: 1311-1329
- [c80] Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim: Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks. NAACL-HLT 2024: 1819-1862
- [c79] Athul Paul Jacob, Gabriele Farina, Jacob Andreas: Regularized Conventions: Equilibrium Computation as a Model of Pragmatic Reasoning. NAACL-HLT 2024: 2944-2955
- [c78] Nikita Moghe, Patrick Xia, Jacob Andreas, Jason Eisner, Benjamin Van Durme, Harsh Jhamtani: Interpreting User Requests in the Context of Natural Language Standing Instructions. NAACL-HLT (Findings) 2024: 4043-4060
- [i107] Afra Feyza Akyürek, Ekin Akyürek, Leshem Choshen, Derry Wijaya, Jacob Andreas: Deductive Closure Training of Language Models for Coherence, Accuracy, and Updatability. CoRR abs/2401.08574 (2024)
- [i106] Ekin Akyürek, Bailin Wang, Yoon Kim, Jacob Andreas: In-Context Language Learning: Architectures and Algorithms. CoRR abs/2401.12973 (2024)
- [i105] Andi Peng, Ilia Sucholutsky, Belinda Z. Li, Theodore R. Sumers, Thomas L. Griffiths, Jacob Andreas, Julie A. Shah: Learning with Language-Guided State Abstractions. CoRR abs/2402.18759 (2024)
- [i104] Gabriel Grand, Valerio Pepe, Jacob Andreas, Joshua B. Tenenbaum: Loose LIPS Sink Ships: Asking Questions in Battleship with Language-Informed Program Sampling. CoRR abs/2402.19471 (2024)
- [i103] Kunal Handa, Yarin Gal, Ellie Pavlick, Noah D. Goodman, Jacob Andreas, Alex Tamkin, Belinda Z. Li: Bayesian Preference Elicitation with Language Models. CoRR abs/2403.05534 (2024)
- [i102] Chengxu Zhuang, Evelina Fedorenko, Jacob Andreas: Lexicon-Level Contrastive Visual-Grounding Improves Language Modeling. CoRR abs/2403.14551 (2024)
- [i101] Emmy Liu, Graham Neubig, Jacob Andreas: An Incomplete Loop: Deductive, Inductive, and Abductive Learning in Large Language Models. CoRR abs/2404.03028 (2024)
- [i100] Achyuta Rajaram, Neil Chowdhury, Antonio Torralba, Jacob Andreas, Sarah Schwettmann: Automatic Discovery of Visual Circuits. CoRR abs/2404.14349 (2024)
- [i99] Tamar Rott Shaham, Sarah Schwettmann, Franklin Wang, Achyuta Rajaram, Evan Hernandez, Jacob Andreas, Antonio Torralba: A Multimodal Automated Interpretability Agent. CoRR abs/2404.14394 (2024)
- [i98] Megha Srivastava, Cédric Colas, Dorsa Sadigh, Jacob Andreas: Policy Learning with a Language Bottleneck. CoRR abs/2405.04118 (2024)
- [i97] Alexis Ross, Jacob Andreas: Toward In-Context Teaching: Adapting Examples to Students' Misconceptions. CoRR abs/2405.04495 (2024)
- [i96] Canaan Breiss, Alexis Ross, Amani Maina-Kilaas, Roger Levy, Jacob Andreas: Learning Phonotactics from Linguistic Informants. CoRR abs/2405.04726 (2024)
- [i95] Anna A. Ivanova, Aalok Sathe, Benjamin Lipkin, Unnathi Kumar, Setayesh Radkani, Thomas Hikaru Clark, Carina Kauf, Jennifer Hu, R. T. Pramod, Gabriel Grand, Vivian C. Paulun, Maria Ryskina, Ekin Akyürek, Ethan Wilcox, Nafisa Rashid, Leshem Choshen, Roger Levy, Evelina Fedorenko, Joshua B. Tenenbaum, Jacob Andreas: Elements of World Knowledge (EWOK): A cognition-inspired framework for evaluating basic world knowledge in language models. CoRR abs/2405.09605 (2024)
- [i94] Bairu Hou, Yang Zhang, Jacob Andreas, Shiyu Chang: A Probabilistic Framework for LLM Hallucination Detection via Belief Tree Propagation. CoRR abs/2406.06950 (2024)
- [i93] Belinda Z. Li, Emmy Liu, Alexis Ross, Abbas Zeitoun, Graham Neubig, Jacob Andreas: Language Modeling with Editable External Knowledge. CoRR abs/2406.11830 (2024)
- [i92] Eric Zhang, Leshem Choshen, Jacob Andreas: Unforgettable Generalization in Language Models. CoRR abs/2409.02228 (2024)
- [i91] Andi Peng, Belinda Z. Li, Ilia Sucholutsky, Nishanth Kumar, Julie A. Shah, Jacob Andreas, Andreea Bobu: Adaptive Language-Guided Abstraction from Contrastive Explanations. CoRR abs/2409.08212 (2024)
- [i90] Ziqian Zhong, Jacob Andreas: Algorithmic Capabilities of Random Transformers. CoRR abs/2410.04368 (2024)
- [i89] Mehul Damani, Idan Shenfeld, Andi Peng, Andreea Bobu, Jacob Andreas: Learning How Hard to Think: Input-Adaptive Allocation of LM Computation. CoRR abs/2410.04707 (2024)
- [i88] Saadia Gabriel, Liang Lyu, James Siderius, Marzyeh Ghassemi, Jacob Andreas, Asuman E. Ozdaglar: MisinfoEval: Generative AI in the Era of "Alternative Facts". CoRR abs/2410.09949 (2024)
- [i87] Morris Yau, Ekin Akyürek, Jiayuan Mao, Joshua B. Tenenbaum, Stefanie Jegelka, Jacob Andreas: Learning Linear Attention in Polynomial Time. CoRR abs/2410.10101 (2024)
- [i86] Leshem Choshen, Yang Zhang, Jacob Andreas: A Hitchhiker's Guide to Scaling Law Estimation. CoRR abs/2410.11840 (2024)
- [i85] Reece Shuttleworth, Jacob Andreas, Antonio Torralba, Pratyusha Sharma: LoRA vs Full Fine-tuning: An Illusion of Equivalence. CoRR abs/2410.21228 (2024)
- [i84] Ekin Akyürek, Mehul Damani, Linlu Qiu, Han Guo, Yoon Kim, Jacob Andreas: The Surprising Effectiveness of Test-Time Training for Abstract Reasoning. CoRR abs/2411.07279 (2024)
- 2023
- [c77] Shikhar Murty, Pratyusha Sharma, Jacob Andreas, Christopher D. Manning: Grokking of Hierarchical Structure in Vanilla Transformers. ACL (2) 2023: 439-448
- [c76] Ekin Akyürek, Jacob Andreas: LexSym: Compositionality as Lexical Symmetry. ACL (1) 2023: 639-657
- [c75] Hao Fang, Anusha Balakrishnan, Harsh Jhamtani, John Bufe, Jean Crawford, Jayant Krishnamurthy, Adam Pauls, Jason Eisner, Jacob Andreas, Dan Klein: The Whole Truth and Nothing But the Truth: Faithful and Controllable Dialogue Response Generation with Dataflow Transduction and Constrained Decoding. ACL (Findings) 2023: 5682-5700
- [c74] Belinda Z. Li, Maxwell I. Nye, Jacob Andreas: Language Modeling with Latent Situations. ACL (Findings) 2023: 12556-12571
- [c73] Shinjini Ghosh, Yoon Kim, Ramón Fernandez Astudillo, Tahira Naseem, Jacob Andreas: Alignment via Mutual Information. CoNLL 2023: 488-497
- [c72] Shikhar Murty, Pratyusha Sharma, Jacob Andreas, Christopher D. Manning: Pushdown Layers: Encoding Recursive Structure in Transformer Language Models. EMNLP 2023: 3233-3247
- [c71] Kevin Liu, Stephen Casper, Dylan Hadfield-Menell, Jacob Andreas: Cognitive Dissonance: Why Do Language Model Outputs Disagree with Internal Representations of Truthfulness? EMNLP 2023: 4791-4797
- [c70] Ekin Akyürek, Dale Schuurmans, Jacob Andreas, Tengyu Ma, Denny Zhou: What learning algorithm is in-context learning? Investigations with linear models. ICLR 2023
- [c69] Shikhar Murty, Pratyusha Sharma, Jacob Andreas, Christopher D. Manning: Characterizing intrinsic compositionality in transformers with Tree Projections. ICLR 2023
- [c68] Yuqing Du, Olivia Watkins, Zihan Wang, Cédric Colas, Trevor Darrell, Pieter Abbeel, Abhishek Gupta, Jacob Andreas: Guiding Pretraining in Reinforcement Learning with Large Language Models. ICML 2023: 8657-8677
- [c67] Bairu Hou, Joe O'Connor, Jacob Andreas, Shiyu Chang, Yang Zhang: PromptBoosting: Black-Box Text Classification with Ten Forward Passes. ICML 2023: 13309-13324
- [c66] Sarah Schwettmann, Tamar Rott Shaham, Joanna Materzynska, Neil Chowdhury, Shuang Li, Jacob Andreas, David Bau, Antonio Torralba: FIND: A Function Description Benchmark for Evaluating Interpretability Methods. NeurIPS 2023
- [c65] Ziqian Zhong, Ziming Liu, Max Tegmark, Jacob Andreas: The Clock and the Pizza: Two Stories in Mechanistic Explanation of Neural Networks. NeurIPS 2023
- [i83] Belinda Z. Li, William Chen, Pratyusha Sharma, Jacob Andreas: LaMPP: Language Models as Probabilistic Priors for Perception and Action. CoRR abs/2302.02801 (2023)
- [i82] Yuqing Du, Olivia Watkins, Zihan Wang, Cédric Colas, Trevor Darrell, Pieter Abbeel, Abhishek Gupta, Jacob Andreas: Guiding Pretraining in Reinforcement Learning with Large Language Models. CoRR abs/2302.06692 (2023)
- [i81] Eric Chu, Jacob Andreas, Stephen Ansolabehere, Deb Roy: Language Models Trained on Media Diets Can Predict Public Opinion. CoRR abs/2303.16779 (2023)
- [i80] Evan Hernandez, Belinda Z. Li, Jacob Andreas: Measuring and Manipulating Knowledge Representations in Language Models. CoRR abs/2304.00740 (2023)
- [i79] Harsh Jhamtani, Hao Fang, Patrick Xia, Eran Levy, Jacob Andreas, Benjamin Van Durme: Natural Language Decomposition and Interpretation of Complex Utterances. CoRR abs/2305.08677 (2023)
- [i78] Shikhar Murty, Pratyusha Sharma, Jacob Andreas, Christopher D. Manning: Grokking of Hierarchical Structure in Vanilla Transformers. CoRR abs/2305.18741 (2023)
- [i77] Jessy Lin, Nicholas Tomlin, Jacob Andreas, Jason Eisner: Decision-Oriented Dialogue for Human-AI Collaboration. CoRR abs/2305.20076 (2023)
- [i76] Lionel Wong, Gabriel Grand, Alexander K. Lew, Noah D. Goodman, Vikash K. Mansinghka, Jacob Andreas, Joshua B. Tenenbaum: From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought. CoRR abs/2306.12672 (2023)
- [i75] Ziqian Zhong, Ziming Liu, Max Tegmark, Jacob Andreas: The Clock and the Pizza: Two Stories in Mechanistic Explanation of Neural Networks. CoRR abs/2306.17844 (2023)
- [i74] Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim: Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks. CoRR abs/2307.02477 (2023)
- [i73] Evan Hernandez, Arnab Sen Sharma, Tal Haklay, Kevin Meng, Martin Wattenberg, Jacob Andreas, Yonatan Belinkov, David Bau: Linearity of Relation Decoding in Transformer Language Models. CoRR abs/2308.09124 (2023)
- [i72] Sarah Schwettmann, Tamar Rott Shaham, Joanna Materzynska, Neil Chowdhury, Shuang Li, Jacob Andreas, David Bau, Antonio Torralba: A Function Interpretation Benchmark for Evaluating Interpretability Methods. CoRR abs/2309.03886 (2023)
- [i71] Athul Paul Jacob, Yikang Shen, Gabriele Farina, Jacob Andreas: The Consensus Game: Language Model Generation via Equilibrium Search. CoRR abs/2310.09139 (2023)
- [i70] Belinda Z. Li, Alex Tamkin, Noah D. Goodman, Jacob Andreas: Eliciting Human Preferences with Language Models. CoRR abs/2310.11589 (2023)
- [i69] Chengxu Zhuang, Evelina Fedorenko, Jacob Andreas: Visual Grounding Helps Learn Word Meanings in Low-Data Regimes. CoRR abs/2310.13257 (2023)
- [i68] Shikhar Murty, Pratyusha Sharma, Jacob Andreas, Christopher D. Manning: Pushdown Layers: Encoding Recursive Structure in Transformer Language Models. CoRR abs/2310.19089 (2023)
- [i67] Gabriel Grand, Lionel Wong, Matthew Bowers, Theo X. Olausson, Muxin Liu, Joshua B. Tenenbaum, Jacob Andreas: LILO: Learning Interpretable Libraries by Compressing and Documenting Code. CoRR abs/2310.19791 (2023)
- [i66] Bairu Hou, Yujian Liu, Kaizhi Qian, Jacob Andreas, Shiyu Chang, Yang Zhang: Decomposing Uncertainty for Large Language Models through Input Clarification Ensembling. CoRR abs/2311.08718 (2023)
- [i65] Athul Paul Jacob, Gabriele Farina, Jacob Andreas: Regularized Conventions: Equilibrium Computation as a Model of Pragmatic Reasoning. CoRR abs/2311.09712 (2023)
- [i64] Nikita Moghe, Patrick Xia, Jacob Andreas, Jason Eisner, Benjamin Van Durme, Harsh Jhamtani: Interpreting User Requests in the Context of Natural Language Standing Instructions. CoRR abs/2311.09796 (2023)
- [i63] Kevin Liu, Stephen Casper, Dylan Hadfield-Menell, Jacob Andreas: Cognitive Dissonance: Why Do Language Model Outputs Disagree with Internal Representations of Truthfulness? CoRR abs/2312.03729 (2023)
- [i62] Athul Paul Jacob, Abhishek Gupta, Jacob Andreas: Modeling Boundedly Rational Agents with Latent Inference Budgets. CoRR abs/2312.04030 (2023)
- [i61] Shawn Im, Jacob Andreas, Yilun Zhou: Evaluating the Utility of Model Explanations for Model Development. CoRR abs/2312.06032 (2023)
- [i60] Lionel Wong, Jiayuan Mao, Pratyusha Sharma, Zachary S. Siegel, Jiahai Feng, Noa Korneev, Joshua B. Tenenbaum, Jacob Andreas: Learning adaptive planning representations with natural language guidance. CoRR abs/2312.08566 (2023)
- 2022
- [c64] Anton Belyy, Chieh-Yang Huang, Jacob Andreas, Emmanouil Antonios Platanios, Sam Thomson, Richard Shin, Subhro Roy, Aleksandr Nisnevich, Charles Chen, Benjamin Van Durme: Guided K-best Selection for Semantic Parsing Annotation. ACL (demo) 2022: 114-126
- [c63] Pratyusha Sharma, Antonio Torralba, Jacob Andreas: Skill Induction and Planning with Latent Language. ACL (1) 2022: 1713-1726
- [c62] Catherine Wong, William P. McCarthy, Gabriel Grand, Yoni Friedman, Josh Tenenbaum, Jacob Andreas, Robert D. Hawkins, Judith E. Fan: Identifying concept libraries from language about object structure. CogSci 2022
- [c61] Ekin Akyürek, Tolga Bolukbasi, Frederick Liu, Binbin Xiong, Ian Tenney, Jacob Andreas, Kelvin Guu: Towards Tracing Knowledge in Language Models Back to the Training Data. EMNLP (Findings) 2022: 2429-2446
- [c60] Jacob Andreas: Language Models as Agent Models. EMNLP (Findings) 2022: 5769-5779
- [c59] Bailin Wang, Ivan Titov, Jacob Andreas, Yoon Kim: Hierarchical Phrase-Based Sequence-to-Sequence Learning. EMNLP 2022: 8211-8229
- [c58] Afra Feyza Akyürek, Ekin Akyürek, Derry Wijaya, Jacob Andreas: Subspace Regularizers for Few-Shot Class Incremental Learning. ICLR 2022
- [c57] Evan Hernandez, Sarah Schwettmann, David Bau, Teona Bagashvili, Antonio Torralba, Jacob Andreas: Natural Language Descriptions of Deep Visual Features. ICLR 2022
- [c56] Athul Paul Jacob, David J. Wu, Gabriele Farina, Adam Lerer, Hengyuan Hu, Anton Bakhtin, Jacob Andreas, Noam Brown: Modeling Strong and Human-Like Gameplay with KL-Regularized Search. ICML 2022: 9695-9728
- [c55] Belinda Z. Li, Jane A. Yu, Madian Khabsa, Luke Zettlemoyer, Alon Y. Halevy, Jacob Andreas: Quantifying Adaptability in Pre-trained Language Models with 500 Tasks. NAACL-HLT 2022: 4696-4715
- [c54] Shuang Li, Xavier Puig, Chris Paxton, Yilun Du, Clinton Wang, Linxi Fan, Tao Chen, De-An Huang, Ekin Akyürek, Anima Anandkumar, Jacob Andreas, Igor Mordatch, Antonio Torralba, Yuke Zhu: Pre-Trained Language Models for Interactive Decision-Making. NeurIPS 2022
- [c53] Pratyusha Sharma, Balakumar Sundaralingam, Valts Blukis, Chris Paxton, Tucker Hermans, Antonio Torralba, Jacob Andreas, Dieter Fox: Correcting Robot Plans with Natural Language Feedback. Robotics: Science and Systems 2022
- [i59] Evan Hernandez, Sarah Schwettmann, David Bau, Teona Bagashvili, Antonio Torralba, Jacob Andreas: Natural Language Descriptions of Deep Visual Features. CoRR abs/2201.11114 (2022)
- [i58] Ekin Akyürek, Jacob Andreas: Compositionality as Lexical Symmetry. CoRR abs/2201.12926 (2022)
- [i57] Shuang Li, Xavier Puig, Chris Paxton, Yilun Du, Clinton Wang, Linxi Fan, Tao Chen, De-An Huang, Ekin Akyürek, Anima Anandkumar, Jacob Andreas, Igor Mordatch, Antonio Torralba, Yuke Zhu: Pre-Trained Language Models for Interactive Decision-Making. CoRR abs/2202.01771 (2022)
- [i56] Olivia Watkins, Trevor Darrell, Pieter Abbeel, Jacob Andreas, Abhishek Gupta: Teachable Reinforcement Learning via Advice Distillation. CoRR abs/2203.11197 (2022)
- [i55] Pratyusha Sharma, Balakumar Sundaralingam, Valts Blukis, Chris Paxton, Tucker Hermans, Antonio Torralba, Jacob Andreas, Dieter Fox: Correcting Robot Plans with Natural Language Feedback. CoRR abs/2204.05186 (2022)
- [i54] Catherine Wong, William P. McCarthy, Gabriel Grand, Yoni Friedman, Joshua B. Tenenbaum, Jacob Andreas, Robert D. Hawkins, Judith E. Fan: Identifying concept libraries from language about object structure. CoRR abs/2205.05666 (2022)
- [i53] Ekin Akyürek, Tolga Bolukbasi, Frederick Liu, Binbin Xiong, Ian Tenney, Jacob Andreas, Kelvin Guu: Tracing Knowledge in Language Models Back to the Training Data. CoRR abs/2205.11482 (2022)
- [i52] Hao Fang, Anusha Balakrishnan, Harsh Jhamtani, John Bufe, Jean Crawford, Jayant Krishnamurthy, Adam Pauls, Jason Eisner, Jacob Andreas, Dan Klein: The Whole Truth and Nothing But the Truth: Faithful and Controllable Dialogue Response Generation with Dataflow Transduction and Constrained Decoding. CoRR abs/2209.07800 (2022)
- [i51] Alex Gu, Tamara Mitrovska, Daniela Velez, Jacob Andreas, Armando Solar-Lezama: ObSynth: An Interactive Synthesis System for Generating Object Models from Natural Language Specifications. CoRR abs/2210.11468 (2022)
- [i50] Shikhar Murty, Pratyusha Sharma, Jacob Andreas, Christopher D. Manning: Characterizing Intrinsic Compositionality in Transformers with Tree Projections. CoRR abs/2211.01288 (2022)
- [i49] Bailin Wang, Ivan Titov, Jacob Andreas, Yoon Kim: Hierarchical Phrase-based Sequence-to-Sequence Learning. CoRR abs/2211.07906 (2022)
- [i48] Ekin Akyürek, Dale Schuurmans, Jacob Andreas, Tengyu Ma, Denny Zhou: What learning algorithm is in-context learning? Investigations with linear models. CoRR abs/2211.15661 (2022)
- [i47] Jacob Andreas: Language Models as Agent Models. CoRR abs/2212.01681 (2022)
- [i46] Bairu Hou, Joe O'Connor, Jacob Andreas, Shiyu Chang, Yang Zhang: PromptBoosting: Black-Box Text Classification with Ten Forward Passes. CoRR abs/2212.09257 (2022)
- [i45] Belinda Z. Li, Maxwell I. Nye, Jacob Andreas: Language Modeling with Latent Situations. CoRR abs/2212.10012 (2022)
- 2021
- [c52] Joe O'Connor, Jacob Andreas: What Context Features Can Transformer Language Models Use? ACL/IJCNLP (1) 2021: 851-864
- [c51] Belinda Z. Li, Maxwell I. Nye, Jacob Andreas: Implicit Representations of Meaning in Neural Language Models. ACL/IJCNLP (1) 2021: 1813-1827
- [c50] Emmanouil Antonios Platanios, Adam Pauls, Subhro Roy, Yuchen Zhang, Alexander Kyte, Alan Guo, Sam Thomson, Jayant Krishnamurthy, Jason Andrew Wolfe, Jacob Andreas, Dan Klein: Value-Agnostic Conversational Semantic Parsing. ACL/IJCNLP (1) 2021: 3666-3681
- [c49] Ekin Akyürek, Jacob Andreas: Lexicon Learning for Few Shot Sequence Modeling. ACL/IJCNLP (1) 2021: 4934-4946
- [c48] Catherine Wong, Yoni Friedman, Jacob Andreas, Josh Tenenbaum: Language as a bootstrap for compositional visual reasoning. CogSci 2021
- [c47] Evan Hernandez, Jacob Andreas: The Low-Dimensional Linear Geometry of Contextualized Word Representations. CoNLL 2021: 82-93
- [c46] D. Anthony Bau, Jacob Andreas: How Do Neural Sequence Models Generalize? Local and Global Cues for Out-of-Distribution Prediction. EMNLP (1) 2021: 5513-5526
- [c45] Sarah Schwettmann, Evan Hernandez, David Bau, Samuel Klein, Jacob Andreas, Antonio Torralba: Toward a Visual Concept Vocabulary for GAN Latent Space. ICCV 2021: 6784-6792
- [c44] Ekin Akyürek, Afra Feyza Akyürek, Jacob Andreas: Learning to Recombine and Resample Data For Compositional Generalization. ICLR 2021
- [c43] Maxwell I. Nye, Yewen Pu, Matthew Bowers, Jacob Andreas, Joshua B. Tenenbaum, Armando Solar-Lezama: Representing Partial Programs with Blended Abstract Semantics. ICLR 2021
- [c42] Catherine Wong, Kevin Ellis, Joshua B. Tenenbaum, Jacob Andreas: Leveraging Language to Learn Program Abstractions and Search Heuristics. ICML 2021: 11193-11204
- [c41] Pengcheng Yin, Hao Fang, Graham Neubig, Adam Pauls, Emmanouil Antonios Platanios, Yu Su, Sam Thomson, Jacob Andreas: Compositional Generalization for Neural Semantic Parsing via Span-level Supervised Attention.