Tal Linzen
Person information
- affiliation: New York University, NY, USA
2020 – today
- 2024
- [i60] William Merrill, Zhaofeng Wu, Norihito Naka, Yoon Kim, Tal Linzen: Can You Learn Semantics Through Next-Word Prediction? The Case of Entailment. CoRR abs/2402.13956 (2024)
- [i59] Grusha Prasad, Tal Linzen: SPAWNing Structural Priming Predictions from a Cognitively Motivated Parser. CoRR abs/2403.07202 (2024)
- 2023
- [c53] Aditya Yedetore, Tal Linzen, Robert Frank, R. Thomas McCoy: How poor is the stimulus? Evaluating hierarchical generalization in neural networks trained on child-directed speech. ACL (1) 2023: 9370-9393
- [c52] Aaron Mueller, Tal Linzen: How to Plant Trees in Language Models: Data and Architectural Effects on the Emergence of Syntactic Inductive Biases. ACL (1) 2023: 11237-11252
- [c51] Bingzhi Li, Lucia Donatelli, Alexander Koller, Tal Linzen, Yuekun Yao, Najoung Kim: SLOG: A Structural Generalization Benchmark for Semantic Parsing. EMNLP 2023: 3213-3232
- [c50] Sophie Hao, Tal Linzen: Verb Conjugation in Transformers Is Determined by Linear Encodings of Subject Number. EMNLP (Findings) 2023: 4531-4539
- [c49] William Timkey, Tal Linzen: A Language Model with Limited Memory Capacity Captures Interference in Human Sentence Processing. EMNLP (Findings) 2023: 8705-8720
- [i58] Aditya Yedetore, Tal Linzen, Robert Frank, R. Thomas McCoy: How poor is the stimulus? Evaluating hierarchical generalization in neural networks trained on child-directed speech. CoRR abs/2301.11462 (2023)
- [i57] Aaron Mueller, Tal Linzen: How to Plant Trees in Language Models: Data and Architectural Effects on the Emergence of Syntactic Inductive Biases. CoRR abs/2305.19905 (2023)
- [i56] Cara Su-Yi Leong, Tal Linzen: Language Models Can Learn Exceptions to Syntactic Rules. CoRR abs/2306.05969 (2023)
- [i55] Matthew Mandelkern, Tal Linzen: Do Language Models Refer? CoRR abs/2308.05576 (2023)
- [i54] Bingzhi Li, Lucia Donatelli, Alexander Koller, Tal Linzen, Yuekun Yao, Najoung Kim: SLOG: A Structural Generalization Benchmark for Semantic Parsing. CoRR abs/2310.15040 (2023)
- [i53] Sophie Hao, Tal Linzen: Verb Conjugation in Transformers Is Determined by Linear Encodings of Subject Number. CoRR abs/2310.15151 (2023)
- [i52] William Timkey, Tal Linzen: A Language Model with Limited Memory Capacity Captures Interference in Human Sentence Processing. CoRR abs/2310.16142 (2023)
- [i51] Jackson Petty, Sjoerd van Steenkiste, Ishita Dasgupta, Fei Sha, Dan Garrette, Tal Linzen: The Impact of Depth and Width on Transformer Language Model Generalization. CoRR abs/2310.19956 (2023)
- [i50] Tiwalayo Eisape, Michael Henry Tessler, Ishita Dasgupta, Fei Sha, Sjoerd van Steenkiste, Tal Linzen: A Systematic Comparison of Syllogistic Reasoning in Humans and Language Models. CoRR abs/2311.00445 (2023)
- [i49] Aaron Mueller, Albert Webson, Jackson Petty, Tal Linzen: In-context Learning Generalizes, But Not Always Robustly: The Case of Syntax. CoRR abs/2311.07811 (2023)
- 2022
- [j7] Nouha Dziri, Hannah Rashkin, Tal Linzen, David Reitter: Evaluating Attribution in Dialogue Systems: The BEGIN Benchmark. Trans. Assoc. Comput. Linguistics 10: 1066-1083 (2022)
- [c48] Aaron Mueller, Robert Frank, Tal Linzen, Luheng Wang, Sebastian Schuster: Coloring the Blank Slate: Pre-training Imparts a Hierarchical Inductive Bias to Sequence-to-sequence Models. ACL (Findings) 2022: 1352-1368
- [c47] Aaron Mueller, Yu Xia, Tal Linzen: Causal Analysis of Syntactic Agreement Neurons in Multilingual Language Models. CoNLL 2022: 95-109
- [c46] William Merrill, Alex Warstadt, Tal Linzen: Entailment Semantics Can Be Extracted from an Ideal Language Model. CoNLL 2022: 176-193
- [c45] Suhas Arehalli, Brian Dillon, Tal Linzen: Syntactic Surprisal From Neural Models Predicts, But Underestimates, Human Processing Difficulty From Syntactic Ambiguities. CoNLL 2022: 301-313
- [c44] Kristijan Armeni, Christopher J. Honey, Tal Linzen: Characterizing Verbatim Short-Term Memory in Neural Language Models. CoNLL 2022: 405-424
- [c43] Thibault Sellam, Steve Yadlowsky, Ian Tenney, Jason Wei, Naomi Saphra, Alexander D'Amour, Tal Linzen, Jasmijn Bastings, Iulia Raluca Turc, Jacob Eisenstein, Dipanjan Das, Ellie Pavlick: The MultiBERTs: BERT Reproductions for Robustness Analysis. ICLR 2022
- [c42] Sebastian Schuster, Tal Linzen: When a sentence does not introduce a discourse entity, Transformer-based models still sometimes refer to it. NAACL-HLT 2022: 969-982
- [c41] Linlu Qiu, Peter Shaw, Panupong Pasupat, Pawel Krzysztof Nowak, Tal Linzen, Fei Sha, Kristina Toutanova: Improving Compositional Generalization with Latent Structure and Data Augmentation. NAACL-HLT 2022: 4341-4362
- [i48] Aaron Mueller, Robert Frank, Tal Linzen, Luheng Wang, Sebastian Schuster: Coloring the Blank Slate: Pre-training Imparts a Hierarchical Inductive Bias to Sequence-to-sequence Models. CoRR abs/2203.09397 (2022)
- [i47] Sebastian Schuster, Tal Linzen: When a sentence does not introduce a discourse entity, Transformer-based models still sometimes refer to it. CoRR abs/2205.03472 (2022)
- [i46] William Merrill, Alex Warstadt, Tal Linzen: Entailment Semantics Can Be Extracted from an Ideal Language Model. CoRR abs/2209.12407 (2022)
- [i45] Suhas Arehalli, Brian Dillon, Tal Linzen: Syntactic Surprisal From Neural Models Predicts, But Underestimates, Human Processing Difficulty From Syntactic Ambiguities. CoRR abs/2210.12187 (2022)
- [i44] Kristijan Armeni, Christopher J. Honey, Tal Linzen: Characterizing Verbatim Short-Term Memory in Neural Language Models. CoRR abs/2210.13569 (2022)
- [i43] Aaron Mueller, Yu Xia, Tal Linzen: Causal Analysis of Syntactic Agreement Neurons in Multilingual Language Models. CoRR abs/2210.14328 (2022)
- [i42] Najoung Kim, Tal Linzen, Paul Smolensky: Uncontrolled Lexical Exposure Leads to Overestimation of Compositional Generalization in Pretrained Models. CoRR abs/2212.10769 (2022)
- 2021
- [j6] Marten van Schijndel, Tal Linzen: Single-Stage Prediction Models Do Not Explain the Magnitude of Syntactic Disambiguation Difficulty. Cogn. Sci. 45(6) (2021)
- [c40] Matthew Finlayson, Aaron Mueller, Sebastian Gehrmann, Stuart M. Shieber, Tal Linzen, Yonatan Belinkov: Causal Analysis of Syntactic Agreement Mechanisms in Neural Language Models. ACL/IJCNLP (1) 2021: 1828-1843
- [c39] Laura Aina, Tal Linzen: The Language Model Understood the Prompt was Ambiguous: Probing Syntactic Uncertainty Through Generation. BlackboxNLP@EMNLP 2021: 42-57
- [c38] Shauli Ravfogel, Grusha Prasad, Tal Linzen, Yoav Goldberg: Counterfactual Interventions Reveal the Causal Effect of Relative Clause Representations on Agreement Prediction. CoNLL 2021: 194-209
- [c37] Alicia Parrish, Sebastian Schuster, Alex Warstadt, Omar Agha, Soo-Hwan Lee, Zhuoye Zhao, Samuel R. Bowman, Tal Linzen: NOPE: A Corpus of Naturally-Occurring Presuppositions in English. CoNLL 2021: 349-366
- [c36] Jason Wei, Dan Garrette, Tal Linzen, Ellie Pavlick: Frequency Effects on Syntactic Rule Learning in Transformers. EMNLP (1) 2021: 932-948
- [c35] Alicia Parrish, William Huang, Omar Agha, Soo-Hwan Lee, Nikita Nangia, Alex Warstadt, Karmanya Aggarwal, Emily Allaway, Tal Linzen, Samuel R. Bowman: Does Putting a Linguist in the Loop Improve NLU Data Collection? EMNLP (Findings) 2021: 4886-4901
- [c34] Charles Lovering, Rohan Jha, Tal Linzen, Ellie Pavlick: Predicting Inductive Biases of Pre-Trained Models. ICLR 2021
- [i41] Alicia Parrish, William Huang, Omar Agha, Soo-Hwan Lee, Nikita Nangia, Alex Warstadt, Karmanya Aggarwal, Emily Allaway, Tal Linzen, Samuel R. Bowman: Does Putting a Linguist in the Loop Improve NLU Data Collection? CoRR abs/2104.07179 (2021)
- [i40] Nouha Dziri, Hannah Rashkin, Tal Linzen, David Reitter: Evaluating Groundedness in Dialogue Systems: The BEGIN Benchmark. CoRR abs/2105.00071 (2021)
- [i39] Shauli Ravfogel, Grusha Prasad, Tal Linzen, Yoav Goldberg: Counterfactual Interventions Reveal the Causal Effect of Relative Clause Representations on Agreement Prediction. CoRR abs/2105.06965 (2021)
- [i38] Matthew Finlayson, Aaron Mueller, Sebastian Gehrmann, Stuart M. Shieber, Tal Linzen, Yonatan Belinkov: Causal Analysis of Syntactic Agreement Mechanisms in Neural Language Models. CoRR abs/2106.06087 (2021)
- [i37] Thibault Sellam, Steve Yadlowsky, Jason Wei, Naomi Saphra, Alexander D'Amour, Tal Linzen, Jasmijn Bastings, Iulia Turc, Jacob Eisenstein, Dipanjan Das, Ian Tenney, Ellie Pavlick: The MultiBERTs: BERT Reproductions for Robustness Analysis. CoRR abs/2106.16163 (2021)
- [i36] Alicia Parrish, Sebastian Schuster, Alex Warstadt, Omar Agha, Soo-Hwan Lee, Zhuoye Zhao, Samuel R. Bowman, Tal Linzen: NOPE: A Corpus of Naturally-Occurring Presuppositions in English. CoRR abs/2109.06987 (2021)
- [i35] Jason Wei, Dan Garrette, Tal Linzen, Ellie Pavlick: Frequency Effects on Syntactic Rule Learning in Transformers. CoRR abs/2109.07020 (2021)
- [i34] Laura Aina, Tal Linzen: The Language Model Understood the Prompt was Ambiguous: Probing Syntactic Uncertainty Through Generation. CoRR abs/2109.07848 (2021)
- [i33] Wang Zhu, Peter Shaw, Tal Linzen, Fei Sha: Learning to Generalize Compositionally by Transferring Across Semantic Parsing Tasks. CoRR abs/2111.05013 (2021)
- [i32] R. Thomas McCoy, Paul Smolensky, Tal Linzen, Jianfeng Gao, Asli Celikyilmaz: How much do language models copy from their training data? Evaluating linguistic novelty in text generation using RAVEN. CoRR abs/2111.09509 (2021)
- [i31] Linlu Qiu, Peter Shaw, Panupong Pasupat, Pawel Krzysztof Nowak, Tal Linzen, Fei Sha, Kristina Toutanova: Improving Compositional Generalization with Latent Structure and Data Augmentation. CoRR abs/2112.07610 (2021)
- 2020
- [j5] R. Thomas McCoy, Robert Frank, Tal Linzen: Does Syntax Need to Grow on Trees? Sources of Hierarchical Inductive Bias in Sequence-to-Sequence Networks. Trans. Assoc. Comput. Linguistics 8: 125-140 (2020)
- [c33] Junghyun Min, R. Thomas McCoy, Dipanjan Das, Emily Pitler, Tal Linzen: Syntactic Data Augmentation Increases Robustness to Inference Heuristics. ACL 2020: 2339-2352
- [c32] Michael A. Lepori, Tal Linzen, R. Thomas McCoy: Representations of Syntax [MASK] Useful: Effects of Constituency and Dependency Structure in Recursive LSTMs. ACL 2020: 3306-3316
- [c31] Tal Linzen: How Can We Accelerate Progress Towards Human-like Linguistic Generalization? ACL 2020: 5210-5217
- [c30] Aaron Mueller, Garrett Nicolai, Panayiota Petrou-Zeniou, Natalia Talmina, Tal Linzen: Cross-Linguistic Syntactic Evaluation of Word Prediction Models. ACL 2020: 5523-5539
- [c29] R. Thomas McCoy, Junghyun Min, Tal Linzen: BERTs of a feather do not generalize together: Large variability in generalization across models with similar test set performance. BlackboxNLP@EMNLP 2020: 217-227
- [c28] Paul Soulos, R. Thomas McCoy, Tal Linzen, Paul Smolensky: Discovering the Compositional Structure of Vector Representations with Role Learning Networks. BlackboxNLP@EMNLP 2020: 238-254
- [c27] Suhas Arehalli, Tal Linzen: Neural Language Models Capture Some, But Not All Agreement Attraction Effects. CogSci 2020
- [c26] Richard Thomas McCoy, Erin Grant, Paul Smolensky, Tom Griffiths, Tal Linzen: Universal linguistic inductive biases via meta-learning. CogSci 2020
- [c25] Najoung Kim, Tal Linzen: COGS: A Compositional Generalization Challenge Based on Semantic Interpretation. EMNLP (1) 2020: 9087-9105
- [e5] Raquel Fernández, Tal Linzen: Proceedings of the 24th Conference on Computational Natural Language Learning, CoNLL 2020, Online, November 19-20, 2020. Association for Computational Linguistics 2020, ISBN 978-1-952148-63-7 [contents]
- [i30] R. Thomas McCoy, Robert Frank, Tal Linzen: Does syntax need to grow on trees? Sources of hierarchical inductive bias in sequence-to-sequence networks. CoRR abs/2001.03632 (2020)
- [i29] Tal Linzen, Marco Baroni: Syntactic Structure from Deep Learning. CoRR abs/2004.10827 (2020)
- [i28] Junghyun Min, R. Thomas McCoy, Dipanjan Das, Emily Pitler, Tal Linzen: Syntactic Data Augmentation Increases Robustness to Inference Heuristics. CoRR abs/2004.11999 (2020)
- [i27] Michael A. Lepori, Tal Linzen, R. Thomas McCoy: Representations of Syntax [MASK] Useful: Effects of Constituency and Dependency Structure in Recursive LSTMs. CoRR abs/2005.00019 (2020)
- [i26] Aaron Mueller, Garrett Nicolai, Panayiota Petrou-Zeniou, Natalia Talmina, Tal Linzen: Cross-Linguistic Syntactic Evaluation of Word Prediction Models. CoRR abs/2005.00187 (2020)
- [i25] Tal Linzen: How Can We Accelerate Progress Towards Human-like Linguistic Generalization? CoRR abs/2005.00955 (2020)
- [i24] R. Thomas McCoy, Erin Grant, Paul Smolensky, Thomas L. Griffiths, Tal Linzen: Universal linguistic inductive biases via meta-learning. CoRR abs/2006.16324 (2020)
- [i23] Najoung Kim, Tal Linzen: COGS: A Compositional Generalization Challenge Based on Semantic Interpretation. CoRR abs/2010.05465 (2020)
2010 – 2019
- 2019
- [j4] Afra Alishahi, Grzegorz Chrupala, Tal Linzen: Analyzing and interpreting neural networks for NLP: A report on the first BlackboxNLP workshop. Nat. Lang. Eng. 25(4): 543-557 (2019)
- [c24] Tom McCoy, Ellie Pavlick, Tal Linzen: Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference. ACL (1) 2019: 3428-3448
- [c23] Brenden M. Lake, Tal Linzen, Marco Baroni: Human few-shot learning of compositional instructions. CogSci 2019: 611-617
- [c22] Grusha Prasad, Tal Linzen: How much harder are hard garden-path sentences than easy ones? CogSci 2019: 3339
- [c21] Grusha Prasad, Marten van Schijndel, Tal Linzen: Using Priming to Uncover the Organization of Syntactic Representations in Neural Language Models. CoNLL 2019: 66-76
- [c20] Marten van Schijndel, Aaron Mueller, Tal Linzen: Quantity doesn't buy quality syntax with neural language models. EMNLP/IJCNLP (1) 2019: 5830-5836
- [c19] R. Thomas McCoy, Tal Linzen, Ewan Dunbar, Paul Smolensky: RNNs implicitly implement tensor-product representations. ICLR (Poster) 2019
- [c18] Shauli Ravfogel, Yoav Goldberg, Tal Linzen: Studying the Inductive Biases of RNNs with Synthetic Variations of Natural Languages. NAACL-HLT (1) 2019: 3532-3542
- [c17] Najoung Kim, Roma Patel, Adam Poliak, Patrick Xia, Alex Wang, Tom McCoy, Ian Tenney, Alexis Ross, Tal Linzen, Benjamin Van Durme, Samuel R. Bowman, Ellie Pavlick: Probing What Different NLP Tasks Teach Machines about Function Word Comprehension. *SEM@NAACL-HLT 2019: 235-249
- [e4] Tal Linzen, Grzegorz Chrupala, Yonatan Belinkov, Dieuwke Hupkes: Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, BlackboxNLP@ACL 2019, Florence, Italy, August 1, 2019. Association for Computational Linguistics 2019, ISBN 978-1-950737-30-7 [contents]
- [i22] Brenden M. Lake, Tal Linzen, Marco Baroni: Human few-shot learning of compositional instructions. CoRR abs/1901.04587 (2019)
- [i21] R. Thomas McCoy, Ellie Pavlick, Tal Linzen: Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference. CoRR abs/1902.01007 (2019)
- [i20] Shauli Ravfogel, Yoav Goldberg, Tal Linzen: Studying the Inductive Biases of RNNs with Synthetic Variations of Natural Languages. CoRR abs/1903.06400 (2019)
- [i19] Afra Alishahi, Grzegorz Chrupala, Tal Linzen: Analyzing and Interpreting Neural Networks for NLP: A Report on the First BlackboxNLP Workshop. CoRR abs/1904.04063 (2019)
- [i18] Najoung Kim, Roma Patel, Adam Poliak, Alex Wang, Patrick Xia, R. Thomas McCoy, Ian Tenney, Alexis Ross, Tal Linzen, Benjamin Van Durme, Samuel R. Bowman, Ellie Pavlick: Probing What Different NLP Tasks Teach Machines about Function Word Comprehension. CoRR abs/1904.11544 (2019)
- [i17] Marten van Schijndel, Aaron Mueller, Tal Linzen: Quantity doesn't buy quality syntax with neural language models. CoRR abs/1909.00111 (2019)
- [i16] Grusha Prasad, Marten van Schijndel, Tal Linzen: Using Priming to Uncover the Organization of Syntactic Representations in Neural Language Models. CoRR abs/1909.10579 (2019)
- [i15] Paul Soulos, Tom McCoy, Tal Linzen, Paul Smolensky: Discovering the Compositional Structure of Vector Representations with Role Learning Networks. CoRR abs/1910.09113 (2019)
- [i14] R. Thomas McCoy, Junghyun Min, Tal Linzen: BERTs of a feather do not generalize together: Large variability in generalization across models with similar test set performance. CoRR abs/1911.02969 (2019)
- 2018
- [c16] Laura Gwilliams, David Poeppel, Alec Marantz, Tal Linzen: Phonological (un)certainty weights lexical activation. CMCL 2018: 29-34
- [c15] Tal Linzen, Brian Leonard: Distinct patterns of syntactic agreement errors in recurrent networks and humans. CogSci 2018
- [c14] Richard Thomas McCoy, Robert Frank, Tal Linzen: Revisiting the poverty of the stimulus: hierarchical generalization without a hierarchical bias in recurrent neural networks. CogSci 2018
- [c13] Marten van Schijndel, Tal Linzen: Modeling garden path effects without explicit hierarchical syntax. CogSci 2018
- [c12] Rebecca Marvin, Tal Linzen: Targeted Syntactic Evaluation of Language Models. EMNLP 2018: 1192-1202
- [c11] Marten van Schijndel, Tal Linzen: A Neural Model of Adaptation in Reading. EMNLP 2018: 4704-4710
- [c10] Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, Marco Baroni: Colorless Green Recurrent Networks Dream Hierarchically. NAACL-HLT 2018: 1195-1205
- [e3] Asad B. Sayeed, Cassandra Jacobs, Tal Linzen, Marten van Schijndel: Proceedings of the 8th Workshop on Cognitive Modeling and Computational Linguistics, CMCL 2018, Salt Lake City, Utah, USA, January 7, 2018. Association for Computational Linguistics 2018, ISBN 978-1-948087-10-0 [contents]
- [e2] Tal Linzen, Grzegorz Chrupala, Afra Alishahi: Proceedings of the Workshop: Analyzing and Interpreting Neural Networks for NLP, BlackboxNLP@EMNLP 2018, Brussels, Belgium, November 1, 2018. Association for Computational Linguistics 2018, ISBN 978-1-948087-71-1 [contents]
- [i13] R. Thomas McCoy, Robert Frank, Tal Linzen: Revisiting the poverty of the stimulus: hierarchical generalization without a hierarchical bias in recurrent neural networks. CoRR abs/1802.09091 (2018)
- [i12] Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, Marco Baroni: Colorless green recurrent networks dream hierarchically. CoRR abs/1803.11138 (2018)
- [i11] Tal Linzen, Brian Leonard: Distinct patterns of syntactic agreement errors in recurrent networks and humans. CoRR abs/1807.06882 (2018)
- [i10] Rebecca Marvin, Tal Linzen: Targeted Syntactic Evaluation of Language Models. CoRR abs/1808.09031 (2018)
- [i9] Marten van Schijndel, Tal Linzen: A Neural Model of Adaptation in Reading. CoRR abs/1808.09930 (2018)
- [i8] Tal Linzen: What can linguistics and deep learning contribute to each other? CoRR abs/1809.04179 (2018)
- [i7] Marten van Schijndel, Tal Linzen: Can Entropy Explain Successor Surprisal Effects in Reading? CoRR abs/1810.11481 (2018)
- [i6] R. Thomas McCoy, Tal Linzen: Non-entailed subsequences as a challenge for natural language inference. CoRR abs/1811.12112 (2018)
- [i5] R. Thomas McCoy, Tal Linzen, Ewan Dunbar, Paul Smolensky: RNNs Implicitly Implement Tensor Product Representations. CoRR abs/1812.08718 (2018)
- 2017
- [c9] Tal Linzen, Noam Siegelman, Louisa Bogaerts: Prediction and uncertainty in an artificial language. CogSci 2017
- [c8] Émile Enguehard, Yoav Goldberg, Tal Linzen: Exploring the Syntactic Abilities of RNNs with Multi-task Learning. CoNLL 2017: 3-14
- [c7] Gaël Le Godais, Tal Linzen, Emmanuel Dupoux: Comparing Character-level Neural Language Models Using a Lexical Decision Task. EACL (2) 2017: 125-130
- [e1] Ted Gibson, Tal Linzen, Asad B. Sayeed, Marten van Schijndel, William Schuler: Proceedings of the 7th Workshop on Cognitive Modeling and Computational Linguistics, CMCL@EACL 2017, Valencia, Spain, April 3, 2017. Association for Computational Linguistics 2017, ISBN 978-1-945626-38-8 [contents]
- [i4] Émile Enguehard, Yoav Goldberg, Tal Linzen: Exploring the Syntactic Abilities of RNNs with Multi-task Learning. CoRR abs/1706.03542 (2017)
- [i3] Laura Gwilliams, David Poeppel, Alec Marantz, Tal Linzen: Phonological (un)certainty weights lexical activation. CoRR abs/1711.06729 (2017)
- 2016
- [j3] Tal Linzen, T. Florian Jaeger: Uncertainty and Expectation in Sentence Processing: Evidence From Subcategorization Distributions. Cogn. Sci. 40(6): 1382-1411 (2016)
- [j2] Tal Linzen, Emmanuel Dupoux, Yoav Goldberg: Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies. Trans. Assoc. Comput. Linguistics 4: 521-535 (2016)
- [c6] Tal Linzen: Issues in evaluating semantic spaces using word analogies. RepEval@ACL 2016: 13-18
- [c5] Allyson Ettinger, Tal Linzen: Evaluating vector space models using human semantic priming results. RepEval@ACL 2016: 72-77
- [c4] Tal Linzen, Emmanuel Dupoux, Benjamin Spector: Quantificational features in distributional word representations. *SEM@ACL 2016
- [i2] Tal Linzen: Issues in evaluating semantic spaces using word analogies. CoRR abs/1606.07736 (2016)
- [i1] Tal Linzen, Emmanuel Dupoux, Yoav Goldberg: Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies. CoRR abs/1611.01368 (2016)
- 2015
- [j1] Joseph Fruchter, Tal Linzen, Masha Westerlund, Alec Marantz: Lexical Preactivation in Basic Linguistic Phrases. J. Cogn. Neurosci. 27(10): 1912-1935 (2015)
- [c3] Tal Linzen, Timothy O'Donnell: A model of rapid phonotactic generalization. EMNLP 2015: 1126-1131
- 2014
- [c2] Tal Linzen, T. Florian Jaeger: Investigating the role of entropy in sentence processing. CMCL@ACL 2014: 10-18
- [c1] Tal Linzen, Gillian Gallagher: The timecourse of phonotactic learning. CogSci 2014
last updated on 2024-04-17 20:43 CEST by the dblp team
all metadata released as open data under CC0 1.0 license