BlackboxNLP@EMNLP 2018: Brussels, Belgium
- Tal Linzen, Grzegorz Chrupała, Afra Alishahi (eds.): Proceedings of the Workshop: Analyzing and Interpreting Neural Networks for NLP, BlackboxNLP@EMNLP 2018, Brussels, Belgium, November 1, 2018. Association for Computational Linguistics 2018, ISBN 978-1-948087-71-1

- Axel Kerinec, Chloé Braud, Anders Søgaard: When does deep multi-task learning work for loosely related document classification tasks? 1-8
- Zied Elloumi, Laurent Besacier, Olivier Galibert, Benjamin Lecouteux: Analyzing Learned Representations of a Deep ASR Performance Prediction Model. 9-15
- Danilo Croce, Daniele Rossini, Roberto Basili: Explaining non-linear Classifier Decisions within Kernel-based Deep Architectures. 16-24
- Anders Søgaard, Miryam de Lhoneux, Isabelle Augenstein: Nightmare at test time: How punctuation prevents parsers from generalizing. 25-29
- Graham Spinks, Marie-Francine Moens: Evaluating Textual Representations through Image Generation. 30-39
- José Camacho-Collados, Mohammad Taher Pilehvar: On the Role of Text Preprocessing in Neural Network Architectures: An Evaluation Study on Text Categorization and Sentiment Analysis. 40-46
- Jasmijn Bastings, Marco Baroni, Jason Weston, Kyunghyun Cho, Douwe Kiela: Jump to better conclusions: SCAN both left and right. 47-55
- Alon Jacovi, Oren Sar Shalom, Yoav Goldberg: Understanding Convolutional Neural Networks for Text Classification. 56-65
- Ola Rønning, Daniel Hardt, Anders Søgaard: Linguistic representations in multi-task neural networks for ellipsis resolution. 66-73
- Shun Kiyono, Sho Takase, Jun Suzuki, Naoaki Okazaki, Kentaro Inui, Masaaki Nagata: Unsupervised Token-wise Alignment to Improve Interpretation of Encoder-Decoder Models. 74-81
- Madhumita Sushil, Simon Šuster, Walter Daelemans: Rule induction for global explanation of trained models. 82-97
- Shauli Ravfogel, Yoav Goldberg, Francis M. Tyers: Can LSTM Learn to Capture Agreement? The Case of Basque. 98-107
- João Loula, Marco Baroni, Brenden M. Lake: Rearranging the Familiar: Testing Compositional Generalization in Recurrent Networks. 108-114
- Luzi Sennhauser, Robert C. Berwick: Evaluating the Ability of LSTMs to Learn Context-Free Grammars. 115-124
- Reid Pryzant, Sugato Basu, Kazoo Sone: Interpretable Neural Architectures for Attributing an Ad's Performance to its Writing Style. 125-135
- Eric Wallace, Shi Feng, Jordan L. Boyd-Graber: Interpreting Neural Networks with Nearest Neighbors. 136-144
- Yova Kementchedjhieva, Adam Lopez: 'Indicatements' that character language models learn English morpho-syntactic units and regularities. 145-153
- Pankaj Gupta, Hinrich Schütze: LISA: Explaining Recurrent Neural Network Judgments via Layer-wIse Semantic Accumulation and Example to Pattern Transformation. 154-164
- Dieuwke Hupkes, Sanne Bouwmeester, Raquel Fernández: Analysing the potential of seq-to-seq models for incremental interpretation in task-oriented dialogue. 165-174
- Felix Stahlberg, Danielle Saunders, Bill Byrne: An Operation Sequence Model for Explainable Neural Machine Translation. 175-186
- Andreas Krug, Sebastian Stober: Introspection for convolutional automatic speech recognition. 187-199
- Valentin Trifonov, Octavian-Eugen Ganea, Anna Potapenko, Thomas Hofmann: Learning and Evaluating Sparse Interpretable Sentence Embeddings. 200-210
- Ethan Wilcox, Roger Levy, Takashi Morita, Richard Futrell: What do RNN Language Models Learn about Filler-Gap Dependencies? 211-221
- Jaap Jumelet, Dieuwke Hupkes: Do Language Models Understand Anything? On the Ability of LSTMs to Understand Negative Polarity Items. 222-231
- Natalia Skachkova, Thomas Alexander Trost, Dietrich Klakow: Closing Brackets with Recurrent Neural Networks. 232-239
- Mario Giulianelli, Jack Harding, Florian Mohnert, Dieuwke Hupkes, Willem H. Zuidema: Under the Hood: Using Diagnostic Classifiers to Investigate and Improve how Language Models Track Agreement Information. 240-248
- Martin Tutek, Jan Šnajder: Iterative Recursive Attention Model for Interpretable Sequence Classification. 249-257
- Avery Hiebert, Cole Peterson, Alona Fyshe, Nishant A. Mehta: Interpreting Word-Level Hidden State Behaviour of Character-Level LSTM Language Models. 258-266
- Gaël Letarte, Frédérik Paradis, Philippe Giguère, François Laviolette: Importance of Self-Attention for Sentiment Analysis. 267-275
- Pia Sommerauer, Antske Fokkens: Firearms and Tigers are Dangerous, Kitchen Knives and Zebras are Not: Testing whether Word Embeddings Can Tell. 276-286
- Alessandro Raganato, Jörg Tiedemann: An Analysis of Encoder Representations in Transformer-Based Machine Translation. 287-297
- Johnny Wei, Khiem Pham, Brendan O'Connor, Brian Dillon: Evaluating Grammaticality in Seq2seq Models with a Broad Coverage HPSG Grammar: A Case Study on Machine Translation. 298-305
- Yiding Hao, William Merrill, Dana Angluin, Robert Frank, Noah Amsel, Andrew Benz, Simon Mendelsohn: Context-Free Transductions with Neural Stacks. 306-315
- David Harbecke, Robert Schwarzenberg, Christoph Alt: Learning Explanations from Language Data. 316-318
- Barbara Rychalska, Dominika Basaj, Anna Wróblewska, Przemyslaw Biecek: How much should you ask? On the question structure in QA systems. 319-321
- Barbara Rychalska, Dominika Basaj, Anna Wróblewska, Przemyslaw Biecek: Does it care what you asked? Understanding Importance of Verbs in Deep Learning QA System. 322-324
- Nina Pörner, Benjamin Roth, Hinrich Schütze: Interpretable Textual Neuron Representations for NLP. 325-327
- Naomi Saphra, Adam Lopez: Language Models Learn POS First. 328-330
- Nicolas Garneau, Jean-Samuel Leboeuf, Luc Lamontagne: Predicting and interpreting embeddings for out of vocabulary words in downstream tasks. 331-333
- Geoff Bacon, Terry Regier: Probing sentence embeddings for structure-dependent tense. 334-336
- Adam Poliak, Aparajita Haldar, Rachel Rudinger, J. Edward Hu, Ellie Pavlick, Aaron Steven White, Benjamin Van Durme: Collecting Diverse Natural Language Inference Problems for Sentence Representation Evaluation. 337-340
- Kyoungrok Jang, Sung-Hyon Myaeng, Sang-Bum Kim: Interpretable Word Embedding Contextualization. 341-343
- Lyan Verwimp, Hugo Van hamme, Vincent Renkens, Patrick Wambacq: State Gradients for RNN Memory Analysis. 344-346
- David Mareček, Rudolf Rosa: Extracting Syntactic Trees from Transformer Encoder Self-Attentions. 347-349
- Thomas Lippincott: Portable, layer-wise task performance monitoring for NLP models. 350-352
- Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman: GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding. 353-355
- Clara Vania, Adam Lopez: Explicitly modeling case improves neural dependency parsing. 356-358
- Kelly W. Zhang, Samuel R. Bowman: Language Modeling Teaches You More than Translation Does: Lessons Learned Through Auxiliary Syntactic Task Analysis. 359-361
- Steven Derby, Paul Miller, Brian Murphy, Barry Devereux: Representation of Word Meaning in the Intermediate Projection Layer of a Neural Language Model. 362-364
- Ben Peters, Vlad Niculae, André F. T. Martins: Interpretable Structure Induction via Sparse Attention. 365-367
- Hendrik Strobelt, Sebastian Gehrmann, Michael Behrisch, Adam Perer, Hanspeter Pfister, Alexander M. Rush: Debugging Sequence-to-Sequence Models with Seq2Seq-Vis. 368-370
- Phu Mon Htut, Kyunghyun Cho, Samuel R. Bowman: Grammar Induction with Neural Language Models: An Unusual Replication. 371-373
- Prajit Dhar, Arianna Bisazza: Does Syntactic Knowledge in Multilingual Language Models Transfer Across Languages? 374-377
- Kaylee Burns, Aida Nematzadeh, Erin Grant, Alison Gopnik, Thomas L. Griffiths: Exploiting Attention to Reveal Shortcomings in Memory Models. 378-380
- Pranava Swaroop Madhyastha, Josiah Wang, Lucia Specia: End-to-end Image Captioning Exploits Distributional Similarity in Multimodal Space. 381-383
- Denis Paperno: Limitations in learning an interpreted language with recurrent models. 384-386
