Sebastian Lapuschkin
Person information
- affiliation: Fraunhofer Heinrich Hertz Institute, Berlin, Germany
2020 – today
- 2024
- [j16] Sören Becker, Johanna Vielhaben, Marcel Ackermann, Klaus-Robert Müller, Sebastian Lapuschkin, Wojciech Samek: AudioMNIST: Exploring Explainable Artificial Intelligence for audio analysis on a simple benchmark. J. Frankl. Inst. 361(1): 418-428 (2024)
- [j15] Johanna Vielhaben, Sebastian Lapuschkin, Grégoire Montavon, Wojciech Samek: Explainable AI for time series via Virtual Inspection Layers. Pattern Recognit. 150: 110309 (2024)
- [c25] Maximilian Dreyer, Frederik Pahde, Christopher J. Anders, Wojciech Samek, Sebastian Lapuschkin: From Hope to Safety: Unlearning Biases of Deep Models via Gradient Penalization in Latent Space. AAAI 2024: 21046-21054
- [c24] Maximilian Dreyer, Reduan Achtibat, Wojciech Samek, Sebastian Lapuschkin: Understanding the (Extra-)Ordinary: Validating Deep Model Decisions with Prototypical Concept-based Explanations. CVPR Workshops 2024: 3491-3501
- [c23] Dilyara Bareeva, Maximilian Dreyer, Frederik Pahde, Wojciech Samek, Sebastian Lapuschkin: Reactive Model Correction: Mitigating Harm to Task-Relevant Features via Conditional Bias Suppression. CVPR Workshops 2024: 3532-3541
- [c22] Reduan Achtibat, Sayed Mohammad Vakilzadeh Hatefi, Maximilian Dreyer, Aakriti Jain, Thomas Wiegand, Sebastian Lapuschkin, Wojciech Samek: AttnLRP: Attention-Aware Layer-Wise Relevance Propagation for Transformers. ICML 2024
- [c21] Christian Tinauer, Anna Damulina, Maximilian Sackl, Martin Soellradl, Reduan Achtibat, Maximilian Dreyer, Frederik Pahde, Sebastian Lapuschkin, Reinhold Schmidt, Stefan Ropele, Wojciech Samek, Christian Langkammer: Explainable Concept Mappings of MRI: Revealing the Mechanisms Underlying Deep Learning-Based Brain Disease Classification. xAI (2) 2024: 202-216
- [c20] Anna Hedström, Leander Weber, Sebastian Lapuschkin, Marina M.-C. Höhne: A Fresh Look at Sanity Checks for Saliency Maps. xAI (1) 2024: 403-420
- [e4] Luca Longo, Sebastian Lapuschkin, Christin Seifert: Explainable Artificial Intelligence - Second World Conference, xAI 2024, Valletta, Malta, July 17-19, 2024, Proceedings, Part I. Communications in Computer and Information Science 2153, Springer 2024, ISBN 978-3-031-63786-5 [contents]
- [e3] Luca Longo, Sebastian Lapuschkin, Christin Seifert: Explainable Artificial Intelligence - Second World Conference, xAI 2024, Valletta, Malta, July 17-19, 2024, Proceedings, Part II. Communications in Computer and Information Science 2154, Springer 2024, ISBN 978-3-031-63796-4 [contents]
- [e2] Luca Longo, Sebastian Lapuschkin, Christin Seifert: Explainable Artificial Intelligence - Second World Conference, xAI 2024, Valletta, Malta, July 17-19, 2024, Proceedings, Part III. Communications in Computer and Information Science 2155, Springer 2024, ISBN 978-3-031-63799-5 [contents]
- [e1] Luca Longo, Sebastian Lapuschkin, Christin Seifert: Explainable Artificial Intelligence - Second World Conference, xAI 2024, Valletta, Malta, July 17-19, 2024, Proceedings, Part IV. Communications in Computer and Information Science 2156, Springer 2024, ISBN 978-3-031-63802-2 [contents]
- [i59] Anna Hedström, Leander Weber, Sebastian Lapuschkin, Marina M.-C. Höhne: Sanity Checks Revisited: An Exploration to Repair the Model Parameter Randomisation Test. CoRR abs/2401.06465 (2024)
- [i58] Florian Bley, Sebastian Lapuschkin, Wojciech Samek, Grégoire Montavon: Explaining Predictive Uncertainty by Exposing Second-Order Effects. CoRR abs/2401.17441 (2024)
- [i57] Reduan Achtibat, Sayed Mohammad Vakilzadeh Hatefi, Maximilian Dreyer, Aakriti Jain, Thomas Wiegand, Sebastian Lapuschkin, Wojciech Samek: AttnLRP: Attention-Aware Layer-wise Relevance Propagation for Transformers. CoRR abs/2402.05602 (2024)
- [i56] Galip Ümit Yolcu, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin: DualView: Data Attribution from the Dual Perspective. CoRR abs/2402.12118 (2024)
- [i55] Maximilian Dreyer, Erblina Purelku, Johanna Vielhaben, Wojciech Samek, Sebastian Lapuschkin: PURE: Turning Polysemantic Neurons Into Pure Features by Identifying Relevant Circuits. CoRR abs/2404.06453 (2024)
- [i54] Dilyara Bareeva, Maximilian Dreyer, Frederik Pahde, Wojciech Samek, Sebastian Lapuschkin: Reactive Model Correction: Mitigating Harm to Task-Relevant Features via Conditional Bias Suppression. CoRR abs/2404.09601 (2024)
- [i53] Christian Tinauer, Anna Damulina, Maximilian Sackl, Martin Soellradl, Reduan Achtibat, Maximilian Dreyer, Frederik Pahde, Sebastian Lapuschkin, Reinhold Schmidt, Stefan Ropele, Wojciech Samek, Christian Langkammer: Explainable concept mappings of MRI: Revealing the mechanisms underlying deep learning-based brain disease classification. CoRR abs/2404.10433 (2024)
- [i52] Anna Hedström, Leander Weber, Sebastian Lapuschkin, Marina M.-C. Höhne: A Fresh Look at Sanity Checks for Saliency Maps. CoRR abs/2405.02383 (2024)
- [i51] Laura Kopf, Philine Lou Bommer, Anna Hedström, Sebastian Lapuschkin, Marina M.-C. Höhne, Kirill Bykov: CoSy: Evaluating Textual Explanations of Neurons. CoRR abs/2405.20331 (2024)
- [i50] Sayed Mohammad Vakilzadeh Hatefi, Maximilian Dreyer, Reduan Achtibat, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin: Pruning By Explaining Revisited: Optimizing Attribution Methods to Prune CNNs and Transformers. CoRR abs/2408.12568 (2024)
- [i49] Jonas R. Naujoks, Aleksander Krasowski, Moritz Weckbecker, Thomas Wiegand, Sebastian Lapuschkin, Wojciech Samek, René P. Klausen: PINNfluence: Influence Functions for Physics-Informed Neural Networks. CoRR abs/2409.08958 (2024)
- [i48] Rohan Reddy Mekala, Frederik Pahde, Simon Baur, Sneha Chandrashekar, Madeline Diep, Markus Wenzel, Eric L. Wisotzky, Galip Ümit Yolcu, Sebastian Lapuschkin, Jackie Ma, Peter Eisert, Mikael Lindvall, Adam A. Porter, Wojciech Samek: Synthetic Generation of Dermatoscopic Images with GAN and Closed-Form Factorization. CoRR abs/2410.05114 (2024)
- [i47] Dilyara Bareeva, Galip Ümit Yolcu, Anna Hedström, Niklas Schmolenski, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin: Quanda: An Interpretability Toolkit for Training Data Attribution Evaluation and Beyond. CoRR abs/2410.07158 (2024)
- 2023
- [j14] Leander Weber, Sebastian Lapuschkin, Alexander Binder, Wojciech Samek: Beyond explaining: Opportunities and challenges of XAI-based model improvement. Inf. Fusion 92: 154-176 (2023)
- [j13] Anna Hedström, Leander Weber, Daniel Krakowczyk, Dilyara Bareeva, Franz Motzkus, Wojciech Samek, Sebastian Lapuschkin, Marina M.-C. Höhne: Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations and Beyond. J. Mach. Learn. Res. 24: 34:1-34:11 (2023)
- [j12] Reduan Achtibat, Maximilian Dreyer, Ilona Eisenbraun, Sebastian Bosse, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin: From attribution maps to human-understandable explanations through Concept Relevance Propagation. Nat. Mac. Intell. 5(9): 1006-1019 (2023)
- [j11] Anna Hedström, Philine Lou Bommer, Kristoffer Knutsen Wickstrøm, Wojciech Samek, Sebastian Lapuschkin, Marina M.-C. Höhne: The Meta-Evaluation Problem in Explainable AI: Identifying Reliable Estimators with MetaQuantus. Trans. Mach. Learn. Res. 2023 (2023)
- [c19] Annika Frommholz, Fabian Seipel, Sebastian Lapuschkin, Wojciech Samek, Johanna Vielhaben: XAI-based Comparison of Audio Event Classifiers with different Input Representations. CBMI 2023: 126-132
- [c18] Frederik Pahde, Galip Ümit Yolcu, Alexander Binder, Wojciech Samek, Sebastian Lapuschkin: Optimizing Explanations by Network Canonization and Hyperparameter Search. CVPR Workshops 2023: 3819-3828
- [c17] Maximilian Dreyer, Reduan Achtibat, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin: Revealing Hidden Context Bias in Segmentation and Object Detection through Concept-specific Explanations. CVPR Workshops 2023: 3829-3839
- [c16] Alexander Binder, Leander Weber, Sebastian Lapuschkin, Grégoire Montavon, Klaus-Robert Müller, Wojciech Samek: Shortcomings of Top-Down Randomization-Based Sanity Checks for Evaluations of Deep Neural Network Explanations. CVPR 2023: 16143-16152
- [c15] Daniel G. Krakowczyk, Paul Prasse, David R. Reich, Sebastian Lapuschkin, Tobias Scheffer, Lena A. Jäger: Bridging the Gap: Gaze Events as Interpretable Concepts to Explain Deep Neural Sequence Models. ETRA 2023: 3:1-3:8
- [c14] Karam Dawoud, Wojciech Samek, Peter Eisert, Sebastian Lapuschkin, Sebastian Bosse: Human-Centered Evaluation of XAI Methods. ICDM (Workshops) 2023: 912-921
- [c13] Frederik Pahde, Maximilian Dreyer, Wojciech Samek, Sebastian Lapuschkin: Reveal to Revise: An Explainable AI Life Cycle for Iterative Bias Correction of Deep Models. MICCAI (2) 2023: 596-606
- [i46] Anna Hedström, Philine Lou Bommer, Kristoffer K. Wickstrøm, Wojciech Samek, Sebastian Lapuschkin, Marina M.-C. Höhne: The Meta-Evaluation Problem in Explainable AI: Identifying Reliable Estimators with MetaQuantus. CoRR abs/2302.07265 (2023)
- [i45] Johanna Vielhaben, Sebastian Lapuschkin, Grégoire Montavon, Wojciech Samek: Explainable AI for Time Series via Virtual Inspection Layers. CoRR abs/2303.06365 (2023)
- [i44] Frederik Pahde, Maximilian Dreyer, Wojciech Samek, Sebastian Lapuschkin: Reveal to Revise: An Explainable AI Life Cycle for Iterative Bias Correction of Deep Models. CoRR abs/2303.12641 (2023)
- [i43] Daniel G. Krakowczyk, Paul Prasse, David R. Reich, Sebastian Lapuschkin, Tobias Scheffer, Lena A. Jäger: Bridging the Gap: Gaze Events as Interpretable Concepts to Explain Deep Neural Sequence Models. CoRR abs/2304.13536 (2023)
- [i42] Annika Frommholz, Fabian Seipel, Sebastian Lapuschkin, Wojciech Samek, Johanna Vielhaben: XAI-based Comparison of Input Representations for Audio Event Classification. CoRR abs/2304.14019 (2023)
- [i41] Maximilian Dreyer, Frederik Pahde, Christopher J. Anders, Wojciech Samek, Sebastian Lapuschkin: From Hope to Safety: Unlearning Biases of Deep Models by Enforcing the Right Reasons in Latent Space. CoRR abs/2308.09437 (2023)
- [i40] Leander Weber, Jim Berend, Alexander Binder, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin: Layer-wise Feedback Propagation. CoRR abs/2308.12053 (2023)
- [i39] Karam Dawoud, Wojciech Samek, Peter Eisert, Sebastian Lapuschkin, Sebastian Bosse: Human-Centered Evaluation of XAI Methods. CoRR abs/2310.07534 (2023)
- [i38] Gabriel Nobis, Marco Aversa, Maximilian Springenberg, Michael Detzel, Stefano Ermon, Shinichi Nakajima, Roderick Murray-Smith, Sebastian Lapuschkin, Christoph Knochenhauer, Luis Oala, Wojciech Samek: Generative Fractional Diffusion Models. CoRR abs/2310.17638 (2023)
- [i37] Maximilian Dreyer, Reduan Achtibat, Wojciech Samek, Sebastian Lapuschkin: Understanding the (Extra-)Ordinary: Validating Deep Model Decisions with Prototypical Concept-based Explanations. CoRR abs/2311.16681 (2023)
- 2022
- [j10] Djordje Slijepcevic, Fabian Horst, Sebastian Lapuschkin, Brian Horsak, Anna-Maria Raberger, Andreas Kranzl, Wojciech Samek, Christian Breiteneder, Wolfgang Immanuel Schöllhorn, Matthias Zeppelzauer: Explaining Machine Learning Models for Clinical Gait Analysis. ACM Trans. Comput. Heal. 3(2): 14:1-14:27 (2022)
- [j9] Jiamei Sun, Sebastian Lapuschkin, Wojciech Samek, Alexander Binder: Explain and improve: LRP-inference fine-tuning for image captioning models. Inf. Fusion 77: 233-246 (2022)
- [j8] Christopher J. Anders, Leander Weber, David Neumann, Wojciech Samek, Klaus-Robert Müller, Sebastian Lapuschkin: Finding and removing Clever Hans: Using explanation methods to debug and improve deep models. Inf. Fusion 77: 261-295 (2022)
- [j7] Simon M. Hofmann, Frauke Beyer, Sebastian Lapuschkin, Ole Goltermann, Markus Loeffler, Klaus-Robert Müller, Arno Villringer, Wojciech Samek, Anja Veronica Witte: Towards the interpretability of deep learning models for multi-modal neuroimaging: Finding structural changes of the ageing brain. NeuroImage 261: 119504 (2022)
- [c12] Sami Ede, Serop Baghdadlian, Leander Weber, An Nguyen, Dario Zanca, Wojciech Samek, Sebastian Lapuschkin: Explain to Not Forget: Defending Against Catastrophic Forgetting with XAI. CD-MAKE 2022: 1-18
- [c11] Daniel Krakowczyk, David R. Reich, Paul Prasse, Sebastian Lapuschkin, Tobias Scheffer, Lena A. Jäger: Selection of XAI Methods Matters: Evaluation of Feature Attribution Methods for Oculomotoric Biometric Identification. Gaze Meets ML 2022: 66-97
- [c10] Franz Motzkus, Leander Weber, Sebastian Lapuschkin: Measurably Stronger Explanation Reliability Via Model Canonization. ICIP 2022: 516-520
- [i36] Frederik Pahde, Leander Weber, Christopher J. Anders, Wojciech Samek, Sebastian Lapuschkin: PatClArC: Using Pattern Concept Activation Vectors for Noise-Robust Model Debugging. CoRR abs/2202.03482 (2022)
- [i35] Franz Motzkus, Leander Weber, Sebastian Lapuschkin: Measurably Stronger Explanation Reliability via Model Canonization. CoRR abs/2202.06621 (2022)
- [i34] Anna Hedström, Leander Weber, Dilyara Bareeva, Franz Motzkus, Wojciech Samek, Sebastian Lapuschkin, Marina M.-C. Höhne: Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations. CoRR abs/2202.06861 (2022)
- [i33] Leander Weber, Sebastian Lapuschkin, Alexander Binder, Wojciech Samek: Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement. CoRR abs/2203.08008 (2022)
- [i32] Michael Gerstenberger, Sebastian Lapuschkin, Peter Eisert, Sebastian Bosse: But that's not why: Inference adjustment by interactive prototype deselection. CoRR abs/2203.10087 (2022)
- [i31] Sami Ede, Serop Baghdadlian, Leander Weber, An Nguyen, Dario Zanca, Wojciech Samek, Sebastian Lapuschkin: Explain to Not Forget: Defending Against Catastrophic Forgetting with XAI. CoRR abs/2205.01929 (2022)
- [i30] Reduan Achtibat, Maximilian Dreyer, Ilona Eisenbraun, Sebastian Bosse, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin: From "Where" to "What": Towards Human-Understandable Explanations through Concept Relevance Propagation. CoRR abs/2206.03208 (2022)
- [i29] Maximilian Dreyer, Reduan Achtibat, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin: Revealing Hidden Context Bias in Segmentation and Object Detection through Concept-specific Explanations. CoRR abs/2211.11426 (2022)
- [i28] Alexander Binder, Leander Weber, Sebastian Lapuschkin, Grégoire Montavon, Klaus-Robert Müller, Wojciech Samek: Shortcomings of Top-Down Randomization-Based Sanity Checks for Evaluations of Deep Neural Network Explanations. CoRR abs/2211.12486 (2022)
- [i27] Fabian Horst, Djordje Slijepcevic, Matthias Zeppelzauer, Anna-Maria Raberger, Sebastian Lapuschkin, Wojciech Samek, Wolfgang Immanuel Schöllhorn, Christian Breiteneder, Brian Horsak: Explaining automated gender classification of human gait. CoRR abs/2211.17015 (2022)
- [i26] Djordje Slijepcevic, Fabian Horst, Marvin Simak, Sebastian Lapuschkin, Anna-Maria Raberger, Wojciech Samek, Christian Breiteneder, Wolfgang Immanuel Schöllhorn, Matthias Zeppelzauer, Brian Horsak: Explaining machine learning models for age classification in human gait analysis. CoRR abs/2211.17016 (2022)
- [i25] Frederik Pahde, Galip Ümit Yolcu, Alexander Binder, Wojciech Samek, Sebastian Lapuschkin: Optimizing Explanations by Network Canonization and Hyperparameter Search. CoRR abs/2211.17174 (2022)
- 2021
- [j6] Wojciech Samek, Grégoire Montavon, Sebastian Lapuschkin, Christopher J. Anders, Klaus-Robert Müller: Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications. Proc. IEEE 109(3): 247-278 (2021)
- [j5] Seul-Ki Yeom, Philipp Seegerer, Sebastian Lapuschkin, Alexander Binder, Simon Wiedemann, Klaus-Robert Müller, Wojciech Samek: Pruning by explaining: A novel criterion for deep neural network pruning. Pattern Recognit. 115: 107899 (2021)
- [i24] Christopher J. Anders, David Neumann, Wojciech Samek, Klaus-Robert Müller, Sebastian Lapuschkin: Software for Dataset-wide XAI: From Local Explanations to Global Insights with Zennit, CoRelAy, and ViRelAy. CoRR abs/2106.13200 (2021)
- [i23] Daniel Becking, Maximilian Dreyer, Wojciech Samek, Karsten Müller, Sebastian Lapuschkin: ECQx: Explainability-Driven Quantization for Low-Bit and Sparse DNNs. CoRR abs/2109.04236 (2021)
- 2020
- [c9] Daniel Becking, Maximilian Dreyer, Wojciech Samek, Karsten Müller, Sebastian Lapuschkin: ECQx: Explainability-Driven Quantization for Low-Bit and Sparse DNNs. xxAI@ICML 2020: 271-296
- [c8] Gary S. W. Goh, Sebastian Lapuschkin, Leander Weber, Wojciech Samek, Alexander Binder: Understanding Integrated Gradients with SmoothTaylor for Deep Neural Network Attribution. ICPR 2020: 4949-4956
- [c7] Jiamei Sun, Sebastian Lapuschkin, Wojciech Samek, Yunqing Zhao, Ngai-Man Cheung, Alexander Binder: Explanation-Guided Training for Cross-Domain Few-Shot Classification. ICPR 2020: 7609-7616
- [c6] Maximilian Kohlbrenner, Alexander Bauer, Shinichi Nakajima, Alexander Binder, Wojciech Samek, Sebastian Lapuschkin: Towards Best Practice in Explaining Neural Network Decisions with LRP. IJCNN 2020: 1-7
- [i22] Jiamei Sun, Sebastian Lapuschkin, Wojciech Samek, Alexander Binder: Understanding Image Captioning Models beyond Visualizing Attention. CoRR abs/2001.01037 (2020)
- [i21] Wojciech Samek, Grégoire Montavon, Sebastian Lapuschkin, Christopher J. Anders, Klaus-Robert Müller: Toward Interpretable Machine Learning: Transparent Deep Neural Networks and Beyond. CoRR abs/2003.07631 (2020)
- [i20] Gary S. W. Goh, Sebastian Lapuschkin, Leander Weber, Wojciech Samek, Alexander Binder: Understanding Integrated Gradients with SmoothTaylor for Deep Neural Network Attribution. CoRR abs/2004.10484 (2020)
- [i19] Jiamei Sun, Sebastian Lapuschkin, Wojciech Samek, Yunqing Zhao, Ngai-Man Cheung, Alexander Binder: Explanation-Guided Training for Cross-Domain Few-Shot Classification. CoRR abs/2007.08790 (2020)
2010 – 2019
- 2019
- [b1] Sebastian Lapuschkin: Opening the machine learning black box with Layer-wise Relevance Propagation. Technical University of Berlin, Germany, 2019
- [j4] Maximilian Alber, Sebastian Lapuschkin, Philipp Seegerer, Miriam Hägele, Kristof T. Schütt, Grégoire Montavon, Wojciech Samek, Klaus-Robert Müller, Sven Dähne, Pieter-Jan Kindermans: iNNvestigate Neural Networks! J. Mach. Learn. Res. 20: 93:1-93:8 (2019)
- [p1] Grégoire Montavon, Alexander Binder, Sebastian Lapuschkin, Wojciech Samek, Klaus-Robert Müller: Layer-Wise Relevance Propagation: An Overview. Explainable AI 2019: 193-209
- [i18] Sebastian Lapuschkin, Stephan Wäldchen, Alexander Binder, Grégoire Montavon, Wojciech Samek, Klaus-Robert Müller: Unmasking Clever Hans Predictors and Assessing What Machines Really Learn. CoRR abs/1902.10178 (2019)
- [i17] Miriam Hägele, Philipp Seegerer, Sebastian Lapuschkin, Michael Bockmayr, Wojciech Samek, Frederick Klauschen, Klaus-Robert Müller, Alexander Binder: Resolving challenges in deep learning-based analyses of histopathological images using explanation methods. CoRR abs/1908.06943 (2019)
- [i16] Maximilian Kohlbrenner, Alexander Bauer, Shinichi Nakajima, Alexander Binder, Wojciech Samek, Sebastian Lapuschkin: Towards best practice in explaining neural network decisions with LRP. CoRR abs/1910.09840 (2019)
- [i15] Fabian Horst, Djordje Slijepcevic, Sebastian Lapuschkin, Anna-Maria Raberger, Matthias Zeppelzauer, Wojciech Samek, Christian Breiteneder, Wolfgang Immanuel Schöllhorn, Brian Horsak: On the Understanding and Interpretation of Machine Learning Predictions in Clinical Gait Analysis Using Explainable Artificial Intelligence. CoRR abs/1912.07737 (2019)
- [i14] Seul-Ki Yeom, Philipp Seegerer, Sebastian Lapuschkin, Simon Wiedemann, Klaus-Robert Müller, Wojciech Samek: Pruning by Explaining: A Novel Criterion for Deep Neural Network Pruning. CoRR abs/1912.08881 (2019)
- [i13] Christopher J. Anders, Talmaj Marinc, David Neumann, Wojciech Samek, Klaus-Robert Müller, Sebastian Lapuschkin: Analyzing ImageNet with Spectral Relevance Analysis: Towards ImageNet un-Hans'ed. CoRR abs/1912.11425 (2019)
- 2018
- [i12] Sören Becker, Marcel Ackermann, Sebastian Lapuschkin, Klaus-Robert Müller, Wojciech Samek: Interpreting and Explaining Deep Neural Networks for Classification of Audio Signals. CoRR abs/1807.03418 (2018)
- [i11] Maximilian Alber, Sebastian Lapuschkin, Philipp Seegerer, Miriam Hägele, Kristof T. Schütt, Grégoire Montavon, Wojciech Samek, Klaus-Robert Müller, Sven Dähne, Pieter-Jan Kindermans: iNNvestigate neural networks! CoRR abs/1808.04260 (2018)
- [i10] Fabian Horst, Sebastian Lapuschkin, Wojciech Samek, Klaus-Robert Müller, Wolfgang Immanuel Schöllhorn: What is Unique in Individual Gait Patterns? Understanding and Interpreting Deep Learning in Gait Analysis. CoRR abs/1808.04308 (2018)
- 2017
- [j3] Grégoire Montavon, Sebastian Lapuschkin, Alexander Binder, Wojciech Samek, Klaus-Robert Müller: Explaining nonlinear classification decisions with deep Taylor decomposition. Pattern Recognit. 65: 211-222 (2017)
- [j2] Wojciech Samek, Alexander Binder, Grégoire Montavon, Sebastian Lapuschkin, Klaus-Robert Müller: Evaluating the Visualization of What a Deep Neural Network Has Learned. IEEE Trans. Neural Networks Learn. Syst. 28(11): 2660-2673 (2017)
- [c5] Vignesh Srinivasan, Sebastian Lapuschkin, Cornelius Hellge, Klaus-Robert Müller, Wojciech Samek: Interpretable human action recognition in compressed domain. ICASSP 2017: 1692-1696
- [c4] Wojciech Samek, Alexander Binder, Sebastian Lapuschkin, Klaus-Robert Müller: Understanding and Comparing Deep Neural Networks for Age and Gender Classification. ICCV Workshops 2017: 1629-1638
- [i9] Sebastian Lapuschkin, Alexander Binder, Klaus-Robert Müller, Wojciech Samek: Understanding and Comparing Deep Neural Networks for Age and Gender Classification. CoRR abs/1708.07689 (2017)
- 2016
- [j1] Sebastian Lapuschkin, Alexander Binder, Grégoire Montavon, Klaus-Robert Müller, Wojciech Samek: The LRP Toolbox for Artificial Neural Networks. J. Mach. Learn. Res. 17: 114:1-114:5 (2016)
- [c3] Sebastian Lapuschkin, Alexander Binder, Grégoire Montavon, Klaus-Robert Müller, Wojciech Samek: Analyzing Classifiers: Fisher Vectors and Deep Neural Networks. CVPR 2016: 2912-2920
- [c2] Alexander Binder, Grégoire Montavon, Sebastian Lapuschkin, Klaus-Robert Müller, Wojciech Samek: Layer-Wise Relevance Propagation for Neural Networks with Local Renormalization Layers. ICANN (2) 2016: 63-71
- [c1] Sebastian Bach, Alexander Binder, Klaus-Robert Müller, Wojciech Samek: Controlling explanatory heatmap resolution and semantics via decomposition depth. ICIP 2016: 2271-2275
- [i8] Sebastian Bach, Alexander Binder, Klaus-Robert Müller, Wojciech Samek: Controlling Explanatory Heatmap Resolution and Semantics via Decomposition Depth. CoRR abs/1603.06463 (2016)
- [i7] Alexander Binder, Grégoire Montavon, Sebastian Bach, Klaus-Robert Müller, Wojciech Samek: Layer-wise Relevance Propagation for Neural Networks with Local Renormalization Layers. CoRR abs/1604.00825 (2016)
- [i6] Irene Sturm, Sebastian Bach, Wojciech Samek, Klaus-Robert Müller: Interpretable Deep Neural Networks for Single-Trial EEG Classification. CoRR abs/1604.08201 (2016)
- [i5] Wojciech Samek, Grégoire Montavon, Alexander Binder, Sebastian Lapuschkin, Klaus-Robert Müller: Interpreting the Predictions of Complex ML Models by Layer-wise Relevance Propagation. CoRR abs/1611.08191 (2016)
- 2015
- [i4] Wojciech Samek, Alexander Binder, Grégoire Montavon, Sebastian Bach, Klaus-Robert Müller: Evaluating the visualization of what a Deep Neural Network has learned. CoRR abs/1509.06321 (2015)
- [i3] Sebastian Bach, Alexander Binder, Grégoire Montavon, Klaus-Robert Müller, Wojciech Samek: Analyzing Classifiers: Fisher Vectors and Deep Neural Networks. CoRR abs/1512.00172 (2015)
- [i2] Grégoire Montavon, Sebastian Bach, Alexander Binder, Wojciech Samek, Klaus-Robert Müller: Explaining NonLinear Classification Decisions with Deep Taylor Decomposition. CoRR abs/1512.02479 (2015)
- 2014
- [i1] Guido Schwenk, Sebastian Bach: Detecting Behavioral and Structural Anomalies in MediaCloud Applications. CoRR abs/1409.8035 (2014)