


Sebastian Lapuschkin
Person information

- affiliation: Fraunhofer Heinrich Hertz Institute, Berlin, Germany
2020 – today
- 2022
- [j9] Djordje Slijepcevic, Fabian Horst, Sebastian Lapuschkin, Brian Horsak, Anna-Maria Raberger, Andreas Kranzl, Wojciech Samek, Christian Breiteneder, Wolfgang Immanuel Schöllhorn, Matthias Zeppelzauer: Explaining Machine Learning Models for Clinical Gait Analysis. ACM Trans. Comput. Heal. 3(2): 14:1-14:27 (2022)
- [j8] Jiamei Sun, Sebastian Lapuschkin, Wojciech Samek, Alexander Binder: Explain and improve: LRP-inference fine-tuning for image captioning models. Inf. Fusion 77: 233-246 (2022)
- [j7] Christopher J. Anders, Leander Weber, David Neumann, Wojciech Samek, Klaus-Robert Müller, Sebastian Lapuschkin: Finding and removing Clever Hans: Using explanation methods to debug and improve deep models. Inf. Fusion 77: 261-295 (2022)
- [i31] Frederik Pahde, Leander Weber, Christopher J. Anders, Wojciech Samek, Sebastian Lapuschkin: PatClArC: Using Pattern Concept Activation Vectors for Noise-Robust Model Debugging. CoRR abs/2202.03482 (2022)
- [i30] Franz Motzkus, Leander Weber, Sebastian Lapuschkin: Measurably Stronger Explanation Reliability via Model Canonization. CoRR abs/2202.06621 (2022)
- [i29] Anna Hedström, Leander Weber, Dilyara Bareeva, Franz Motzkus, Wojciech Samek, Sebastian Lapuschkin, Marina M.-C. Höhne: Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations. CoRR abs/2202.06861 (2022)
- [i28] Leander Weber, Sebastian Lapuschkin, Alexander Binder, Wojciech Samek: Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement. CoRR abs/2203.08008 (2022)
- [i27] Michael Gerstenberger, Sebastian Lapuschkin, Peter Eisert, Sebastian Bosse: But that's not why: Inference adjustment by interactive prototype deselection. CoRR abs/2203.10087 (2022)
- [i26] Sami Ede, Serop Baghdadlian, Leander Weber, An Nguyen, Dario Zanca, Wojciech Samek, Sebastian Lapuschkin: Explain to Not Forget: Defending Against Catastrophic Forgetting with XAI. CoRR abs/2205.01929 (2022)
- [i25] Reduan Achtibat, Maximilian Dreyer, Ilona Eisenbraun, Sebastian Bosse, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin: From "Where" to "What": Towards Human-Understandable Explanations through Concept Relevance Propagation. CoRR abs/2206.03208 (2022)
- 2021
- [j6] Wojciech Samek, Grégoire Montavon, Sebastian Lapuschkin, Christopher J. Anders, Klaus-Robert Müller: Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications. Proc. IEEE 109(3): 247-278 (2021)
- [j5] Seul-Ki Yeom, Philipp Seegerer, Sebastian Lapuschkin, Alexander Binder, Simon Wiedemann, Klaus-Robert Müller, Wojciech Samek: Pruning by explaining: A novel criterion for deep neural network pruning. Pattern Recognit. 115: 107899 (2021)
- [i24] Christopher J. Anders, David Neumann, Wojciech Samek, Klaus-Robert Müller, Sebastian Lapuschkin: Software for Dataset-wide XAI: From Local Explanations to Global Insights with Zennit, CoRelAy, and ViRelAy. CoRR abs/2106.13200 (2021)
- [i23] Daniel Becking, Maximilian Dreyer, Wojciech Samek, Karsten Müller, Sebastian Lapuschkin: ECQx: Explainability-Driven Quantization for Low-Bit and Sparse DNNs. CoRR abs/2109.04236 (2021)
- 2020
- [c9] Daniel Becking, Maximilian Dreyer, Wojciech Samek, Karsten Müller, Sebastian Lapuschkin: ECQx: Explainability-Driven Quantization for Low-Bit and Sparse DNNs. xxAI@ICML 2020: 271-296
- [c8] Gary S. W. Goh, Sebastian Lapuschkin, Leander Weber, Wojciech Samek, Alexander Binder: Understanding Integrated Gradients with SmoothTaylor for Deep Neural Network Attribution. ICPR 2020: 4949-4956
- [c7] Jiamei Sun, Sebastian Lapuschkin, Wojciech Samek, Yunqing Zhao, Ngai-Man Cheung, Alexander Binder: Explanation-Guided Training for Cross-Domain Few-Shot Classification. ICPR 2020: 7609-7616
- [c6] Maximilian Kohlbrenner, Alexander Bauer, Shinichi Nakajima, Alexander Binder, Wojciech Samek, Sebastian Lapuschkin: Towards Best Practice in Explaining Neural Network Decisions with LRP. IJCNN 2020: 1-7
- [i22] Jiamei Sun, Sebastian Lapuschkin, Wojciech Samek, Alexander Binder: Understanding Image Captioning Models beyond Visualizing Attention. CoRR abs/2001.01037 (2020)
- [i21] Wojciech Samek, Grégoire Montavon, Sebastian Lapuschkin, Christopher J. Anders, Klaus-Robert Müller: Toward Interpretable Machine Learning: Transparent Deep Neural Networks and Beyond. CoRR abs/2003.07631 (2020)
- [i20] Gary S. W. Goh, Sebastian Lapuschkin, Leander Weber, Wojciech Samek, Alexander Binder: Understanding Integrated Gradients with SmoothTaylor for Deep Neural Network Attribution. CoRR abs/2004.10484 (2020)
- [i19] Jiamei Sun, Sebastian Lapuschkin, Wojciech Samek, Yunqing Zhao, Ngai-Man Cheung, Alexander Binder: Explanation-Guided Training for Cross-Domain Few-Shot Classification. CoRR abs/2007.08790 (2020)
2010 – 2019
- 2019
- [b1] Sebastian Lapuschkin: Opening the machine learning black box with Layer-wise Relevance Propagation. Technical University of Berlin, Germany, 2019
- [j4] Maximilian Alber, Sebastian Lapuschkin, Philipp Seegerer, Miriam Hägele, Kristof T. Schütt, Grégoire Montavon, Wojciech Samek, Klaus-Robert Müller, Sven Dähne, Pieter-Jan Kindermans: iNNvestigate Neural Networks! J. Mach. Learn. Res. 20: 93:1-93:8 (2019)
- [p1] Grégoire Montavon, Alexander Binder, Sebastian Lapuschkin, Wojciech Samek, Klaus-Robert Müller: Layer-Wise Relevance Propagation: An Overview. Explainable AI 2019: 193-209
- [i18] Sebastian Lapuschkin, Stephan Wäldchen, Alexander Binder, Grégoire Montavon, Wojciech Samek, Klaus-Robert Müller: Unmasking Clever Hans Predictors and Assessing What Machines Really Learn. CoRR abs/1902.10178 (2019)
- [i17] Miriam Hägele, Philipp Seegerer, Sebastian Lapuschkin, Michael Bockmayr, Wojciech Samek, Frederick Klauschen, Klaus-Robert Müller, Alexander Binder: Resolving challenges in deep learning-based analyses of histopathological images using explanation methods. CoRR abs/1908.06943 (2019)
- [i16] Maximilian Kohlbrenner, Alexander Bauer, Shinichi Nakajima, Alexander Binder, Wojciech Samek, Sebastian Lapuschkin: Towards best practice in explaining neural network decisions with LRP. CoRR abs/1910.09840 (2019)
- [i15] Fabian Horst, Djordje Slijepcevic, Sebastian Lapuschkin, Anna-Maria Raberger, Matthias Zeppelzauer, Wojciech Samek, Christian Breiteneder, Wolfgang Immanuel Schöllhorn, Brian Horsak: On the Understanding and Interpretation of Machine Learning Predictions in Clinical Gait Analysis Using Explainable Artificial Intelligence. CoRR abs/1912.07737 (2019)
- [i14] Seul-Ki Yeom, Philipp Seegerer, Sebastian Lapuschkin, Simon Wiedemann, Klaus-Robert Müller, Wojciech Samek: Pruning by Explaining: A Novel Criterion for Deep Neural Network Pruning. CoRR abs/1912.08881 (2019)
- [i13] Christopher J. Anders, Talmaj Marinc, David Neumann, Wojciech Samek, Klaus-Robert Müller, Sebastian Lapuschkin: Analyzing ImageNet with Spectral Relevance Analysis: Towards ImageNet un-Hans'ed. CoRR abs/1912.11425 (2019)
- 2018
- [i12] Sören Becker, Marcel Ackermann, Sebastian Lapuschkin, Klaus-Robert Müller, Wojciech Samek: Interpreting and Explaining Deep Neural Networks for Classification of Audio Signals. CoRR abs/1807.03418 (2018)
- [i11] Maximilian Alber, Sebastian Lapuschkin, Philipp Seegerer, Miriam Hägele, Kristof T. Schütt, Grégoire Montavon, Wojciech Samek, Klaus-Robert Müller, Sven Dähne, Pieter-Jan Kindermans: iNNvestigate neural networks! CoRR abs/1808.04260 (2018)
- [i10] Fabian Horst, Sebastian Lapuschkin, Wojciech Samek, Klaus-Robert Müller, Wolfgang Immanuel Schöllhorn: What is Unique in Individual Gait Patterns? Understanding and Interpreting Deep Learning in Gait Analysis. CoRR abs/1808.04308 (2018)
- 2017
- [j3] Grégoire Montavon, Sebastian Lapuschkin, Alexander Binder, Wojciech Samek, Klaus-Robert Müller: Explaining nonlinear classification decisions with deep Taylor decomposition. Pattern Recognit. 65: 211-222 (2017)
- [j2] Wojciech Samek, Alexander Binder, Grégoire Montavon, Sebastian Lapuschkin, Klaus-Robert Müller: Evaluating the Visualization of What a Deep Neural Network Has Learned. IEEE Trans. Neural Networks Learn. Syst. 28(11): 2660-2673 (2017)
- [c5] Vignesh Srinivasan, Sebastian Lapuschkin, Cornelius Hellge, Klaus-Robert Müller, Wojciech Samek: Interpretable human action recognition in compressed domain. ICASSP 2017: 1692-1696
- [c4] Wojciech Samek, Alexander Binder, Sebastian Lapuschkin, Klaus-Robert Müller: Understanding and Comparing Deep Neural Networks for Age and Gender Classification. ICCV Workshops 2017: 1629-1638
- [i9] Sebastian Lapuschkin, Alexander Binder, Klaus-Robert Müller, Wojciech Samek: Understanding and Comparing Deep Neural Networks for Age and Gender Classification. CoRR abs/1708.07689 (2017)
- 2016
- [j1] Sebastian Lapuschkin, Alexander Binder, Grégoire Montavon, Klaus-Robert Müller, Wojciech Samek: The LRP Toolbox for Artificial Neural Networks. J. Mach. Learn. Res. 17: 114:1-114:5 (2016)
- [c3] Sebastian Lapuschkin, Alexander Binder, Grégoire Montavon, Klaus-Robert Müller, Wojciech Samek: Analyzing Classifiers: Fisher Vectors and Deep Neural Networks. CVPR 2016: 2912-2920
- [c2] Alexander Binder, Grégoire Montavon, Sebastian Lapuschkin, Klaus-Robert Müller, Wojciech Samek: Layer-Wise Relevance Propagation for Neural Networks with Local Renormalization Layers. ICANN (2) 2016: 63-71
- [c1] Sebastian Bach, Alexander Binder, Klaus-Robert Müller, Wojciech Samek: Controlling explanatory heatmap resolution and semantics via decomposition depth. ICIP 2016: 2271-2275
- [i8] Sebastian Bach, Alexander Binder, Klaus-Robert Müller, Wojciech Samek: Controlling Explanatory Heatmap Resolution and Semantics via Decomposition Depth. CoRR abs/1603.06463 (2016)
- [i7] Alexander Binder, Grégoire Montavon, Sebastian Bach, Klaus-Robert Müller, Wojciech Samek: Layer-wise Relevance Propagation for Neural Networks with Local Renormalization Layers. CoRR abs/1604.00825 (2016)
- [i6] Irene Sturm, Sebastian Bach, Wojciech Samek, Klaus-Robert Müller: Interpretable Deep Neural Networks for Single-Trial EEG Classification. CoRR abs/1604.08201 (2016)
- [i5] Wojciech Samek, Grégoire Montavon, Alexander Binder, Sebastian Lapuschkin, Klaus-Robert Müller: Interpreting the Predictions of Complex ML Models by Layer-wise Relevance Propagation. CoRR abs/1611.08191 (2016)
- 2015
- [i4] Wojciech Samek, Alexander Binder, Grégoire Montavon, Sebastian Bach, Klaus-Robert Müller: Evaluating the visualization of what a Deep Neural Network has learned. CoRR abs/1509.06321 (2015)
- [i3] Sebastian Bach, Alexander Binder, Grégoire Montavon, Klaus-Robert Müller, Wojciech Samek: Analyzing Classifiers: Fisher Vectors and Deep Neural Networks. CoRR abs/1512.00172 (2015)
- [i2] Grégoire Montavon, Sebastian Bach, Alexander Binder, Wojciech Samek, Klaus-Robert Müller: Explaining NonLinear Classification Decisions with Deep Taylor Decomposition. CoRR abs/1512.02479 (2015)
- 2014
- [i1] Guido Schwenk, Sebastian Bach: Detecting Behavioral and Structural Anomalies in MediaCloud Applications. CoRR abs/1409.8035 (2014)

last updated on 2022-07-28 22:14 CEST by the dblp team
all metadata released as open data under CC0 1.0 license