xxAI@ICML 2020: Vienna, Austria
- Andreas Holzinger, Randy Goebel, Ruth Fong, Taesup Moon, Klaus-Robert Müller, Wojciech Samek: xxAI - Beyond Explainable AI - International Workshop, Held in Conjunction with ICML 2020, July 18, 2020, Vienna, Austria, Revised and Extended Papers. Lecture Notes in Computer Science 13200, Springer 2022, ISBN 978-3-031-04082-5
Editorial
- Andreas Holzinger, Randy Goebel, Ruth Fong, Taesup Moon, Klaus-Robert Müller, Wojciech Samek: xxAI - Beyond Explainable Artificial Intelligence. 3-10
Current Methods and Challenges
- Andreas Holzinger, Anna Saranti, Christoph Molnar, Przemyslaw Biecek, Wojciech Samek: Explainable AI Methods - A Brief Overview. 13-38
- Christoph Molnar, Gunnar König, Julia Herbinger, Timo Freiesleben, Susanne Dandl, Christian A. Scholbeck, Giuseppe Casalicchio, Moritz Grosse-Wentrup, Bernd Bischl: General Pitfalls of Model-Agnostic Interpretation Methods for Machine Learning Models. 39-68
- Leonard Salewski, A. Sophia Koepke, Hendrik P. A. Lensch, Zeynep Akata: CLEVR-X: A Visual Reasoning Dataset for Natural Language Explanations. 69-88
New Developments in Explainable AI
- Stefan Kolek, Duc Anh Nguyen, Ron Levie, Joan Bruna, Gitta Kutyniok: A Rate-Distortion Framework for Explaining Black-Box Model Decisions. 91-115
- Grégoire Montavon, Jacob R. Kauffmann, Wojciech Samek, Klaus-Robert Müller: Explaining the Predictions of Unsupervised Learning Models. 117-138
- Amir-Hossein Karimi, Julius von Kügelgen, Bernhard Schölkopf, Isabel Valera: Towards Causal Algorithmic Recourse. 139-166
- Bolei Zhou: Interpreting Generative Adversarial Networks for Interactive Image Generation. 167-175
- Marius-Constantin Dinu, Markus Hofmarcher, Vihang Prakash Patil, Matthias Dorfer, Patrick M. Blies, Johannes Brandstetter, Jose A. Arjona-Medina, Sepp Hochreiter: XAI and Strategy Extraction via Reward Redistribution. 177-205
- Osbert Bastani, Jeevana Priya Inala, Armando Solar-Lezama: Interpretable, Verifiable, and Robust Reinforcement Learning via Program Synthesis. 207-228
- Chandan Singh, Wooseok Ha, Bin Yu: Interpreting and Improving Deep-Learning Models with Reality Checks. 229-254
- Sarah Adel Bargal, Andrea Zunino, Vitali Petsiuk, Jianming Zhang, Vittorio Murino, Stan Sclaroff, Kate Saenko: Beyond the Visual Analysis of Deep Model Saliency. 255-269
- Daniel Becking, Maximilian Dreyer, Wojciech Samek, Karsten Müller, Sebastian Lapuschkin: ECQx: Explainability-Driven Quantization for Low-Bit and Sparse DNNs. 271-296
- Diego Marcos, Jana Kierdorf, Ted Cheeseman, Devis Tuia, Ribana Roscher: A Whale's Tail - Finding the Right Whale in an Uncertain World. 297-313
- Antonios Mamalakis, Imme Ebert-Uphoff, Elizabeth A. Barnes: Explainable Artificial Intelligence in Meteorology and Climate Science: Model Fine-Tuning, Calibrating Trust and Learning New Science. 315-339
An Interdisciplinary Approach to Explainable AI
- Philipp Hacker, Jan-Hendrik Passoth: Varieties of AI Explanations Under the Law. From the GDPR to the AIA, and Beyond. 343-373
- Jianlong Zhou, Fang Chen, Andreas Holzinger: Towards Explainability for AI Fairness. 375-386
- Chun-Hua Tsai, John M. Carroll: Logic and Pragmatics in AI Explanation. 387-396