SafeAI@AAAI 2020: New York City, NY, USA
- Huáscar Espinoza, José Hernández-Orallo, Xin Cynthia Chen, Seán S. ÓhÉigeartaigh, Xiaowei Huang, Mauricio Castillo-Effen, Richard Mallah, John A. McDermid:
Proceedings of the Workshop on Artificial Intelligence Safety, co-located with the 34th AAAI Conference on Artificial Intelligence, SafeAI@AAAI 2020, New York City, NY, USA, February 7, 2020. CEUR Workshop Proceedings 2560, CEUR-WS.org 2020
Session 1: Adversarial Machine Learning
- Bowei Xi, Yujie Chen, Fei Fan, Zhan Tu, Xinyan Deng:
Bio-Inspired Adversarial Attack Against Deep Neural Networks. 1-5
- Kazuya Kakizaki, Kosuke Yoshida:
Adversarial Image Translation: Unrestricted Adversarial Examples in Face Recognition Systems. 6-13
Session 2: Assurance Cases for AI-based Systems
- Ewen Denney, Ganesh Pai, Colin Smith:
Hazard Contribution Modes of Machine Learning Components. 14-22
- Chiara Picardi, Colin Paterson, Richard Hawkins, Radu Calinescu, Ibrahim Habli:
Assurance Argument Patterns and Processes for Machine Learning in Safety-Related Systems. 23-30
Session 3: Considerations for the AI Safety Landscape
- Vahid Behzadan, Ibrahim M. Baggili:
Founding The Domain of AI Forensics. 31-35
- John Burden, José Hernández-Orallo:
Exploring AI Safety in Degrees: Generality, Capability and Control. 36-40
Session 4: Fairness and Bias
- Michiel A. Bakker, Humberto Riverón Valdés, Duy Patrick Tu, Krishna P. Gummadi, Kush R. Varshney, Adrian Weller, Alex Pentland:
Fair Enough: Improving Fairness in Budget-Constrained Decision Making Using Confidence Thresholds. 41-53
- Kamran Alipour, Jürgen P. Schulze, Yi Yao, Avi Ziskind, Giedrius Burachas:
A Study on Multimodal and Interactive Explanations for Visual Question Answering. 54-62
- Botty Dimanov, Umang Bhatt, Mateja Jamnik, Adrian Weller:
You Shouldn't Trust Me: Learning Models Which Conceal Unfairness From Multiple Explanation Methods. 63-73
Session 5: Uncertainty and Safe AI
- Melanie Ducoffe, Sébastien Gerchinovitz, Jayant Sen Gupta:
A High Probability Safety Guarantee for Shifted Neural Network Surrogates. 74-82
- Maximilian Henne, Adrian Schwaiger, Karsten Roscher, Gereon Weiss:
Benchmarking Uncertainty Estimation Methods for Deep Learning With Safety-Related Metrics. 83-90
- Rick Salay, Krzysztof Czarnecki, Maria Soledad Elli, Ignacio J. Alvarez, Sean Sedwards, Jack Weast:
PURSS: Towards Perceptual Uncertainty Aware Responsibility Sensitive Safety with ML. 91-95
Poster Papers
- Ashish Gaurav, Sachin Vernekar, Jaeyoung Lee, Vahdat Abdelzad, Krzysztof Czarnecki, Sean Sedwards:
Simple Continual Learning Strategies for Safer Classifiers. 96-104
- Jin-Young Kim, Sung-Bae Cho:
Fair Representation for Safe Artificial Intelligence via Adversarial Learning of Unbiased Information Bottleneck. 105-112
- Ryo Kamoi, Kei Kobayashi:
Out-of-Distribution Detection with Likelihoods Assigned by Deep Generative Models Using Multimodal Prior Distributions. 113-116
- Carroll L. Wainwright, Peter Eckersley:
SafeLife 1.0: Exploring Side Effects in Complex Environments. 117-127
- Vojtech Kovarík, Ryan Carey:
(When) Is Truth-telling Favored in AI Debate? 128-137
- Sarthak Jindal, Raghav Sood, Richa Singh, Mayank Vatsa, Tanmoy Chakraborty:
NewsBag: A Benchmark Multimodal Dataset for Fake News Detection. 138-145
- Ignacio Serna, Aythami Morales, Julian Fiérrez, Manuel Cebrián, Nick Obradovich, Iyad Rahwan:
Algorithmic Discrimination: Formulation and Exploration in Deep Learning-based Face Biometrics. 146-152
- Bharat Prakash, Nicholas R. Waytowich, Ashwinkumar Ganesan, Tim Oates, Tinoosh Mohsenin:
Guiding Safe Reinforcement Learning Policies Using Structured Language Constraints. 153-161
- Sina Mohseni, Mandar Pitale, Vasu Singh, Zhangyang Wang:
Practical Solutions for Machine Learning Safety in Autonomous Vehicles. 162-169
- Lifeng Liu, Yingxuan Zhu, Tim Tingqiu Yuan, Jian Li:
Continuous Safe Learning Based on First Principles and Constraints for Autonomous Driving. 170-177
- Dmitry Vengertsev, Elena Sherman:
Recurrent Neural Network Properties and their Verification with Monte Carlo Techniques. 178-185
- Imane Lamrani, Ayan Banerjee, Sandeep K. S. Gupta:
Toward Operational Safety Verification Via Hybrid Automata Mining Using I/O Traces of AI-Enabled CPS. 186-194