Soheil Feizi
(also published as: Soheil Feizi-Khankandi)
2020 – today
- 2023
- [c70] Alexander Levine, Soheil Feizi: Goal-Conditioned Q-learning as Knowledge Distillation. AAAI 2023: 8500-8509
- [c69] Mazda Moayeri, Keivan Rezaei, Maziar Sanjabi, Soheil Feizi: Text2Concept: Concept Activation Vectors Directly from Text. CVPR Workshops 2023: 3744-3749
- [c68] Vinu Sankar Sadasivan, Mahdi Soltanolkotabi, Soheil Feizi: CUDA: Convolution-Based Unlearnable Datasets. CVPR 2023: 3862-3871
- [c67] Samyadeep Basu, Megan Stanley, John Bronskill, Soheil Feizi, Daniela Massiceti: Hard-Meta-Dataset++: Towards Understanding Few-Shot Performance on Difficult Tasks. ICLR 2023
- [c66] Aounon Kumar, Alexander Levine, Tom Goldstein, Soheil Feizi: Provable Robustness against Wasserstein Distribution Shifts via Input Randomization. ICLR 2023
- [c65] Yanchao Sun, Ruijie Zheng, Parisa Hassanzadeh, Yongyuan Liang, Soheil Feizi, Sumitra Ganesh, Furong Huang: Certifiably Robust Policy Learning against Adversarial Multi-Agent Communication. ICLR 2023
- [c64] Neha Mukund Kalibhat, Shweta Bhardwaj, C. Bayan Bruss, Hamed Firooz, Maziar Sanjabi, Soheil Feizi: Identifying Interpretable Subspaces in Image Representations. ICML 2023: 15623-15638
- [c63] Mazda Moayeri, Keivan Rezaei, Maziar Sanjabi, Soheil Feizi: Text-To-Concept (and Back) via Cross-Model Alignment. ICML 2023: 25037-25060
- [c62] Keivan Rezaei, Kiarash Banihashem, Atoosa Malemir Chegini, Soheil Feizi: Run-off Election: Improved Provable Defense against Data Poisoning Attacks. ICML 2023: 29030-29050
- [i90] Keivan Rezaei, Kiarash Banihashem, Atoosa Malemir Chegini, Soheil Feizi: Run-Off Election: Improved Provable Defense against Data Poisoning Attacks. CoRR abs/2302.02300 (2023)
- [i89] Wenxiao Wang, Soheil Feizi: Temporal Robustness against Data Poisoning. CoRR abs/2302.03684 (2023)
- [i88] Vinu Sankar Sadasivan, Mahdi Soltanolkotabi, Soheil Feizi: CUDA: Convolution-based Unlearnable Datasets. CoRR abs/2303.04278 (2023)
- [i87] Vinu Sankar Sadasivan, Aounon Kumar, Sriram Balasubramanian, Wenxiao Wang, Soheil Feizi: Can AI-Generated Text be Reliably Detected? CoRR abs/2303.11156 (2023)
- [i86] Shoumik Saha, Wenxiao Wang, Yigitcan Kaya, Soheil Feizi: Adversarial Robustness of Learning-based Static Malware Classifiers. CoRR abs/2303.13372 (2023)
- [i85] Aounon Kumar, Vinu Sankar Sadasivan, Soheil Feizi: Provable Robustness for Streaming Models with a Sliding Window. CoRR abs/2303.16308 (2023)
- [i84] Samyadeep Basu, Daniela Massiceti, Shell Xu Hu, Soheil Feizi: Strong Baselines for Parameter Efficient Few-Shot Fine-tuning. CoRR abs/2304.01917 (2023)
- [i83] Mazda Moayeri, Keivan Rezaei, Maziar Sanjabi, Soheil Feizi: Text-To-Concept (and Back) via Cross-Model Alignment. CoRR abs/2305.06386 (2023)
- [i82] Vedant Nanda, Till Speicher, John P. Dickerson, Soheil Feizi, Krishna P. Gummadi, Adrian Weller: Diffused Redundancy in Pre-trained Representations. CoRR abs/2306.00183 (2023)
- [i81] Wenxiao Wang, Soheil Feizi: On Practical Aspects of Aggregation Defenses against Data Poisoning Attacks. CoRR abs/2306.16415 (2023)
- [i80] Samyadeep Basu, Maziar Sanjabi, Daniela Massiceti, Shell Xu Hu, Soheil Feizi: Augmenting CLIP with Improved Visio-Linguistic Reasoning. CoRR abs/2307.09233 (2023)
- [i79] Neha Mukund Kalibhat, Shweta Bhardwaj, C. Bayan Bruss, Hamed Firooz, Maziar Sanjabi, Soheil Feizi: Identifying Interpretable Subspaces in Image Representations. CoRR abs/2307.10504 (2023)
- [i78] Clark W. Barrett, Brad Boyd, Ellie Burzstein, Nicholas Carlini, Brad Chen, Jihye Choi, Amrita Roy Chowdhury, Mihai Christodorescu, Anupam Datta, Soheil Feizi, Kathleen Fisher, Tatsunori Hashimoto, Dan Hendrycks, Somesh Jha, Daniel Kang, Florian Kerschbaum, Eric Mitchell, John C. Mitchell, Zulfikar Ramzan, Khawaja Shams, Dawn Song, Ankur Taly, Diyi Yang: Identifying and Mitigating the Security Risks of Generative AI. CoRR abs/2308.14840 (2023)
- [i77] Aounon Kumar, Chirag Agarwal, Suraj Srinivas, Soheil Feizi, Hima Lakkaraju: Certifying LLM Safety against Adversarial Prompting. CoRR abs/2309.02705 (2023)
- [i76] Neha Mukund Kalibhat, Samuel Sharpe, Jeremy Goodsitt, C. Bayan Bruss, Soheil Feizi: Adapting Self-Supervised Representations to Multi-Domain Setups. CoRR abs/2309.03999 (2023)
- 2022
- [j11] Jiang Liu, Chun Pong Lau, Hossein Souri, Soheil Feizi, Rama Chellappa: Mutual Adversarial Training: Learning Together is Better Than Going Alone. IEEE Trans. Inf. Forensics Secur. 17: 2364-2377 (2022)
- [c61] Alexander Levine, Soheil Feizi: Provable Adversarial Robustness for Fractional Lp Threat Models. AISTATS 2022: 9908-9942
- [c60] Jiang Liu, Alexander Levine, Chun Pong Lau, Rama Chellappa, Soheil Feizi: Segment and Complete: Defending Object Detectors against Adversarial Patch Attacks with Robust Patch Detection. CVPR 2022: 14953-14962
- [c59] Mazda Moayeri, Phillip Pope, Yogesh Balaji, Soheil Feizi: A Comprehensive Study of Image Classification Model Sensitivity to Foregrounds, Backgrounds, and Visual Attributes. CVPR 2022: 19065-19075
- [c58] Sahil Singla, Soheil Feizi: Salient ImageNet: How to discover spurious features in Deep Learning? ICLR 2022
- [c57] Sahil Singla, Surbhi Singla, Soheil Feizi: Improved deterministic l2 robustness on CIFAR-10 and CIFAR-100. ICLR 2022
- [c56] Aounon Kumar, Alexander Levine, Soheil Feizi: Policy Smoothing for Provably Robust Reinforcement Learning. ICLR 2022
- [c55] Priyatham Kattakinda, Soheil Feizi: FOCUS: Familiar Objects in Common and Uncommon Settings. ICML 2022: 10825-10847
- [c54] Wenxiao Wang, Alexander Levine, Soheil Feizi: Improved Certified Defenses against Data Poisoning with (Deterministic) Finite Aggregation. ICML 2022: 22769-22783
- [c53] Wenxiao Wang, Alexander Levine, Soheil Feizi: Lethal Dose Conjecture on Data Poisoning. NeurIPS 2022
- [c52] Sahil Singla, Soheil Feizi: Improved techniques for deterministic l2 robustness. NeurIPS 2022
- [c51] Mazda Moayeri, Sahil Singla, Soheil Feizi: Hard ImageNet: Segmentations for Objects with Strong Spurious Cues. NeurIPS 2022
- [c50] Mazda Moayeri, Kiarash Banihashem, Soheil Feizi: Explicit Tradeoffs between Adversarial and Natural Distributional Robustness. NeurIPS 2022
- [c49] Gaurang Sriramanan, Maharshi Gor, Soheil Feizi: Toward Efficient Robust Training against Union of $\ell_p$ Threat Models. NeurIPS 2022
- [i75] Mazda Moayeri, Phillip Pope, Yogesh Balaji, Soheil Feizi: A Comprehensive Study of Image Classification Model Sensitivity to Foregrounds, Backgrounds, and Visual Attributes. CoRR abs/2201.10766 (2022)
- [i74] Aounon Kumar, Alexander Levine, Tom Goldstein, Soheil Feizi: Certifying Model Accuracy under Distribution Shifts. CoRR abs/2201.12440 (2022)
- [i73] Wenxiao Wang, Alexander Levine, Soheil Feizi: Improved Certified Defenses against Data Poisoning with (Deterministic) Finite Aggregation. CoRR abs/2202.02628 (2022)
- [i72] Neha Mukund Kalibhat, Kanika Narang, Liang Tan, Hamed Firooz, Maziar Sanjabi, Soheil Feizi: Understanding Failure Modes of Self-Supervised Learning. CoRR abs/2203.01881 (2022)
- [i71] Alexander Levine, Soheil Feizi: Provable Adversarial Robustness for Fractional Lp Threat Models. CoRR abs/2203.08945 (2022)
- [i70] Sahil Singla, Mazda Moayeri, Soheil Feizi: Core Risk Minimization using Salient ImageNet. CoRR abs/2203.15566 (2022)
- [i69] Aya Abdelsalam Ismail, Sercan Ö. Arik, Jinsung Yoon, Ankur Taly, Soheil Feizi, Tomas Pfister: Interpretable Mixture of Experts for Structured Data. CoRR abs/2206.02107 (2022)
- [i68] Yanchao Sun, Ruijie Zheng, Parisa Hassanzadeh, Yongyuan Liang, Soheil Feizi, Sumitra Ganesh, Furong Huang: Certifiably Robust Policy Learning against Adversarial Communication in Multi-agent Systems. CoRR abs/2206.10158 (2022)
- [i67] Wenxiao Wang, Alexander Levine, Soheil Feizi: Lethal Dose Conjecture on Data Poisoning. CoRR abs/2208.03309 (2022)
- [i66] Alexander Levine, Soheil Feizi: Goal-Conditioned Q-Learning as Knowledge Distillation. CoRR abs/2208.13298 (2022)
- [i65] Mazda Moayeri, Kiarash Banihashem, Soheil Feizi: Explicit Tradeoffs between Adversarial and Natural Distributional Robustness. CoRR abs/2209.07592 (2022)
- [i64] Sahil Singla, Soheil Feizi: Improved techniques for deterministic l2 robustness. CoRR abs/2211.08453 (2022)
- [i63] Sahil Singla, Atoosa Malemir Chegini, Mazda Moayeri, Soheil Feizi: Data-Centric Debugging: mitigating model failures via targeted data collection. CoRR abs/2211.09859 (2022)
- [i62] Priyatham Kattakinda, Alexander Levine, Soheil Feizi: Invariant Learning via Diffusion Dreamed Distribution Shifts. CoRR abs/2211.10370 (2022)
- [i61] Sriram Balasubramanian, Soheil Feizi: Towards Better Input Masking for Convolutional Neural Networks. CoRR abs/2211.14646 (2022)
- [i60] Mazda Moayeri, Wenxiao Wang, Sahil Singla, Soheil Feizi: Spuriosity Rankings: Sorting Data for Spurious Correlation Robustness. CoRR abs/2212.02648 (2022)
- 2021
- [c48] Neha Mukund Kalibhat, Yogesh Balaji, Soheil Feizi: Winning Lottery Tickets in Deep Generative Models. AAAI 2021: 8038-8046
- [c47] Mucong Ding, Constantinos Daskalakis, Soheil Feizi: GANs with Conditional Independence Graphs: On Subadditivity of Probability Divergences. AISTATS 2021: 3709-3717
- [c46] Vedant Nanda, Samuel Dooley, Sahil Singla, Soheil Feizi, John P. Dickerson: Fairness Through Robustness: Investigating Robustness Disparity in Deep Learning. FAccT 2021: 466-477
- [c45] Mazda Moayeri, Soheil Feizi: Sample Efficient Detection and Classification of Adversarial Attacks via Self-Supervised Embeddings. ICCV 2021: 7657-7666
- [c44] Vasu Singla, Sahil Singla, Soheil Feizi, David Jacobs: Low Curvature Activations Reduce Overfitting in Adversarial Training. ICCV 2021: 16403-16413
- [c43] Alexander Levine, Soheil Feizi: Deep Partition Aggregation: Provable Defenses against General Poisoning Attacks. ICLR 2021
- [c42] Sahil Singla, Soheil Feizi: Fantastic Four: Differentiable and Efficient Bounds on Singular Values of Convolution Layers. ICLR 2021
- [c41] Yogesh Balaji, Mohammadmahdi Sajedi, Neha Mukund Kalibhat, Mucong Ding, Dominik Stöger, Mahdi Soltanolkotabi, Soheil Feizi: Understanding Over-parameterization in Generative Adversarial Networks. ICLR 2021
- [c40] Samyadeep Basu, Phillip Pope, Soheil Feizi: Influence Functions in Deep Learning Are Fragile. ICLR 2021
- [c39] Cassidy Laidlaw, Sahil Singla, Soheil Feizi: Perceptual Adversarial Robustness: Defense Against Unseen Threat Models. ICLR 2021
- [c38] Alexander Levine, Soheil Feizi: Improved, Deterministic Smoothing for L1 Certified Robustness. ICML 2021: 6254-6264
- [c37] Sahil Singla, Soheil Feizi: Skew Orthogonal Convolutions. ICML 2021: 9756-9766
- [c36] Aya Abdelsalam Ismail, Héctor Corrada Bravo, Soheil Feizi: Improving Deep Learning Interpretability by Saliency Guided Training. NeurIPS 2021: 26726-26739
- [c35] Gowthami Somepalli, Yexin Wu, Yogesh Balaji, Bhanukiran Vinzamuri, Soheil Feizi: Unsupervised anomaly detection with adversarial mirrored autoencoders. UAI 2021: 365-375
- [i59] Vasu Singla, Sahil Singla, David Jacobs, Soheil Feizi: Low Curvature Activations Reduce Overfitting in Adversarial Training. CoRR abs/2102.07861 (2021)
- [i58] Alexander Levine, Soheil Feizi: Improved, Deterministic Smoothing for L1 Certified Robustness. CoRR abs/2103.10834 (2021)
- [i57] Yogesh Balaji, Mohammadmahdi Sajedi, Neha Mukund Kalibhat, Mucong Ding, Dominik Stöger, Mahdi Soltanolkotabi, Soheil Feizi: Understanding Overparameterization in Generative Adversarial Networks. CoRR abs/2104.05605 (2021)
- [i56] Sahil Singla, Soheil Feizi: Skew Orthogonal Convolutions. CoRR abs/2105.11417 (2021)
- [i55] Aounon Kumar, Alexander Levine, Soheil Feizi: Policy Smoothing for Provably Robust Reinforcement Learning. CoRR abs/2106.11420 (2021)
- [i54] Sahil Singla, Surbhi Singla, Soheil Feizi: Householder Activations for Provable Robustness against Adversarial Attacks. CoRR abs/2108.04062 (2021)
- [i53] Mazda Moayeri, Soheil Feizi: Sample Efficient Detection and Classification of Adversarial Attacks via Self-Supervised Embeddings. CoRR abs/2108.13797 (2021)
- [i52] Priyatham Kattakinda, Soheil Feizi: FOCUS: Familiar Objects in Common and Uncommon Settings. CoRR abs/2110.03804 (2021)
- [i51] Sahil Singla, Soheil Feizi: Causal ImageNet: How to discover spurious features in Deep Learning? CoRR abs/2110.04301 (2021)
- [i50] Samyadeep Basu, Amr Sharaf, Nicolò Fusi, Soheil Feizi: On Hard Episodes in Meta-Learning. CoRR abs/2110.11190 (2021)
- [i49] Aya Abdelsalam Ismail, Héctor Corrada Bravo, Soheil Feizi: Improving Deep Learning Interpretability by Saliency Guided Training. CoRR abs/2111.14338 (2021)
- [i48] Jiang Liu, Alexander Levine, Chun Pong Lau, Rama Chellappa, Soheil Feizi: Segment and Complete: Defending Object Detectors against Adversarial Patch Attacks with Robust Patch Detection. CoRR abs/2112.04532 (2021)
- [i47] Jiang Liu, Chun Pong Lau, Hossein Souri, Soheil Feizi, Rama Chellappa: Mutual Adversarial Training: Learning together is better than going alone. CoRR abs/2112.05005 (2021)
- [i46] Chun Pong Lau, Jiang Liu, Hossein Souri, Wei-An Lin, Soheil Feizi, Rama Chellappa: Interpolated Joint Space Adversarial Training for Robust and Generalizable Defenses. CoRR abs/2112.06323 (2021)
- 2020
- [j10] Soheil Feizi, Farzan Farnia, Tony Ginart, David Tse: Understanding GANs in the LQG Setting: Formulation, Generalization and Stability. IEEE J. Sel. Areas Inf. Theory 1(1): 304-311 (2020)
- [j9] Soheil Feizi, Gerald T. Quon, Mariana Recamonde Mendoza, Muriel Médard, Manolis Kellis, Ali Jadbabaie: Spectral Alignment of Graphs. IEEE Trans. Netw. Sci. Eng. 7(3): 1182-1197 (2020)
- [c34] Micah Goldblum, Liam Fowl, Soheil Feizi, Tom Goldstein: Adversarially Robust Distillation. AAAI 2020: 3996-4003
- [c33] Alexander Levine, Soheil Feizi: Robustness Certificates for Sparse Adversarial Attacks by Randomized Ablation. AAAI 2020: 4585-4593
- [c32] Luke J. O'Connor, Muriel Médard, Soheil Feizi: Maximum Likelihood Embedding of Logistic Random Dot Product Graphs. AAAI 2020: 5289-5297
- [c31] Phillip Pope, Yogesh Balaji, Soheil Feizi: Adversarial Robustness of Flow-Based Generative Models. AISTATS 2020: 3795-3805
- [c30] Alexander Levine, Soheil Feizi: Wasserstein Smoothing: Certified Robustness against Wasserstein Adversarial Attacks. AISTATS 2020: 3938-3947
- [c29] Neehar Peri, Neal Gupta, W. Ronny Huang, Liam Fowl, Chen Zhu, Soheil Feizi, Tom Goldstein, John P. Dickerson: Deep k-NN Defense Against Clean-Label Data Poisoning Attacks. ECCV Workshops (1) 2020: 55-70
- [c28] Samyadeep Basu, Xuchen You, Soheil Feizi: On Second-Order Group Influence Functions for Black-Box Predictions. ICML 2020: 715-724
- [c27] Aounon Kumar, Alexander Levine, Tom Goldstein, Soheil Feizi: Curse of Dimensionality on Randomized Smoothing for Certifiable Robustness. ICML 2020: 5458-5467
- [c26] Sahil Singla, Soheil Feizi: Second-Order Provable Defenses against Adversarial Attacks. ICML 2020: 8981-8991
- [c25] Alexander Levine, Soheil Feizi: (De)Randomized Smoothing for Certifiable Defense against Patch Attacks. NeurIPS 2020
- [c24] Yogesh Balaji, Rama Chellappa, Soheil Feizi: Robust Optimal Transport with Applications in Generative Modeling and Domain Adaptation. NeurIPS 2020
- [c23] Aya Abdelsalam Ismail, Mohamed K. Gunady, Héctor Corrada Bravo, Soheil Feizi: Benchmarking Deep Learning Interpretability in Time Series Predictions. NeurIPS 2020
- [c22] Aounon Kumar, Alexander Levine, Soheil Feizi, Tom Goldstein: Certifying Confidence via Randomized Smoothing. NeurIPS 2020
- [c21] Wei-An Lin, Chun Pong Lau, Alexander Levine, Rama Chellappa, Soheil Feizi: Dual Manifold Adversarial Robustness: Defense against Lp and non-Lp Adversarial Attacks. NeurIPS 2020
- [i45] Aounon Kumar, Alexander Levine, Tom Goldstein, Soheil Feizi: Curse of Dimensionality on Randomized Smoothing for Certifiable Robustness. CoRR abs/2002.03239 (2020)
- [i44] Alexander Levine, Soheil Feizi: (De)Randomized Smoothing for Certifiable Defense against Patch Attacks. CoRR abs/2002.10733 (2020)
- [i43] Mucong Ding, Constantinos Daskalakis, Soheil Feizi: Subadditivity of Probability Divergences on Bayes-Nets with Applications to Time Series GANs. CoRR abs/2003.00652 (2020)
- [i42] Yexin Wu, Yogesh Balaji, Bhanukiran Vinzamuri, Soheil Feizi: Mirrored Autoencoders with Simplex Interpolation for Unsupervised Anomaly Detection. CoRR abs/2003.10713 (2020)
- [i41] Sahil Singla, Soheil Feizi: Second-Order Provable Defenses against Adversarial Attacks. CoRR abs/2006.00731 (2020)
- [i40] Vedant Nanda, Samuel Dooley, Sahil Singla, Soheil Feizi, John P. Dickerson: Fairness Through Robustness: Investigating Robustness Disparity in Deep Learning. CoRR abs/2006.12621 (2020)
- [i39] Cassidy Laidlaw, Sahil Singla, Soheil Feizi: Perceptual Adversarial Robustness: Defense Against Unseen Threat Models. CoRR abs/2006.12655 (2020)
- [i38] Samyadeep Basu, Phillip Pope, Soheil Feizi: Influence Functions in Deep Learning Are Fragile. CoRR abs/2006.14651 (2020)
- [i37] Alexander Levine, Soheil Feizi: Deep Partition Aggregation: Provable Defense against General Poisoning Attacks. CoRR abs/2006.14768 (2020)
- [i36] Wei-An Lin, Chun Pong Lau, Alexander Levine, Rama Chellappa, Soheil Feizi: Dual Manifold Adversarial Robustness: Defense against Lp and non-Lp Adversarial Attacks. CoRR abs/2009.02470 (2020)
- [i35] Aounon Kumar, Alexander Levine, Soheil Feizi, Tom Goldstein: Certifying Confidence via Randomized Smoothing. CoRR abs/2009.08061 (2020)
- [i34] Pirazh Khorramshahi, Hossein Souri, Rama Chellappa, Soheil Feizi: GANs with Variational Entropy Regularizers: Applications in Mitigating the Mode-Collapse Issue. CoRR abs/2009.11921 (2020)
- [i33] Neha Mukund Kalibhat, Yogesh Balaji, Soheil Feizi: Winning Lottery Tickets in Deep Generative Models. CoRR abs/2010.02350 (2020)
- [i32] Yogesh Balaji, Rama Chellappa, Soheil Feizi: Robust Optimal Transport with Applications in Generative Modeling and Domain Adaptation. CoRR abs/2010.05862 (2020)
- [i31] Alexander Levine, Aounon Kumar, Thomas A. Goldstein, Soheil Feizi: Tight Second-Order Certificates for Randomized Smoothing. CoRR abs/2010.10549 (2020)
- [i30] Aya Abdelsalam Ismail, Mohamed K. Gunady, Héctor Corrada Bravo, Soheil Feizi: Benchmarking Deep Learning Interpretability in Time Series Predictions. CoRR abs/2010.13924 (2020)
2010 – 2019
- 2019
- [j8] Soheil Feizi, Muriel Médard, Gerald T. Quon, Manolis Kellis, Ken Duffy: Network Infusion to Infer Information Sources in Networks. IEEE Trans. Netw. Sci. Eng. 6(3): 402-417 (2019)
- [c20] Yogesh Balaji, Rama Chellappa, Soheil Feizi: Normalized Wasserstein for Mixture Distributions With Applications in Adversarial Learning and Domain Adaptation. ICCV 2019: 6499-6507
- [c19] Ali Shafahi, W. Ronny Huang, Christoph Studer, Soheil Feizi, Tom Goldstein: Are adversarial examples inevitable? ICLR (Poster) 2019
- [c18] Yogesh Balaji, Hamed Hassani, Rama Chellappa, Soheil Feizi: Entropic GANs meet VAEs: A Statistical Approach to Compute Sample Likelihoods in GANs. ICML 2019: 414-423
- [c17] Sahil Singla, Eric Wallace, Shi Feng, Soheil Feizi: Understanding Impacts of High-Order Loss Approximations and Features in Deep Learning Interpretation. ICML 2019: 5848-5856
- [c16] Shouvanik Chakrabarti, Yiming Huang, Tongyang Li, Soheil Feizi, Xiaodi Wu: Quantum Wasserstein Generative Adversarial Networks. NeurIPS 2019: 6778-6789
- [c15] Cassidy Laidlaw, Soheil Feizi: Functional Adversarial Attacks. NeurIPS 2019: 10408-10418
- [c14] Aya Abdelsalam Ismail, Mohamed K. Gunady, Luiz Pessoa, Héctor Corrada Bravo, Soheil Feizi: Input-Cell Attention Reduces Vanishing Saliency of Recurrent Neural Networks. NeurIPS 2019: 10813-10823
- [i29] Angeline Aguinaldo, Ping-Yeh Chiang, Alexander Gain, Ameya Patil, Kolten Pearson, Soheil Feizi: Compressing GANs using Knowledge Distillation. CoRR abs/1902.00159 (2019)
- [i28] Sahil Singla, Eric Wallace, Shi Feng, Soheil Feizi: Understanding Impacts of High-Order Loss Approximations and Features in Deep Learning Interpretation. CoRR abs/1902.00407 (2019)
- [i27] Yogesh Balaji, Rama Chellappa, Soheil Feizi: Normalized Wasserstein Distance for Mixture Distributions with Applications in Adversarial Learning and Domain Adaptation. CoRR abs/1902.00415 (2019)
- [i26] Sahil Singla, Soheil Feizi: Robustness Certificates Against Adversarial Examples for ReLU Networks. CoRR abs/1902.01235 (2019)
- [i25] Micah Goldblum, Liam Fowl, Soheil Feizi, Tom Goldstein: Adversarially Robust Distillation. CoRR abs/1905.09747 (2019)
- [i24] Alexander Levine, Sahil Singla, Soheil Feizi: Certifiably Robust Interpretation in Deep Learning. CoRR abs/1905.12105 (2019)
- [i23] Samuel Barham, Soheil Feizi: Interpretable Adversarial Training for Text. CoRR abs/1905.12864 (2019)