


Satyapriya Krishna
2020 – today
- 2025
- [i26]Shaona Ghosh, Heather Frase, Adina Williams, Sarah Luger, Paul Röttger, Fazl Barez, Sean McGregor, Kenneth Fricklas, Mala Kumar, Quentin Feuillade--Montixi, Kurt Bollacker, Felix Friedrich, Ryan Tsang, Bertie Vidgen, Alicia Parrish, Chris Knotz, Eleonora Presani, Jonathan Bennion, Marisa Ferrara Boston, Mike Kuniavsky, Wiebke Hutiri, James Ezick, Malek Ben Salem, Rajat Sahay, Sujata S. Goswami, Usman Gohar, Ben Huang, Supheakmungkol Sarin, Elie Alhajjar, Canyu Chen, Roman Eng, Kashyap Ramanandula Manjusha, Virendra Mehta, Eileen Long, Murali Emani, Natan Vidra, Benjamin Rukundo, Abolfazl Shahbazi, Kongtao Chen, Rajat Ghosh, Vithursan Thangarasa, Pierre Peigné, Abhinav Singh, Max Bartolo, Satyapriya Krishna, Mubashara Akhtar, Rafael Gold, Cody Coleman, Luis Oala, Vassil Tashev, Joseph Marvin Imperial, Amy Russ, Sasidhar Kunapuli, Nicolas Miailhe, Julien Delaunay, Bhaktipriya Radharapu, Rajat Shinde, Tuesday, Debojyoti Dutta, Declan Grabb, Ananya Gangavarapu, Saurav Sahay, Agasthya Gangavarapu, Patrick Schramowski, Stephen Singam, Tom David, Xudong Han, Priyanka Mary Mammen, Tarunima Prabhakar, Venelin Kovatchev, Ahmed Ahmed, Kelvin N. Manyeki, Sandeep Madireddy, Foutse Khomh, Fedor Zhdanov, Joachim Baumann, Nina Vasan, Xianjun Yang, Carlos Mougn, Jibin Rajan Varghese, Hussain Chinoy, Seshakrishna Jitendar, Manil Maskey, Claire V. Hardgrove, Tianhao Li, Aakash Gupta, Emil Joswin, Yifan Mai, Shachi H. Kumar, Cigdem Patlak, Kevin Lu, Vincent Alessi, Sree Bhargavi Balija, Chenhe Gu, Robert Sullivan, James Gealy, Matt Lavrisa, James Goel, Peter Mattson, Percy Liang, Joaquin Vanschoren:
AILuminate: Introducing v1.0 of the AI Risk and Reliability Benchmark from MLCommons. CoRR abs/2503.05731 (2025)
- 2024
- [j2]Satyapriya Krishna, Tessa Han, Alex Gu, Steven Wu, Shahin Jabbari, Himabindu Lakkaraju:
The Disagreement Problem in Explainable Machine Learning: A Practitioner's Perspective. Trans. Mach. Learn. Res. 2024 (2024)
- [c13]Satyapriya Krishna, Chirag Agarwal, Himabindu Lakkaraju:
On the Trade-offs between Adversarial Robustness and Actionable Explanations. AIES (1) 2024: 784-795
- [c12]Stephen Casper, Carson Ezell, Charlotte Siegmann, Noam Kolt, Taylor Lynn Curtis, Benjamin Bucknall, Andreas A. Haupt, Kevin Wei, Jérémy Scheurer, Marius Hobbhahn, Lee Sharkey, Satyapriya Krishna, Marvin Von Hagen, Silas Alberti, Alan Chan, Qinyi Sun, Michael Gerovitch, David Bau, Max Tegmark, David Krueger, Dylan Hadfield-Menell:
Black-Box Access is Insufficient for Rigorous AI Audits. FAccT 2024: 2254-2272
- [c11]Satyapriya Krishna, Chirag Agarwal, Himabindu Lakkaraju:
Understanding the Effects of Iterative Prompting on Truthfulness. ICML 2024
- [c10]Mubashara Akhtar, Omar Benjelloun, Costanza Conforti, Luca Foschini, Joan Giner-Miguelez, Pieter Gijsbers, Sujata S. Goswami, Nitisha Jain, Michalis Karamousadakis, Michael Kuchnik, Satyapriya Krishna, Sylvain Lesage, Quentin Lhoest, Pierre Marcenac, Manil Maskey, Peter Mattson, Luis Oala, Hamidah Oderinwale, Pierre Ruyssen, Tim Santos, Rajat Shinde, Elena Simperl, Arjun Suresh, Goeffry Thomas, Slava Tykhonov, Joaquin Vanschoren, Susheel Varma, Jos van der Velde, Steffen Vogler, Carole-Jean Wu, Luyao Zhang:
Croissant: A Metadata Format for ML-Ready Datasets. NeurIPS 2024
- [i25]Stephen Casper, Carson Ezell, Charlotte Siegmann, Noam Kolt, Taylor Lynn Curtis, Benjamin Bucknall, Andreas Alexander Haupt, Kevin Wei, Jérémy Scheurer, Marius Hobbhahn, Lee Sharkey, Satyapriya Krishna, Marvin Von Hagen, Silas Alberti, Alan Chan, Qinyi Sun, Michael Gerovitch, David Bau, Max Tegmark, David Krueger, Dylan Hadfield-Menell:
Black-Box Access is Insufficient for Rigorous AI Audits. CoRR abs/2401.14446 (2024)
- [i24]Satyapriya Krishna, Chirag Agarwal, Himabindu Lakkaraju:
Understanding the Effects of Iterative Prompting on Truthfulness. CoRR abs/2402.06625 (2024)
- [i23]Bo Peng, Daniel Goldstein, Quentin Anthony, Alon Albalak, Eric Alcaide, Stella Biderman, Eugene Cheah, Xingjian Du, Teddy Ferdinan, Haowen Hou, Przemyslaw Kazienko, Kranthi Kiran GV, Jan Kocon, Bartlomiej Koptyra, Satyapriya Krishna, Ronald McClelland Jr., Niklas Muennighoff, Fares Obeid, Atsushi Saito, Guangyu Song, Haoqin Tu, Stanislaw Wozniak, Ruichong Zhang, Bingchen Zhao, Qihang Zhao, Peng Zhou, Jian Zhu, Rui-Jie Zhu:
Eagle and Finch: RWKV with Matrix-Valued States and Dynamic Recurrence. CoRR abs/2404.05892 (2024)
- [i22]Aaron J. Li, Satyapriya Krishna, Himabindu Lakkaraju:
More RLHF, More Trust? On The Impact of Human Preference Alignment On Language Model Trustworthiness. CoRR abs/2404.18870 (2024)
- [i21]Apurv Verma, Satyapriya Krishna, Sebastian Gehrmann, Madhavan Seshadri, Anu Pradhan, Tom Ault, Leslie Barrett, David Rabinowitz, John A. Doucette, NhatHai Phan:
Operationalizing a Threat Model for Red-Teaming Large Language Models (LLMs). CoRR abs/2407.14937 (2024)
- [i20]Satyapriya Krishna, Kalpesh Krishna, Anhad Mohananey, Steven Schwarcz, Adam Stambler, Shyam Upadhyay, Manaal Faruqui:
Fact, Fetch, and Reason: A Unified Evaluation of Retrieval-Augmented Generation. CoRR abs/2409.12941 (2024)
- [i19]Jared Joselowitz, Arjun Jagota, Satyapriya Krishna, Sonali Parbhoo:
Insights from the Inverse: Reconstructing LLM Training Goals Through Inverse RL. CoRR abs/2410.12491 (2024)
- 2023
- [j1]Dylan Slack, Satyapriya Krishna, Himabindu Lakkaraju, Sameer Singh:
Explaining machine learning models with interactive natural language conversations using TalkToModel. Nat. Mac. Intell. 5(8): 873-883 (2023)
- [c9]Satyapriya Krishna, Jiaqi Ma, Himabindu Lakkaraju:
Towards Bridging the Gaps between the Right to Explanation and the Right to be Forgotten. ICML 2023: 17808-17826
- [c8]Satyapriya Krishna, Jiaqi Ma, Dylan Slack, Asma Ghandeharioun, Sameer Singh, Himabindu Lakkaraju:
Post Hoc Explanations of Language Models Can Improve Language Models. NeurIPS 2023
- [i18]Satyapriya Krishna, Jiaqi Ma, Himabindu Lakkaraju:
Towards Bridging the Gaps between the Right to Explanation and the Right to be Forgotten. CoRR abs/2302.04288 (2023)
- [i17]Satyapriya Krishna, Jiaqi Ma, Dylan Slack, Asma Ghandeharioun, Sameer Singh, Himabindu Lakkaraju:
Post Hoc Explanations of Language Models Can Improve Language Models. CoRR abs/2305.11426 (2023)
- [i16]Satyapriya Krishna, Chirag Agarwal, Himabindu Lakkaraju:
On the Trade-offs between Adversarial Robustness and Actionable Explanations. CoRR abs/2309.16452 (2023)
- [i15]Nicholas Kroeger, Dan Ley, Satyapriya Krishna, Chirag Agarwal, Himabindu Lakkaraju:
Are Large Language Models Post Hoc Explainers? CoRR abs/2310.05797 (2023)
- [i14]Satyapriya Krishna:
On the Intersection of Self-Correction and Trust in Language Models. CoRR abs/2311.02801 (2023)
- 2022
- [c7]Umang Gupta, Jwala Dhamala, Varun Kumar, Apurv Verma, Yada Pruksachatkun, Satyapriya Krishna, Rahul Gupta, Kai-Wei Chang, Greg Ver Steeg, Aram Galstyan:
Mitigating Gender Bias in Distilled Language Models via Counterfactual Role Reversal. ACL (Findings) 2022: 658-678
- [c6]Satyapriya Krishna, Rahul Gupta, Apurv Verma, Jwala Dhamala, Yada Pruksachatkun, Kai-Wei Chang:
Measuring Fairness of Text Classifiers via Prediction Sensitivity. ACL (1) 2022: 5830-5842
- [c5]Chirag Agarwal, Satyapriya Krishna, Eshika Saxena, Martin Pawelczyk, Nari Johnson, Isha Puri, Marinka Zitnik, Himabindu Lakkaraju:
OpenXAI: Towards a Transparent Evaluation of Model Explanations. NeurIPS 2022
- [i13]Satyapriya Krishna, Tessa Han, Alex Gu, Javin Pombra, Shahin Jabbari, Steven Wu, Himabindu Lakkaraju:
The Disagreement Problem in Explainable Machine Learning: A Practitioner's Perspective. CoRR abs/2202.01602 (2022)
- [i12]Chirag Agarwal, Nari Johnson, Martin Pawelczyk, Satyapriya Krishna, Eshika Saxena, Marinka Zitnik, Himabindu Lakkaraju:
Rethinking Stability for Attribution-based Explanations. CoRR abs/2203.06877 (2022)
- [i11]Satyapriya Krishna, Rahul Gupta, Apurv Verma, Jwala Dhamala, Yada Pruksachatkun, Kai-Wei Chang:
Measuring Fairness of Text Classifiers via Prediction Sensitivity. CoRR abs/2203.08670 (2022)
- [i10]Umang Gupta, Jwala Dhamala, Varun Kumar, Apurv Verma, Yada Pruksachatkun, Satyapriya Krishna, Rahul Gupta, Kai-Wei Chang, Greg Ver Steeg, Aram Galstyan:
Mitigating Gender Bias in Distilled Language Models via Counterfactual Role Reversal. CoRR abs/2203.12574 (2022)
- [i9]Chirag Agarwal, Eshika Saxena, Satyapriya Krishna, Martin Pawelczyk, Nari Johnson, Isha Puri, Marinka Zitnik, Himabindu Lakkaraju:
OpenXAI: Towards a Transparent Evaluation of Model Explanations. CoRR abs/2206.11104 (2022)
- [i8]Dylan Slack, Satyapriya Krishna, Himabindu Lakkaraju, Sameer Singh:
TalkToModel: Understanding Machine Learning Models With Open Ended Dialogues. CoRR abs/2207.04154 (2022)
- 2021
- [c4]Yada Pruksachatkun, Satyapriya Krishna, Jwala Dhamala, Rahul Gupta, Kai-Wei Chang:
Does Robustness Improve Fairness? Approaching Fairness with Word Substitution Robustness Methods for Text Classification. ACL/IJCNLP (Findings) 2021: 3320-3331
- [c3]Satyapriya Krishna, Rahul Gupta, Christophe Dupuy:
ADePT: Auto-encoder based Differentially Private Text Transformation. EACL 2021: 2435-2439
- [c2]Justin Payan, Yuval Merhav, He Xie, Satyapriya Krishna, Anil Ramakrishna, Mukund Sridhar, Rahul Gupta:
Towards Realistic Single-Task Continuous Learning Research for NER. EMNLP (Findings) 2021: 3773-3783
- [c1]Jwala Dhamala, Tony Sun, Varun Kumar, Satyapriya Krishna, Yada Pruksachatkun, Kai-Wei Chang, Rahul Gupta:
BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language Generation. FAccT 2021: 862-872
- [i7]Jwala Dhamala, Tony Sun, Varun Kumar, Satyapriya Krishna, Yada Pruksachatkun, Kai-Wei Chang, Rahul Gupta:
BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language Generation. CoRR abs/2101.11718 (2021)
- [i6]Satyapriya Krishna, Rahul Gupta, Christophe Dupuy:
ADePT: Auto-encoder based Differentially Private Text Transformation. CoRR abs/2102.01502 (2021)
- [i5]Michiel de Jong, Satyapriya Krishna, Anuva Agarwal:
Grounding Complex Navigational Instructions Using Scene Graphs. CoRR abs/2106.01607 (2021)
- [i4]Yada Pruksachatkun, Satyapriya Krishna, Jwala Dhamala, Rahul Gupta, Kai-Wei Chang:
Does Robustness Improve Fairness? Approaching Fairness with Word Substitution Robustness Methods for Text Classification. CoRR abs/2106.10826 (2021)
- [i3]Justin Payan, Yuval Merhav, He Xie, Satyapriya Krishna, Anil Ramakrishna, Mukund Sridhar, Rahul Gupta:
Towards Realistic Single-Task Continuous Learning Research for NER. CoRR abs/2110.14694 (2021)
- 2020
- [i2]Aarsh Patel, Rahul Gupta, Mukund Harakere, Satyapriya Krishna, Aman Alok, Peng Liu:
Towards classification parity across cohorts. CoRR abs/2005.08033 (2020)
2010 – 2019
- 2019
- [i1]Yunzhe Tao, Saurabh Gupta, Satyapriya Krishna, Xiong Zhou, Orchid Majumder, Vineet Khare:
FineText: Text Classification via Attention-based Language Model Fine-tuning. CoRR abs/1910.11959 (2019)