


Alexandra Chouldechova
2020 – today
2025
- [i39] Hanna M. Wallach, Meera A. Desai, A. Feder Cooper, Angelina Wang, Chad Atalla, Solon Barocas, Su Lin Blodgett, Alexandra Chouldechova, Emily Corvi, P. Alex Dow, Jean Garcia-Gathright, Alexandra Olteanu, Nicholas Pangakis, Stefanie Reed, Emily Sheng, Dan Vann, Jennifer Wortman Vaughan, Matthew Vogel, Hannah Washington, Abigail Z. Jacobs: Position: Evaluating Generative AI Systems is a Social Science Measurement Challenge. CoRR abs/2502.00561 (2025)
- [i38] Luke Guerdan, Solon Barocas, Kenneth Holstein, Hanna M. Wallach, Zhiwei Steven Wu, Alexandra Chouldechova: Validating LLM-as-a-Judge Systems in the Absence of Gold Labels. CoRR abs/2503.05965 (2025)
2024
- [c28] Lingwei Cheng, Cameron Drayton, Alexandra Chouldechova, Rhema Vaithianathan: Algorithm-Assisted Decision Making and Racial Disparities in Housing: A Study of the Allegheny Housing Assessment Tool. AIES (1) 2024: 281-292
- [c27] Anna Kawakami, Daricia Wilkinson, Alexandra Chouldechova: Do Responsible AI Artifacts Advance Stakeholder Goals? Four Key Barriers Perceived by Legal and Civil Stakeholders. AIES (1) 2024: 670-682
- [c26] Christine Herlihy, Kimberly Truong, Alexandra Chouldechova, Miroslav Dudík: A structured regression approach for evaluating model performance across intersectional subgroups. FAccT 2024: 313-325
- [c25] Nil-Jana Akpinar, Zachary C. Lipton, Alexandra Chouldechova: The Impact of Differential Feature Under-reporting on Algorithmic Fairness. FAccT 2024: 1355-1382
- [c24] Misha Khodak, Lester Mackey, Alexandra Chouldechova, Miro Dudík: SureMap: Simultaneous mean estimation for single-task and multi-task disaggregated evaluation. NeurIPS 2024
- [i37] Nil-Jana Akpinar, Zachary C. Lipton, Alexandra Chouldechova: The Impact of Differential Feature Under-reporting on Algorithmic Fairness. CoRR abs/2401.08788 (2024)
- [i36] Christine Herlihy, Kimberly Truong, Alexandra Chouldechova, Miroslav Dudík: A structured regression approach for evaluating model performance across intersectional subgroups. CoRR abs/2401.14893 (2024)
- [i35] Lingwei Cheng, Cameron Drayton, Alexandra Chouldechova, Rhema Vaithianathan: Algorithm-Assisted Decision Making and Racial Disparities in Housing: A Study of the Allegheny Housing Assessment Tool. CoRR abs/2407.21209 (2024)
- [i34] Anna Kawakami, Daricia Wilkinson, Alexandra Chouldechova: Do Responsible AI Artifacts Advance Stakeholder Goals? Four Key Barriers Perceived by Legal and Civil Stakeholders. CoRR abs/2408.12047 (2024)
- [i33] Mikhail Khodak, Lester Mackey, Alexandra Chouldechova, Miroslav Dudík: SureMap: Simultaneous Mean Estimation for Single-Task and Multi-Task Disaggregated Evaluation. CoRR abs/2411.09730 (2024)
- [i32] Hanna M. Wallach, Meera A. Desai, Nicholas Pangakis, A. Feder Cooper, Angelina Wang, Solon Barocas, Alexandra Chouldechova, Chad Atalla, Su Lin Blodgett, Emily Corvi, P. Alex Dow, Jean Garcia-Gathright, Alexandra Olteanu, Stefanie Reed, Emily Sheng, Dan Vann, Jennifer Wortman Vaughan, Matthew Vogel, Hannah Washington, Abigail Z. Jacobs: Evaluating Generative AI Systems is a Social Science Measurement Challenge. CoRR abs/2411.10939 (2024)
- [i31] P. Alex Dow, Jennifer Wortman Vaughan, Solon Barocas, Chad Atalla, Alexandra Chouldechova, Hanna M. Wallach: Dimensions of Generative AI Evaluation Design. CoRR abs/2411.12709 (2024)
- [i30] Luke Guerdan, Hanna M. Wallach, Solon Barocas, Alexandra Chouldechova: A Framework for Evaluating LLMs Under Task Indeterminacy. CoRR abs/2411.13760 (2024)
- [i29] Emma Harvey, Emily Sheng, Su Lin Blodgett, Alexandra Chouldechova, Jean Garcia-Gathright, Alexandra Olteanu, Hanna M. Wallach: Gaps Between Research and Practice When Measuring Representational Harms Caused by LLM-Based Systems. CoRR abs/2411.15662 (2024)
- [i28] Alexandra Chouldechova, Chad Atalla, Solon Barocas, A. Feder Cooper, Emily Corvi, P. Alex Dow, Jean Garcia-Gathright, Nicholas Pangakis, Stefanie Reed, Emily Sheng, Dan Vann, Matthew Vogel, Hannah Washington, Hanna M. Wallach: A Shared Standard for Valid Measurement of Generative AI Systems' Capabilities, Risks, and Impacts. CoRR abs/2412.01934 (2024)
- [i27] A. Feder Cooper, Christopher A. Choquette-Choo, Miranda Bogen, Matthew Jagielski, Katja Filippova, Ken Ziyu Liu, Alexandra Chouldechova, Jamie Hayes, Yangsibo Huang, Niloofar Mireshghallah, Ilia Shumailov, Eleni Triantafillou, Peter Kairouz, Nicole Mitchell, Percy Liang, Daniel E. Ho, Yejin Choi, Sanmi Koyejo, Fernando A. Delgado, James Grimmelmann, Vitaly Shmatikov, Christopher De Sa, Solon Barocas, Amy Cyphert, Mark Lemley, danah boyd, Jennifer Wortman Vaughan, Miles Brundage, David Bau, Seth Neel, Abigail Z. Jacobs, Andreas Terzis, Hanna M. Wallach, Nicolas Papernot, Katherine Lee: Machine Unlearning Doesn't Do What You Think: Lessons for Generative AI Policy, Research, and Practice. CoRR abs/2412.06966 (2024)
2023
- [c23] Lingwei Cheng, Alexandra Chouldechova: Overcoming Algorithm Aversion: A Comparison between Process and Outcome Control. CHI 2023: 756:1-756:27
- [c22] Jamelle Watson-Daniels, Solon Barocas, Jake M. Hofman, Alexandra Chouldechova: Multi-Target Multiplicity: Flexibility and Fairness in Target Specification under Resource Constraints. FAccT 2023: 297-311
- [c21] Anjalie Field, Amanda Coston, Nupoor Gandhi, Alexandra Chouldechova, Emily Putnam-Hornstein, David Steier, Yulia Tsvetkov: Examining risks of racial biases in NLP tools for child protective services. FAccT 2023: 1479-1492
- [i26] Lingwei Cheng, Alexandra Chouldechova: Overcoming Algorithm Aversion: A Comparison between Process and Outcome Control. CoRR abs/2303.12896 (2023)
- [i25] Anjalie Field, Amanda Coston, Nupoor Gandhi, Alexandra Chouldechova, Emily Putnam-Hornstein, David Steier, Yulia Tsvetkov: Examining risks of racial biases in NLP tools for child protective services. CoRR abs/2305.19409 (2023)
- [i24] Jamelle Watson-Daniels, Solon Barocas, Jake M. Hofman, Alexandra Chouldechova: Multi-Target Multiplicity: Flexibility and Fairness in Target Specification under Resource Constraints. CoRR abs/2306.13738 (2023)
2022
- [j5] Lingwei Cheng, Alexandra Chouldechova: Heterogeneity in Algorithm-Assisted Decision-Making: A Case Study in Child Abuse Hotline Screening. Proc. ACM Hum. Comput. Interact. 6(CSCW2): 1-33 (2022)
- [c20] Alexandra Chouldechova, Siqi Deng, Yongxin Wang, Wei Xia, Pietro Perona: Unsupervised and Semi-supervised Bias Benchmarking in Face Recognition. ECCV (13) 2022: 289-306
- [c19] Logan Stapleton, Min Hun Lee, Diana Qing, Marya Wright, Alexandra Chouldechova, Ken Holstein, Zhiwei Steven Wu, Haiyi Zhu: Imagining new futures beyond predictive systems in child welfare: A qualitative study with impacted stakeholders. FAccT 2022: 1162-1177
- [c18] Emily Black, Hadi Elzayn, Alexandra Chouldechova, Jacob Goldin, Daniel E. Ho: Algorithmic Fairness and Vertical Equity: Income Fairness with IRS Tax Audit Models. FAccT 2022: 1479-1503
- [c17] Kate Donahue, Alexandra Chouldechova, Krishnaram Kenthapadi: Human-Algorithm Collaboration: Achieving Complementarity and Avoiding Unfairness. FAccT 2022: 1639-1656
- [i23] Kate Donahue, Alexandra Chouldechova, Krishnaram Kenthapadi: Human-Algorithm Collaboration: Achieving Complementarity and Avoiding Unfairness. CoRR abs/2202.08821 (2022)
- [i22] Lingwei Cheng, Alexandra Chouldechova: Heterogeneity in Algorithm-Assisted Decision-Making: A Case Study in Child Abuse Hotline Screening. CoRR abs/2204.05478 (2022)
- [i21] Maria De-Arteaga, Alexandra Chouldechova, Artur Dubrawski: Doubting AI Predictions: Influence-Driven Second Opinion Recommendation. CoRR abs/2205.00072 (2022)
- [i20] Logan Stapleton, Min Hun Lee, Diana Qing, Marya Wright, Alexandra Chouldechova, Kenneth Holstein, Zhiwei Steven Wu, Haiyi Zhu: Imagining new futures beyond predictive systems in child welfare: A qualitative study with impacted stakeholders. CoRR abs/2205.08928 (2022)
- [i19] Emily Black, Hadi Elzayn, Alexandra Chouldechova, Jacob Goldin, Daniel E. Ho: Algorithmic Fairness and Vertical Equity: Income Fairness with IRS Tax Audit Models. CoRR abs/2206.09875 (2022)
2021
- [j4] Rhema Vaithianathan, Diana Benavides Prado, Eric E. Dalton, Alexandra Chouldechova, Emily Putnam-Hornstein: Using a Machine Learning Tool to Support High-Stakes Decisions in Child Protection. AI Mag. 42(1): 53-60 (2021)
- [j3] Riccardo Fogliato, Alexandra Chouldechova, Zachary C. Lipton: The Impact of Algorithmic Risk Assessments on Human Predictions and its Analysis via Crowdsourcing Studies. Proc. ACM Hum. Comput. Interact. 5(CSCW2): 428:1-428:24 (2021)
- [c16] Riccardo Fogliato, Alice Xiang, Zachary C. Lipton, Daniel Nagin, Alexandra Chouldechova: On the Validity of Arrest as a Proxy for Offense: Race and the Likelihood of Arrest for Violent Crimes. AIES 2021: 100-111
- [c15] Hao Fei Cheng, Logan Stapleton, Ruiqi Wang, Paige Bullock, Alexandra Chouldechova, Zhiwei Steven Wu, Haiyi Zhu: Soliciting Stakeholders' Fairness Notions in Child Maltreatment Predictive Systems. CHI 2021: 390:1-390:17
- [c14] Amanda Coston, Neel Guha, Derek Ouyang, Lisa Lu, Alexandra Chouldechova, Daniel E. Ho: Leveraging Administrative Data for Bias Audits: Assessing Disparate Coverage with Mobility Data for COVID-19 Policy. FAccT 2021: 173-184
- [c13] Alan Mishler, Edward H. Kennedy, Alexandra Chouldechova: Fairness in Risk Assessment Instruments: Post-Processing to Achieve Counterfactual Equalized Odds. FAccT 2021: 386-400
- [c12] Nil-Jana Akpinar, Maria De-Arteaga, Alexandra Chouldechova: The effect of differential victim crime reporting on predictive policing systems. FAccT 2021: 838-849
- [c11] Amanda Coston, Ashesh Rambachan, Alexandra Chouldechova: Characterizing Fairness Over the Set of Good Models Under Selective Labels. ICML 2021: 2144-2155
- [i18] Amanda Coston, Ashesh Rambachan, Alexandra Chouldechova: Characterizing Fairness Over the Set of Good Models Under Selective Labels. CoRR abs/2101.00352 (2021)
- [i17] Maria De-Arteaga, Artur Dubrawski, Alexandra Chouldechova: Leveraging Expert Consistency to Improve Algorithmic Decision Support. CoRR abs/2101.09648 (2021)
- [i16] Nil-Jana Akpinar, Maria De-Arteaga, Alexandra Chouldechova: The effect of differential victim crime reporting on predictive policing systems. CoRR abs/2102.00128 (2021)
- [i15] Hao Fei Cheng, Logan Stapleton, Ruiqi Wang, Paige Bullock, Alexandra Chouldechova, Zhiwei Steven Wu, Haiyi Zhu: Soliciting Stakeholders' Fairness Notions in Child Maltreatment Predictive Systems. CoRR abs/2102.01196 (2021)
- [i14] Riccardo Fogliato, Alexandra Chouldechova, Zachary C. Lipton: The Impact of Algorithmic Risk Assessments on Human Predictions and its Analysis via Crowdsourcing Studies. CoRR abs/2109.01443 (2021)
2020
- [j2] Alexandra Chouldechova, Aaron Roth: A snapshot of the frontiers of fairness in machine learning. Commun. ACM 63(5): 82-89 (2020)
- [c10] Riccardo Fogliato, Alexandra Chouldechova, Max G'Sell: Fairness Evaluation in Presence of Biased Noisy Labels. AISTATS 2020: 2325-2336
- [c9] Maria De-Arteaga, Riccardo Fogliato, Alexandra Chouldechova: A Case for Humans-in-the-Loop: Decisions in the Presence of Erroneous Algorithmic Scores. CHI 2020: 1-12
- [c8] Amanda Coston, Alan Mishler, Edward H. Kennedy, Alexandra Chouldechova: Counterfactual risk assessments, evaluation, and fairness. FAT* 2020: 582-593
- [c7] Amanda Coston, Edward H. Kennedy, Alexandra Chouldechova: Counterfactual Predictions under Runtime Confounding. NeurIPS 2020
- [i13] Maria De-Arteaga, Riccardo Fogliato, Alexandra Chouldechova: A Case for Humans-in-the-Loop: Decisions in the Presence of Erroneous Algorithmic Scores. CoRR abs/2002.08035 (2020)
- [i12] Riccardo Fogliato, Max G'Sell, Alexandra Chouldechova: Fairness Evaluation in Presence of Biased Noisy Labels. CoRR abs/2003.13808 (2020)
- [i11] Amanda Coston, Edward H. Kennedy, Alexandra Chouldechova: Counterfactual Predictions under Runtime Confounding. CoRR abs/2006.16916 (2020)
- [i10] Amanda Coston, Neel Guha, Derek Ouyang, Lisa Lu, Alexandra Chouldechova, Daniel E. Ho: Leveraging Administrative Data for Bias Audits: Assessing Disparate Coverage with Mobility Data for COVID-19 Policy. CoRR abs/2011.07194 (2020)
2010 – 2019
2019
- [c6] Anna Brown, Alexandra Chouldechova, Emily Putnam-Hornstein, Andrew Tobin, Rhema Vaithianathan: Toward Algorithmic Accountability in Public Services: A Qualitative Study of Affected Community Perspectives on Algorithmic Decision-making in Child Welfare Services. CHI 2019: 41
- [c5] Maria De-Arteaga, Alexey Romanov, Hanna M. Wallach, Jennifer T. Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Cem Geyik, Krishnaram Kenthapadi, Adam Tauman Kalai: Bias in Bios: A Case Study of Semantic Representation Bias in a High-Stakes Setting. FAT 2019: 120-128
- [c4] Alexey Romanov, Maria De-Arteaga, Hanna M. Wallach, Jennifer T. Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Cem Geyik, Krishnaram Kenthapadi, Anna Rumshisky, Adam Kalai: What's in a Name? Reducing Bias in Bios without Access to Protected Attributes. NAACL-HLT (1) 2019: 4187-4195
- [i9] Maria De-Arteaga, Alexey Romanov, Hanna M. Wallach, Jennifer T. Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Cem Geyik, Krishnaram Kenthapadi, Adam Tauman Kalai: Bias in Bios: A Case Study of Semantic Representation Bias in a High-Stakes Setting. CoRR abs/1901.09451 (2019)
- [i8] Alexey Romanov, Maria De-Arteaga, Hanna M. Wallach, Jennifer T. Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Cem Geyik, Krishnaram Kenthapadi, Anna Rumshisky, Adam Tauman Kalai: What's in a Name? Reducing Bias in Bios without Access to Protected Attributes. CoRR abs/1904.05233 (2019)
- [i7] Amanda Coston, Alexandra Chouldechova, Edward H. Kennedy: Counterfactual Risk Assessments, Evaluation, and Fairness. CoRR abs/1909.00066 (2019)
2018
- [c3] Alexandra Chouldechova, Diana Benavides Prado, Oleksandr Fialko, Rhema Vaithianathan: A case study of algorithm-assisted decision making in child maltreatment hotline screening decisions. FAT 2018: 134-148
- [c2] Zachary C. Lipton, Julian J. McAuley, Alexandra Chouldechova: Does mitigating ML's impact disparity require treatment disparity? NeurIPS 2018: 8136-8146
- [i6] Maria De-Arteaga, Artur Dubrawski, Alexandra Chouldechova: Learning under selective labels in the presence of expert consistency. CoRR abs/1807.00905 (2018)
- [i5] Alexandra Chouldechova, Aaron Roth: The Frontiers of Fairness in Machine Learning. CoRR abs/1810.08810 (2018)
2017
- [j1] Alexandra Chouldechova: Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments. Big Data 5(2): 153-163 (2017)
- [i4] Alexandra Chouldechova: Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. CoRR abs/1703.00056 (2017)
- [i3] Alexandra Chouldechova, Max G'Sell: Fairer and more accurate, but for whom? CoRR abs/1707.00046 (2017)
- [i2] Zachary C. Lipton, Alexandra Chouldechova, Julian J. McAuley: Does mitigating ML's disparate impact require disparate treatment? CoRR abs/1711.07076 (2017)
2016
- [i1] Alexandra Chouldechova: Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. CoRR abs/1610.07524 (2016)
2013
- [c1] Alexandra Chouldechova, David Mease: Differences in search engine evaluations between query owners and non-owners. WSDM 2013: 103-112