


Angelina Wang
2020 – today
- 2025
[j4]Angelina Wang, Jamie Morgenstern, John P. Dickerson:
Large language models that replace human participants can harmfully misportray and flatten identity groups. Nat. Mac. Intell. 7(3): 400-411 (2025)
[c15]Angelina Wang, Michelle Phan, Daniel E. Ho, Sanmi Koyejo:
Fairness through Difference Awareness: Measuring Desired Group Discrimination in LLMs. ACL (1) 2025: 6867-6893
[c14]Angelina Wang:
Identities are not Interchangeable: The Problem of Overgeneralization in Fair Machine Learning. FAccT 2025: 485-497
[c13]Angelina Wang, Xuechunzi Bai, Solon Barocas, Su Lin Blodgett:
Measuring Machine Learning Harms from Stereotypes Requires Understanding Who Is Harmed by Which Errors in What Ways. FAccT 2025: 746-762
[i28]Hanna M. Wallach, Meera A. Desai, A. Feder Cooper, Angelina Wang, Chad Atalla, Solon Barocas, Su Lin Blodgett, Alexandra Chouldechova, Emily Corvi, P. Alex Dow, Jean Garcia-Gathright, Alexandra Olteanu, Nicholas Pangakis, Stefanie Reed, Emily Sheng, Dan Vann, Jennifer Wortman Vaughan, Matthew Vogel, Hannah Washington, Abigail Z. Jacobs:
Position: Evaluating Generative AI Systems is a Social Science Measurement Challenge. CoRR abs/2502.00561 (2025)
[i27]Angelina Wang, Michelle Phan, Daniel E. Ho, Sanmi Koyejo:
Fairness through Difference Awareness: Measuring Desired Group Discrimination in LLMs. CoRR abs/2502.01926 (2025)
[i26]Laura Weidinger, Inioluwa Deborah Raji, Hanna M. Wallach, Margaret Mitchell, Angelina Wang, Olawale Salaudeen, Rishi Bommasani, Deep Ganguli, Sanmi Koyejo, William Isaac:
Toward an Evaluation Science for Generative AI Systems. CoRR abs/2503.05336 (2025)
[i25]Angelina Wang:
Identities are not Interchangeable: The Problem of Overgeneralization in Fair Machine Learning. CoRR abs/2505.04038 (2025)
[i24]Olawale Salaudeen, Anka Reuel, Ahmed Ahmed, Suhana Bedi, Zachary Robertson, Sudharsan Sundar, Ben Domingue, Angelina Wang, Sanmi Koyejo:
Measurement to Meaning: A Validity-Centered Framework for AI Evaluation. CoRR abs/2505.10573 (2025)
[i23]Alexandra Olteanu, Su Lin Blodgett, Agathe Balayn, Angelina Wang, Fernando Diaz, Flávio du Pin Calmon, Margaret Mitchell, Michael D. Ekstrand, Reuben Binns, Solon Barocas:
Rigor in AI: Doing Rigorous AI Work Requires a Broader, Responsible AI-Informed Conception of Rigor. CoRR abs/2506.14652 (2025)
[i22]Rishi Bommasani, Scott R. Singer, Ruth E. Appel, Sarah H. Cen, A. Feder Cooper, Elena Cryst, Lindsey A. Gailmard, Ian Klaus, Meredith M. Lee, Inioluwa Deborah Raji, Anka Reuel, Drew Spence, Alexander Wan, Angelina Wang, Daniel Zhang, Daniel E. Ho, Percy Liang, Dawn Song, Joseph E. Gonzalez, Jonathan Zittrain, Jennifer Tour Chayes, Mariano-Florentino Cuellar, Li Fei-Fei:
The California Report on Frontier AI Policy. CoRR abs/2506.17303 (2025)
[i21]Lydia T. Liu, Inioluwa Deborah Raji, Angela Zhou, Luke Guerdan, Jessica Hullman, Daniel Malinsky, Bryan Wilder, Simone Zhang, Hammaad Adam, Amanda Coston, Benjamin Laufer, Ezinne Nwankwo, Michael Zanger-Tishler, Eli Ben-Michael, Solon Barocas, Avi Feller, Marissa Gerchick, Talia Gillis, Shion Guha, Daniel E. Ho, Lily Hu, Kosuke Imai, Sayash Kapoor, Joshua Loftus, Razieh Nabi, Arvind Narayanan, Ben Recht, Juan Carlos Perdomo, Matthew J. Salganik, Mark P. Sendak, Alexander Tolbert, Berk Ustun, Suresh Venkatasubramanian, Angelina Wang, Ashia Wilson:
Bridging Prediction and Intervention Problems in Social Systems. CoRR abs/2507.05216 (2025)
[i20]Angelina Wang, Daniel E. Ho, Sanmi Koyejo:
The Inadequacy of Offline LLM Evaluations: A Need to Account for Personalization in Model Behavior. CoRR abs/2509.19364 (2025)
[i19]Vyoma Raman, Judy Hanwen Shen, Andy K. Zhang, Lindsey A. Gailmard, Rishi Bommasani, Daniel E. Ho, Angelina Wang:
Disclosure and Evaluation as Fairness Interventions for General-Purpose AI. CoRR abs/2510.05292 (2025)
[i18]Anka Reuel, Avijit Ghosh, Jenny Chim, Andrew Tran, Yanan Long, Jennifer Mickel, Usman Gohar, Srishti Yadav, Pawan Sasanka Ammanamanchi, Mowafak Allaham, Hossein A. Rahmani, Mubashara Akhtar, Felix Friedrich, Robert Scholz, Michael Alexander Riegler, Jan Batzner, Eliya Habba, Arushi Saxena, Anastassia Kornilova, Kevin Wei, Prajna Soni, Yohan Mathew, Kevin Klyman, Jeba Sania, Subramanyam Sahoo, Olivia Beyer Bruvik, Pouya Sadeghi, Sujata S. Goswami, Angelina Wang, Yacine Jernite, Zeerak Talat, Stella Biderman, Mykel J. Kochenderfer, Sanmi Koyejo, Irene Solaiman:
Who Evaluates AI's Social Impacts? Mapping Coverage and Gaps in First and Third Party Evaluations. CoRR abs/2511.05613 (2025)
- 2024
[b1]Angelina Wang:
Operationalizing Responsible Machine Learning: From Equality Towards Equity. Princeton University, USA, 2024
[j3]Angelina Wang, Aaron Hertzmann, Olga Russakovsky:
Benchmark suites instead of leaderboards for evaluating AI fairness. Patterns 5(11): 101080 (2024)
[c12]Angelina Wang, Teresa Datta, John P. Dickerson:
Strategies for Increasing Corporate Responsible AI Prioritization. AIES (1) 2024: 1514-1526
[c11]Severin Engelmann, Madiha Zahrah Choksi, Angelina Wang, Casey Fiesler:
Visions of a Discipline: Analyzing Introductory AI Courses on YouTube. FAccT 2024: 2400-2420
[i17]Angelina Wang, Jamie Morgenstern, John P. Dickerson:
Large language models cannot replace human participants because they cannot portray identity groups. CoRR abs/2402.01908 (2024)
[i16]Xuechunzi Bai, Angelina Wang, Ilia Sucholutsky, Thomas L. Griffiths:
Measuring Implicit Bias in Explicitly Unbiased Large Language Models. CoRR abs/2402.04105 (2024)
[i15]Angelina Wang, Xuechunzi Bai, Solon Barocas, Su Lin Blodgett:
Measuring machine learning harms from stereotypes: requires understanding who is being harmed by which errors in what ways. CoRR abs/2402.04420 (2024)
[i14]Angelina Wang, Teresa Datta, John P. Dickerson:
Strategies for Increasing Corporate Responsible AI Prioritization. CoRR abs/2405.03855 (2024)
[i13]Severin Engelmann, Madiha Zahrah Choksi, Angelina Wang, Casey Fiesler:
Visions of a Discipline: Analyzing Introductory AI Courses on YouTube. CoRR abs/2407.13077 (2024)
[i12]Hanna M. Wallach, Meera A. Desai, Nicholas Pangakis, A. Feder Cooper, Angelina Wang, Solon Barocas, Alexandra Chouldechova, Chad Atalla, Su Lin Blodgett, Emily Corvi, P. Alex Dow, Jean Garcia-Gathright, Alexandra Olteanu, Stefanie Reed, Emily Sheng, Dan Vann, Jennifer Wortman Vaughan, Matthew Vogel, Hannah Washington, Abigail Z. Jacobs:
Evaluating Generative AI Systems is a Social Science Measurement Challenge. CoRR abs/2411.10939 (2024)
- 2023
[j2]Arunesh Mathur, Angelina Wang, Carsten Schwemmer, Maia Hamin, Brandon M. Stewart, Arvind Narayanan:
Manipulative tactics are the norm in political emails: Evidence from 300K emails from the 2020 US election cycle. Big Data Soc. 10(1): 205395172211453 (2023)
[c10]Jared Katzman, Angelina Wang, Morgan Klaus Scheuerman, Su Lin Blodgett, Kristen Laird, Hanna M. Wallach, Solon Barocas:
Taxonomizing and Measuring Representational Harms: A Look at Image Tagging. AAAI 2023: 14277-14285
[c9]Angelina Wang, Sayash Kapoor, Solon Barocas, Arvind Narayanan:
Against Predictive Optimization: On the Legitimacy of Decision-Making Algorithms that Optimize Predictive Accuracy. FAccT 2023: 626
[c8]Angelina Wang, Olga Russakovsky:
Overwriting Pretrained Bias with Finetuning Data. ICCV 2023: 3934-3945
[c7]Nicole Meister, Dora Zhao, Angelina Wang, Vikram V. Ramaswamy, Ruth Fong, Olga Russakovsky:
Gender Artifacts in Visual Datasets. ICCV 2023: 4814-4825
[i11]Angelina Wang, Olga Russakovsky:
Overcoming Bias in Pretrained Models by Manipulating the Finetuning Dataset. CoRR abs/2303.06167 (2023)
[i10]Jared Katzman, Angelina Wang, Morgan Klaus Scheuerman, Su Lin Blodgett, Kristen Laird, Hanna M. Wallach, Solon Barocas:
Taxonomizing and Measuring Representational Harms: A Look at Image Tagging. CoRR abs/2305.01776 (2023)
- 2022
[j1]Angelina Wang, Alexander Liu, Ryan Zhang, Anat Kleiman, Leslie Kim, Dora Zhao, Iroha Shirai, Arvind Narayanan, Olga Russakovsky:
REVISE: A Tool for Measuring and Mitigating Bias in Visual Datasets. Int. J. Comput. Vis. 130(7): 1790-1810 (2022)
[c6]Angelina Wang, Solon Barocas, Kristen Laird, Hanna M. Wallach:
Measuring Representational Harms in Image Captioning. FAccT 2022: 324-335
[c5]Angelina Wang, Vikram V. Ramaswamy, Olga Russakovsky:
Towards Intersectionality in Machine Learning: Including More Identities, Handling Underrepresentation, and Performing Evaluation. FAccT 2022: 336-349
[i9]Angelina Wang, Vikram V. Ramaswamy, Olga Russakovsky:
Towards Intersectionality in Machine Learning: Including More Identities, Handling Underrepresentation, and Performing Evaluation. CoRR abs/2205.04610 (2022)
[i8]Angelina Wang, Solon Barocas, Kristen Laird, Hanna M. Wallach:
Measuring Representational Harms in Image Captioning. CoRR abs/2206.07173 (2022)
[i7]Nicole Meister, Dora Zhao, Angelina Wang, Vikram V. Ramaswamy, Ruth Fong, Olga Russakovsky:
Gender Artifacts in Visual Datasets. CoRR abs/2206.09191 (2022)
- 2021
[c4]Dora Zhao, Angelina Wang, Olga Russakovsky:
Understanding and Evaluating Racial Biases in Image Captioning. ICCV 2021: 14810-14820
[c3]Angelina Wang, Olga Russakovsky:
Directional Bias Amplification. ICML 2021: 10882-10893
[i6]Alan Chan, Chinasa T. Okolo, Zachary Terner, Angelina Wang:
The Limits of Global Inclusion in AI Development. CoRR abs/2102.01265 (2021)
[i5]Angelina Wang, Olga Russakovsky:
Directional Bias Amplification. CoRR abs/2102.12594 (2021)
[i4]Dora Zhao, Angelina Wang, Olga Russakovsky:
Understanding and Evaluating Racial Biases in Image Captioning. CoRR abs/2106.08503 (2021)- 2020
[c2]Angelina Wang, Arvind Narayanan, Olga Russakovsky:
REVISE: A Tool for Measuring and Mitigating Bias in Visual Datasets. ECCV (3) 2020: 733-751
[i3]Angelina Wang, Arvind Narayanan, Olga Russakovsky:
ViBE: A Tool for Measuring and Mitigating Bias in Image Datasets. CoRR abs/2004.07999 (2020)
2010 – 2019
- 2019
[c1]Angelina Wang, Thanard Kurutach, Pieter Abbeel, Aviv Tamar:
Learning Robotic Manipulation through Visual Planning and Acting. Robotics: Science and Systems 2019
[i2]Angelina Wang, Thanard Kurutach, Kara Liu, Pieter Abbeel, Aviv Tamar:
Learning Robotic Manipulation through Visual Planning and Acting. CoRR abs/1905.04411 (2019)- 2017
[i1]William Wang, Angelina Wang, Aviv Tamar, Xi Chen, Pieter Abbeel:
Safer Classification by Synthesis. CoRR abs/1711.08534 (2017)