California Gazette

Jingya Chen Champions AI Fairness Through Human-Centered Design

Photo Courtesy: Jingya Chen

By: Elena Mishkin

At CSCW 2024 in San José, Costa Rica, this November, the paper “Tinker, Tailor, Configure, Customize: The Articulation Work of Contextualizing an AI Fairness Checklist” captured significant attention for its innovative approach to AI ethics. As the second author and lead designer on the project, Jingya Chen of Microsoft Research played a critical role in transforming complex research into actionable, user-centric solutions for AI practitioners.

The paper addresses a pressing issue: AI systems, now pervasive in fields such as finance, healthcare, and education, can inadvertently introduce fairness-related harms. Chen and her co-authors, led by Michael Madaio, explored how AI practitioners tailor fairness checklists to their organizations’ specific workflows and deployment needs. 

With an academic background from Carnegie Mellon University and Tsinghua University, Chen bridged research insights with practical design frameworks, ensuring the team’s work resonated with end users. “A checklist should not only guide but inspire thoughtful reflection,” Chen explained. “Our goal was to create a tool that encourages teams to actively engage with fairness, not treat it as an afterthought.”

From Research to Design: A Practical Approach

Chen’s extensive experience in crafting inclusive and intuitive design frameworks gave her a unique perspective for the fairness checklist project. Previously, she spearheaded designs for Microsoft initiatives such as AutoGen Studio, a low-code AI platform, and the Responsible AI Toolbox, which equips developers with ethical guidelines. These projects reduced technical barriers and made complex systems more accessible. 

Building on this expertise, Chen’s work on the fairness checklist focused on fostering “ethical sensitivity.” She co-designed prototypes that prioritized collaboration among diverse stakeholders, including data scientists, engineers, and program managers. Her approach emphasized modularity and adaptability, ensuring the checklist could be tailored to different domains while sparking meaningful dialogue on fairness. 

“Fairness in AI is not just a technical issue—it’s a human one,” Chen remarked. “By embedding collaborative elements into the checklist design, we aimed to align stakeholders on shared values and priorities.”

The Paper’s Key Insights: Articulation in Action

The CSCW paper uses the concept of “articulation work” to describe the process of customizing fairness practices for specific organizational contexts. Chen’s design contributions were critical in addressing challenges such as adapting general-purpose checklists to specialized needs and navigating team dynamics in fairness initiatives.

Her prototypes incorporated features such as modular checklist items and prompts designed to facilitate cross-disciplinary discussion. These designs not only enhanced usability but also positioned the checklist as a tool for fostering alignment and accountability within teams.

A Vision for Responsible AI

Chen’s work reflects a broader commitment to responsible AI that balances technical excellence with human-centered design. She envisions a future where AI systems empower users to engage responsibly and meaningfully.

“This project was a collaborative effort by a team dedicated to advancing fairness in AI,” Chen noted. “I’m grateful to have contributed and excited to see how these insights shape the future of responsible AI design.”

Impact Beyond the Checklist 

Jingya Chen’s contributions extend beyond her technical expertise. Her emphasis on designing AI tools that resonate with human values underscores a fundamental truth: ethical innovation isn’t just about developing new technologies—it’s about ensuring those technologies serve humanity thoughtfully and inclusively. 

Chen’s quiet yet transformative leadership in the field serves as a reminder that the path to responsible AI requires both innovation and empathy.

Published by Stephanie M.

This article features branded content from a third party. Opinions in this article do not reflect the opinions and beliefs of California Gazette.