MEET THE ADVOCATE

Beyond the Algorithm

Judy Wawira Gichoya, MD, MS, FSIIM, studies how to use AI equitably and ethically in medicine



By Andrea Brown
October 3, 2025 | VOLUME 3, ISSUE 3

As artificial intelligence (AI) becomes rapidly embedded in the fabric of health care, the stakes for clinicians and researchers grow ever higher. AI has the potential to streamline diagnoses, uncover hidden patterns in data, and support overburdened physicians—but it can also reinforce inequities if left unchecked.

Few people understand this balance more deeply than Judy Wawira Gichoya, MD, MS, FSIIM, Associate Professor of Interventional Radiology and Informatics at Emory University School of Medicine.


Judy Wawira Gichoya, MD, MS, FSIIM
Associate Professor of Interventional Radiology and Informatics
Emory University School of Medicine

Dr. Gichoya has built her career around one central mission: to use data science to advance health equity. As Co-Director of the Healthcare AI Innovation and Translational Informatics (HITI) Lab at Emory, she leads pioneering research into how datasets are created, how AI models behave in the real world, and how clinicians can guard against hidden bias.

Discovering the power of data

Dr. Gichoya’s path into the world of AI began far from the labs of Atlanta. Trained as a physician in Kenya during the height of the HIV epidemic, she witnessed firsthand how the lack of organized data hampered patient care. “People really needed to know who was dying, who was getting antiretrovirals—just the basics of how you provide care,” she recalled.

Technology, she quickly saw, could be a lifeline. Through an open-source medical records project supported by US teams, Dr. Gichoya was introduced to the potential of marrying computers and medicine. That early exposure set her on a course toward radiology, informatics, and, ultimately, AI research.

“I realized that technology could do a wonderful job at organizing data,” she said. “That was the spark.”

Lived experience and the question of bias

As a Black woman, an immigrant, and a physician, Dr. Gichoya brings a unique lens to questions of fairness in AI. When she first arrived in the United States, Dr. Gichoya said, she became acutely aware of race in ways that were unfamiliar to her in her home country. That experience—coupled with the social justice reckoning following the murder of George Floyd in Minneapolis in 2020 and the disproportionate toll of the COVID-19 pandemic on communities of color—deepened her resolve to investigate how bias manifests in AI.


“I realized that technology could do a wonderful job at organizing data.”


Her research revealed a striking example: Algorithms trained on chest X-rays could identify whether a patient was Black or White with startling accuracy—despite race being a social, not biological, construct. This discovery illuminated how easily AI can internalize patterns that reflect social inequities rather than medical truths.

“Bias is not just a mathematical function,” she said. “It exists in context. To understand it, you have to look at subgroups, shortcuts, and the environments where AI is deployed.”

Building better datasets and models

At the HITI Lab, Dr. Gichoya and her team pursue four interlocking goals:

  1. Building diverse datasets: The EMory BrEast Imaging Dataset (EMBED) and chest X-ray collections are examples of how her lab ensures representation across patient populations. The EMBED collection contains millions of images, spans a broad range of racial, ethnic, and age groups, and includes both routine screening and diagnostic exams.
  2. Evaluating AI for bias and fairness: By testing models against subgroups and searching for “shortcuts,” her group uncovers where algorithms succeed—and where they fail.
  3. Validating AI in the real world: Moving beyond controlled environments, her lab measures how models perform across inpatient, outpatient, and emergency settings.
  4. Training the next generation: Through “hive learning” and “village mentoring,” she has guided more than 60 students worldwide, many of whom are now faculty, postdoctoral researchers, and industry leaders.

The nuances of bias

Dr. Gichoya emphasized that bias can take many forms. Models may be statistically accurate overall, yet underperform in specific subgroups, such as emergency room patients or people with darker skin tones. Others fall prey to “shortcuts”—for instance, mistaking an ICU setting for an indicator of disease rather than analyzing the radiographic image itself.


“Bias is not just a mathematical function. It exists in context.”


She also draws distinctions between bias, fairness, and ethics. While a model may not be perfectly fair, it can still be deployed ethically if paired with safeguards. She recalled an example of an HIV prevention model that improved outcomes in men but not women. “I would still deploy it because it saves lives,” she said. “But I’d also use that opportunity to investigate why women aren’t benefiting and implement solutions for them.”

Guidance for clinicians and researchers

For pulmonary, critical care, and sleep specialists—many of whom are beginning to encounter AI in imaging, diagnostics, or predictive tools—Dr. Gichoya offers clear advice:

  1. Embrace subgroup analysis: Don’t settle for aggregate accuracy; examine how models perform across diverse populations and settings.
  2. Recognize shortcuts: Understand that models may learn unintended signals that do not reflect the clinical reality.
  3. Deploy AI models ethically: No model is perfect; what matters is pairing technology with awareness, oversight, and corrective measures.
  4. Expect human factors: Automation bias and alert fatigue can be just as dangerous as algorithmic bias.
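The subgroup analysis Dr. Gichoya recommends can be illustrated with a short sketch. This is not her lab's code—all names and numbers below are hypothetical—but it shows the core idea: report a model's accuracy per subgroup (here, care setting) rather than settling for the aggregate figure.

```python
from collections import defaultdict

def subgroup_accuracy(predictions, labels, subgroups):
    """Compute accuracy for each subgroup, not just the aggregate.

    predictions, labels: sequences of 0/1 model outputs and ground truth.
    subgroups: sequence of group keys (e.g., care setting or demographic).
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, group in zip(predictions, labels, subgroups):
        total[group] += 1
        if pred == label:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Hypothetical example: a model that looks decent in aggregate (70%)
# but fails badly in the emergency department.
preds = [1, 1, 0, 0, 1, 0, 1, 1, 0, 1]
truth = [1, 1, 0, 0, 1, 1, 0, 1, 1, 1]
sites = ["outpatient"] * 5 + ["emergency"] * 5

per_site = subgroup_accuracy(preds, truth, sites)
# Outpatient accuracy is 1.0; emergency accuracy is only 0.4.
```

In this toy example the aggregate accuracy of 70% masks the gap between settings, which is exactly the failure mode aggregate reporting hides.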

“The search for a perfect model is not practical,” she cautioned. “But we can move toward robust testing and ethical deployment that allows clinicians to use AI responsibly.”


“AI can be a powerful tool for better, more equitable care.”


A vision for the future

Dr. Gichoya’s influence is widely recognized. She received the Minnies’ Most Influential Radiology Researcher Award for 2022, was named a 2023 Emerging Leader in Health and Medicine Scholar by the National Academy of Medicine, and earned a place on the 2024 STAT News Status List of the top 50 people shaping life sciences.

What Dr. Gichoya is most proud of, however, is her role as a mentor. “There are still very few physicians who can live in both worlds—programming and practicing medicine,” she said. “Training the next generation to do this work is what excites me the most.”

Her message to the scientific community is simple but urgent: AI is here to stay, and equity must be at its core.

“We may never achieve 100% fairness,” Dr. Gichoya said. “But if we recognize bias, test for it, and deploy with an ethical lens, AI can be a powerful tool for better, more equitable care.”

