Artificial intelligence (AI) technology has been integrated into US healthcare settings in many ways. While AI offers significant potential to improve health outcomes and system efficiencies, its integration into primary care settings also raises questions about equity, trust, consent, and the future of patient-provider relationships.
To ensure that AI tools support equitable and human-centered primary care, it is essential to understand how people experience and perceive these technologies in their healthcare — especially people from communities whose perspectives are not typically represented in tech product development, who have been historically underserved by the medical community, or who are typically less trusting of doctors.
We’re partnering with the Commonwealth Fund to conduct a qualitative design research study on the integration of AI technologies in primary care settings, with a particular focus on patient populations that have been historically underserved by and/or are mistrustful of the US healthcare system. Our research seeks to identify how patients feel about the existing integration of AI in primary care, to identify language that helps individuals understand and consent to such integration, and to envision ways that AI-supported systems might close gaps in the quality of care patients receive and enable them to be active partners in their care.
Synthesized insights and associated design principles will be delivered to the Commonwealth Fund, where they will be used primarily to inform healthcare system leaders, with secondary audiences including policymakers and AI technologists, to support ethical implementation of AI tools in primary care that serve patients’ needs.
Inquiry Areas
To explore these questions, we spoke with historically underserved communities in New York City and Southwest Virginia, prioritizing low-income individuals from populations historically underserved by or mistrustful of healthcare systems. In New York, participants were predominantly people of color; in Virginia, they were predominantly white.
We recruited individuals who had attended at least one primary care visit in the past year so they could reflect on recent experiences with AI in healthcare. We also sought diversity in insurance coverage and familiarity with AI tools.
Using semi-structured interviews, we explored our inquiry areas and captured reactions to specific AI applications and clinical contexts. We also used design stimuli (a set of scenario cards representing potential uses of AI) to tease out participants’ perceptions and tradeoffs.
To complement our primary research, we compiled academic literature, professional journals, and reputable innovation news sources to fill gaps and provide additional context.
What We Heard
The videos below provide a preliminary overview of key moments from the field before synthesis or meaning-making.