AI in healthcare: bridging the equity gap or widening it?

Joe Ganley, athenahealth
April 23, 2025
7 min read

Healthcare AI is here—who will it benefit?

In February, I had the opportunity to sit down with two healthcare industry professionals at the US News & World Report Healthcare of Tomorrow summit in Washington, D.C.: Vilas Dhar, President of the Patrick J. McGovern Foundation, and G. Paul Matherne, M.D., Professor of Practice at the Darden School of Business and Professor Emeritus at the University of Virginia’s School of Medicine. Our discussion centered on how artificial intelligence (AI) is transforming healthcare and how it may shape the future of care access, delivery, and regulation.

It’s an important topic, and so much more than a trend. AI is shaping up to be one of the most powerful tools in modern healthcare, helping to streamline workflows, support clinical decision-making, and improve patient engagement. Many physicians are excited about the possibilities AI brings to healthcare, while others worry about losing the human touch in medicine or about AI accelerating the spread of medical misinformation.

But AI itself is not inherently good or bad; it is a reflection of the systems in which it operates. When deployed thoughtfully, AI can help bridge critical healthcare gaps. When deployed poorly, it risks reinforcing and even exacerbating the inequities that already exist in our system.

The question isn’t whether AI will reshape healthcare—it already is. The real question is: will it benefit everyone, or only those who already have access to the best care?

AI in healthcare as a force for equity

One of AI’s most promising applications is its potential to improve access to care for underserved populations—from rural communities with physician shortages to urban areas where clinics are stretched thin. Whether through telehealth, AI-powered diagnostics, or predictive analytics that identify patients at risk, AI can extend care to places and people too often left behind.

In some of the most remote areas of the world, AI is already making an impact. As Vilas Dhar noted, in deeply rural Rajasthan, India, AI is being used to map public health trends—tracking maternal health and infant birth weight to identify and predict where nutritional interventions are needed. Governments then deploy resources where they’re needed most. And it’s working: within months of additional food and healthcare resources arriving in Rajasthan, the region saw measurable improvements in maternal and infant health. It’s a powerful example of using AI intentionally—to cure complexity and close care gaps.

Here in the U.S., we face a different but no less urgent challenge: the primary care shortage. Independent practices—often solo clinicians—serve as the backbone of care in many underserved areas. But they’re burning out. Administrative burden, slow reimbursement, and regulatory red tape are driving them out of the field.

At athenahealth, we hear it every day: small practices are struggling to survive. That’s why we’re focused on reducing friction—using technology and AI to cure complexity, reduce physician burnout, and free up time for patient care. When deployed well, AI allows clinicians—especially in small, independent practices—to spend more time treating patients and less time on paperwork.

For communities that lack sufficient healthcare providers, AI could also help distribute medical expertise more equitably. As Dr. Matherne noted, AI can help address the shortage of stroke neurologists, who are often clustered in major U.S. cities. In stroke care, AI-powered algorithms are supporting treatment planning and predicting future events, allowing specialized knowledge to reach communities beyond large urban health systems. Instead of only the most well-funded hospitals having access to top-tier expertise, AI-powered technology, deployed with the social determinants of health in mind, can help ensure that patients everywhere receive high-quality care.

The risk: AI could deepen existing inequities

That said, AI’s benefits aren’t guaranteed. If we’re not deliberate, AI could widen the divide it’s meant to close.

One major concern is bias in training data. As Vilas Dhar pointed out, medicine has historically treated the white male as the default subject of clinical research—a decades-long practice that has influenced everything from drug dosing to diagnostic algorithms. If AI is trained on this kind of skewed data, it can replicate and even amplify those biases, leading to disparities in diagnosis, treatment, and outcomes for women, racial minorities, and other historically underserved populations. To avoid that, AI models must be trained on diverse, representative data sets. Better inputs yield better results, and help prevent algorithms from converging on a statistical average that overlooks the needs of real-world patients.
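To make "testing for bias" concrete, here is a minimal, hypothetical sketch of the kind of subgroup audit a team might run before deployment: it simply compares a model's error rate across demographic groups and flags any group that fares markedly worse. The data, group labels, and toy model below are invented for illustration; this is not athenahealth's method or any specific vendor's tooling.

```python
# Hypothetical sketch of a subgroup bias audit (illustration only).
# It compares a model's error rate across demographic groups; a large gap
# between groups is a signal to revisit the training data before deployment.
from collections import defaultdict

def subgroup_error_rates(records, predict):
    """records: dicts with 'group', 'features', 'outcome'; predict: features -> label."""
    errors = defaultdict(int)
    counts = defaultdict(int)
    for r in records:
        counts[r["group"]] += 1
        if predict(r["features"]) != r["outcome"]:
            errors[r["group"]] += 1
    return {g: errors[g] / counts[g] for g in counts}

if __name__ == "__main__":
    # Toy data and a toy model, purely for illustration.
    sample = [
        {"group": "A", "features": (1, 0), "outcome": 1},
        {"group": "A", "features": (1, 1), "outcome": 1},
        {"group": "B", "features": (0, 1), "outcome": 0},
        {"group": "B", "features": (0, 0), "outcome": 0},
    ]
    toy_model = lambda f: 1 if sum(f) >= 2 else 0
    print(subgroup_error_rates(sample, toy_model))
    # e.g. {'A': 0.5, 'B': 0.0} -- the gap for group A would be a red flag
```

Real-world audits are far more involved (calibration, intersectional groups, clinical outcome measures), but the principle is the same: measure performance per population, not just on average.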

There’s also the question of cost. Advanced AI tools, such as ambient clinical listening and automated documentation, can dramatically improve healthcare efficiency, but practices must make a real investment before physicians and patients see those benefits. Dhar noted that even at well-funded institutions, adopting AI-powered ambient technology is not easy. For smaller, underfunded providers, it’s an even greater challenge.

If AI adoption follows the same patterns as other healthcare advancements, wealthier institutions will implement it first, improving their efficiency and patient outcomes, while underfunded clinics fall further behind. This creates a "haves and have-nots" system in which the best technology is concentrated in affluent areas, perpetuating or widening disparities in care access.

How to ensure AI works for everyone

We can’t afford to let that happen. To ensure AI improves care for all patients, not just some, we need to act with intention.

1. Ensure AI is trained on diverse, representative data. AI must be rigorously tested against multiple population groups to avoid biases that disproportionately impact certain communities. Policymakers, product designers, and industry organizations need to work together to ensure AI products have been tested for bias before they are deployed in clinical settings, and to continually evaluate model output for machine-learned assumptions that could reinforce bias.

2. Develop funding mechanisms for under-resourced providers. AI should not become a luxury reserved for elite institutions. Governments and healthcare payers should explore subsidies, grants, or reimbursement models that make AI-powered tools accessible to safety-net hospitals, rural clinics, and smaller practices. If AI is only available to the most well-funded providers, it could widen disparities rather than close them.

3. Prioritize AI solutions that enhance, rather than replace, human care. AI is most effective when it serves as a tool to support clinicians—not when it replaces the human elements of care. Matherne raised an important point about physician trust in AI: the diagnostic piece is what providers find most frightening. If we become so accustomed to AI that we forget to check its work, we could end up in a dangerous place. AI should be designed to assist in decision-making, not to take over the role of physicians.

4. Build AI regulations that ensure safety without stifling innovation. Healthcare AI doesn’t need its own category of sweeping regulations; rather, our existing laws and frameworks should be updated to reflect AI’s role. AI is a tool, and regulating the tool in isolation won’t work. What we need is to ensure it is deployed safely and ethically within existing healthcare structures.

The future of AI in healthcare: a crossroads

AI has the potential to radically improve healthcare, but only if we do it right. Bias, access, and physician trust must be addressed from the start.

At athenahealth, our focus is on building technology that works for everyone—clinicians, patients, and healthcare systems of all sizes. AI should be a tool for expanding access, improving efficiency, and enabling better patient outcomes, not another factor deepening the existing divide between well-funded institutions and those struggling to keep up. We’re building upon a decade of AI research and deploying it intelligently and intentionally into our products where it can reduce friction and create improvements in both output and outcomes.

The future of AI in healthcare is still being written. It’s up to us—technologists, policymakers, clinicians, and patients—to ensure that it leads to a system that is not only more intelligent but also more just.
