Envisioning the Future of Regulation for Artificial Intelligence in Healthcare

As artificial intelligence revolutionizes healthcare, we must develop a regulatory framework to ensure ethical use of patient data for research and care delivery. What should those guardrails look like, and how will we deploy them across the life sciences and clinical care industries?

Healthcare is one of the most heavily regulated industries on earth, yet the rapid pace of digital development has left parts of the healthcare community feeling as though they are operating in the Wild West. We are in a golden age of discovery when it comes to the power of artificial intelligence (AI) and other machine learning techniques, but it’s extremely challenging for regulators to keep up.

The burden of regulating AI has largely fallen to the FDA, which has already released important guidance to set the tone for the future, positioning many AI-driven tools as “Software as a Medical Device” (SaMD).

But what about AI algorithms that don’t purport to directly deliver clinical decision support or identify anomalies in X-rays?

Increasingly, technology companies are employing AI to curate data for research purposes, identify at-risk populations, improve clinical workflows, or manage administrative and financial tasks. These uses may not fall directly under the FDA’s existing scope of regulation, creating a vast gray area for a technology we do not yet fully understand or control.

As Dr. Elisabeth Rosenthal, Editor-in-Chief of Kaiser Health News, said in a recent WBUR On Point podcast, “AI has enormous potential, but enormous potential for misuse.”

During this first flush of innovation, we have an important opportunity — and an ethical duty — to create a strong, collaborative framework of regulation for artificial intelligence and its associated technologies.  

This new blueprint must certainly include a stronger role for the FDA, but we cannot rely solely on the agency to provide all of the guidance we need to ensure that AI brings more benefit than harm to patients.

Implementing regulation without stifling innovation

Regulators must strike the right balance between setting guardrails for the industry and allowing innovative ideas to flourish. This can be extremely challenging, especially when artificial intelligence is used as part of large-scale projects that intend to change fundamental aspects of the healthcare data landscape.

Take, for example, Oracle’s announcement that it will create a “unified national health records database” on the back of its $28.3 billion acquisition of Cerner. Oracle CEO Larry Ellison envisions that the database will solve some of the industry’s biggest interoperability problems while providing fodder for AI algorithms to help diagnose diseases and manage population health. 

While the plan may indeed foster new breakthroughs in disease pattern recognition and risk modeling, it also runs the risk of entrenching unintentional biases and deepening health disparities if its AI algorithms are poorly designed.

Who should be in charge of ensuring that such algorithms from Oracle (and rivals Epic Systems, Allscripts, and others) are coded correctly and work as intended, without dictating what can and cannot be done with the data? Currently, the answers are not clear.

A larger role for the FDA in AI regulation?

In the On Point episode on AI regulation, experts agreed that the FDA has an important part to play in the AI ecosystem, despite the numerous challenges ahead of it.

As host Meghna Chakrabarti pointed out, “technology is always going to outpace what the current regulatory framework is. But in healthcare, you don’t really want the gap to be too big. Because in that gap, what we have are the lives of patients.”

Chakrabarti also noted that the FDA may not be fully equipped to monitor that gap due to its fundamental role as a device and drug regulator.  

The FDA itself has acknowledged that artificial intelligence sits at the intersection of devices and software, right on the edge of what the agency is designed to handle. The agency’s primary goal is to keep patients safe from harm, and lower-risk algorithms that don’t directly touch patients may not fall under its purview as currently defined.

“What we have found is that we can’t move to a really more modern regulatory framework, one that would truly be fit for purpose for modern-day software technologies, without changes in federal law,” said Dr. Matthew Diamond, head of digital health at the FDA, in an accompanying interview with Chakrabarti. “There is an increasing realization that if this is not addressed, there will be some critical regulatory hurdles in the digital health space in the years to come.”

The FDA is currently working with industry partners to better understand how to map the borders of its authority while bringing transparency and accountability into the burgeoning AI landscape. The agency plans to develop a regulatory framework that advances innovation without limiting creativity in the free market.  

However, the agency’s guidance may never be comprehensive enough to capture every potential use case for AI across the clinical environment, the life sciences, and the research community.

Building a better future for AI with a risk-based approach to regulation

The FDA has been a good partner to the AI industry thus far, and we hope that agency officials and congressional lawmakers will continue to work with providers, life sciences companies, and technology developers to create a robust and collaborative road map for the next generation of AI tools.

There are unprecedented opportunities for AI and machine learning to support better cancer care, but it may be decades before these tools are sophisticated enough to function without eagle-eyed oversight. Until then, experienced, discerning humans must be able to make appropriate course corrections before a small bias in an algorithm becomes a big problem for real patients.

As we work through these issues, we believe that the FDA should adapt its successful risk-based approach to medical device regulation for the burgeoning AI environment.

Algorithms with the potential to cause direct harm to humans, such as those involved in identifying and testing promising new compounds, should receive the closest scrutiny from the agency. We must do all we can to avoid biases, inequities, and errors at this critical stage of drug development, and the FDA is well suited to taking on this task.

However, the FDA may not need to be as stringent with AI and machine learning tools that are further down the pipeline. Algorithms that leverage real-world data to help match patients to clinical trials, for example, are not likely to be directly implicated in an adverse patient safety event. We certainly need better standards and more industry consensus in this area, but the FDA might not be the most effective entity to provide guidance.

Even further down the line are AI-driven apps and devices for patient self-management or general wellness; the FDA simply does not have the scope or scale to scrutinize the code behind thousands of entrants into this growing field. In these cases, academic research, clinical expertise, and the free market may be best suited to keeping poorly designed algorithms in check.

We would like to see this tiered, risk-based structure integrated into future guidance. The FDA can and should work to clearly define the different categories of AI tools and how they will be regulated. It can also help stakeholders address questions of algorithmic bias and health equity and specifically speak to the role of AI in clinical trials and regulatory submissions.

It is crucial for the industry to continue working with the FDA and other oversight bodies to create the next generation of regulation for AI and machine learning. Together, we can chart the path forward to an innovative future that brings out the best in what artificial intelligence has to offer.