FDA delivers guidance on AI and clinical decision-making


A package of new guidance documents from the FDA describes what the agency plans to regulate, and how, when it comes to software designed to aid clinical decision-making—such as programs that provide documentation and diagnostic support, or that call up medically relevant reference information and recommendations based on a patient’s particular case.

The guidances also include updated, finalized policies for the regulation of smartphone-based medical apps, as well as manufacturers’ use of so-called “off-the-shelf” commercial software in medical devices.

The FDA defines software for clinical decision support, or CDS, as tech that can provide doctors, patients or caregivers “with knowledge and person-specific information, intelligently filtered or presented at appropriate times, to enhance health and health care.”

In its latest draft guidance, the agency said it plans to apply a risk-based strategy for enforcing device-related requirements. It does not intend to regulate certain types of low-risk software—such as programs designed to help inform patients and caregivers in managing non-serious conditions without the help of a doctor—especially when those users can independently check and understand the basis for the programs’ recommendations.

Instead, the FDA said it plans to focus oversight on higher-risk software functions, including those used in serious or critical situations—as well as machine learning-based algorithms, where the program’s logic and inputs may not be fully explained to the user.

RELATED: FDA lays out plans for a new review framework for AI and machine learning-based devices

For example, this would include an artificial intelligence system that identifies hospitalized patients with type 1 diabetes who may be at risk for cardiovascular events. Another would be a learning algorithm that flags likely cases of seasonal influenza by distinguishing them from patients with common cold symptoms, using electronic medical records and geographic data, the FDA said.

“Patients, their families and their health care professionals are increasingly embracing digital health technologies to inform everyday decisions, from tools that more easily report blood glucose levels to smartwatches that can detect atrial fibrillation,” Principal Deputy Commissioner Amy Abernethy said in an agency statement.

“We believe that an appropriate regulatory framework that takes into account the realities of how technology advances plays a crucial role in the efficient development of digital health technologies,” Abernethy said.

In general, one of the agency’s oversight goals is to avoid hindering the development of low-risk but helpful software, where new versions can be developed much faster than the traditional medical devices that have fallen under the agency’s purview. Additionally, the 21st Century Cures Act narrowed the types of software that would otherwise be considered a regulated device.

“We’re making clear that certain digital health technologies—such as mobile apps that are intended only for maintaining or encouraging a healthy lifestyle—generally fall outside the scope of the FDA’s regulation,” she said.

The agency updated four previous final guidances for tech-enabled products—covering mobile medical apps; general wellness and low-risk devices; off-the-shelf software use; and data systems and medical image storage and communication devices—bringing them all in line with the new definitions set down by the Cures Act, passed in December 2016.

“These documents are critical elements of FDA’s comprehensive approach to digital health,” Abernethy said. “We are committed to promoting beneficial innovation in this space while providing appropriate oversight where it’s merited.”

Original Article: (https://www.fiercebiotech.com/medtech/fda-delivers-regulatory-guidance-ai-software-and-clinical-decisionmaking-aids)