
Labeling AI Medical Devices Could Help Transparency as FDA Drafts Rules
The FDA’s approach to oversight of AI medical devices spans a product’s intended use, how it was developed, how well it performs, and the logic it uses to generate recommendations.
By John P. Desmond, Editor, AI in Business

A nutrition-style label describing the AI used inside a medical device, and close scrutiny of whether its algorithms pass muster, are among the ideas being pursued as the FDA weighs how to approve medical devices that incorporate AI.
The FDA put out a request for information in 2019 to explore ground rules for AI in medical devices. The agency saw that AI and machine learning technologies had the potential to transform health care by deriving new insights from the vast amount of data generated during the delivery of health care.
“The FDA is the furthest along on its own journey toward new rules on AI-powered medical devices,” stated Liz O’Sullivan, CEO of Parity, supplier of an algorithmic auditing platform, in a recent account in Fast Company.
Meanwhile, other governments around the world are ahead of the US in passing algorithmic governance laws, with Singapore, for example, issuing and updating voluntary frameworks dating back to 2019, and Japan issuing reports and national strategies to signal that rules are on the horizon. “The U.S. has lagged significantly behind our international counterparts in tackling AI safety at the federal level,” O’Sullivan stated.
Activist litigators are expected this year to ramp up activity in the courts to protect American civil liberties from the risks of AI. “Experts have long said, with some merit, that the laws to prevent algorithmic discrimination already exist in the US,” she stated. “While mainly true, we’ve not yet seen a multitude of legal challenges against AI.”
One unresolved matter is an investigation by the state of New York into potential AI discrimination by the Optum business services arm of United Healthcare, over its use of an algorithm found to be biased toward giving white patients priority over black patients. The bias was detected in a study published in Science, the journal of the American Association for the Advancement of Science, according to an account in HealthcareFinance.
"We suggest that the choice of convenient, seemingly effective proxies for ground truth can be an important source of algorithmic bias in many contexts,” stated the authors, led by Ziad Obermeyer, a physician and researcher at Berkeley Public Health.
FDA Defines an Action Plan for Software as a Medical Device (SaMD)
The FDA has defined a five-step Software as a Medical Device (SaMD) action plan for setting rules on AI-powered medical devices. The plan calls for:
a tailored regulatory framework;
good machine learning practices;
a patient-centered approach transparent to users;
regulatory science methods related to algorithm bias and robustness;
and real-world performance.
Regarding algorithmic bias, the FDA stated, “The Agency recognizes the crucial importance for medical devices to be well-suited for a racially and ethnically diverse intended patient population, and the need for improved methodologies for the identification and improvement of machine learning algorithms.”
The FDA’s emphasis on transparency means the days of companies developing AI-enabled medical products behind closed doors are over. Doctors and patients need to know how the AI in a tool will be used to make life-or-death medical decisions.
The FDA’s approach to oversight of these products includes “steps to improve how developers communicate about four key factors: a product’s intended use, how it was developed, how well it performs, and the logic it uses to generate a result or recommendation,” according to an account from The Pew Charitable Trusts written by Liz Richardson, director of the Health Care Products Project at Pew.
As for intended use, “AI developers should clearly communicate how their products should be used—such as specifying exact intended populations and clinical settings—because these factors can greatly affect their accuracy,” she stated.
For example, Mayo Clinic researchers developed an AI-enabled tool to predict atrial fibrillation using data from the general population of patients at the facility. It was highly accurate when used on that population, but when applied in higher-risk clinical scenarios, such as on patients who had just undergone a certain type of heart surgery, the results were only slightly better than random chance.
Under performance, prescribers and patients need to know whether AI tools have been independently validated, how they were evaluated, and how well they performed. No set standards are in place for how the products should be evaluated, and no independent organization oversees their proper use, the author noted.
“Performance issues also arise when AI developers use the same data to train and validate their products,” risking inflated accuracy rates, Richardson stated.
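As a minimal illustration of why this inflates accuracy, the sketch below uses a synthetic dataset and a placeholder scikit-learn model (not any of the products discussed) to compare accuracy measured on the training data with accuracy on a held-out split:

```python
# Illustrative sketch only: shows how validating on training data overstates
# accuracy compared with a held-out split. Dataset and model are placeholders,
# not any actual FDA-reviewed product.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for clinical training data
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Hold out data the model never sees during training
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# "Validating" on the same data used for training inflates performance
train_acc = accuracy_score(y_train, model.predict(X_train))

# A held-out split gives a more honest estimate of real-world accuracy
test_acc = accuracy_score(y_test, model.predict(X_test))

print(f"Accuracy on training data: {train_acc:.2f}")  # typically near 1.00
print(f"Accuracy on held-out data: {test_acc:.2f}")   # noticeably lower
```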
Regarding logic, clinicians and researchers need to understand how a tool reaches its conclusions. Without this understanding, “They might not trust the recommendations it makes or be able to identify potential flaws in its performance,” she stated.
A set of requirements for labeling SaMD tools could provide detailed information to health care facilities before they purchase the tools. Researchers at Duke University and the Mayo Clinic have suggested an approach “akin to a nutrition label that would describe how an AI tool was developed and tested and how it should be used,” Richardson stated.
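To give a sense of what such a label might carry, here is a hypothetical sketch in Python; the field names and example values are assumptions for illustration, not a format proposed by the FDA or by the Duke and Mayo researchers.

```python
# Hypothetical sketch of a "nutrition label"-style summary for an AI-enabled
# medical device. Fields and values are illustrative assumptions, not a
# format specified by the FDA or the Duke/Mayo researchers.
from dataclasses import dataclass, field, asdict
import json


@dataclass
class AIDeviceLabel:
    product_name: str
    intended_use: str            # who the tool is for and in what clinical setting
    training_data: str           # where the development data came from
    validation: str              # how and by whom performance was evaluated
    performance: dict = field(default_factory=dict)  # key metrics with context
    logic_summary: str = ""      # plain-language description of how outputs are produced


label = AIDeviceLabel(
    product_name="Example AF risk tool",
    intended_use="Screening adults in general outpatient settings; not validated post-cardiac surgery",
    training_data="Retrospective ECG records from a single health system, 2010-2018",
    validation="Independent held-out cohort; external site validation pending",
    performance={"AUROC (internal)": 0.87, "AUROC (external)": "not yet reported"},
    logic_summary="Convolutional model over 12-lead ECG waveforms; outputs a risk score",
)

print(json.dumps(asdict(label), indent=2))
```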
FDA Approves GE Healthcare AI Medical Device for Endotracheal Tube Placement
The FDA continues to approve AI-enabled medical devices as it considers its rules and regulations.
In one example, GE Healthcare recently won approval for an AI algorithm that helps physicians to assess endotracheal tube placement via X-rays, according to a recent account in Radiology Business.
GE has been distributing the devices since November 2020 under the FDA’s COVID-19 guidance, to help providers manage patients with the virus. The FDA approval permits GE to continue marketing and selling the device.
“We are pleased to now have the FDA’s clearance for this important solution,” stated Jan Makela, president and CEO of imaging at GE Healthcare, in a statement. “The pandemic has proven what we already knew—that data, AI and connectivity are central to helping front line clinicians deliver intelligently efficient care.”
GE noted that research has shown that some 25 percent of patients intubated outside the operating room may have misplaced endotracheal tubes, which can lead to serious complications. About 200 hospitals have deployed the AI assistants since last year.
Read the source articles and information in Fast Company, in HealthcareFinance, in the FDA’s five-step Software as a Medical Device (SaMD) action plan, in an account from The Pew Charitable Trusts, and in Radiology Business.
(Write to the editor here.)