AMA Adopts Policy Aimed at Ensuring Transparency in AI Tools Used in Medical Care
CHICAGO — As augmented intelligence (AI) tools continue to emerge in medical care, the American Medical Association (AMA) adopted policy during the annual meeting of its House of Delegates aimed at maximizing trust in, and increasing transparency around, how these tools arrive at their conclusions.
Specifically, the new policy calls for explainable clinical AI tools that include safety and efficacy data. To be considered explainable, these tools should provide explanations behind their outputs that physicians and other qualified individuals can access, interpret, and act on when deciding on the best possible care for their patients.
Furthering the AMA’s support for more oversight and regulation of augmented intelligence and machine learning algorithms used in clinical settings, the new policy calls for requiring an independent third party, such as regulatory agencies or medical societies, to determine whether an algorithm is explainable, rather than relying on claims made by its developer. The policy states that explainability should not be used as a substitute for other means of establishing the safety and efficacy of AI tools, such as randomized clinical trials.
Additionally, the new policy calls on the AMA to collaborate with experts and interested parties to develop and disseminate a list of definitions for key concepts related to medical AI and its oversight.
“With the proliferation of augmented intelligence tools in clinical care, we must push for greater transparency and oversight so physicians can feel more confident that the clinical tools they use are safe, based on sound science, and can be discussed appropriately with their patients when making shared decisions about their healthcare,” said Dr. Alexander Ding, an AMA board member. “The need for explainable AI tools in medicine is clear, as these decisions can have life-or-death consequences. The AMA will continue to identify opportunities where the physician voice can be used to encourage the development of safe, responsible, and impactful tools used in patient care.”
The AMA Council on Science and Public Health report that served as the basis for this policy notes that, when clinical AI algorithms are not explainable, the clinician’s training and expertise are removed from decision making, and clinicians are presented with information they may feel compelled to act upon without knowing where it came from or being able to assess the accuracy of its conclusions.
The report also notes that intellectual property concerns, when offered as a rationale for not explaining how an AI device arrived at its output, should not nullify a patient’s right to transparency and autonomy in making medical decisions. To this end, the new policy states that, while intellectual property should be afforded a certain level of protection, infringement concerns should not outweigh the need for explainability in AI with medical applications.