Cautions on Using AI

Why Every Medical Practice Should Think Twice About Using AI

The World Health Organization (WHO) warns that the rapid adoption of artificial intelligence (AI) in healthcare carries risks that could compromise patient safety if the technology is not handled carefully.

Like virtual medical assistants, AI has shown great potential to streamline administrative tasks that would otherwise be tedious and time-consuming to do manually. Still, healthcare practices must carefully weigh the risks before integrating this new technology, because safeguarding patient information and well-being is paramount in the healthcare industry.

The WHO added that the promise of better patient health has generated excitement around emerging AI platforms such as ChatGPT, Bard, and BERT. In pursuit of that progress, however, developers and other stakeholders may overlook the cautionary measures typically applied to any new technology.

The organization warned, “Precipitous adoption of untested systems could lead to errors by healthcare workers, cause harm to patients, erode trust in AI and thereby undermine (or delay) the potential long-term benefits and uses of such technologies around the world.”

Most AI tools used in healthcare practices rely on large language models (LLMs) to mimic human cognitive abilities. Although these tools are still experimental, their meteoric rise has prompted the WHO to examine the risks they pose to key values in healthcare and scientific research: transparency, inclusion, public engagement, expert supervision, and rigorous evaluation.

The organization also stressed the importance of thoroughly weighing the risks of using LLMs to improve access to health information, support decision-making, or strengthen diagnostic capacity in resource-limited settings. This cautious approach is critical for safeguarding public health and promoting equity.

AI can also generate inaccurate or misleading information, often because of biases in the data used for training. Because LLM output can appear authoritative and credible even when it is wrong, rigorous oversight is crucial to ensure that these tools are implemented safely, effectively, and ethically.

During a recent speech, Food and Drug Administration Commissioner Robert Califf cautioned that “nimble” regulation of large language models is critical to prevent the healthcare system from being “swept up quickly by something that we hardly understand.”

In testimony before a Senate subcommittee, OpenAI CEO Sam Altman agreed that regulation of AI is necessary. He acknowledged that the consequences could be quite severe if the technology goes wrong and said his company intends to collaborate with the government to prevent such a scenario.