RISKS AMID BENEFITS

Why WHO is proposing new rules for AI in health

WHO said AI has great potential in health care but also possible pitfalls

In Summary

• The guidance outlines more than 40 recommendations for consideration

• The aim is to ensure the appropriate use of large language models to promote health

WHO also suggested to developers that LLMs should not be designed only by scientists and engineers.
Image: HANDOUT

The World Health Organisation is the latest body to suggest new rules to guide artificial intelligence in healthcare.

WHO said AI has great potential in health care but also possible pitfalls.

The guidance outlines more than 40 recommendations for consideration by governments, technology companies and healthcare providers to ensure the appropriate use of large language models (a form of AI) to promote and protect the health of populations.

LLMs can accept one or more types of data input, such as text, videos and images, and generate diverse outputs that are not limited to the type of data fed into them.

LLMs have been adopted faster than any consumer application in history, with several platforms — such as ChatGPT, Bard and Bert — entering the public consciousness in 2023.

“Generative AI technologies have the potential to improve health care, but only if those who develop, regulate and use these technologies identify and fully account for the associated risks,” WHO chief scientist Dr Jeremy Farrar said.

“We need transparent information and policies to manage the design, development and use of LLMs to achieve better health outcomes and overcome persisting health inequities.”

The new WHO guidance outlines five broad applications of LLMs in health, including diagnosis and clinical care, such as responding to patients’ written queries.

The models are also used to investigate symptoms and treatment, and for clerical and administrative tasks, such as documenting and summarising patient visits within electronic health records.

WHO said that while LLMs are starting to be used for specific health-related purposes, there are also documented risks of them producing false, inaccurate, biased or incomplete statements, which could harm people who use such information to make health decisions.

Furthermore, LLMs may be trained on data that is of poor quality or biased, whether by race, ethnicity, ancestry, sex, gender identity or age.

WHO called for engagement with various stakeholders in all stages of development and deployment of such technologies, including their oversight and regulation.

The stakeholders include governments, technology companies, healthcare providers, patients and civil society.

“Governments from all countries must cooperatively lead efforts to effectively regulate the development and use of AI technologies, such as LLMs,” said Dr Alain Labrique, WHO director for digital health and innovation in the science division.

The new WHO guidance suggested governments should invest in or provide not-for-profit or public infrastructure that requires users to adhere to ethical principles and values in exchange for access.

“Use laws, policies and regulations to ensure that LLMs and applications used in health care and medicine meet ethical obligations and human rights standards that affect, for example, a person’s dignity, autonomy or privacy,” it said.

This is irrespective of the risk or benefit associated with the AI technology.

WHO also suggested assigning dedicated regulatory agencies to assess and approve LLMs and applications intended for use in health care or medicine, as resources permit.

“Introduce mandatory post-release auditing and impact assessments, including for data protection and human rights, by independent third parties when an LLM is deployed on a large scale,” it said.

“The auditing and impact assessments should be published and should include outcomes and impacts disaggregated by the type of user, including for example by age, race or disability.”

It also suggested to developers that LLMs should not be designed only by scientists and engineers.

“Potential users and all direct and indirect stakeholders should be engaged from the early stages of AI development,” WHO said.

“They should be engaged in a structured, inclusive, transparent design process and given opportunities to raise ethical issues, voice concerns and provide input for the AI application under consideration.”

The stakeholders cited include medical providers, scientific researchers, health care professionals and patients.
