WHO Releases AI Ethics Guidance for Large Multi-Modal Models

18 January 2024 — The World Health Organization (WHO) is releasing new guidance on the ethics and governance of large multi-modal models (LMMs) – a type of fast growing generative artificial intelligence (AI) technology with applications across health care.

The guidance outlines over 40 recommendations for consideration by governments, technology companies, and health care providers to ensure the appropriate use of LMMs to promote and protect the health of populations.

LMMs can accept one or more types of data input – such as text, videos, and images – and generate diverse outputs that are not limited to the type of data fed into them. They are unique in their mimicry of human communication and their ability to carry out tasks they were not explicitly programmed to perform. LMMs have been adopted faster than any consumer application in history, with several platforms – such as ChatGPT, Bard and Bert – entering the public consciousness in 2023.

“Generative AI technologies have the potential to improve health care but only if those who develop, regulate, and use these technologies identify and fully account for the associated risks,” said Dr Jeremy Farrar, WHO Chief Scientist. “We need transparent information and policies to manage the design, development, and use of LMMs to achieve better health outcomes and overcome persisting health inequities.”

Potential benefits and risks

The new WHO AI Ethics guidance outlines five broad applications of LMMs for health:

  • Diagnosis and clinical care, such as responding to patients’ written queries;
  • Patient-guided use, such as for investigating symptoms and treatment;
  • Clerical and administrative tasks, such as documenting and summarizing patient visits within electronic health records;
  • Medical and nursing education, including providing trainees with simulated patient encounters, and;
  • Scientific research and drug development, including to identify new compounds.

While LMMs are beginning to serve specific health-related purposes, they also pose documented risks of generating false, inaccurate, biased, or incomplete statements that could harm individuals relying on such information for health decisions. Additionally, LMMs might undergo training on data of poor quality or bias related to race, ethnicity, ancestry, sex, gender identity, or age.

The guidance also details broader risks to health systems, such as the accessibility and affordability of the best-performing LMMs. Healthcare professionals and patients may also experience ‘automation bias’ when using LMMs, overlooking errors or improperly delegating difficult choices to the system. LMMs, like other forms of AI, are also vulnerable to cybersecurity risks that could endanger patient information, the trustworthiness of these algorithms, and the provision of health care more broadly.

To create safe and effective LMMs, WHO underlines the need to engage a broad range of stakeholders – governments, technology companies, healthcare providers, patients, and civil society – in all stages of development and deployment of such technologies, including their oversight and regulation.

“Governments from all countries must cooperatively lead efforts to effectively regulate the development and use of AI technologies, such as LMMs,” said Dr Alain Labrique, WHO Director for Digital Health and Innovation in the Science Division.

Key recommendations

The new WHO guidance includes recommendations for governments, who have the primary responsibility to set standards for the development and deployment of LMMs, and their integration and use for public health and medical purposes. For example, governments should:

  • Invest in or provide not-for-profit or public infrastructure, including computing power and public data sets, accessible to developers in the public, private and not-for-profit sectors, that requires users to adhere to ethical principles and values in exchange for access.
  • Use laws, policies and regulations to ensure that LMMs and applications used in health care and medicine, irrespective of the risk or benefit associated with the AI technology, meet ethical obligations and human rights standards that affect, for example, a person’s dignity, autonomy or privacy.
  • Assign an existing or new regulatory agency to assess and approve LMMs and applications intended for use in health care or medicine – as resources permit.
  • Introduce mandatory post-release auditing and impact assessments, including for data protection and human rights, by independent third parties when an LMM is deployed on a large scale. The auditing and impact assessments should be published and should include outcomes and impacts disaggregated by the type of user, including for example by age, race or disability.


The guidance also includes the following key recommendations for developers of LMMs, who should ensure that:
  • LMMs are designed not only by scientists and engineers. Potential users and other stakeholders – including medical providers, researchers, healthcare professionals, and patients – are engaged from the early stages of AI development in a structured, inclusive, and transparent design process, and given opportunities to raise ethical issues, voice concerns, and provide input for the AI application under consideration.
  • LMMs are designed to perform well-defined tasks with the accuracy and reliability necessary to improve the capacity of health systems and advance patient interests, and developers can predict and understand potential secondary outcomes.