The World Health Organization warned on Tuesday that the use of Artificial Intelligence in healthcare can generate incorrect guidance, breach personal data and spread misinformation, and called for greater oversight by governments.
The UN agency urges “caution” with the use of platforms such as ChatGPT, Bard or Bert that “imitate the understanding, processing and production of human communication”.

According to the WHO, so-called large language model (LLM) tools generated by AI may “pose risks for human well-being and public health”.

WHO experts consider that the rapid, wide dissemination of LLMs and their growing experimental use for health-related purposes are not being accompanied by control mechanisms, highlights a publication on the UN News portal.

Among the control mechanisms mentioned are the adherence of artificial intelligence platforms to values such as transparency, inclusion, expert oversight and rigorous evaluation.

“WHO recognizes that the appropriate use of technologies, including LLMs, can contribute to supporting healthcare professionals, patients, researchers and scientists,” the agency adds.

The new platforms can serve as “a decision support tool and increase diagnostic capacity in resource-poor settings,” but the focus “must be on protecting people’s health and reducing inequality,” it adds.

Some WHO concerns are:

  • LLMs generate answers that may seem credible and plausible to an end user. However, these answers may be completely incorrect or contain serious errors, especially for health-related topics.
  • LLMs may be trained on data for which consent may not have been previously provided. Furthermore, these tools do not necessarily protect sensitive data, including health data, that a user provides to generate a response.
  • LLMs can be used to generate and disseminate highly convincing disinformation in the form of text, audio or video, making it difficult for the public to differentiate false content from credible content.

Despite the benefits, the agency highlights the risks associated with using these tools to improve access to health information, arguing that “they need to be carefully evaluated”.

“A rash adoption of untested systems can lead to errors by health professionals, cause harm to patients and undermine confidence in Artificial Intelligence and future technologies,” it warns.

The data used to train artificial intelligence can be biased, generating false or inaccurate information that may pose risks to health, equity and inclusion, the agency also points out.

To address these situations, the WHO proposes that each country’s authorities analyze the benefits of Artificial Intelligence for health purposes before generalizing its use.

To this end, the organization identified six fundamental principles that should govern its use: protecting the autonomy of professionals, promoting human well-being, guaranteeing transparency, fostering responsibility, ensuring inclusion and promoting Artificial Intelligence.