The FUTURE-AI guidelines were developed to improve the reliability, safety, and ethical standards of artificial intelligence (AI) in healthcare. Although AI has transformed disease diagnosis and treatment prediction, many doctors and patients remain wary of its use, mainly because of concerns about bias, errors, and data privacy risks. To address these concerns, the FUTURE-AI Consortium of 117 experts from 50 countries has developed a set of recommendations, published in *BMJ*.

The guidelines put forward six key principles, which together form the FUTURE acronym. AI has to be fair, treating all patients equally without discrimination. It has to be universal, meaning applicable across diverse healthcare settings and patient groups. AI tools should be traceable, with their development, deployment, and use monitored and documented throughout their lifecycle. They should also be usable by healthcare providers and patients. Robustness is another key principle, ensuring AI systems perform well under varying conditions. Finally, explainability is essential, giving users a clear understanding of the rationale behind AI decision-making.

By following these guidelines, health AI systems can be made transparent and trustworthy. The FUTURE-AI framework is dynamic, adapting to technological developments and responding to AI-related problems as they arise. The guidelines are intended to build trust in AI tools among healthcare professionals and patients, making them not only technologically sophisticated but also clinically safe and ethically acceptable.

Source: www.news-medical.net/news/20250318/New-guidelines-to-build-trusted-AI-in-healthcare-research.aspx