United States: Public health agencies publish ‘guiding principles’ for good machine learning practices
The United States Food and Drug Administration (FDA), Health Canada and the United Kingdom Medicines and Healthcare products Regulatory Agency (MHRA) have jointly issued a guidance document entitled "Good Machine Learning Practice for Medical Device Development: Guiding Principles." The document describes 10 guiding principles surrounding the development of good machine learning practices (GMLP) to help ensure that medical devices that use artificial intelligence and machine learning (AI/ML) are safe and effective.
These Guiding Principles are part of a larger collaborative initiative between regulators and international organizations, including the International Medical Device Regulators Forum (IMDRF) and international standards organizations, to address various issues concerning the regulation of medical device software. The new guiding principles aim to identify additional avenues for collaboration while taking into account the unique nature of AI/ML products.
According to the Guiding Principles, developers of medical devices using AI/ML should:
- Leverage multidisciplinary expertise throughout the product lifecycle;
- Implement good software engineering and security practices, paying particular attention to data quality assurance, data management and cybersecurity practices;
- Ensure that clinical study participants and data sets are representative of the target patient population;
- Select and maintain training data sets independent of test sets;
- Select reference datasets based on the best available methods to ensure that clinically relevant and well characterized data are collected;
- Adapt the design of the model to the available data while ensuring that it reflects the intended use of the device;
- Take into account the influence of human factors when the model has a "human in the loop";
- Develop and execute test plans to demonstrate device performance under clinically relevant conditions;
- Provide clear and essential information to users in a manner appropriate to the intended audience; and
- Monitor models that have been deployed for real-world use to ensure that safety and performance are maintained or improved.
The FDA is seeking comments on these principles through a public docket that the agency opened following the publication of a discussion paper on its proposed regulatory framework for modifications to artificial intelligence/machine learning (AI/ML)-based software as a medical device (SaMD).
Earlier this year, the FDA released its Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan. And, on October 14, 2021, the agency held a virtual public workshop designed to: (1) identify unique considerations for achieving transparency for users of AI/ML-enabled medical devices and ways in which transparency could improve the safety and effectiveness of such devices; and (2) gather feedback from various stakeholders on the types of information that would be useful for a manufacturer to include in the labeling and public-facing information for AI/ML-enabled medical devices, as well as other potential information-sharing mechanisms. This heightened level of policy and regulatory engagement shows, at a minimum, a desire on the part of the FDA to keep pace with evolving technological functionality, which is an ever-growing presence in the FDA-regulated medical products space. The Digital Health Center of Excellence, made up of experts who provide regulatory advice and support on digital health technologies to the Center for Devices and Radiological Health (CDRH), is leading these efforts on behalf of the FDA.
The content of this article is intended to provide a general guide on the subject. Specialist advice should be sought regarding your particular situation.