CHAI: Look for healthcare AI model ‘nutrition labels’ soon
The Coalition for Health AI this past Friday unveiled new plans for how it will certify independent artificial intelligence assurance labs.
The draft frameworks come as the group – comprising health system heavy-hitters such as Mayo Clinic, Penn Medicine and Stanford, along with Amazon, Google, Microsoft and other Big Tech giants – sets a timeline for its effort to standardize the output of testing labs by grading AI and machine learning models with so-called CHAI Model Cards, which the group likens to the ingredient and nutrition labels on food products.
CHAI leaders say the certification rubric and model card designs can be expected – after incorporating stakeholder feedback from members, partners and the public – by the end of April 2025.
WHY IT MATTERS
Created with the ANSI National Accreditation Board and several emerging quality assurance labs using ISO 17025 – the predominant standard for testing and calibration laboratories worldwide – the draft CHAI certification program framework requires, among other things, mandatory disclosure of conflicts of interest between assurance labs and model developers, and protection of data and intellectual property. (The same standard underpins ONC's Electronic Health Record certification program.)
The testing and certification program also incorporates data quality and integrity requirements derived from FDA's guidance on the use of high-quality real-world data, CHAI officials note, as well as testing and evaluation metrics sourced from the coalition's various working groups – all in alignment with the National Academy of Medicine's AI Code of Conduct.
The draft model card – designed by a workgroup comprising experts from regional health systems, electronic health record vendors, medical device makers and others – offers a standardized template designed to show a degree of transparency about algorithms, presenting certain baseline information to help end users evaluate the performance and safety of AI tools.
That information includes the identity of the AI developer, the model’s intended uses, targeted patient populations and more. Other performance metrics include security and compliance accreditations and maintenance requirements. The cards also have information about known risks and out-of-scope uses, biases and other ethical considerations.
CHAI says it will continue to engage with healthcare stakeholders – including patient advocates, small and rural health systems and tech startups – for additional guidance on model card development. The group is also seeking feedback via its assurance lab and model card forms.
"The rapid evolution of AI in healthcare has created a landscape that can feel unregulated and fragmented," said CHAI applied model card workgroup member Demetri Giannikopoulos, chief transformation officer at Aidoc, which helped lead design and development of the cards, in a statement.
“By establishing a common framework that aligns with federal regulations, we are moving beyond theoretical discussions and building the foundation for scalable, reliable, and ethical AI solutions that can be adopted across the healthcare ecosystem.”
THE LARGER TREND
Since its founding in 2021, CHAI has been working, along with others who have similar goals, to be the go-to source for trusted AI – whether that refers to patient safety, privacy and security, fairness and equity, model transparency, or utility and efficacy.
As the technology advances and proliferates, the coalition is growing too – convening an array of stakeholders from hospitals and health systems, Big Tech, startups, government agencies and patient advocates.
In an interview in Boston this past month at the HIMSS AI in Healthcare Forum, CHAI CEO Dr. Brian Anderson noted that the group, which he cofounded as a side project when he was still chief digital health physician at MITRE, now has more than 4,000 members from across the healthcare and technology ecosystem, a big increase from 1,300 earlier this year.
Responsibility and transparency around AI models have been core to CHAI's mission from the start. Toward that end, it has worked on initiatives such as its Blueprint for Trustworthy AI Implementation Guidance and Assurance in 2023 and its draft framework for responsible development and deployment, published in June 2024.
The new model cards were designed to comply with the HTI-1 requirements promulgated by ONC earlier this year. They're meant to be an easily legible starting point for organizations reviewing AI models during the procurement process, and for EHR vendors seeking to comply with the Health IT Certification Program.
Anderson noted that Micky Tripathi, Assistant Secretary for Technology Policy, National Coordinator for Health Information Technology and Acting Chief Artificial Intelligence Officer at HHS, has said previously that industry and the private sector should be the ones to define what goes into AI model cards.
"We've taken up that charge," said Anderson, who added that CHAI is actively seeking end user perspectives and looking for a wider variety of stakeholders and use cases as it considers the myriad applications of AI in healthcare.
The goal is to build an “easily digestible” but still “technically specific, detailed model card that the industry [can] align around,” he said.
As for AI assurance labs, Anderson noted that almost every other sector of the economy makes use of similar independent evaluators, and healthcare should be no different.
“We want to build a network of labs that are trustworthy, that are competent and capable – that model customers can turn to, that health systems can trust and technology vendors can trust as well,” he said.
ON THE RECORD
“This has been a pivotal year for CHAI on our journey to help enable trusted independent assurance of AI solutions with local monitoring and validation,” said Anderson in a statement. “We are thrilled with the progress of our CHAI workgroups who represent a diversity of perspectives and expertise across the health ecosystem.
“These frameworks for certification and basic transparency are building blocks of responsible health AI. They will help to streamline the path for AI innovation, build trust with patients and clinicians, and position health systems and solution innovators ahead of emerging state and federal regulations.”
Mike Miliard is executive editor of Healthcare IT News
Email the writer: [email protected]
Healthcare IT News is a HIMSS publication.