Under Section 1557 of the Affordable Care Act (ACA), all covered entities, including radiologists and radiology practices, are responsible for preventing discrimination in practice.
In May 2024, the Department of Health and Human Services (HHS) published an updated final rule in the Federal Register that adds requirements governing covered entities' use of patient care decision support tools.[i]
The Radiological Society of North America (RSNA) recommended that radiologists and radiology practices using patient care decision support tools (including AI algorithms) ask AI vendors the following questions to ensure they remain compliant with ACA requirements.[ii] Below are Harrison.ai's answers to these questions.
1. Does the AI software fall under the jurisdiction of Section 1557 or any state-based non-discrimination laws?
Harrison.ai Comprehensive Care aligns with Section 1557 of the ACA by ensuring controls are in place to minimize bias that could lead to discrimination. Harrison.ai is committed to transparency in algorithm development and surveillance so that end users of the device can remain compliant. In developing, training, and monitoring its AI models, Harrison.ai takes steps to meet all FDA requirements to minimize bias and to monitor for performance changes that could adversely affect patients through any form of bias.
2. Does the software consider any input variables protected under Section 1557 or state non-discrimination laws, such as race, color, national origin, sex, age, or disability? If yes, please state which variables and how they are used in the tool’s decision-making process.
Harrison.ai Comprehensive Care analyzes all conformant cases regardless of race, color, national origin, or disability. The software is intended for patients 22 years of age and older, so it uses patient age as an input to ensure images are analyzed according to the device's intended use.
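For illustration only, the sketch below shows one way an intake step could enforce an age-based conformance check before a study is routed for analysis. The names here (`StudyMetadata`, `MIN_PATIENT_AGE_YEARS`, `is_conformant`) are hypothetical assumptions for this example, not Harrison.ai's actual API, and a real gate would check many more intended-use criteria.

```python
from dataclasses import dataclass

# Assumed intended-use bound: patients 22 years of age and older.
MIN_PATIENT_AGE_YEARS = 22

@dataclass
class StudyMetadata:
    """Illustrative subset of study metadata (hypothetical)."""
    study_id: str
    patient_age_years: int

def is_conformant(study: StudyMetadata) -> bool:
    """Return True if the study falls within the device's intended use.

    Only the age criterion is modeled here; this is a sketch,
    not a complete conformance check.
    """
    return study.patient_age_years >= MIN_PATIENT_AGE_YEARS

if __name__ == "__main__":
    study = StudyMetadata(study_id="ex-001", patient_age_years=19)
    if not is_conformant(study):
        print(f"Study {study.study_id} is outside the intended use; not analyzed.")
```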
3. What steps does the vendor take to mitigate potential harm to patients in protected groups?
The Harrison.ai Comprehensive Care model:
- Is trained on one of the largest datasets in the world, drawn from a diverse patient population.
- Is trained to detect a comprehensive set of findings, including very rare and challenging ones.
- Has been independently validated on a US population across a range of patient demographics, disease characteristics, and technical factors, and does not show significantly different performance across subgroups (a simplified illustration of subgroup analysis follows this list).
- Is transparent about performance with regard to a range of demographic variables, with information readily available in Harrison.ai Radiology Performance Guides.
- Has been scrutinized by the FDA through the 510(k) program, which includes checks for unintended bias.
- Is used globally across a range of populations, with ongoing research and post-market surveillance to monitor effectiveness across these populations.
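As a purely illustrative aside, subgroup validation of this kind generally means computing the same performance metric within each demographic stratum and comparing it with the overall result. The sketch below computes sensitivity per subgroup for a binary finding; the data and column names (`ground_truth`, `model_prediction`, `sex`) are hypothetical and do not reflect Harrison.ai's actual validation code or results.

```python
import pandas as pd

def sensitivity(df: pd.DataFrame) -> float:
    """Sensitivity (recall) for a binary finding: TP / (TP + FN)."""
    positives = df[df["ground_truth"] == 1]
    if positives.empty:
        return float("nan")
    return float((positives["model_prediction"] == 1).mean())

def subgroup_sensitivity(df: pd.DataFrame, group_col: str) -> dict:
    """Sensitivity within each level of a demographic column."""
    return {group: sensitivity(sub) for group, sub in df.groupby(group_col)}

if __name__ == "__main__":
    # Hypothetical per-case validation results (illustrative only).
    cases = pd.DataFrame({
        "ground_truth":     [1, 1, 0, 1, 1, 0, 1, 0],
        "model_prediction": [1, 0, 0, 1, 1, 0, 1, 0],
        "sex":              ["F", "F", "F", "M", "M", "M", "F", "M"],
    })
    print("Overall sensitivity:", sensitivity(cases))
    print("By subgroup:", subgroup_sensitivity(cases, "sex"))
```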
4. Does the vendor audit software performance to ensure it does not inadvertently discriminate against protected groups? If yes, what are the frequency and criteria of such audits?
Harrison.ai monitors model performance on an ongoing basis by:
- Running local validation studies
- Collecting feedback proactively and passively
- Monitoring customer complaints
- Conducting performance investigations
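To make the idea of an ongoing audit concrete, here is a minimal, hypothetical sketch of a drift check that flags any subgroup whose current sensitivity falls more than a chosen margin below its validation baseline. The baseline values, alert margin, and function names are illustrative assumptions, not Harrison.ai's actual monitoring criteria or audit frequency.

```python
# Hypothetical baseline sensitivities from a validation study.
BASELINE_SENSITIVITY = {"F": 0.91, "M": 0.90}
ALERT_MARGIN = 0.05  # assumed threshold: flag drops larger than 5 points

def audit_subgroups(current: dict) -> list:
    """Return subgroups whose current sensitivity has drifted below baseline."""
    flagged = []
    for group, baseline in BASELINE_SENSITIVITY.items():
        observed = current.get(group)
        if observed is not None and baseline - observed > ALERT_MARGIN:
            flagged.append(group)
    return flagged

if __name__ == "__main__":
    # Hypothetical metrics from the most recent monitoring window.
    print(audit_subgroups({"F": 0.84, "M": 0.91}))  # -> ['F']
```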
5. How does the vendor ensure transparency around non-discrimination compliance?
Harrison.ai Comprehensive Care software meets all FDA requirements to demonstrate generalizability to the intended population. Subgroup performance data are submitted to and reviewed by the FDA and are readily available in the performance guides.
6. Does the vendor provide training to its staff and clients on non-discrimination and best practices in healthcare software?
Harrison.ai programmers and software validation teams are trained on, and follow, the Good Machine Learning Practice for Medical Device Development: Guiding Principles published by the FDA. These principles include ensuring that models are trained on data representative of the intended population and that bias and discrimination are minimized.
Harrison.ai is also a member of AdvaMed and follows its Code of Ethics and Principles on Health Equity.
Harrison.ai trains all users on the intended use of the software and solicits feedback where the device is not performing as intended. This allows Harrison.ai to track performance across locations and use cases.
Harrison.ai is committed to ensuring controls are in place to minimize bias, maintaining transparency in model performance and monitoring, and continuing efforts to identify areas where we can mitigate potential discrimination.