Being in compliance means thinking beyond the current AI model and continuously checking that AI systems function as intended across various scenarios, keeping in mind the different human attributes and how they overlap with one another. One way to do this is by creating and using a compliance checklist.
We have so far published two articles on the topic of AI compliance: one exploring the need for compliance in the AI healthcare industry, and another suggesting ways to achieve it. Because AI impacts lives, many partnerships among major hospitals and educational and governmental institutions have recently been formed to ensure security, privacy, and safety when implementing AI in healthcare, and to tackle AI bias. Those two articles should, hopefully, have convinced you to take steps toward ensuring AI compliance within your own organization. This final article in the series will help you get started and stay up to date with the ever-changing AI regulation landscape and its respective compliance checks.
Where to check for updates on AI compliance
There are many different frameworks and approaches for staying in compliance, depending on your values and principles of fairness and ethics, and/or on specific national AI strategies. The following sources, relevant to industry leaders across sectors, are worth checking for recent news.
- UK’s national AI strategy, an official site of the UK government, sharing the latest publications. (Link)
- The official site of the European Union (Link)
- Microsoft’s AI Fairness Checklists (Link)
- IEEE’s Ethics in action website (Link)
- Ada Lovelace Institute (Link)
Compliance is a long-term commitment
Regulations within the AI industry change constantly, which pressures healthcare AI builders to commit to continuous updates and retraining of AI models. Depending on the nature of the data sets and machine learning models, they need to be monitored and adapted periodically to stay in compliance.
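The monitoring commitment described above can be automated in part. As a minimal sketch, assuming an organization tracks a baseline metric for each deployed model, the snippet below flags a model for review when its current performance drifts beyond a chosen tolerance; the function names and the 0.05 tolerance are illustrative, not prescribed by any regulation.

```python
from dataclasses import dataclass

@dataclass
class MonitoringResult:
    metric: str
    baseline: float
    current: float
    within_tolerance: bool

def check_performance_drift(baseline: float, current: float,
                            tolerance: float = 0.05,
                            metric: str = "accuracy") -> MonitoringResult:
    """Flag a model for review when a metric drops more than
    `tolerance` below its recorded baseline value."""
    within = (baseline - current) <= tolerance
    return MonitoringResult(metric, baseline, current, within)

# Example: a model whose validation accuracy has drifted from 0.91 to 0.84
result = check_performance_drift(baseline=0.91, current=0.84)
print(result.within_tolerance)  # a drop of 0.07 exceeds the 0.05 tolerance
```

In practice such a check would run on a schedule (for example, after each batch of new patient data) and open a review ticket rather than just print a flag.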
There are many different angles on maintaining compliance, whether with proposed and ever-changing regulations, with ethical and responsible-AI principles, or both.
For instance, the World Health Organization released the policy brief "Ageism in artificial intelligence for health," suggesting how creators of AI healthcare systems can make AI within healthcare more equitable and mitigate age bias in AI health technologies.
As AI systems are a product of their algorithms, they can draw ageist conclusions if the data that feeds those algorithms is skewed toward younger individuals. The first steps toward addressing this challenge are identifying ageism and eliminating it from AI's design, development, use, and evaluation. The same applies to other types of bias, such as gender bias, and to adjacent concerns such as AI ethics and data privacy. There are multiple examples of underrepresented groups experiencing higher rates of mistreatment and illness due to specific biases in AI systems. UnitedHealth's Optum AI system showed drastic racial bias, denying care to 46% of qualifying Black patients because of an inaccurate assumption in the algorithm that those who incur the highest costs need crucial care most. Black patients, however, spend less on medical costs per year than white patients, leading the AI to make biased decisions. Also, a 2018 research study assessing racial disparities in AI diagnosis of bipolar disorder determined that people of African ancestry are more often misdiagnosed with a condition other than bipolar disorder compared to people of non-African ancestry. (Check out our case study on AI in healthcare to read more.)
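Disparities like the care-allocation gap described above can be surfaced with a simple group-level audit. The sketch below computes the disparate impact ratio between two groups' approval rates; the patient lists are fabricated for illustration, and the 0.8 "four-fifths rule" threshold is a common heuristic from fairness practice, not a legal standard for healthcare.

```python
def selection_rate(outcomes):
    """Fraction of individuals receiving the favourable outcome (1)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(protected, reference):
    """Ratio of the protected group's selection rate to the reference
    group's. Values below ~0.8 are a common red flag (the
    'four-fifths rule' heuristic)."""
    return selection_rate(protected) / selection_rate(reference)

# Illustrative (fabricated) care-approval decisions: 1 = approved
black_patients = [1, 0, 0, 0, 1, 0, 0, 0, 0, 0]   # 20% approved
white_patients = [1, 1, 0, 1, 0, 1, 0, 1, 0, 0]   # 50% approved

ratio = disparate_impact_ratio(black_patients, white_patients)
print(round(ratio, 2))  # 0.4, far below the 0.8 threshold
```

A real audit would use many more records, confidence intervals, and outcome definitions reviewed by clinicians, but the core comparison is this simple.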
In response to cases like these, governments, academia, and other relevant institutions are forming coalitions to offer possible solutions for mitigating these risks and making AI systems within healthcare compliant with proposed regulations and ethical standards.
Another example is the guidance on the risks associated with AI in healthcare from the Cloud Security Alliance (CSA), highlighting the need to address the security and privacy risks that come with implementing AI-driven technologies. The guidance was published after a scientific study by the University of California, Berkeley found that advances in artificial intelligence have created new threats to the privacy of people's health data.
Overall, being in compliance means investing in strategies to better understand and build trustworthy AI systems, and regularly verifying that they are working as intended.
To conclude, being in compliance means thinking beyond the current AI model and continuously checking that AI systems function as intended across various scenarios, keeping in mind the different human attributes and how they overlap with one another. One way to do this is by creating and using a compliance checklist integrated within the AI systems, ensuring that the organization keeps up with the most recent regulatory changes and mitigates potential bias.
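A compliance checklist of this kind can live alongside the model as executable checks rather than a static document. The sketch below is a hypothetical example: the item names, metadata fields, and thresholds are placeholders an organization would replace with its own regulatory and ethical requirements.

```python
from typing import Callable

# Each check takes the model's metadata and returns True when it passes.
# All item names and metadata keys here are hypothetical placeholders.
ChecklistItem = tuple[str, Callable[[dict], bool]]

CHECKLIST: list[ChecklistItem] = [
    ("bias audit run in the last 90 days",
     lambda m: m.get("days_since_bias_audit", 999) <= 90),
    ("training data documented",
     lambda m: m.get("datasheet_on_file", False)),
    ("model card published",
     lambda m: m.get("model_card_on_file", False)),
]

def run_checklist(metadata: dict) -> list[str]:
    """Return the names of all failed checklist items."""
    return [name for name, check in CHECKLIST if not check(metadata)]

model_metadata = {"days_since_bias_audit": 30,
                  "datasheet_on_file": True,
                  "model_card_on_file": False}
print(run_checklist(model_metadata))  # ['model card published']
```

Running the checklist as part of each deployment pipeline turns "staying up to date" from an intention into a gate that blocks non-compliant releases.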
Check out KOSA’s Responsible AI System, which requires minimal upkeep and ensures your AI system remains secure and compliant. It integrates seamlessly with any data source or model framework and can future-proof your AI.
- Deploying AI in Healthcare: Separating the Hype from the Helpful
- The future of healthcare: Value creation through next-generation business models
- UK NHS pilots AI tool aimed at reducing bias in healthcare datasets
- Easing of regulations for AI-based medical devices to empower domestic market in Japan
- AI in Healthcare Presents Need for Security, Privacy Standards
- Injecting fairness into machine-learning models