The Worldwide Semiannual Artificial Intelligence Tracker predicted that AI revenues, mostly from companies, will climb 16.4% this year alone. And according to a new Accenture report, 97% of AI businesses believe regulation will impact them; they see 'Responsible AI' as the way to embed trust in their systems and as a foundational step toward compliance with the regulations. This article explores the EU and US regulatory approaches and the roles they play in shaping AI regulation worldwide.
Why is regulating AI necessary?
The goal of AI regulation is to ensure that the use of AI does not harm people or society in general. AI regulations are necessary because they provide guidance to the companies and organizations that develop AI, as well as a clear set of rules for the people who work with these technologies.
We have covered so far multiple topics regarding AI regulations:
- Why should you care about AI regulations?
- Breaking down the AI regulations
- EU AI regulations
- AI healthcare regulations
- Medical Devices regulations
As regulations are still evolving, here are the latest updates on the EU AI regulations and US AI regulations, and everything you need to know to catch up.
Latest Updates on EU AI regulations
Beyond the first white paper on the Artificial Intelligence Act, published in April 2021, and the GDPR, the draft EU AI Act was expected to start entering into force in early 2023, followed by a transitional period. What we know so far is that:
- The regulations will apply horizontally across all sectors to providers, users, importers, and distributors of AI systems in the EU — covering AI developers and providers, organizations that use AI systems, and any organization that imports or distributes them.
- It imposes significant financial penalties for non-compliance.
EU Parliament proposes AI roadmap to 2030
To further expand on the rules and regulations and explore AI's impact on the EU economy and society, the EU formed AIDA, the 'Special Committee on Artificial Intelligence', with MEPs analyzing third countries' approaches to AI and charting the road ahead for the EU. So far, the committee has held several discussions, the results of which fed into a final report that aims to establish an AI Roadmap up to 2030. Recommendations from those discussions stress promoting a human-centric and trustworthy approach to AI.
The committee has also published its most recent draft report, identifying policy options that could unlock AI's potential and the benefits it can bring to society as a whole, while still reserving strict regulation for high-risk AI applications only.
"AI developed in the EU should be human-centric and trustworthy," says the report on legislation for the Artificial Intelligence Act presented in May 2022 by the European Parliament.
More proposals for new AI legislation are expected to be put forward in the coming period.
Latest Updates on US AI regulations
The US government recently proposed AI ethical guidelines, and the Department of Defense has issued sector-specific ethical principles, addressing AI issues such as algorithmic accountability, facial recognition, privacy and algorithmic profiling, and transparency.
Similar to the EU, several potential laws and guidelines have been proposed to provide transparency about how data may be used in technologies such as AI:
- The American Data Privacy and Protection Act (ADPPA – HR 8152), introduced this year, would require businesses that design and employ algorithms to conduct an "algorithm design evaluation" to reduce the risk of AI bias and harm.
- The US Equal Employment Opportunity Commission (EEOC) released guidance on how AI hiring tools can discriminate against people with disabilities and may violate existing requirements under Title I of the Americans with Disabilities Act (ADA).
- Also, the US government recently formed the National Artificial Intelligence Advisory Committee (NAIAC) to address ethical AI issues ranging from workforce equity to accountability and algorithmic bias.
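Concerns like the EEOC's about AI hiring tools are often operationalized with an adverse-impact check such as the "four-fifths rule" from US employment selection guidelines. Below is a minimal illustrative sketch of that check in Python; the group names and selection counts are hypothetical, and a real audit involves far more than this single metric.

```python
def selection_rate(selected, total):
    """Fraction of applicants in a group who were selected."""
    return selected / total

def adverse_impact_ratio(group_rate, reference_rate):
    """Ratio of a group's selection rate to the highest group's rate."""
    return group_rate / reference_rate

# Hypothetical screening outcomes from an AI hiring tool:
# group -> (selected, total applicants)
groups = {"group_a": (48, 100), "group_b": (30, 100)}
rates = {g: selection_rate(s, t) for g, (s, t) in groups.items()}
reference = max(rates.values())

for group, rate in rates.items():
    ratio = adverse_impact_ratio(rate, reference)
    # The four-fifths rule flags ratios below 0.8 as evidence of
    # potential adverse impact that warrants further review.
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2f} ratio={ratio:.2f} -> {flag}")
```

Here group_b's ratio is 0.30 / 0.48 ≈ 0.63, below the 0.8 threshold, so the tool's output for that group would be flagged for review.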
How are the EU and US catching up?
The European Union has been regulating AI for the past few years, but the United States is not far behind.
- The European Union has a more hands-on approach to regulation, which will hopefully help shape and steer the development of artificial intelligence in a way that benefits all of society. The United States, on the other hand, has not yet released any regulations on the use of AI, though there are proposed bills that aim to develop a national strategy for artificial intelligence technology.
- The EU General Data Protection Regulation (GDPR) is a set of regulations governing the use and storage of personal data. The GDPR was created to give people more control over their information and to simplify the regulatory environment for businesses. The US Congress has not passed any laws specifically for AI, but some existing laws regulate how companies can collect and use personal data.
Proposed guidelines and regulations are the first step toward ensuring AI is used responsibly and for the greater good. Both the EU and US are currently going through a detailed legislative process, during which the proposals are likely to be amended, and the rules are unlikely to become binding law for another two or three years. Even once they become binding, there will likely be a grace period of 24–36 months before the main requirements come into force.
As AI continues to grow, the need to ensure AI solutions are responsible will grow with it, even though official regulations may not yet be in place. Nevertheless, companies should start considering how they will address their algorithms' effect on people, mitigate potential risks, stay in compliance, and prepare for the upcoming regulations.
Having an external, independent auditor conduct the AI software design evaluation and impact assessment
As regulators are still working out the specific standards and rules for the development and use of AI, taking the necessary compliance steps beforehand and being responsible by design is the number one priority for AI builders and users — and a source of competitive advantage.
As new requirements emerge, these are the steps KOSA AI can take to prepare you for compliance with the regulations:
- Conduct an impact or risk assessment of how the Draft Regulation might affect your business
- Ensure compliance with the Draft Regulation's requirements to future-proof your AI systems, e.g. by advising on the design of your AI systems, creating protocols or checklists, and assisting in drafting and collating the required information
- Embed AI explainability into your software
- Future-proof your AI against current and potential unwanted bias
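The steps above can be sketched as a simple pre-deployment checklist. This is an illustrative sketch only: the field names, risk levels, and checks are assumptions for demonstration, not KOSA AI's actual process or the Draft Regulation's text.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemAssessment:
    """Illustrative pre-deployment compliance checklist (hypothetical fields)."""
    name: str
    risk_level: str                      # e.g. "minimal", "limited", "high"
    impact_assessment_done: bool = False
    documentation_collated: bool = False
    explainability_embedded: bool = False
    bias_audit_passed: bool = False
    issues: list = field(default_factory=list)

    def review(self):
        """Collect open compliance items; return True if none remain."""
        if not self.impact_assessment_done:
            self.issues.append("run an impact/risk assessment")
        if self.risk_level == "high" and not self.documentation_collated:
            self.issues.append("collate required technical documentation")
        if not self.explainability_embedded:
            self.issues.append("embed explainability into the software")
        if not self.bias_audit_passed:
            self.issues.append("audit and mitigate unwanted bias")
        return not self.issues

system = AISystemAssessment(name="loan-scoring-model", risk_level="high",
                            impact_assessment_done=True)
ready = system.review()
print("ready for deployment:", ready)
print("open items:", system.issues)
```

A review like this surfaces open items (missing documentation, explainability, bias audit) before launch rather than after a regulator asks for them.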
From a strategic business perspective, KOSA AI partners with you in assessing your technology’s impact and helps you avoid regulatory and ethical pitfalls before, during, and after a product’s launch.