Why You Should Treat the EU AI Act as a Foundation, Not an Aspiration
- By Martin Gill, Forrester
- September 16, 2024
The European Union Artificial Intelligence Act is here. It’s intended to regulate a matter of unprecedented complexity: ensuring that firms use AI in a safe, trustworthy, and human-centric manner. A rapid enforcement schedule and hefty fines for noncompliance mean that every company that deals with any form of AI should make it a priority to understand this landmark legislation. At the highest level, the EU AI Act:
- Has strong extraterritorial reach. Much like the GDPR, the EU AI Act applies to private and public entities operating in the EU and those supplying AI systems or general-purpose AI (GPAI) models to the EU, regardless of where they’re headquartered.
- Applies differently to different AI actors. The EU AI Act assigns different obligations to actors across the AI value chain, defining roles like GPAI model providers, deployers (i.e., users), manufacturers, and importers.
- Embraces a pyramid-structured, risk-based approach. The higher the risk of a use case, the more requirements your firm must comply with and the stricter the enforcement of those requirements will be. As the risk associated with a use case decreases, so do the number and complexity of the requirements your company must follow.
- Includes fines with teeth. Not all violations are created equal, and neither are the fines. Noncompliance with the Act’s requirements can cost large organizations up to EUR 15 million or 3% of global annual turnover, whichever is higher. Fines for violating the requirements on prohibited use cases are even higher: up to EUR 35 million or 7% of global annual turnover. (A minimal sketch of these tiers and caps follows this list.)
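To make the pyramid concrete, here is a minimal Python sketch of the four risk tiers and the two fine caps cited above. It is illustrative only, not legal advice: the tier names, the `max_fine_eur` helper, and the example turnover figure are our own simplifications, and the Act itself determines which cap applies to a given violation.

```python
from enum import Enum

# The Act's pyramid: obligations scale with the risk of the use case.
class RiskTier(Enum):
    UNACCEPTABLE = "prohibited practices (banned outright)"
    HIGH = "high risk (strictest requirements)"
    LIMITED = "limited risk (mainly transparency obligations)"
    MINIMAL = "minimal risk (few or no additional requirements)"

def max_fine_eur(global_turnover_eur: float, prohibited_practice: bool) -> float:
    """Upper bound on a fine: a flat cap or a share of global annual
    turnover, whichever is higher (the two tiers cited in the article)."""
    if prohibited_practice:
        return max(35_000_000, 0.07 * global_turnover_eur)
    return max(15_000_000, 0.03 * global_turnover_eur)

# Worked example: a hypothetical firm with EUR 2 billion in global turnover.
turnover = 2_000_000_000
print(f"Prohibited practice: up to EUR {max_fine_eur(turnover, True):,.0f}")   # 140,000,000
print(f"Other noncompliance: up to EUR {max_fine_eur(turnover, False):,.0f}")  # 60,000,000
```

For a large firm, the percentage cap quickly dwarfs the flat cap, which is exactly why the fines have teeth.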
Treat the Act as the foundation, not the ceiling
If we expect customers and employees to use the AI experiences we build, we have to create the right conditions to engender trust. It’s easy to think of trust as nebulous, but it can be defined in tangible, actionable terms. Trust is:
The confidence in the high probability that a person or organization will spark a specific positive outcome in a relationship.
We’ve identified seven levers of trust, from accountability and consistency to empathy and transparency.
The EU AI Act leans heavily into the development of trustworthy AI, and the EU’s 2019 Ethics Guidelines for Trustworthy AI lay out a solid set of principles to follow. Together, they form a framework for creating trustworthy AI, built on familiar principles like human agency and oversight, transparency, and accountability.
But legislation is a minimum standard, not a best practice. Building trust with consumers and users will be key to the success of the AI experiences you develop. For firms operating within the EU, and even those outside it, following the risk categorization and governance recommendations that the EU AI Act lays out is a robust, risk-oriented approach. At a minimum, it will help you create safe, trustworthy, and human-centric AI experiences that cause no harm and avoid costly or embarrassing missteps; ideally, it will also drive efficiency and differentiation.
Get started now
There’s a lot to do, but at a minimum:
- Build an AI compliance task force. AI compliance starts with people. Regardless of what you call it — AI committee, AI council, AI task force, or simply AI team — create a multidisciplinary team to guide your firm along the compliance journey. Look to firms such as Vodafone for inspiration.
- Determine your role in the AI value chain for each AI system and GPAI model. Is your firm a provider, a product manufacturer embedding AI in its products, or a deployer (i.e., user) of AI systems? In a perfect world, matching requirements to your firm’s specific role would be a straightforward exercise; in practice, it’s complex.
- Develop a risk-based methodology and taxonomy for classifying AI systems. The EU AI Act is a natural starting point for compliance, but consider going beyond the Act and applying the NIST AI Risk Management Framework and the new ISO/IEC 42001 standard. A simple inventory sketch follows this list.
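As a starting point for such a taxonomy, here is a minimal Python sketch of an AI system inventory that records each system’s value-chain role, risk tier, and the frameworks it is assessed against. Everything here is illustrative: the `AISystemRecord` structure, the example systems, and the `needs_priority_review` rule are our own assumptions, not the Act’s classification logic, which depends on the Act’s annexes and your legal team’s analysis.

```python
from dataclasses import dataclass, field
from enum import Enum

class Role(Enum):
    PROVIDER = "provider"
    GPAI_PROVIDER = "GPAI model provider"
    DEPLOYER = "deployer (user)"
    MANUFACTURER = "product manufacturer"
    IMPORTER = "importer"

class RiskTier(Enum):
    UNACCEPTABLE = 4  # prohibited practices
    HIGH = 3
    LIMITED = 2
    MINIMAL = 1

@dataclass
class AISystemRecord:
    name: str
    role: Role         # your firm's role for this system
    tier: RiskTier     # classification under your methodology
    frameworks: list = field(default_factory=lambda: ["EU AI Act"])

# A hypothetical inventory; real classifications require legal review.
inventory = [
    AISystemRecord("CV-screening assistant", Role.DEPLOYER, RiskTier.HIGH,
                   ["EU AI Act", "NIST AI RMF", "ISO/IEC 42001"]),
    AISystemRecord("Marketing copy generator", Role.DEPLOYER, RiskTier.MINIMAL),
]

def needs_priority_review(record: AISystemRecord) -> bool:
    """Flag systems in the top tiers of the pyramid for immediate attention."""
    return record.tier in (RiskTier.UNACCEPTABLE, RiskTier.HIGH)

for rec in inventory:
    flag = "REVIEW" if needs_priority_review(rec) else "monitor"
    print(f"{rec.name}: {rec.role.value}, {rec.tier.name} -> {flag}")
```

Even a lightweight inventory like this gives the compliance task force a tractable first deliverable: knowing what AI the firm runs, in what role, and where each system sits on the risk pyramid.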
Read our latest report to learn more about how to approach the Act, or for help, book a guidance session.
The views and opinions expressed in this article are those of the author and do not necessarily reflect those of CDOTrends.
Martin Gill, Forrester
Martin Gill is a vice president and research director at Forrester. He leads a Europe-based research team focused on the intersection of customer experience (CX), brand, and privacy, examining how CX leaders, CMOs, and security and privacy professionals must respond to the changing nature of today’s empowered consumers.