The inside story of the EU's AI Act from the person who wrote it - with Gabriele Mazzini (Part 2)
- rs1499
We sat down with Gabriele Mazzini, architect and lead author of the EU AI Act, and currently Research Affiliate and Fellow at MIT, to get his reflections on the Act.
Below is a write-up in Gabriele's words. It's Part 2 of our discussion; Part 1 was published a couple of weeks ago and is linked here.
We hope you enjoy!
--
As someone closely involved in the drafting and development of the EU Artificial Intelligence Act, I’ve spent a significant amount of time grappling with one central question: how do we regulate something as transformative, versatile, and fast-evolving as artificial intelligence without stifling the very innovation we hope to cultivate?
The AI Act is, at its core, a response to emerging concern around trust in technology. It’s about ensuring that AI systems operating within the European Union meet expected standards of safety and compliance so that they can be relied upon. But as with any attempt to legislate the future, especially a specific technology (or set of technologies), the devil lies in the detail - and in the trade-offs.
One of the most visible features of the Act is its set of prohibitions. Emotion recognition, as we covered last time, often grabs the headlines, but it's not the only one. For example, the Act includes restrictions on biometric categorisation - essentially preventing systems from using biometric data to infer characteristics such as race, political orientation, religious beliefs, or sexual preferences.
On the surface, this makes sense. These are deeply sensitive attributes, and it's easy to imagine the harm that could come from inaccurate or inappropriate inferences. But when you dig a little deeper, you start to see the complications. What if a system needs to detect race in order to identify and mitigate racial bias? What if the inference isn't used to discriminate, but to prevent discrimination? The rule applies regardless of context or outcome. It's a blanket ban - one that may end up limiting the very tools we need to build fairer systems.
This highlights a broader issue with the regulatory approach: a tendency to reach for prohibitions before fully understanding the practical implications. And that, in turn, can chill innovation - particularly among startups and smaller players who lack the resources to confidently navigate an uncertain regulatory environment, or who face a regulatory environment that signals a focus only on risks.
To be clear, the Act does include some provisions aimed at supporting innovation. Articles 57 to 63 lay out a framework for innovation support, most notably the so-called AI sandbox. The sandbox is meant to provide a supervised space for companies - especially SMEs - to test their systems and work towards compliance with guidance from authorities.
In theory, it’s a promising idea. In practice? We’ll have to wait and see.
Will regulators have the capacity to offer meaningful support? Will companies want to invest their time engaging with authorities, especially when that engagement might slow down development, notably at the beginning? And will the sandbox be open and accessible enough to actually serve the companies that need it most? These are all open questions.
There are other supportive measures in the Act too: reduced conformity assessment fees for micro-enterprises, simplified documentation templates, and dedicated communication channels for SMEs. All of these help, but they don’t fundamentally shift the burden of compliance. There’s no threshold-based exemption. No “if your turnover is below X, you're exempt from Y” kind of approach. That could have made a real difference.
Personally, I think that kind of model - a tiered approach to compliance - would have been the most SME-friendly option. Yes, small companies can build harmful systems. But a more nuanced framework could have introduced proportionality without sacrificing safety. The Act, as it stands, assumes a one-size-fits-all level of responsibility. That assumption ends up favouring large, well-resourced companies and makes it harder for others to compete.
From a global perspective, this raises another important issue: competitiveness. Will European AI companies be able to compete on the world stage if they face higher regulatory burdens than their counterparts in the U.S. or Asia?
Interestingly, during the development of the Act, the question of whether companies might relocate due to regulation wasn't a central theme. It came up, yes - but more in passing than in strategy. The dominant thinking was this: trustworthy AI is more likely to be adopted. If we build a regulatory environment that assures people - users, businesses, governments - that the AI systems they encounter are safe and fair, then we increase the chances of uptake. Regulation becomes a catalyst for adoption, not a barrier.
That’s a compelling theory. And it might prove true. But we have to ask: are the obligations we’ve created proportionate? Are they well-calibrated to encourage trust without discouraging experimentation?
Because the truth is, innovation needs space. It needs trial and error. It needs the freedom to test ideas, to fail, to pivot, and try again. Regulation that is too broad, too complex, or too inflexible can quickly become a straitjacket.
Other jurisdictions are taking different approaches. In some places - Brazil, for instance, or certain U.S. states - we've seen governments move to adopt frameworks that mirror the EU's emphasis on trustworthy AI. It shows that the AI Act has ushered in an important debate about the risks of AI and has influence. But in places like Singapore and Japan, the tone is more cautious. There's a greater focus on experimentation and support. Less emphasis on binding rules, more on enabling environments.
These are two very different strategies. Neither is perfect. And there may be others too. But when I compare them, I feel that the EU's model, while grounded in good intentions, has grown too complex, too broad, and too rigid and bureaucratic to adapt. Trustworthy AI is a noble goal, but the path to it must be navigable - especially for those domestic companies that want a fair chance to compete in the space and want to offer solutions reflecting our own culture and perspectives. We should give ourselves the opportunity to embed our values not only in the rules, but also in the technologies we can develop.
If I were advising on AI policy today, I would recommend starting sector by sector. A horizontal approach, one that spans all industries and use cases, has proved a complex exercise to manage with the necessary degree of precision, and it inevitably ends up being too broad. Things might have been more manageable if the scope of application had focused on a very limited set of clearly defined high-risk use cases, while a solid and functioning governance system was built incrementally, testing ideas and refining approaches along the way.
And if we had included real, courageous exemptions for SMEs, we could have built a framework that encourages responsible innovation without drowning early-stage companies in daunting compliance challenges.
I still believe in the value of regulating AI. I still believe that trust and innovation can go hand in hand. But we need to think more critically about how we implement that vision. Because if the cost of doing things the “right” way is too high, we risk losing our most creative minds to jurisdictions that offer more flexibility, more support, and more room to grow.