Will Paraskeva

Regulating the future: What is the EU’s new AI Act?

On the 6th of December, the EU Council agreed upon the strictest AI regulation seen to date, particularly in comparison to the rules introduced under the Biden administration in the US. Uniquely, the EU has adopted a risk-based approach to safeguard humanity from the perils of rapidly advancing technology. But is it enough to satisfy the growing need for governance in this sector?


The EU AI Act comprises numerous rules designed to prevent AI firms from developing technologies that could infringe on human rights. Firstly, the Council has adopted a risk-based approach, meaning systems that pose a higher risk face stricter regulation. As such, developers of potentially dangerous AI programmes must conduct a “fundamental rights impact assessment” before deploying them. More severely, uses of AI deemed too risky are simply banned from the EU. AI used for emotion recognition in the workplace, social scoring (as in mainland China) and biometric categorisation to infer sensitive data are all prohibited under the new act. Ideally, this legislation will ensure the safety of AI while still enabling firms to continue innovating with this powerful technology.


The rules don’t stop there, though: under the act, certain companies will be subject to strict record-keeping and transparency requirements, including registration on an EU public database to minimise risks. As a result, tech powerhouses such as Google (which has been formally charged three times by the EU in antitrust cases) and OpenAI will be required to keep EU regulators informed about how they train their generative AI models and the data they use to do so.


The EU isn’t afraid to enforce this act either, with fines ranging from €7.5 million to €35 million depending on the severity of a company’s actions. This is notably different from the comparatively toothless AI Executive Order released under the Biden administration, which requires companies building the most advanced AI systems to perform safety tests through a practice called “red teaming”. Though the order does place some rules on generative AI in America, a senior Biden administration official noted that it has a “very high threshold” for which models are covered, ultimately meaning that most systems on the market can evade it. Furthermore, without significant action from Congress, stiffer AI legislation in the US remains unlikely.


Though stronger than its American counterpart, the EU’s act has some teething issues of its own. Firstly, the act will not be enforceable until 2025 at the earliest; in a field advancing as rapidly as AI, that is a significant wait. Additionally, while some feel that the risk-based approach prevents harmless AI developers from being suffocated by red tape, other critics argue that it would be better to require risk assessments of all firms in the industry.

Ultimately, the effectiveness of this act in practice remains to be seen. Will it crush the inventive spirit of the AI sphere, or will it save humanity from an impending doom? In the meantime, the sector will continue to pose ever-greater regulatory challenges for administrations across the world.
