Especially for European companies, it is worth mentioning the EU AI Act: a regulation that entered into force in 2024 and has been strongly criticised by many industry experts for restricting European companies' room for manoeuvre in AI. The risk, critics argue, is a competitive disadvantage for local players compared with companies based in regions subject to fewer restrictions (essentially, the rest of the world).
The EU Artificial Intelligence Regulation is, however, a pioneering legislative framework designed to regulate the use, development and distribution of artificial intelligence within the European Union. With the aim of creating a safe and ethical environment for AI, it positions the EU as a global leader in trustworthy AI practices. The regulation sets harmonised rules that ensure AI systems are safe, respect fundamental rights, and encourage innovation and investment within the EU single market.
Here are some of its main aspects:
- Who is involved? All parties that develop, distribute, import or use AI, including entities outside the EU that target this market.
- Requirements for model management: Companies must compile an inventory of their AI models, assess them for compliance, and classify them according to risk.
- Risk classifications:
  - Unacceptable risk: Prohibited applications (e.g. social scoring as in China).
  - High risk: Allowed but subject to strict compliance and data management requirements (e.g. credit scoring, biometric systems).
  - Minimal/limited risk: Transparency requirements (e.g. AI-generated content).
- General-purpose AI (GPAI): The regulation includes specific guidelines for GPAI systems, with differentiated compliance obligations based on systemic risk.
- Penalties: Severe fines for non-compliance, scaled by the seriousness of the violation: up to €35 million or 7% of global annual turnover for the gravest breaches, with lower tiers down to €7.5 million for lesser ones.
- Impact and preparation: Organisations must assess their existing AI practices, implement model management, design ethical AI systems, and prepare to avoid penalties. The financial sector, which makes extensive use of AI for critical operations, must be particularly proactive. The regulation has been in force since mid-2024, with compliance deadlines staggered over a period of up to 36 months.
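The inventory-and-classification requirement described above can be sketched as a simple data structure. This is an illustrative sketch only, not legal tooling: the tier names mirror the regulation's risk categories, but the model names, obligation summaries, and `compliance_summary` helper are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited uses (e.g. social scoring)
    HIGH = "high"                  # strict compliance (e.g. credit scoring)
    LIMITED = "limited"            # transparency duties (e.g. AI-generated content)
    MINIMAL = "minimal"            # no specific obligations

@dataclass
class AIModel:
    name: str
    purpose: str
    tier: RiskTier

# Hypothetical one-line summaries of the obligations attached to each tier.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "prohibited: must not be placed on the EU market",
    RiskTier.HIGH: "conformity assessment, data governance, human oversight",
    RiskTier.LIMITED: "transparency (e.g. label AI-generated content)",
    RiskTier.MINIMAL: "no mandatory requirements",
}

def compliance_summary(inventory: list[AIModel]) -> dict[str, str]:
    """Map each model in the inventory to its headline obligation."""
    return {m.name: OBLIGATIONS[m.tier] for m in inventory}

inventory = [
    AIModel("credit-scorer", "loan decisions", RiskTier.HIGH),
    AIModel("marketing-copy-bot", "content generation", RiskTier.LIMITED),
]
print(compliance_summary(inventory))
```

A real compliance inventory would of course carry far more metadata (training data provenance, deployment context, conformity assessment status), but the core exercise the regulation asks for is exactly this: enumerate the models, assign each a risk tier, and derive the obligations from the tier.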
Despite the numerous criticisms, the more optimistic observers argue that the EU AI Regulation represents an important step towards the responsible use of AI, balancing innovation with the protection of users’ rights.
