Today the European Artificial Intelligence Act (AI Act), the world’s first comprehensive regulation on artificial intelligence, enters into force. The AI Act is designed to ensure that AI developed and used in the EU is trustworthy, with safeguards to protect people’s fundamental rights. The Regulation aims to establish a harmonised internal market for AI in the EU, encouraging the uptake of this technology and creating a favourable environment for innovation and investment.
The AI Act introduces a forward-looking definition of AI, based on product safety and a risk-based approach in the EU:
Minimal risk: Most AI systems, such as AI-enabled recommender systems and spam filters, fall into this category. These systems are not subject to any obligations under the AI Act due to their minimal risk to citizens’ rights and safety. Companies may voluntarily adopt additional codes of conduct.

Specific transparency risk: AI systems such as chatbots must clearly indicate to users that they are interacting with a machine. Some AI-generated content, including deep fakes, must be labeled as such, and users must be informed when biometric categorization or emotion recognition systems are used. In addition, providers will have to design systems so that synthetic audio, video, text, and image content is marked in a machine-readable format and detectable as artificially generated or manipulated.

High risk: AI systems identified as high risk will need to comply with strict requirements, including risk mitigation systems, high-quality datasets, activity logging, detailed documentation, clear user information, human oversight, and a high level of robustness, accuracy, and cybersecurity. Regulatory sandboxes will facilitate responsible innovation and the development of compliant AI systems. Examples of high-risk AI systems include AI systems used for recruitment, to assess whether someone is eligible for a loan, or to operate autonomous robots.

Unacceptable risk: AI systems that are deemed to pose a clear threat to people’s fundamental rights will be banned. This includes AI systems or applications that manipulate human behavior to circumvent users’ free will, such as voice-assisted toys that encourage dangerous behavior by minors, systems that enable “social scoring” by governments or companies, and some predictive policing applications.
In addition, certain uses of biometric systems will be prohibited, such as emotion recognition systems used in the workplace and certain systems for categorizing people or for real-time remote biometric identification for law enforcement purposes in publicly accessible spaces (with some exceptions).
To complement this system, the AI Act also introduces rules for so-called general-purpose AI models, which are high-performance AI models designed to perform a wide variety of tasks, such as generating human-like text. General-purpose AI models are increasingly used as components of AI applications. The AI Act will ensure transparency along the value chain and address potential systemic risks of the best-performing models.
Application and enforcement of the AI rules
Member States have until 2 August 2025 to designate national competent authorities, which will oversee the application of the rules on AI systems and carry out market surveillance activities. The Commission’s AI Office will be the main implementing body of the AI Act at EU level, as well as the body responsible for enforcing the rules on general-purpose AI models.
Three advisory bodies will support the implementation of the rules. The European Artificial Intelligence Board will ensure uniform application of the AI Act across EU Member States and will act as the main body for cooperation between the Commission and Member States. A Scientific Panel of independent experts will provide technical advice and input on enforcement. This panel can notably issue alerts to the AI Office on risks associated with general-purpose AI models. The AI Office can also receive advice from an Advisory Forum, composed of a diverse set of stakeholders.
Companies that fail to comply with the rules will be penalized. Fines could be up to 7% of global annual turnover for violations of prohibited AI applications, up to 3% for violations of other obligations, and up to 1.5% for providing incorrect information.
Next steps
The majority of the rules of the AI Act will come into force on 2 August 2026. However, the bans on AI systems deemed to pose an unacceptable risk will apply after six months, while the rules on so-called general-purpose AI models will apply after 12 months.
To bridge the transition period before full implementation, the Commission launched the AI Pact. This initiative invites AI developers to voluntarily adopt the main obligations of the AI Act before the legal deadlines.
The Commission is also developing guidelines to define and detail how the AI Act should be implemented and to facilitate co-regulatory instruments such as standards and codes of practice. The Commission has opened a call for expressions of interest to participate in the development of the first general-purpose AI code of practice, as well as a multi-stakeholder consultation to give all stakeholders the opportunity to have their say on the first code of practice under the AI Act.
Background
On 9 December 2023, the Commission welcomed the political agreement on the AI Act. On 24 January 2024, the Commission launched a package of measures to support European startups and SMEs in developing trustworthy AI. On 29 May 2024, the Commission unveiled the AI Office. On 9 July 2024, the amended EuroHPC JU Regulation entered into force, enabling the establishment of AI factories. This allows the use of dedicated AI supercomputers for training general-purpose AI (GPAI) models.
Continued independent and evidence-based research produced by the Joint Research Centre (JRC) has played a fundamental role in shaping EU AI policies and ensuring their effective implementation.
Originally published in The European Times.