New AI Rules in Europe Start Today

The European Artificial Intelligence Act, which enters into application today, is the world's first comprehensive law regulating artificial intelligence. It aims to make AI in the EU safe and trustworthy, to protect people's fundamental rights, and to establish a single market for AI across the EU, fostering growth and innovation in the technology.


The Act defines what counts as AI and sets out rules that vary according to how risky each AI system is. For example:

Low Risk: The majority of AI systems, including those already used to recommend movies or filter spam emails, are low-risk. The AI Act imposes no specific obligations on such systems, since they do not seriously affect people's rights or safety. Companies can still voluntarily follow additional guidelines if they wish.

Transparency Risk: AI systems that interact with humans, such as chatbots, must clearly disclose that the user is dealing with a machine. AI-generated content, such as deepfakes or synthetic images, must be labeled so that it is obviously artificial. Users must also be clearly informed when a system uses biometric data (e.g., face recognition) or attempts to recognize emotions.

High Risk: AI systems that could have a major impact on people's lives, such as those used in hiring or in deciding whether someone gets a loan, are considered high-risk. These systems will have to meet strict requirements for accuracy and safety, including the use of high-quality data, detailed record-keeping, and human oversight. Regulatory sandboxes will also be set up so that high-risk AI systems can be developed and tested under controlled conditions.

Unacceptable Risk: AI systems deemed too risky to use will be banned outright. These include systems that manipulate people's behavior in harmful ways, such as toys that encourage dangerous behavior in children. Also prohibited are AI systems used for governmental social scoring and certain forms of predictive policing. Some biometric applications, such as emotion recognition in the workplace, will likewise be banned.


The AI Act also addresses general-purpose AI models: highly capable models deployed for diverse tasks such as generating human-like text. It introduces transparency requirements for these models and measures to manage the large-scale risks the most powerful of them may pose.

Applicability

EU member states must establish, by 2 August 2025, national authorities to apply the AI rules and conduct market surveillance. At the EU level, supervision of the application and enforcement of the rules, in particular for general-purpose AI models, will lie with the AI Office within the European Commission.

Three advisory bodies will support this work:

1. European Artificial Intelligence Board: This board will ensure that the AI Act is applied consistently across all EU member states and will facilitate cooperation between the Commission and the member states.

2. Scientific Panel of Experts: This group of independent experts will provide technical advice and issue alerts to the AI Office about risks arising from general-purpose AI models.

3. Advisory Forum: This forum of diverse stakeholders will provide guidance and feedback on how the AI Act is implemented.

Companies that violate the rules face fines: up to 7% of global annual turnover for the most serious violations, 3% for other infringements, and 1.5% for supplying incorrect information.


Next Steps

Most of the rules laid down in the AI Act will apply from 2 August 2026, although the prohibitions on AI systems posing unacceptable risks will apply six months earlier, and the rules for general-purpose AI models after 12 months.

To help businesses prepare, the European Commission has launched the AI Pact, a voluntary program in which AI developers commit to some of the Act's key obligations ahead of the legal deadlines.

The Commission is also developing guidance on how the Act should be applied in practice and is establishing standards and codes of practice to support implementation. It has issued a call for participation in drafting the first general-purpose AI Code of Practice, and a multi-stakeholder consultation is open for feedback.

Background Information


On 9 December 2023, the European Commission announced the political agreement on the AI Act. On 24 January 2024, the Commission presented measures to support European startups and SMEs in developing trustworthy AI. On 29 May 2024, the AI Office was inaugurated, and on 9 July 2024, new regulations enabled the launch of AI factories, giving access to supercomputers for training general-purpose AI models.

Ongoing research at the Joint Research Centre has been instrumental in shaping the EU's AI policies and supporting their implementation.
