EUROPE
European Union squares the circle on the world’s first AI rulebook

After a 36-hour negotiating marathon, EU policymakers reached a political agreement on what is set to become the global benchmark for regulating Artificial Intelligence.

The AI Act is a landmark bill to regulate Artificial Intelligence based on its capacity to cause harm. The file crossed the finishing line of the legislative process as the European Commission, Council, and Parliament settled their differences in a so-called trilogue on Friday (8 December).

At the political meeting, which set a new record for interinstitutional negotiations, the main EU institutions had to work through an appalling list of 21 open issues. As Euractiv reported, the first part of the trilogue settled the provisions on open source, foundation models and governance.

However, the exhausted EU officials called for a recess after 22 hours, once it became clear that a proposal from the Spanish EU Council presidency on sensitive law enforcement issues was unacceptable to centre-left lawmakers. The discussions resumed on Friday morning and ended only late at night.

National security

EU countries, led by France, insisted on a broad exemption for any AI system used for military or defence purposes, including by an external contractor. The text's preamble will state that this exemption is in line with the EU treaties.

Prohibited practices

The AI Act includes a list of banned applications that pose an unacceptable risk, such as manipulative techniques, systems exploiting vulnerabilities, and social scoring. MEPs added databases built on the bulk scraping of facial images, such as Clearview AI's.

Parliamentarians obtained a ban on emotion recognition in the workplace and in educational institutions, with a carve-out for safety reasons, for instance, systems meant to recognise when a driver is falling asleep.

Parliamentarians also introduced a ban on predictive policing software used to assess an individual's risk of committing future crimes based on personal traits.

Moreover, parliamentarians wanted to forbid the use of AI systems that categorise persons based on sensitive traits like race, political opinions or religious beliefs.

Upon insistence from European governments, Parliament dropped the ban on the use of real-time remote biometric identification in exchange for some narrow law enforcement exceptions, namely to prevent terrorist attacks or to locate the victims or suspects of a pre-defined list of serious crimes.

Ex-post use of this technology will be subject to a similar regime but with less strict requirements. MEPs pushed for these exceptions to apply only where strictly necessary, based on national legislation and subject to prior authorisation by an independent authority. The Commission is to oversee potential abuses.

Parliamentarians insisted that the bans should apply not only to systems used within the Union but also prevent EU-based companies from selling the prohibited applications abroad. However, this export ban was not retained, as it was considered to lack a sufficient legal basis.

High-risk use cases

The AI regulation includes a list of use cases deemed to pose a significant risk of harm to people's safety and fundamental rights. The co-legislators included a series of filtering conditions meant to capture only genuine high-risk applications.

The sensitive areas include education, employment, critical infrastructure, public services, law enforcement, border control and administration of justice.

MEPs also proposed including the recommender systems of social media platforms deemed 'systemic' under the Digital Services Act, but this idea did not make it into the agreement.

Parliament managed to introduce new use cases, such as AI systems used to predict migration trends and for border surveillance.

Law enforcement exemptions

The Council introduced several exemptions for law enforcement agencies, notably a derogation from the four-eyes principle where national law deems it disproportionate, and the exclusion of sensitive operational data from transparency requirements.

Providers and public bodies using high-risk systems must register them in an EU database. For police and migration control agencies, there will be a dedicated non-public section accessible only to an independent supervisory authority.

In exceptional circumstances related to public security, law enforcement authorities may deploy a high-risk system that has not passed the conformity assessment procedure, provided they request judicial authorisation.

Fundamental rights impact assessment

Centre-left MEPs introduced an obligation for public bodies, and for private entities providing essential public services such as hospitals, schools, banks and insurance companies, to conduct a fundamental rights impact assessment when deploying high-risk systems.

Responsibility along the supply chain

Providers of general-purpose AI systems like ChatGPT must give downstream economic operators that build an application falling into the high-risk category all the information necessary to comply with the AI law's obligations.

Penalties

The administrative fines are set as either a fixed sum or a percentage of the company's annual global turnover, whichever is higher.

For the most severe violations involving prohibited applications, fines can reach 6.5% of turnover or €35 million; violations of the obligations for system and model providers can cost 3% or €15 million; and failing to provide accurate information can cost 1.5% or half a million euros. For illustration, a company with €1 billion in annual turnover deploying a banned application could face up to €65 million, since 6.5% of its turnover exceeds the €35 million floor.

Timeline

The AI Act will apply two years after it enters into force, shortened to six months for the bans. Requirements for high-risk AI systems, powerful AI models, the conformity assessment bodies, and the governance chapter will start applying one year earlier.

[Edited by Zoran Radosavljevic]

Source: Euractiv.com
