• World
  • Dec 11

EU agrees on landmark deal for AI regulation

• European Union policymakers have agreed on a provisional deal on landmark rules governing the use of artificial intelligence (AI), including governments’ use of AI in biometric surveillance and how to regulate AI systems such as ChatGPT.

• As the first legislative proposal of its kind in the world, the AI Act could set a global standard for AI regulation in other jurisdictions.

• The draft regulation, the so-called AI Act, aims to ensure that AI systems placed on the European market and used in the EU are safe and respect fundamental rights and EU values. The proposal also aims to stimulate investment and innovation in AI in Europe.

• Following the provisional agreement, work will continue at technical level in the coming weeks to finalise the details of the new regulation. The presidency will submit the compromise text to the member states’ representatives for endorsement once this work has been concluded.

• The European Parliament will vote on the AI Act proposals early next year, but any legislation will not take effect until at least 2025.

• Europe’s ambitious AI rules come as companies like OpenAI, in which Microsoft is an investor, continue to discover new uses for their technology, triggering both plaudits and concerns. Google owner Alphabet has launched a new AI model, Gemini, to rival OpenAI.

What is artificial intelligence?

• The term ‘artificial intelligence’ (AI) means a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.

• AI is the use of digital technology to create systems capable of performing tasks commonly thought to require human intelligence.

• AI is expected to change the way we work and live. In view of its positive impact on the economy, the technology is being embraced by countries across the world. Its proliferation is regarded as part of the fourth industrial revolution.

• AI presents enormous global opportunities: it has the potential to transform and enhance human well-being, peace and prosperity.

• AI promises to transform nearly every aspect of our economy and society. The opportunities are transformational — advancing drug discovery, making transport safer and cleaner, improving public services, speeding up and improving diagnosis and treatment of diseases like cancer and much more.

• However, AI should be designed, developed, deployed, and used in a manner that is safe, human-centric, trustworthy, and responsible.

• The opportunities are vast, and there is great potential for increasing the productivity of workers of all kinds. 

• However, these huge opportunities come with risks that could threaten global stability and undermine our values. 

• AI poses risks in ways that do not respect national boundaries. It is important that governments, academia, businesses, and civil society work together to navigate these risks, which are complex and hard to predict, to mitigate the potential dangers and ensure AI benefits society.

The main elements of the EU’s AI Act: 

i) Rules on high-impact general purpose AI models that can cause systemic risk in the future, as well as on high-risk AI systems.

ii) A revised system of governance with some enforcement powers at EU level, and an extension of the list of prohibitions, but with the possibility for law enforcement authorities to use remote biometric identification in public spaces, subject to safeguards.

iii) Better protection of rights through the obligation for deployers of high-risk AI systems to conduct a fundamental rights impact assessment prior to putting an AI system into use.

• The regulation does not apply to areas outside the scope of EU law. It will not affect member states’ competences in national security or any entity entrusted with tasks in this area.

• The AI Act will not apply to systems which are used exclusively for military or defence purposes. 

• Similarly, the agreement provides that the regulation would not apply to AI systems used for the sole purpose of research and innovation, or for people using AI for non-professional reasons. 

• For some uses of AI, risk is deemed unacceptable and, therefore, these systems will be banned from the EU. The provisional agreement bans, for example, cognitive behavioural manipulation, the untargeted scraping of facial images from the Internet or CCTV footage, emotion recognition in the workplace and educational institutions, social scoring, biometric categorisation to infer sensitive data, such as sexual orientation or religious beliefs, and some cases of predictive policing for individuals.

Law enforcement exceptions

• Considering the specificities of law enforcement authorities and the need to preserve their ability to use AI in their vital work, several changes to the proposal were agreed relating to the use of AI systems for law enforcement purposes.

• Subject to appropriate safeguards, these changes are meant to reflect the need to respect the confidentiality of sensitive operational data in relation to their activities. 

• For example, an emergency procedure was introduced allowing law enforcement agencies, in cases of urgency, to deploy a high-risk AI tool that has not passed the conformity assessment procedure.

• However, a specific mechanism has also been introduced to ensure that fundamental rights will be sufficiently protected against any potential misuse of AI systems.

• Moreover, as regards the use of real-time remote biometric identification systems in publicly accessible spaces, the provisional agreement clarifies the objectives where such use is strictly necessary for law enforcement purposes and for which law enforcement authorities should therefore be exceptionally allowed to use such systems. 

• The compromise agreement provides for additional safeguards and limits these exceptions to cases of victims of certain crimes, prevention of genuine, present, or foreseeable threats, such as terrorist attacks, and searches for people suspected of the most serious crimes.

Penalties

• The fines for violations of the AI Act were set as a percentage of the offending company’s global annual turnover in the previous financial year or a predetermined amount, whichever is higher. This would be €35 million or 7 per cent of turnover for violations involving banned AI applications, €15 million or 3 per cent for violations of the AI Act’s other obligations, and €7.5 million or 1.5 per cent for the supply of incorrect information.
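The “whichever is higher” rule above can be sketched as a simple calculation. This is an illustrative sketch only, using the fixed amounts and percentages quoted in the text; the tier names are made up for readability and are not terms from the regulation.

```python
def ai_act_fine(annual_turnover_eur: float, fixed_amount_eur: float, pct_of_turnover: float) -> float:
    """Return the higher of a fixed amount and a percentage of global annual turnover."""
    return max(fixed_amount_eur, annual_turnover_eur * pct_of_turnover)

# Tiers quoted in the provisional agreement: (fixed amount in euros, share of turnover)
TIERS = {
    "banned_applications":   (35_000_000, 0.07),
    "other_obligations":     (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.015),
}

# Example: a company with €1 billion turnover violating a banned-application rule.
# 7% of €1bn (€70m) exceeds the €35m fixed amount, so the percentage applies.
fixed, pct = TIERS["banned_applications"]
print(ai_act_fine(1_000_000_000, fixed, pct))  # 70000000.0
```

For a smaller company whose percentage-based figure falls below the fixed amount, the fixed amount would apply instead (though, as the next point notes, the agreement provides more proportionate caps for SMEs and startups).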

• However, the provisional agreement provides for more proportionate caps on administrative fines for Small and Medium-sized Enterprises (SMEs) and startups in case of infringements of the provisions of the AI Act.

• The agreement also makes clear that a natural or legal person may make a complaint to the relevant market surveillance authority concerning non-compliance with the AI Act and may expect that such a complaint will be handled in line with the dedicated procedures of that authority.

Additional read:

What are deepfakes and synthetic media?

Synthetic media is a general term used to describe text, images, videos or voice that have been generated using artificial intelligence (AI). It has the capability to alter the landscape of misinformation. Deepfakes are a particularly concerning type of synthetic media that utilises AI to create believable and highly realistic media.

