
BRUSSELS, May 21 (Reuters) – European Union policymakers on Tuesday endorsed the world’s first comprehensive set of rules governing artificial intelligence (AI), covering tools such as ChatGPT as well as biometric surveillance. The rules are expected to come into force next month.

Here are the key points of the AI Act:

HIGH-RISK SYSTEMS

So-called high-risk AI systems – those deemed to have significant potential to harm health, safety, fundamental rights, the environment, democracy, elections and the rule of law – will have to meet a set of requirements and obligations, such as undergoing a fundamental rights impact assessment, before they can be placed on the EU market.

AI systems considered to pose limited risks will be subject to very light transparency obligations, such as disclosure labels declaring that content is AI-generated, allowing users to decide how to use it.

USE OF AI IN LAW ENFORCEMENT

The use of real-time remote biometric identification systems in public spaces by law enforcement will only be allowed to help identify victims of kidnapping, human trafficking, sexual exploitation, and to prevent a specific and present terrorist threat.


They will also be permitted in efforts to track down people suspected of terrorism offences, trafficking, sexual exploitation, murder, kidnapping, rape, armed robbery, participation in a criminal organisation and environmental crime.

GENERAL PURPOSE AI SYSTEMS (GPAI) AND FOUNDATION MODELS

GPAI and foundation models will be subject to lighter transparency requirements such as drawing up technical documentation, complying with EU copyright law and disseminating detailed summaries about the content used for algorithm training.

Foundation models classed as posing a systemic risk and high-impact GPAI will have to conduct model evaluations, assess and mitigate risks, conduct adversarial testing, report to the European Commission on serious incidents, ensure cybersecurity and report on their energy efficiency.

Until harmonised EU standards are published, GPAIs with systemic risks may rely on codes of practice to comply with the regulation.

PROHIBITED AI


The regulation bars the following:

– Biometric categorisation systems that use sensitive characteristics such as political, religious or philosophical beliefs, sexual orientation and race.

– Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases.

– Emotion recognition in the workplace and educational institutions.

– Social scoring based on social behaviour or personal characteristics.

– AI systems that manipulate human behaviour to circumvent people’s free will.


– AI used to exploit the vulnerabilities of people due to their age, disability, social or economic situation.

WHO ENFORCES THE AI ACT

An AI Office within the European Commission will enforce the rules, while an AI Board made up of representatives from EU countries will assist the Commission and member states in applying the new legislation.

SANCTIONS FOR VIOLATIONS

Depending on the infringement and the size of the company involved, fines will start at 7.5 million euros ($8 million) or 1.5% of global annual turnover, and rise to as much as 35 million euros or 7% of global annual turnover.

(Reporting by Foo Yun Chee, Editing by Helen Popper and Ed Osmond)
