
The EU AI Act: Ban on Prohibited AI Systems enters into force

Wednesday, 05 February 2025

The European Union's Artificial Intelligence Act (the "EU AI Act"), which entered into force on 1 August 2024, has a graduated implementation timeline, with the majority of the Regulation due to apply following a two-year implementation period, i.e. from 2 August 2026.

An exception to this, however, relates to the ban on “prohibited AI systems” under Article 5, which took effect on Sunday, 2 February 2025.

Article 5 differs somewhat from the other articles of the Regulation in that it prohibits outright the placing on the market, putting into service or use of the AI systems it defines. Prohibited practices are those said to pose an unacceptable risk to foundational EU values as a result of their potentially negative impacts.

Importantly, in respect of Ireland, it should be noted that recital 40 to the Regulation sets out that several key provisions relevant to Article 5 shall not apply to Ireland. Specifically, it is stated that Ireland is not bound by the rules laid down in Articles 5(1)(g), 5(1)(h)(i), 5(2) and 5(6) where these relate to police and judicial co-operation in criminal matters.

Although it is not clear how this may operate in practice, and Ireland may decide to implement these provisions in any event, this carve-out from the Regulation might afford law enforcement in Ireland flexibility to use, for these purposes, certain AI systems that are not permitted elsewhere in the Union.

Prohibited AI systems are set out in Article 5(1) as follows:

(a) AI systems that deploy subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm;

(b) AI systems that exploit any of the vulnerabilities of a specific group of persons due to their age, physical or mental disability, in order to materially distort the behaviour of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;

(c) AI systems used by public authorities, or on their behalf, for the evaluation or classification of the trustworthiness of natural persons over a certain period of time based on their social behaviour or known or predicted personal or personality characteristics, with the social score leading to either or both of the following:

(i) detrimental or unfavourable treatment of certain natural persons or whole groups thereof in social contexts which are unrelated to the contexts in which the data was originally generated or collected;

(ii) detrimental or unfavourable treatment of certain natural persons or whole groups thereof that is unjustified or disproportionate to their social behaviour or its gravity;

(d) AI systems for making risk assessments of natural persons in order to assess or predict the risk of that person committing a criminal offence based solely on the profiling of a person or an assessment of their personality traits and characteristics, other than systems that support human assessment of a person’s involvement in criminal activity;

(e) AI systems that create or expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage;

(f) AI systems that infer the emotions of a natural person in the areas of the workplace and educational institutions, other than where intended for medical or safety reasons;

(g) biometric categorisation systems which deduce or infer a person’s race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation;

(h) the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, unless and insofar as such use is strictly necessary for one of the following objectives:

(i) the targeted search for specific victims of abduction, trafficking in human beings or sexual exploitation of human beings, as well as the search for missing persons;

(ii) the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or a genuine and present or genuine and foreseeable threat of a terrorist attack; and

(iii) the localisation or identification of a person suspected of having committed a criminal offence, for the purpose of conducting a criminal investigation or prosecution or executing a criminal penalty for certain offences.

In accordance with Article 99(3) of the Regulation, a breach of Article 5 will carry a penalty of a fine of up to €35 million or 7% of total worldwide annual turnover, whichever is higher. It should be noted in this regard, however, that Member States are not required to have laid down and implemented rules on penalties and fines until 2 August 2025.

Other key dates to note: 

We have provided below a non-exhaustive timeline of relevant dates in relation to the implementation of the Regulation.

If you are seeking legal advice with regard to the EU AI Act, please contact Jon Legorburu, Ben Grogan or any member of our Artificial Intelligence (AI) Group.