Prohibited practices under the AI Act: Answered and unanswered questions in the Commission's guidelines

March 05, 2025

The EU AI Act’s prohibitions came into effect on 2 February 2025 and carry fines of up to 7% of worldwide annual turnover for non-compliance. The prohibitions at Article 5 and the accompanying recitals (particularly recitals 28-44) set out a complex set of provisions. The guidelines published by the Commission on 4 February 2025 (the guidelines) were therefore welcome for those faced with navigating the prohibitions.

This blog post looks at the key questions answered (and unanswered) in relation to overall scope and application and the prohibitions on harmful subliminal manipulation/deception, exploitation of the vulnerable, social scoring, individual risk assessment and prediction of criminal offences, emotion recognition in the workplace and education, and biometric categorisation to infer ‘sensitive’ characteristics, before providing our take on the guidelines as a whole.

Scope and application

Do the prohibitions apply to ‘providers’ and ‘deployers’?  

The prohibitions apply to “the placing on the market, the putting into service or the use” of AI systems for specific purposes.  Article 5 does not use the defined terms ‘provider’ or ‘deployer’.  This leaves some ambiguity around whether the prohibitions simply apply to ‘providers’ and ‘deployers’, or whether their application is slightly different.

The guidelines do not address this head-on, but state that they will focus on the ‘provider’ and ‘deployer’ roles and use the terms ‘provider’ and ‘deployer’ frequently in examples, suggesting that Article 5 applies to ‘providers’ and ‘deployers’.

While not called out in the guidelines, we would suggest that the application of Article 5 goes slightly further than the ‘provider’ role.  Article 5 applies to any person placing a prohibited system on the market or putting it into service. The ‘provider’ defined term only applies to those placing an AI system on the market or putting it into service “under its own name or trade mark”.  But for Article 5, it will not matter whether the person has put their name or trade mark on the AI system – placing the prohibited AI system on the market or putting it into service is enough.

Use of general-purpose AI systems for a prohibited purpose 

Deployers will breach Article 5 if they use a general-purpose AI system for a prohibited purpose, including by bypassing any safety guardrails.

Providers are expected to take “effective and verifiable measures to build in safeguards and prevent and mitigate such harmful behaviour and misuse to the extent they are reasonably foreseeable and the measures are feasible and proportionate”.  Providers must also exclude use for prohibited practices in their contractual relationships. 

Definition of ‘AI system’

The prohibitions apply to ‘AI systems’.  The Commission has provided separate guidelines on this definition, but they are not very illuminating.  For any tool that could be used for a prohibited practice, we would generally suggest assuming it is an AI system, given the magnitude of the fines.  Any prohibited practice is likely to present serious risks under other legislation in any event. 

Article 5(1)(a) harmful manipulation, deception, or exploitation

the placing on the market, the putting into service or the use of an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective, or the effect of materially distorting the behaviour of a person or a group of persons by appreciably impairing their ability to make an informed decision, thereby causing them to take a decision that they would not have otherwise taken in a manner that causes or is reasonably likely to cause that person, another person or group of persons significant harm

Chatbots gone rogue and other examples of what’s caught – subliminal, manipulative, deceptive – or a combination

Subliminal techniques may be caught by this prohibition, but it is not necessary to deploy subliminal techniques to be caught – manipulative or deceptive techniques can also be caught.  Regardless of whether a provider intends it, an AI system could learn manipulative techniques through e.g. reinforcement learning.  However, chatbots that hallucinate will not necessarily be considered deceptive if the provider has properly informed users about the system’s limitations.

Various examples are given of chatbots either gone rogue (e.g. a wellbeing chatbot advising engaging in dangerous activities) or used to manipulate intentionally (e.g. a chatbot using subliminal messaging to exploit users’ vulnerabilities through adverts).

Context on ‘material distortion of behaviour’ and ‘significant harm’

For ‘material distortion of behaviour’, the guidelines refer to the meaning in Directive 2005/29/EC (the Unfair Commercial Practices Directive or UCPD) and Court of Justice of the European Union case law on the UCPD.

The guidelines also consider what is meant by “significant harm”, looking at severity, context and cumulative effects, affected persons’ vulnerability, and duration and reversibility.

Article 5(1)(b) harmful exploitation of vulnerabilities

the placing on the market, the putting into service or the use of an AI system that exploits any of the vulnerabilities of a natural person or a specific group of persons due to their age, disability or a specific social or economic situation, with the objective, or the effect, of materially distorting the behaviour of that person or a person belonging to that group in a manner that causes or is reasonably likely to cause that person or another person significant harm

Much of the commentary on 5(1)(a) also applies to 5(1)(b). The guidelines discuss the application of 5(1)(b) to various groups.

Children and gaming

One example suggests that those operating in the gaming sector will need to be wary of triggering the prohibition for children, as the guidelines are clear that a game using AI “to analyse children’s individual behaviour and preferences on the basis of which it creates personalised and unpredictable rewards through addictive reinforcement schedules and dopamine-like loops to encourage excessive play and compulsive usage” could be prohibited.

Inaccessibility – not prohibited by 5(1)(b) AI Act

Applications will not be considered to trigger the prohibition simply because they are designed in a way that makes them inaccessible.

5(1)(a) and 5(1)(b) - no clarity on targeted advertising not triggering the prohibitions

There is still little clarity on the extent to which targeted advertising could trigger the prohibitions.  The guidelines state that “Advertising techniques that use AI to personalise content based on user preferences are not inherently manipulative” if they do not meet the conditions under Article 5(1)(a) or 5(1)(b).  Compliance with the GDPR, consumer protection law, and the DSA “help to mitigate such risks” – but apparently provide no guarantee of not triggering Articles 5(1)(a) or 5(1)(b)!

Article 5(1)(c) social scoring

the placing on the market, the putting into service or the use of AI systems for the evaluation or classification of natural persons or groups of persons over a certain period of time based on their social behaviour or known, inferred or predicted personal or personality characteristics, with the social score leading to either or both of the following:

(i) detrimental or unfavourable treatment of certain natural persons or groups of persons in social contexts that are unrelated to the contexts in which the data was originally generated or collected;

(ii) detrimental or unfavourable treatment of certain natural persons or groups of persons that is unjustified or disproportionate to their social behaviour or its gravity

In-scope – examples for financial services

The guidelines give credit scoring as an example of a score, and emphasise this would be prohibited if carried out by an AI system, but only where the other elements of the prohibition were met – there is no suggestion that the AI Act prohibits credit scoring per se, just that credit scoring could fall under the prohibition.

Examples of practices that might fall under the prohibition in a financial services context include:

  • An insurance company collecting spending and other financial information from a bank which is unrelated to the determination of eligibility of candidates for life insurance, where the AI system recommends whether to refuse a contract or set higher premiums.
  • A private credit agency deciding whether an individual should obtain a loan for housing based on unrelated personal characteristics.

Not all scoring is in-scope

However, not all scoring is in scope.  Ratings that average out human-provided scores (e.g. driver ratings) are not in-scope (unless combined with other information in a way that meets the other elements of the prohibition).

Evaluation for financial fraud

Evaluation for financial fraud based on transactional behaviour and metadata in the context of the service will not be caught, provided the factors taken into account are objectively relevant to determining the risk of fraud and the detrimental treatment is a justified and proportionate consequence.

Targeted advertising

AI-enabled targeted commercial advertising in compliance with other laws will generally be out of scope, though the guidelines do note that exploitative and unfair differential pricing could be in scope for this prohibition.

Article 5(1)(d) individual risk assessment and prediction of criminal offences

the placing on the market, the putting into service for this specific purpose, or the use of an AI system for making risk assessments of natural persons in order to assess or predict the risk of a natural person committing a criminal offence, based solely on the profiling of a natural person or on assessing their personality traits and characteristics; this prohibition shall not apply to AI systems used to support the human assessment of the involvement of a person in a criminal activity, which is already based on objective and verifiable facts directly linked to a criminal activity;

The guidelines clarify the rationale for this prohibition – using historical data to predict the risk of an individual committing a crime or reoffending can perpetuate or reinforce bias, and may result in crucial individual circumstances being overlooked.

Application to the private sector

While this prohibition will mainly affect law enforcement activities, the guidelines emphasise that it can also catch private actors.  For example, a private company providing crime analytics software that is asked by law enforcement to predict or assess the risk of individuals as potential perpetrators of human trafficking offences could be prohibited if all the criteria for Article 5(1)(d) were met.

Similarly, a bank could fall within the prohibition by using an AI system to screen and profile customers for money laundering offences.  However, if it uses only the data specified in the Anti-Money Laundering Regulation ((EU) 2024/1624) (AMLR), which are objective and verifiable, to ensure that those singled out as suspects are reasonably likely to commit money laundering offences, and where the predictions are subject to human assessment and verification in line with the AMLR, this can fall outside the scope of the prohibition.

AI systems that analyse risks of crimes being committed by legal entities are out of scope, as are systems used for individual predictions of administrative offences.

Article 5(1)(f) emotion recognition in the workplace or education

the placing on the market, the putting into service for this specific purpose, or the use of AI systems to infer emotions of a natural person in the areas of workplace and education institutions, except where the use of the AI system is intended to be put in place or into the market for medical or safety reasons

Inferences must be based on biometric data

Inferences must be based on biometric data – the guidelines provide clear confirmation that only AI systems identifying emotions based on biometric data are caught by the prohibition.

An AI system inferring emotions from written text – i.e. sentiment analysis – would not fall within the scope of the prohibition.  In contrast, an AI system inferring emotions from keystrokes would be caught.

Nothing much should be read into the use of ‘AI systems to infer emotions’ versus ‘emotion recognition systems’

Much confusion has been caused by the phrasing of Article 5(1)(f), which refers to “AI systems which infer emotions of natural persons”.  It does not use the term ‘emotion recognition systems’, defined at Article 3(39) as AI systems “for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data”.  The guidelines clarify that nothing in particular should be read into this discrepancy – the prohibition should be understood as covering AI systems that identify or infer emotions or intentions.  As discussed above, the prohibition remains limited to AI systems using biometric data.

‘Readily apparent expressions’

The guidelines also expand on the recitals’ commentary on emotions or intentions not including readily apparent expressions, gestures, or movements unless used for identifying or inferring emotions.  The observation that a person is smiling is not emotion recognition.  Oddly, the guidelines suggest that a TV broadcaster using a device that allows it to track how many times its news presenters smile to the camera is not emotion recognition (though in practice, it is hard to see how such a device could be used without making inferences about emotions, which could bring it into scope).  In contrast, an AI system that infers that an employee is unhappy, sad, or angry towards a customer from body gestures, a frown, or lack of a smile is emotion recognition.

Capturing customers is out of scope (for this prohibition)

The AI system must be directed at employees to be considered used in a workplace setting – cameras used by a supermarket or bank to detect suspicious customers would not be prohibited if no employees were being tracked (though the AI system would be high-risk).

Emotion recognition systems used on customers will not be caught even if they use biometric data, although they will be high-risk and could trigger the prohibitions under Article 5(1)(a) and (b).

Article 5(1)(g) biometric categorisation to infer ‘sensitive’ characteristics

the placing on the market, the putting into service for this specific purpose, or the use of biometric categorisation systems that categorise individually natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation; this prohibition does not cover any labelling or filtering of lawfully acquired biometric datasets, such as images, based on biometric data or categorizing of biometric data in the area of law enforcement

Falling within the “ancillary to another commercial service and strictly necessary for objective technical reasons” exemption – virtual try-on may be out of scope

The guidelines highlight that the definition of ‘biometric categorisation system’ captures AI systems for assigning natural persons to specific categories on the basis of their biometric data.  However, there is a carve-out where the categorisation is ancillary to another commercial service and strictly necessary for objective technical reasons.  These conditions must be fulfilled cumulatively.

The guidelines clarify that filters allowing a consumer to preview a product on him or herself may be ancillary, as they can only be used in relation to the principal service (selling a product).  This will be welcome news for anyone using virtual try-on applications, as it brings the potential for them to fall out of scope (though it is still unclear whether virtual try-on will always be out of scope).

Our take

The guidelines are currently in draft form, though we understand that they are likely to be adopted without changes to the content once translation is complete.

They provide helpful clarifications on some key points.  For instance, we now have clarity that only AI systems using biometric data could be in scope for the prohibition on emotion recognition in the workplace, and that sentiment analysis conducted on written words is not caught.  However, they leave some key areas open, e.g. they do not confirm when targeted advertising falls outside the scope of the prohibitions.

With any practice that risks triggering a prohibition, careful assessment, review, and documentation will be necessary to confirm that the practice is not caught.  Including an adequate triage and assessment process in AI governance programmes will be vital to ensure this is happening.  As discussed in our recent post on the Commission’s guidelines on AI systems, given the lack of clarity on the ‘AI system’ definition, consideration of the prohibitions needs to be built into the early stages of the triage process to ensure that anything potentially prohibited receives a full assessment.

The deadline for designation of market surveillance authorities is 2 August 2025, so enforcement will not begin immediately.  However, once market surveillance authorities have been designated, they will be able to impose penalties relating to non-compliance from 2 February 2025.  Affected parties could also enforce in national courts and request interim injunctions against the prohibited practices in the meantime.

For the prohibitions, there is no grandfathering provision – the prohibitions are now in effect for existing use cases, as well as on a forward-looking basis.