On 2 February 2025, the first phase of implementation of the EU AI Act came into force, marking a milestone in the regulation of this technology. The regulation aims to ensure the safe and ethical use of AI by establishing a series of restrictions and penalties for practices deemed harmful.
Below we look at which AI practices are prohibited, who must comply with the Act and what penalties can be applied in the event of non-compliance.

What AI practices are prohibited?

Under Article 5(1) of the AI Act, the placing on the market, the putting into service and the use of AI systems for certain practices that may adversely affect individuals or groups of persons are prohibited.

Prohibited practices include the following:

Subliminal and deceptive manipulation

The use of AI systems that deploy subliminal techniques beyond a person’s consciousness, or purposefully manipulative or deceptive techniques, is prohibited where such techniques cause or are likely to cause significant harm. This includes systems designed to influence decision-making in a hidden way, materially impairing the ability of individuals to make informed decisions and leading them to take decisions they would not otherwise have taken.

Exploitation of vulnerabilities

It is prohibited to use AI to exploit the vulnerabilities of individuals or groups of persons arising from their age, disability or economic or social situation, with the objective of materially distorting their behaviour, where this causes or is likely to cause significant harm.

Discriminatory social classification

The use of AI systems to evaluate or classify individuals or groups based on their social behaviour or on known, inferred or predicted personal characteristics is not permitted. If the resulting score leads to unfavourable or detrimental treatment in social contexts unrelated to those in which the data was originally collected, or to treatment that is unjustified or disproportionate to the behaviour in question, this is considered a prohibited practice.

Predictive criminal risk assessment

AI systems designed to predict the risk of a person committing a crime based solely on profiling or on the assessment of their personality traits and characteristics are prohibited. This prohibition does not, however, apply to systems used to support the human assessment of a person’s involvement in criminal activity, provided that the assessment is based on objective and verifiable facts directly linked to that activity.

Unauthorised facial recognition

The use of AI to create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage is prohibited.

Real-time biometric identification in public spaces

The use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes is not permitted, except in the limited cases provided for in the Act. These exceptions include the targeted search for victims of human trafficking and the prevention of terrorist attacks, subject to strict conditions and prior authorisation.

Analysis of emotions in work and educational environments

The use of AI to recognise or infer the emotions of people in the workplace or in educational institutions is prohibited, except where such a system is intended for medical or safety reasons.

Biometric categorisation based on sensitive data

The use of AI systems that individually categorise people based on their biometric data in order to infer sensitive information, such as their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation, is prohibited. This prohibition does not, however, cover the labelling or filtering of lawfully acquired biometric datasets, or the categorisation of biometric data in law enforcement contexts.

Who does the European Artificial Intelligence Act apply to?

The EU AI Act applies to providers and deployers of AI systems, including general-purpose AI models, that are placed on the market, put into service or used in the European Union, regardless of where those operators are located. In that sense, it follows a similar logic to the General Data Protection Regulation (GDPR), since it also affects companies outside the EU that provide services within the European Union.

To better understand these and other key concepts, see Article 3 of the EU AI Act, which sets out the fundamental definitions governing this regulation.

Who are the service providers working with AI?

The AI Act applies to various players involved in the development, marketing and use of artificial intelligence systems.

They include:

  • Developers of general-purpose AI, as well as those responsible for its implementation and deployment.
  • Providers and deployers of AI systems, even if located outside the EU, when the results generated by these systems are used within the European Union.
  • Importers and distributors of AI systems, responsible for placing them on the EU market.
  • Manufacturers of products integrating AI, when they place the AI system on the market together with their product under their own name or trademark.
  • Authorised representatives of AI providers not established in the EU, acting as intermediaries for marketing or distribution in Europe.
  • Persons or entities affected by the use of AI in the EU, who may also be impacted by the regulation.

Why are such AI practices prohibited?

These prohibited practices are considered highly invasive and a serious threat to people’s fundamental rights. The EU seeks to prevent AI from being used to manipulate, discriminate against or violate the privacy of citizens, striking a balance between innovation and security.

In addition to these prohibited practices, the Act also regulates high-risk AI systems which, although not prohibited, must comply with additional transparency and oversight requirements for their implementation.

Penalties for non-compliance with the Artificial Intelligence Act

Non-compliance with the prohibition of these AI practices will be subject to administrative fines of up to 35 million euros or, if the offender is a company, up to 7% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
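To illustrate how the “whichever is higher” rule works, here is a minimal sketch in Python; the company and turnover figure used below are hypothetical and purely illustrative, and the only legal inputs are the two caps mentioned above.

    # Illustrative sketch only: the company and turnover figure are hypothetical.
    FIXED_CAP_EUR = 35_000_000   # fixed ceiling of 35 million euros
    TURNOVER_SHARE = 0.07        # 7% of total worldwide annual turnover

    def maximum_fine_eur(worldwide_annual_turnover_eur: float) -> float:
        """Upper limit of the fine for a prohibited practice: the higher of the two caps."""
        return max(FIXED_CAP_EUR, TURNOVER_SHARE * worldwide_annual_turnover_eur)

    # For a hypothetical company with 1 billion euros in worldwide annual turnover,
    # 7% of turnover is 70 million euros, which exceeds the 35 million euro cap,
    # so the maximum possible fine would be 70 million euros.
    print(f"EUR {maximum_fine_eur(1_000_000_000):,.0f}")  # EUR 70,000,000

Whether, and at what level, a fine is actually imposed within that ceiling remains a decision for the competent authorities in each case.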

In any case, it will be up to the Member States to establish the other enforcement measures and the penalty system, which may also include warnings and other non-monetary measures for companies working with AI.

Other requirements for complying with the AI Act

Article 4 of the AI Act stipulates that companies, as providers or deployers of AI systems, must, as of 2 February 2025, take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf.

This should take into account their technical knowledge, experience, education and training, the context in which the AI systems are to be used, and the persons or groups of persons on whom they are to be used.

Next steps for AI regulation in Europe

On 2 August 2025, new obligations will enter into force, aimed particularly at general-purpose AI models and establishing additional transparency and oversight requirements for such systems.

Companies should anticipate these changes and adapt their practices in order to avoid penalties and comply effectively with the regulation.

Legal advice on AI regulation

At AGM Abogados, our team of expert Technology, Media and Telecommunications (TMT) lawyers will help you assess the type of AI you are using or implementing, identify the associated legal risks and advise you on taking the necessary steps to comply with the European Union Artificial Intelligence Act.

We also provide training to your team on the legal aspects and risks of using AI at your company, and help you draft the protocols and codes of good internal AI practice that your company needs for using this currently unstoppable and necessary technology.

We want to make sure that the journey you take when implementing technology and artificial intelligence is as safe and secure as possible. As on any journey, we may sometimes stumble or stray, but if we prepare the trip beforehand, the risk will be lower.

Please contact AGM Abogados and make sure that your company complies with the applicable artificial intelligence regulations.