The rapid development of artificial intelligence has accelerated the digitalization of business processes, while simultaneously exposing new legal, ethical, and organizational risks. The European Union’s response to these challenges is the AI Act—the Regulation of the European Parliament and the Council on Artificial Intelligence, adopted in March 2024. It is the world’s first comprehensive AI regulation and, much like the GDPR, it will significantly influence how companies design, implement, and operate AI technologies.
This article explains what the AI Act is, how it defines an AI system, which risk categories it introduces, when its obligations take effect, and what the new rules mean for companies operating in Poland.
What is the AI Act and what is its purpose?
The EU Artificial Intelligence Act is a regulation designed to balance innovation with the protection of fundamental rights such as privacy, equality, safety, and non‑discrimination. It was introduced in response to the rapid expansion of AI systems capable of generating content (e.g., ChatGPT), performing facial recognition, analyzing user behavior, or creating deepfakes.
Although the European Commission first proposed the AI Act on April 21, 2021, the final version was adopted only in 2024 after extensive legislative work and consultation with industry stakeholders.
The European Parliament adopted the regulation on 13 March 2024, and the Council of the European Union unanimously approved it on 21 May 2024.
The objective is clear: to enable further development of AI within the EU while ensuring that systems remain safe, transparent, and subject to meaningful human oversight.
How does the AI Act define an artificial intelligence system?
The regulation introduces a precise legal definition of an AI system. According to the AI Act, an AI system is a machine‑based system that meets several conditions.
First, it operates with a level of autonomy after deployment—meaning it can perform tasks without continuous human intervention.
Second, it demonstrates adaptiveness, learning from new data and modifying its behavior accordingly.
The regulation further states that an AI system infers how to generate outputs (e.g., recommendations, decisions, content) based on input data, and that these outputs may influence physical or digital environments.
This means the regulation applies not only to advanced generative models but also to AI‑driven decision‑automation tools in business processes, logistics, HR, finance, and beyond.
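The definitional criteria above can be sketched as a first-pass screening checklist. The following Python sketch is illustrative only, not a legal test: the field and function names are invented for this example, and whether a given system actually falls under the Act is a question for legal counsel, not code.

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    """Answers to the AI Act's definitional criteria for one system (illustrative)."""
    machine_based: bool           # runs as software/hardware, not a purely manual process
    autonomous: bool              # operates with some level of autonomy after deployment
    adaptive: bool                # may adapt its behavior from new data (optional in the definition)
    infers_outputs: bool          # infers outputs (predictions, content, decisions) from inputs
    influences_environment: bool  # outputs can influence physical or digital environments

def may_be_ai_system(p: SystemProfile) -> bool:
    """Rough screening: flag a system for compliance review if it matches the
    definitional criteria. Adaptiveness is not required by the definition,
    so it is deliberately not part of this check."""
    return (p.machine_based and p.autonomous
            and p.infers_outputs and p.influences_environment)

# Example: a CV-screening tool that ranks job applicants
cv_screener = SystemProfile(True, True, True, True, True)
print(may_be_ai_system(cv_screener))  # True -> escalate to compliance review
```

A positive result here would only mean "put this system on the compliance inventory"; the decisive wording is in the regulation itself.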

Expert commentary
"In practice, the AI Act will play the same role for artificial intelligence that the GDPR played for data: it will enforce structure, discipline, and responsible deployment. Companies that take the time now to inventory their AI use cases, classify them by risk, and establish clear human‑oversight mechanisms will gain an edge: they will implement AI faster, more safely, and with greater stakeholder trust."
Filip Kolendo
Vice President & CTO, Primesoft Poland
Risk‑based categories of AI systems
A core element of the AI Act is its risk‑based approach. The regulation groups AI systems into four categories, each associated with different obligations for organizations.
Minimal‑risk systems – these systems pose no significant risk to users or society. Examples include video‑game AI or spam filters.
No additional regulatory requirements apply, and the majority of AI systems currently on the EU market are expected to fall into this category.
Low (limited) risk systems – these include chatbots and content‑generation systems.
The regulation introduces a transparency obligation: users must be clearly informed that they are interacting with an AI system. AI‑generated content and deepfakes must also be appropriately labelled.
High‑risk systems – this is the most demanding category for companies.
It includes AI used in sensitive areas such as healthcare, transport, recruitment, credit scoring, and systems that form part of safety‑critical products regulated under EU law.
Organizations will be required to conduct risk assessments, maintain technical documentation, ensure human oversight, and monitor system performance post‑deployment.
Unacceptable‑risk systems – the AI Act explicitly prohibits practices deemed incompatible with EU fundamental rights.
These include:
• social‑scoring systems that evaluate or rank people based on their behavior or personal characteristics;
• systems predicting the likelihood of criminal behavior based solely on profiling;
• real‑time remote biometric identification in publicly accessible spaces (subject to narrow law‑enforcement exceptions);
• systems exploiting the vulnerabilities of individuals.
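The four tiers above lend themselves to a first-pass triage of an AI inventory. The sketch below is illustrative only: the keyword sets are invented placeholders, and a real classification must map each use case to the Act's annexes with legal review rather than string matching.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited (transparency) risk"
    MINIMAL = "minimal risk"

# Hypothetical lookup sets for illustration; a real inventory would reference
# the Act's annexes and prohibited-practice list, with legal review.
PROHIBITED_USES = {"social scoring", "criminal-behavior prediction"}
HIGH_RISK_AREAS = {"recruitment", "credit scoring", "healthcare", "transport"}
TRANSPARENCY_USES = {"chatbot", "content generation", "deepfake"}

def classify(use_case: str) -> RiskTier:
    """Order matters: check the strictest tier first, fall through to minimal."""
    if use_case in PROHIBITED_USES:
        return RiskTier.UNACCEPTABLE
    if use_case in HIGH_RISK_AREAS:
        return RiskTier.HIGH
    if use_case in TRANSPARENCY_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify("recruitment").value)   # high-risk
print(classify("spam filter").value)   # minimal risk
```

The design point the tiers enforce is that obligations scale with the strictest applicable category, which is why the checks run from prohibited downward.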
AI Act implementation timeline – when do obligations apply?
The AI Act entered into force on 1 August 2024, but its provisions take effect gradually to help organizations adapt.
• After 6 months (from 2 February 2025): the prohibitions on unacceptable‑risk practices become binding, and a strict ban applies across the EU on placing such systems on the market or using them.
• After 9 months: codes of practice, particularly for providers of general‑purpose AI (GPAI) models, must be ready.
• After 12 months: new obligations apply to providers of general‑purpose AI models placed on the EU market.
• After 18 months: the implementing act governing post‑market monitoring of AI systems comes into force.
• After 24 months: most of the remaining provisions, including the bulk of the high‑risk obligations, become applicable.
Violations of Chapter II of the regulation may result in severe administrative fines of up to EUR 35 million or 7% of global annual turnover.
How will the AI Act operate in Poland?
As an EU Member State, Poland must put in place the operational measures the AI Act requires, such as designating supervisory authorities.
Because the AI Act is an EU regulation, its substantive rules apply directly and, like the GDPR, do not require transposition into national law.
Once all provisions become fully applicable, every organization operating in Poland that designs, deploys, or uses AI systems will be required to comply with the regulation.
In parallel, Member States must establish a national oversight and enforcement system to ensure the effective application of the AI Act.
National AI authorities – Member State obligations
Each EU Member State must designate at least one national authority responsible for supervising the application of the AI Act.
These authorities will serve a function similar to GDPR supervisory authorities (e.g., the Polish Data Protection Authority), but in the domain of AI oversight.
Their responsibilities will include:
• monitoring compliance of deployed high‑risk AI systems,
• conducting inspections and investigations,
• receiving and handling submissions regarding violations,
• cooperating with other national and EU‑level bodies,
• issuing administrative fines for non‑compliance.
Poland may either establish a new dedicated AI authority or assign responsibilities to existing regulators—supported by a coordination mechanism.
Member States must also designate “notifying authorities” responsible for assessing and certifying conformity‑assessment bodies for high‑risk AI.
This will create an institutional ecosystem supporting:
• pre‑market conformity assessment,
• review of technical documentation and risk‑management systems,
• post‑deployment monitoring.
For companies, this means that engagement with national authorities will become a standard part of AI deployment, especially in regulated sectors (finance, HR, transport, healthcare, public administration).
EU‑level cooperation – the European AI Board
National authorities will not operate in isolation.
The AI Act establishes the European Artificial Intelligence Board, which will coordinate supervision across the EU and ensure consistent interpretation and enforcement of the regulation.
The Polish authority will be required to:
• exchange information with counterparts in other Member States,
• participate in the Board’s work,
• follow common guidelines and regulatory interpretations,
• respond to EU‑level decisions, including those concerning GPAI models.
What does this mean for companies operating in Poland?
For businesses, the AI Act introduces a fundamentally new regulatory landscape.
Interaction with a national AI authority will become as important as interaction with the data‑protection authority today.
Organizations will need to adapt their AI systems to meet technical and legal requirements, and also prepare for audits, reporting obligations, investigations, and potential administrative fines.
From 2 February 2025, companies must also comply with the AI literacy requirement (Article 4), obliging both AI providers and AI users to ensure that employees have the knowledge and skills needed to use AI tools safely and responsibly.
AI implementation will no longer be a purely technological project—it will become a multidisciplinary process involving compliance, documentation, governance, and structured cooperation with regulators.




