The EU AI Act in practice: a 2026 guide for teams shipping AI
The EU AI Act is phasing in, it reaches companies far outside the EU, and most of the work is engineering, not legal. Here's how a build team should actually think about it in 2026.
Reviewed by: Dezső Mező · Founder · Engineer, DField Solutions · 14 May 2026
The EU AI Act is the first broad, horizontal law regulating artificial intelligence, and in 2026 it has stopped being a future problem. It entered into force in 2024 and its obligations are phasing in on a staggered timeline: the bans on prohibited practices applied first, obligations for general-purpose AI models followed, and the heavier high-risk-system requirements land across 2026 and 2027. This guide is the engineering-first version — what the Act means for a team that ships software, how to find where your product sits, and what to actually do. It is not legal advice; your specific classification should be confirmed with counsel. But most of the Act is an engineering and documentation exercise, and that part we can be concrete about.
This is engineering and documentation guidance from a studio that ships AI under the Act — not legal advice. Risk classification has real legal consequences; confirm yours with a qualified lawyer. What follows is how a build team should prepare so that, whatever the classification, the evidence already exists.
The most common mistake is assuming the Act is an EU-companies problem. It isn't. It applies extraterritorially: if you are a provider placing an AI system on the EU market, or a provider or deployer whose system's output is used in the EU, you are in scope regardless of where your company is incorporated. A US SaaS with European customers, a UAE platform whose AI feature serves EU users — both are inside the Act's reach.
Two roles matter for most teams. A provider develops an AI system (or has one developed) and puts it on the market under its own name. A deployer uses an AI system under its own authority in a professional context. You can be both — building an AI feature into your own product makes you a provider of that feature and a deployer of any third-party models inside it. The obligations differ by role, so the first question in any classification exercise is: for this system, which role are we in?
The Act does not regulate "AI" as one thing. It sorts AI systems into four tiers by the risk of their use case, and the obligations scale with the tier.
Unacceptable risk. A small set of practices is banned outright — things like social scoring by public authorities, manipulative techniques that exploit vulnerabilities, and certain biometric-categorisation and untargeted facial-image-scraping uses. If your product does one of these, the answer is not compliance, it's redesign. Most teams will never touch this tier.
High risk. Systems used in defined sensitive contexts — for example, in employment and worker management, access to essential services, credit scoring, critical infrastructure, certain law-enforcement and migration uses, and AI that is a safety component of a regulated product. High-risk does not mean dangerous; it means the use case is one the Act lists as carrying significant risk to rights or safety. This tier carries the heaviest obligations, and landing in it changes the shape of the whole project.
Limited risk. Systems that interact with people, generate content, or perform emotion recognition or biometric categorisation. The obligation here is disclosure, not a conformity assessment. This is where most chatbots, assistants, content tools and recommendation features land.
Minimal risk. Everything else — spam filters, AI in a video game, inventory forecasting. No mandatory obligations under the Act, though voluntary codes of conduct are encouraged. A large share of practical B2B AI sits here or in the limited-risk tier.
Classification is driven by the use case and context, not the technology. The same model can be minimal-risk in one product and high-risk in another. Work through it system by system, not company-wide.
Write the classification down as a short, dated document per system — the purpose, the tier, and the reasoning. That document is the first artefact a regulator, an enterprise customer's procurement team, or an answer to a due-diligence questionnaire will ask for.
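One way to keep that record consistent across systems is to give it a small, versionable schema in the repo. The sketch below is purely illustrative — the field names and the schema itself are our own convention, not anything the Act prescribes; only the four tier names come from the Act.

```python
from dataclasses import dataclass, field, asdict
from datetime import date
import json

@dataclass
class RiskClassification:
    """A short, dated classification record kept per system."""
    system_name: str
    intended_purpose: str  # the use case and context, not the technology
    role: str              # "provider", "deployer", or both
    tier: str              # "unacceptable" | "high" | "limited" | "minimal"
    reasoning: str         # why this tier, in one or two sentences
    classified_on: str = field(default_factory=lambda: date.today().isoformat())

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)

# Hypothetical example system for illustration only.
record = RiskClassification(
    system_name="support-chatbot",
    intended_purpose="Answer customer support questions about our product",
    role="provider",
    tier="limited",
    reasoning="Interacts with natural persons; not in a listed high-risk context.",
)
print(record.to_json())
```

Committing the record next to the code means the date, the reasoning and the tier travel with every release — exactly what a procurement questionnaire asks for.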
For the two tiers most teams land in, the obligations are concrete and manageable.
If a system is high-risk, the obligations expand into a risk-management system maintained across the lifecycle, data-governance practices for training and testing data, detailed technical documentation, automatic logging of events, meaningful human oversight, and an appropriate level of accuracy, robustness and cybersecurity — plus a conformity assessment before the system goes on the market, and registration where required. This is not a documentation afterthought; it shapes architecture decisions. If you are plausibly high-risk, budget for it from the first sprint.
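The automatic-logging requirement is a good example of why this shapes architecture: it is far cheaper to wrap every inference path in a logging layer from day one than to bolt it on later. A minimal sketch, assuming a JSON-lines audit log — the decorator, field names and model-version string are all our own illustration, not a format the Act mandates:

```python
import hashlib
import json
import logging
import time
from functools import wraps

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

def logged_inference(model_version: str):
    """Record every inference event with a timestamp and an input hash.

    Hashing the input instead of storing it raw keeps personal data
    out of the audit trail while still making records traceable.
    """
    def decorator(fn):
        @wraps(fn)
        def wrapper(payload: dict) -> dict:
            started = time.time()
            result = fn(payload)
            audit_log.info(json.dumps({
                "event": "inference",
                "model_version": model_version,
                "timestamp": started,
                "input_sha256": hashlib.sha256(
                    json.dumps(payload, sort_keys=True).encode()
                ).hexdigest(),
                "duration_ms": round((time.time() - started) * 1000, 2),
            }))
            return result
        return wrapper
    return decorator

@logged_inference(model_version="screening-v1.3")  # hypothetical version tag
def score_application(payload: dict) -> dict:
    # Stand-in for the real model call.
    return {"score": 0.42}
```

The same wrapper gives you the evidence trail for the record-keeping obligation without touching model code.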
Even teams who correctly classify their core system as minimal-risk often miss that a single feature pulls them into transparency obligations. A support chatbot, an AI email drafter, a feature that generates images or summaries — each of these interacts with people or generates content, so each carries a disclosure duty. The fix is small and worth doing early: a clear "you're talking to an AI assistant" line, and a label on generated content. Retrofitting disclosure across a UI after launch is more annoying than designing it in.
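Designed in early, both disclosures are a few lines. A minimal sketch — the wording, function names and metadata keys are our own; the Act requires the disclosure itself, not this particular shape:

```python
AI_DISCLOSURE = "You're chatting with an AI assistant."

def first_chat_message(greeting: str) -> str:
    """Prepend the AI disclosure to the opening message of a chat session."""
    return f"{AI_DISCLOSURE}\n\n{greeting}"

def label_generated(content: str, model: str) -> dict:
    """Attach machine-readable provenance to generated content.

    The UI (and any downstream consumer) can use this to render an
    'AI-generated' badge on summaries, images or drafts.
    """
    return {"content": content, "ai_generated": True, "model": model}

print(first_chat_message("Hi! How can I help?"))
```

Keeping the disclosure in one constant and the label in one helper means a copy change or a regulator-prompted tweak touches a single place instead of every surface in the UI.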
The legal classification needs a lawyer. The evidence the classification depends on is built by engineers, and it's the same evidence that wins enterprise procurement reviews and answers AI-due-diligence questionnaires. Build it as you go.
Every AI engagement we ship includes a risk-classification document for the system, technical documentation kept current with the build, and the evaluation records from the eval harness we run in CI anyway. We design the transparency disclosures in from the start, and we hand you the model supply-chain notes so your due-diligence answers are ready. We are engineers, not lawyers — we build the evidence trail so that whatever your counsel confirms the classification to be, the artefacts already exist and are accurate.
If you're scoping an AI build and want the compliance layer handled inside it rather than chased afterwards, the AI service page covers how we work, and a 30-minute discovery call is the fastest way to talk through your specific system. For the wider regulatory picture, the glossary has plain-language entries on the AI Act, GDPR, NIS2 and the terms around them.