AI · 13 min read

The EU AI Act in practice: a 2026 guide for teams shipping AI

The EU AI Act is phasing in, it reaches companies far outside the EU, and most of the work is engineering, not legal. Here's how a build team should actually think about it in 2026.


Reviewed by: Dezső Mező · Founder · Engineer, DField Solutions · 14 May 2026

The EU AI Act is the first broad, horizontal law regulating artificial intelligence, and in 2026 it has stopped being a future problem. It entered into force in 2024, and its obligations are phasing in on a staggered timeline: the bans on prohibited practices applied first, obligations for general-purpose AI models followed, and the heavier high-risk-system requirements land across 2026 and 2027. This guide is the engineering-first version — what the Act means for a team that ships software, how to find where your product sits, and what to actually do. Most of the Act is an engineering and documentation exercise, and that part we can be concrete about.

This is engineering and documentation guidance from a studio that ships AI under the Act — not legal advice. Risk classification has real legal consequences; confirm yours with a qualified lawyer. What follows is how a build team should prepare so that, whatever the classification, the evidence already exists.

Who the AI Act applies to

The most common mistake is assuming the Act is an EU-companies problem. It isn't. It applies extraterritorially: if you are a provider placing an AI system on the EU market, or a provider or deployer whose system's output is used in the EU, you are in scope regardless of where your company is incorporated. A US SaaS with European customers, a UAE platform whose AI feature serves EU users — both are inside the Act's reach.

Two roles matter for most teams. A provider develops an AI system (or has one developed) and puts it on the market under its own name. A deployer uses an AI system under its own authority in a professional context. You can be both — building an AI feature into your own product makes you a provider of that feature and a deployer of any third-party models inside it. The obligations differ by role, so the first question in any classification exercise is: for this system, which role are we in?

The four risk tiers, in plain terms

The Act does not regulate "AI" as one thing. It sorts AI systems into four tiers by the risk of their use case, and the obligations scale with the tier.

Prohibited · unacceptable risk

A small set of practices are banned outright — things like social scoring by public authorities, manipulative techniques that exploit vulnerabilities, and certain biometric-categorisation and untargeted facial-image-scraping uses. If your product does one of these, the answer is not compliance, it's redesign. Most teams will never touch this tier.

High-risk

Systems used in defined sensitive contexts — for example, in employment and worker management, access to essential services, credit scoring, critical infrastructure, certain law-enforcement and migration uses, and AI that is a safety component of a regulated product. High-risk does not mean dangerous; it means the use case is one the Act lists as carrying significant risk to rights or safety. This tier carries the heaviest obligations, and landing in it changes the shape of the whole project.

Limited-risk · transparency obligations

Systems that interact with people, generate content, or perform emotion recognition or biometric categorisation. The obligation here is disclosure, not a conformity assessment. This is where most chatbots, assistants, content tools and recommendation features land.

Minimal-risk

Everything else — spam filters, AI in a video game, inventory forecasting. No mandatory obligations under the Act, though voluntary codes of conduct are encouraged. A large share of practical B2B AI sits here or in the limited-risk tier.

How to find your tier

Classification is driven by the use case and context, not the technology. The same model can be minimal-risk in one product and high-risk in another. Work through it system by system, not company-wide; a code sketch of the check order follows the list.

  1. Define the system narrowly · one classification per AI system, scoped to its actual purpose. "Our AI" is too broad to classify.
  2. Check the prohibited list first · if the use case matches a banned practice, stop and redesign.
  3. Check the high-risk use cases · compare your purpose against the Act's high-risk list and its annexes. If it plausibly matches, treat it as high-risk until counsel confirms otherwise.
  4. Check the transparency triggers · does the system interact with people, generate or manipulate content, or do emotion / biometric work? If yes, you have transparency obligations.
  5. Default to minimal-risk only when nothing above matched · and document why.
  6. Re-run the check when the use case changes · a new feature or a new customer segment can move a system between tiers.
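To make the order concrete, here is a minimal Python sketch of the same check sequence. It is a reading aid, not an automated classifier: every boolean in SystemProfile is a human judgment about the use case (confirmed by counsel for the first two), and the names are ours, not the Act's.

```python
from dataclasses import dataclass
from enum import Enum

class Tier(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    LIMITED = "limited-risk (transparency)"
    MINIMAL = "minimal-risk"

@dataclass
class SystemProfile:
    # Each answer comes from human analysis of the use case, not from the model.
    matches_prohibited_practice: bool     # e.g. social scoring, exploitative manipulation
    matches_high_risk_use_case: bool      # e.g. employment, credit scoring, essential services
    interacts_with_people: bool
    generates_or_manipulates_content: bool
    emotion_or_biometric: bool

def classify(p: SystemProfile) -> Tier:
    if p.matches_prohibited_practice:
        return Tier.PROHIBITED    # stop and redesign, not compliance
    if p.matches_high_risk_use_case:
        return Tier.HIGH_RISK     # treat as high-risk until counsel confirms otherwise
    if (p.interacts_with_people
            or p.generates_or_manipulates_content
            or p.emotion_or_biometric):
        return Tier.LIMITED       # transparency obligations apply
    return Tier.MINIMAL           # the default; document why nothing above matched

# A support chatbot: interacts with people, so it lands in the limited-risk tier.
print(classify(SystemProfile(False, False, True, True, False)).value)
```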

Write the classification down as a short, dated document per system — the purpose, the tier, and the reasoning. That document is the first artefact a regulator or an enterprise customer's procurement team will ask for, and the first thing a due-diligence questionnaire needs.
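In practice the document can be as lightweight as a versioned record kept next to the system it describes. A hypothetical shape, assuming a repo-based workflow (the field names are illustrative, not mandated by the Act):

```python
# One dated record per AI system, versioned next to the code it describes.
classification_record = {
    "system": "support-assistant",
    "purpose": "Answer customer questions about our own product",
    "role": "provider",            # provider, deployer, or both
    "tier": "limited-risk",
    "reasoning": (
        "Interacts with people (transparency trigger); no match against the "
        "prohibited practices or the high-risk use cases."
    ),
    "classified_on": "2026-05-14",
    "review_triggers": ["new feature", "new customer segment", "model swap"],
}
```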

What each tier actually requires

For the two tiers most teams land in, the obligations are concrete and manageable.

Limited-risk: the transparency set

  • Tell people they're interacting with an AI system, unless it's obvious from the context.
  • Mark AI-generated or AI-manipulated content — synthetic audio, image, video and text — as artificially generated, in a machine-readable way where feasible.
  • If you run emotion recognition or biometric categorisation, inform the people exposed to it.
  • Keep it honest and current · the disclosure has to reflect what the system actually does.

High-risk: a different scope of work

If a system is high-risk, the obligations expand into a risk-management system maintained across the lifecycle, data-governance practices for training and testing data, detailed technical documentation, automatic logging of events, meaningful human oversight, and an appropriate level of accuracy, robustness and cybersecurity — plus a conformity assessment before the system goes on the market, and registration where required. This is not a documentation afterthought; it shapes architecture decisions. If you are plausibly high-risk, budget for it from the first sprint.
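One example of how these obligations shape architecture rather than paperwork: meaningful human oversight usually means a review gate in the decision path, designed in from the first sprint. A hypothetical sketch, with an illustrative confidence threshold and stand-in routing functions:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    subject_id: str
    score: float          # the system's confidence in its own recommendation
    rationale: str

def queue_for_human_review(d: Decision) -> str:
    # Stand-in for your real review queue (ticket, inbox, case-management tool).
    print(f"queued {d.subject_id} for human review: {d.rationale}")
    return "pending-review"

def auto_apply(d: Decision) -> str:
    # Stand-in for the automated path; this branch still gets logged.
    print(f"auto-applied decision for {d.subject_id}")
    return "applied"

def decide(d: Decision, confidence_floor: float = 0.85) -> str:
    """Route low-confidence outcomes to a human instead of auto-acting.

    The 0.85 threshold and the routing policy are illustrative; the point is
    that the override path is part of the architecture, not a later add-on."""
    if d.score < confidence_floor:
        return queue_for_human_review(d)
    return auto_apply(d)

print(decide(Decision("applicant-117", 0.62, "borderline credit features")))
```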

The transparency rules that catch everyone

Even teams who correctly classify their core system as minimal-risk often miss that a single feature pulls them into transparency obligations. A support chatbot, an AI email drafter, a feature that generates images or summaries — each of these interacts with people or generates content, so each carries a disclosure duty. The fix is small and worth doing early: a clear "you're talking to an AI assistant" line, and a label on generated content. Retrofitting disclosure across a UI after launch is more annoying than designing it in.
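As a sketch of how small that fix can be, assuming an HTML surface: a fixed disclosure line, plus a visible label and a machine-readable marker on generated content. The wording and the data-ai-generated attribute are our own conventions, not ones the Act prescribes.

```python
AI_DISCLOSURE = "You're talking to an AI assistant. A human can take over on request."

def render_generated(html_fragment: str) -> str:
    # A visible label plus a machine-readable marker on the element itself.
    # The Act asks for machine-readable marking where feasible; it does not
    # name a specific attribute or standard, so this one is illustrative.
    return (
        '<div data-ai-generated="true">'
        '<span class="ai-label">AI-generated</span>'
        f"{html_fragment}</div>"
    )

print(render_generated("<p>Here is a summary of your ticket.</p>"))
```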

What an engineering team should do now

The legal classification needs a lawyer. The evidence the classification depends on is built by engineers, and it's the same evidence that wins enterprise procurement reviews and answers AI-due-diligence questionnaires. Build it as you go.

  1. Maintain a per-system classification document · purpose, tier, reasoning, date — updated when the use case changes.
  2. Keep technical documentation current · what the system does, the model(s) and data it uses, known limitations. A living doc, not a launch-day artefact.
  3. Record your evaluations · the eval harness you already run for quality doubles as your evidence of testing. Keep the results, versioned.
  4. Log meaningfully · enough event logging to reconstruct what a system did and why, scaled to the system's risk (a minimal sketch follows this list).
  5. Design transparency in · the AI disclosure and content labelling decided at design time, not bolted on.
  6. Track the model supply chain · which third-party models and providers you depend on, and what they tell you about their own Act compliance.
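For the logging item, a minimal sketch using only the Python standard library: append-only JSON lines with enough context to reconstruct what a system did and why. The field names and file path are assumptions to adapt to your stack.

```python
import json
import time
import uuid

def log_ai_event(log_path: str, system: str, event: str, **fields) -> str:
    """Append one structured event; returns its id for cross-referencing."""
    record = {
        "event_id": str(uuid.uuid4()),
        "ts": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "system": system,   # matches the name in the classification record
        "event": event,     # e.g. "inference", "human_override", "eval_run"
        **fields,           # model version, input hash, outcome, reviewer...
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
    return record["event_id"]

# Example: record which model produced which output, without logging raw user data.
log_ai_event("events.jsonl", system="support-assistant", event="inference",
             model="assistant-v3", input_sha256="<hash-of-input>", latency_ms=412)
```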

How DField Solutions handles the AI Act

Every AI engagement we ship includes a risk-classification document for the system, technical documentation kept current with the build, and the evaluation records from the eval harness we run in CI anyway. We design the transparency disclosures in from the start, and we hand you the model supply-chain notes so your due-diligence answers are ready. We are engineers, not lawyers — we build the evidence trail so that whatever your counsel confirms the classification to be, the artefacts already exist and are accurate.

If you're scoping an AI build and want the compliance layer handled inside it rather than chased afterwards, the AI service page covers how we work, and a 30-minute discovery call is the fastest way to talk through your specific system. For the wider regulatory picture, the glossary has plain-language entries on the AI Act, GDPR, NIS2 and the terms around them.

By Dezső Mező
Founder, DField Solutions

I've shipped production products from fintech to creator-tooling · for startups and enterprises, from Budapest to San Francisco.

Let's talk about your project. 30 minutes, no strings.