---
title: "The EU AI Act in practice: a 2026 guide for teams shipping AI"
description: "A practical, engineering-first guide to the EU AI Act in 2026 — who it applies to (including non-EU companies), the four risk tiers in plain terms, how to find your tier, the transparency rules that catch everyone, and the evidence trail your team should build now."
date: 2026-05-14
updated: 2026-05-14
author: "Dezső Mező"
tags: "AI, EU AI Act, Compliance, EU, Buyer guide, AI governance"
slug: eu-ai-act-in-practice-2026
canonical: https://dfieldsolutions.com/blog/eu-ai-act-in-practice-2026
---

# The EU AI Act in practice: a 2026 guide for teams shipping AI

The EU AI Act is phasing in, it reaches companies far outside the EU, and most of the work is engineering, not legal. Here's how a build team should actually think about it in 2026.

The Act is the first broad, horizontal law regulating artificial intelligence, and in 2026 it has stopped being a future problem. It entered into force in 2024 and its obligations are phasing in on a staggered timeline: the bans on prohibited practices applied first, obligations for general-purpose AI models followed, and the heavier high-risk-system requirements land across 2026 and 2027. This guide is the engineering-first version — what the Act means for a team that ships software, how to find where your product sits, and what to actually do. It is not legal advice; your specific classification should be confirmed with counsel. But most of the Act is an engineering and documentation exercise, and that part we can be concrete about.

**TL;DR**
- It reaches you even outside the EU · the Act applies if your AI system is placed on the EU market or its output is used in the EU. A US or UAE company with EU users is in scope.
- Four risk tiers · prohibited (banned outright), high-risk (strict obligations), limited-risk (transparency obligations), minimal-risk (no obligations). Most B2B software is limited or minimal.
- The transparency rules catch everyone · if users interact with an AI system, they must be told; AI-generated or -manipulated content must be labelled.
- High-risk is a different world · if your system falls in a high-risk use case, you owe a risk-management system, data governance, logging, human oversight and a conformity assessment.
- Do it while you build · classification, technical documentation and evaluation records are far cheaper to produce alongside the code than to reconstruct for an auditor afterwards.

> **NOTE:** This is engineering and documentation guidance from a studio that ships AI under the Act — not legal advice. Risk classification has real legal consequences; confirm yours with a qualified lawyer. What follows is how a build team should prepare so that, whatever the classification, the evidence already exists.

## Who the AI Act applies to

The most common mistake is assuming the Act is an EU-companies problem. It isn't. It applies extraterritorially: if you are a provider placing an AI system on the EU market, or a provider or deployer whose system's output is used in the EU, you are in scope regardless of where your company is incorporated. A US SaaS with European customers, a UAE platform whose AI feature serves EU users — both are inside the Act's reach.

Two roles matter for most teams. A provider develops an AI system (or has one developed) and puts it on the market under its own name. A deployer uses an AI system under its own authority in a professional context. You can be both — building an AI feature into your own product makes you a provider of that feature and a deployer of any third-party models inside it. The obligations differ by role, so the first question in any classification exercise is: for this system, which role are we in?

## The four risk tiers, in plain terms

The Act does not regulate "AI" as one thing. It sorts AI systems into four tiers by the risk of their use case, and the obligations scale with the tier.

### Prohibited · unacceptable risk

A small set of practices are banned outright — things like social scoring by public authorities, manipulative techniques that exploit vulnerabilities, and certain biometric-categorisation and untargeted facial-image-scraping uses. If your product does one of these, the answer is not compliance, it's redesign. Most teams will never touch this tier.

### High-risk

Systems used in defined sensitive contexts — for example, in employment and worker management, access to essential services, credit scoring, critical infrastructure, certain law-enforcement and migration uses, and AI that is a safety component of a regulated product. High-risk does not mean dangerous; it means the use case is one the Act lists as carrying significant risk to rights or safety. This tier carries the heaviest obligations, and landing in it changes the shape of the whole project.

### Limited-risk · transparency obligations

Systems that interact with people, generate content, or perform emotion recognition or biometric categorisation. The obligation here is disclosure, not a conformity assessment. This is where most chatbots, assistants, content tools and recommendation features land.

### Minimal-risk

Everything else — spam filters, AI in a video game, inventory forecasting. No mandatory obligations under the Act, though voluntary codes of conduct are encouraged. A large share of practical B2B AI sits here or in the limited-risk tier.

## How to find your tier

Classification is driven by the use case and context, not the technology. The same model can be minimal-risk in one product and high-risk in another. Work through it system by system, not company-wide.

1. Define the system narrowly · one classification per AI system, scoped to its actual purpose. "Our AI" is too broad to classify.
2. Check the prohibited list first · if the use case matches a banned practice, stop and redesign.
3. Check the high-risk use cases · compare your purpose against the Act's high-risk list and its annexes. If it plausibly matches, treat it as high-risk until counsel confirms otherwise.
4. Check the transparency triggers · does the system interact with people, generate or manipulate content, or perform emotion recognition or biometric categorisation? If yes, you have transparency obligations.
5. Default to minimal-risk only when nothing above matched · and document why.
6. Re-run the check when the use case changes · a new feature or a new customer segment can move a system between tiers.

> **TIP:** Write the classification down as a short, dated document per system — the purpose, the tier, and the reasoning. That document is the first artefact a regulator, an enterprise customer's procurement team, or a due-diligence questionnaire will ask for.
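The classification record can live in the repo next to the code it describes. A minimal sketch, assuming a hypothetical shape (the Act does not prescribe a format; the system name, tier strings and field names here are illustrative):

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical per-system classification record: purpose, tier, reasoning, date.
# Keep one of these per AI system, updated whenever the use case changes.
@dataclass
class AIActClassification:
    system: str        # narrowly scoped system name, not "our AI"
    role: str          # "provider", "deployer", or both
    purpose: str       # the system's actual, intended purpose
    tier: str          # "prohibited" | "high-risk" | "limited-risk" | "minimal-risk"
    reasoning: str     # which lists were checked and what matched
    transparency_triggers: list[str] = field(default_factory=list)
    assessed_on: date = field(default_factory=date.today)

record = AIActClassification(
    system="support-chatbot",
    role="provider",
    purpose="Answer customer support questions about our product",
    tier="limited-risk",
    reasoning="Interacts with people (transparency trigger); no high-risk use-case match",
    transparency_triggers=["interacts-with-people", "generates-content"],
)
print(record.tier)  # prints "limited-risk"
```

Because it is a dated, versioned file, the git history doubles as the record of when the classification was re-run.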

## What each tier actually requires

For the two tiers most teams land in, the obligations are concrete and manageable.

### Limited-risk: the transparency set

- Tell people they're interacting with an AI system, unless it's obvious from the context.
- Mark AI-generated or AI-manipulated content — synthetic audio, image, video and text — as artificially generated, in a machine-readable way where feasible.
- If you run emotion recognition or biometric categorisation, inform the people exposed to it.
- Keep it honest and current · the disclosure has to reflect what the system actually does.
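In code, the disclosure and the machine-readable label can travel with the content itself. A minimal sketch, assuming a hypothetical JSON envelope (the Act asks for machine-readable marking where feasible but mandates no schema; media pipelines may instead use a provenance standard such as C2PA):

```python
import json

def label_generated(text: str, model: str) -> dict:
    """Wrap AI-generated text with a human-visible disclosure and a
    machine-readable provenance flag. Hypothetical format, for illustration."""
    return {
        "content": text,
        "ai_generated": True,   # machine-readable flag downstream tools can check
        "generator": model,
        "disclosure": "This content was generated by an AI system.",
    }

payload = label_generated("Here is a draft reply.", model="assistant-v1")
print(json.dumps(payload, indent=2))
```

The design point is that the label is attached at generation time, so every surface that renders the content can show the disclosure without its own retrofit.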

### High-risk: a different scope of work

If a system is high-risk, the obligations expand into a risk-management system maintained across the lifecycle, data-governance practices for training and testing data, detailed technical documentation, automatic logging of events, meaningful human oversight, and an appropriate level of accuracy, robustness and cybersecurity — plus a conformity assessment before the system goes on the market, and registration where required. This is not a documentation afterthought; it shapes architecture decisions. If you are plausibly high-risk, budget for it from the first sprint.

## The transparency rules that catch everyone

Even teams who correctly classify their core system as minimal-risk often miss that a single feature pulls them into transparency obligations. A support chatbot, an AI email drafter, a feature that generates images or summaries — each of these interacts with people or generates content, so each carries a disclosure duty. The fix is small and worth doing early: a clear "you're talking to an AI assistant" line, and a label on generated content. Retrofitting disclosure across a UI after launch is more annoying than designing it in.

## What an engineering team should do now

The legal classification needs a lawyer. The evidence the classification depends on is built by engineers, and it's the same evidence that wins enterprise procurement reviews and answers AI-due-diligence questionnaires. Build it as you go.

1. Maintain a per-system classification document · purpose, tier, reasoning, date — updated when the use case changes.
2. Keep technical documentation current · what the system does, the model(s) and data it uses, known limitations. A living doc, not a launch-day artefact.
3. Record your evaluations · the eval harness you already run for quality doubles as your evidence of testing. Keep the results, versioned.
4. Log meaningfully · enough event logging to reconstruct what a system did and why, scaled to the system's risk.
5. Design transparency in · the AI disclosure and content labelling decided at design time, not bolted on.
6. Track the model supply chain · which third-party models and providers you depend on, and what they tell you about their own Act compliance.

**By the numbers**
- Scope: Extraterritorial — non-EU companies with EU users are in scope
- Risk tiers: Prohibited · High-risk · Limited-risk · Minimal-risk
- Where most B2B software lands: Limited-risk or minimal-risk
- Limited-risk core duty: Disclose AI interaction · label AI-generated content
- Cheapest time to do the work: During the build — alongside the code

## How DField Solutions handles the AI Act

Every AI engagement we ship includes a risk-classification document for the system, technical documentation kept current with the build, and the evaluation records from the eval harness we run in CI anyway. We design the transparency disclosures in from the start, and we hand you the model supply-chain notes so your due-diligence answers are ready. We are engineers, not lawyers — we build the evidence trail so that whatever your counsel confirms the classification to be, the artefacts already exist and are accurate.

If you're scoping an AI build and want the compliance layer handled inside it rather than chased afterwards, the [AI service page](/services/ai) covers how we work, and a [30-minute discovery call](/contact) is the fastest way to talk through your specific system. For the wider regulatory picture, the [glossary](/glossary) has plain-language entries on the AI Act, GDPR, NIS2 and the terms around them.

**Key takeaways**
- The AI Act reaches non-EU companies — incorporation outside Europe is not an exemption.
- Classify per system, by use case, not company-wide — and write the reasoning down.
- Most B2B software is limited-risk or minimal-risk; the core duty is honest AI disclosure and content labelling.
- High-risk is a different scope of work that shapes architecture — budget for it from sprint one if you plausibly qualify.
- The evidence trail is engineering work, cheapest built alongside the code and reused for enterprise procurement reviews.

---

Source: https://dfieldsolutions.com/blog/eu-ai-act-in-practice-2026
Author: Dezső Mező · Founder, DField Solutions
Site: https://dfieldsolutions.com
