
The AI Act for a European SMB — what actually applies to you?

Published 21 March 2026

TL;DR

  • The AI Act took effect 1 August 2024 and rolls out in stages through 2027. Most practical provisions apply from 2 August 2026.
  • The regulation classifies AI systems by risk: prohibited, high-risk, transparency requirements, or minimal risk.
  • For most SMBs that use AI (ChatGPT, copilots, scan tools), what mainly applies is the transparency requirements — and often nothing more.
  • You only have real work to do if you develop AI yourself or use it in high-risk applications (recruitment, credit scoring, health, education grading, biometrics).

Who governs — and who's after you

The EU AI Act (Regulation 2024/1689, "AI Act") is directly applicable law in all member states. Each member state designates its own supervisory authorities — typically a mix of data protection regulators and a new AI authority or existing sectoral regulators.

To be clear: the AI Act is not a wait-and-see policy framework. It is a regulation with concrete requirements, phased application dates, and high penalty levels (up to €35M or 7% of global annual turnover for prohibited AI practices).

What counts as an "AI system" under the AI Act

The definition is intentionally broad — in practice anything that makes autonomous inferences from input data. This includes:

  • ChatGPT, Claude, Gemini and similar generative models
  • Image recognition (also in phones, security cameras)
  • Decision support in HR recruitment, credit, insurance
  • Recommendation engines (if you sell them as a service)
  • Automated translation, summarisation
  • Scanning tools like our CompliantHQ — we use AI to synthesise remediation plans

Classic rule-based systems (if-this-then-that) do NOT count as AI within the meaning of the regulation. Nor do purely statistical calculations that involve no inference from input data.

Risk classification — where does your usage land?

The AI Act splits AI systems into four levels:

1. Prohibited (Art. 5) — applies from 2 February 2025

Outright forbidden regardless of size:

  • Social scoring systems (à la China)
  • Real-time biometrics in public spaces (with few exceptions)
  • Manipulative systems exploiting children's vulnerability
  • Emotion recognition in workplaces or schools
  • Biometric categorisation based on political/religious views

For most SMBs: you're almost certainly doing none of this.

2. High-risk (Art. 6) — applies from 2 August 2026

The use cases are listed in Annex III. (High-risk AI embedded in products already regulated under Annex I follows on 2 August 2027.) Includes:

  • AI for recruitment, selection, performance evaluation
  • Consumer credit scoring
  • Healthcare diagnostics
  • Education — automatic grading, admissions systems
  • Law enforcement, border control, migration screening
  • Critical infrastructure (power grid, water supply)

For SMBs: if you use AI in your hiring process (e.g. CV screening, automatic candidate ratings) you fall under high-risk. As a deployer you must then use the system according to the provider's instructions, ensure human oversight by trained staff, monitor its operation, and inform affected candidates and employees — significant obligations, even though the heaviest documentation and registration duties sit with the provider.

3. Transparency requirements (Art. 50) — apply from 2 August 2026

Much lower threshold:

  • Chatbots must inform the user they're talking to AI
  • AI-generated content (text, image, video) must be labelled as such when relevant for the reader
  • Deepfakes must be declared

For SMBs: if you use AI for customer support, it must be clear that the customer is not talking to a human. If you publish AI-generated content on your blog or in marketing, label it when the context makes that material (a fully AI-generated "customer story" must be labelled, for example).
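By way of illustration, a publishing workflow could append a simple disclosure line to AI-generated copy before it goes out. This is a hypothetical sketch: the function name and the label wording are our own, since the AI Act prescribes no specific label text.

```python
# Illustrative sketch: append a disclosure line to AI-generated copy.
# The wording of AI_LABEL is an example, not prescribed by the AI Act.

AI_LABEL = "[This text was generated with the help of AI.]"

def label_ai_content(text: str, ai_generated: bool) -> str:
    """Return the text with a disclosure line appended when it is AI-generated."""
    if not ai_generated:
        return text
    return text.rstrip() + "\n\n" + AI_LABEL
```

Wiring a check like this into your CMS or review step makes labelling a default rather than something each author must remember.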

4. Minimal risk — always OK

Everything else. Spam filters, recommendations, scan tools without high-risk decisions. This is where most everyday tools land.

What you actually need to do

Realistic walk-through for an SMB:

Step 1 — Inventory your AI usage. List every AI tool your company uses. ChatGPT in marketing counts. Copilot in the dev team counts. Your CRM's "smart" search often counts. Include embedded AI features in services you pay for (Salesforce Einstein, HubSpot AI, etc.).

Step 2 — Classify each usage.

  • Is it recruitment, credit scoring, or another high-risk category? → you have actual work ahead.
  • Is it a chatbot or AI-generated content facing customers? → labelling is enough.
  • Is it an internal assistant tool (code suggestions, meeting summaries, etc.)? → minimal risk, no AI Act action required.
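The inventory-and-classify steps above can be sketched in a few lines. This is an illustrative toy, not legal advice: the category names and the mapping are simplified assumptions, and a real classification requires checking Annex III against your specific use case.

```python
# Toy sketch of Steps 1-2: list your AI usages, then bucket each one by
# AI Act risk level. Categories and tool names below are illustrative.

HIGH_RISK_USES = {"recruitment", "credit_scoring", "health_diagnostics",
                  "education_grading", "biometrics"}
TRANSPARENCY_USES = {"customer_chatbot", "published_ai_content", "deepfake"}

def classify(use_case: str) -> str:
    """Return a rough AI Act bucket for one inventoried usage."""
    if use_case in HIGH_RISK_USES:
        return "high-risk: documentation, human oversight, registration"
    if use_case in TRANSPARENCY_USES:
        return "transparency: labelling required"
    return "minimal risk: no AI Act action required"

inventory = {
    "ChatGPT (marketing drafts)": "internal_assistant",
    "Copilot (dev team)": "internal_assistant",
    "Support chatbot on website": "customer_chatbot",
    "CV screening plugin in ATS": "recruitment",
}

for tool, use in inventory.items():
    print(f"{tool}: {classify(use)}")
```

Even a spreadsheet version of this table gets you most of the value: the point is having the full list and one risk label per row.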

Step 3 — Address transparency. If you have a customer-facing chatbot, ensure the first message clarifies that the customer is talking to an automated assistant. If your support escalates to a human, make the handover clear.
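As a minimal sketch of what Step 3 could look like in practice (the function names, message wording, and transcript structure are all hypothetical; the AI Act requires that users are informed, not any particular phrasing):

```python
# Hypothetical sketch: a support chat that always opens with an AI
# disclosure and makes the human handover explicit.

DISCLOSURE = ("Hi! You are chatting with an automated AI assistant. "
              "Type 'agent' at any time to reach a human colleague.")

def start_chat_session(session_id: str) -> list[dict]:
    """Open a new support chat; the first message always discloses the AI."""
    return [{"role": "system-notice", "session": session_id,
             "text": DISCLOSURE}]

def escalate_to_human(transcript: list[dict]) -> list[dict]:
    """Append an explicit handover notice when a human takes over."""
    transcript.append({"role": "system-notice",
                       "text": "You are now connected to a human agent."})
    return transcript
```

The design point is that disclosure lives in the session setup, not in the bot's prompt, so it cannot be skipped by a model update.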

Step 4 — Manage employee usage. Your staff use AI daily (ChatGPT etc.). Have an internal policy covering:

  • What may / may not be input into consumer versions of AI (customer data, code base, financial figures — usually NO)
  • How AI-generated content may be used externally
  • Who's responsible if AI gives a wrong answer that leads to customer harm

This isn't strictly an AI Act requirement — but flows from GDPR responsibilities plus general sound risk management.

What if you develop your own AI

Then you'll want to read the regulation yourself. Briefly: it can be substantial depending on use case. Requirements include dataset documentation, risk and quality management, human oversight, transparency, plus registration in the EU's AI database. Plan for a lawyer specialised in AI from an early stage.

For SMBs purchasing AI tools, the supplier carries most of the compliance burden. Your requirements become lower — primarily choosing a supplier who shows documentation, plus your own transparency around usage.

Timeline you should track

  • 1 Aug 2024: the AI Act entered into force
  • 2 Feb 2025: prohibited AI practices became illegal (Art. 5); AI literacy requirements for staff apply (Art. 4)
  • 2 Aug 2025: governance rules and documentation requirements for general-purpose AI models (the models behind ChatGPT, Claude, etc.)
  • 2 Aug 2026: most of the regulation applies, including transparency requirements (Art. 50) and obligations for Annex III high-risk systems. This is when chatbot labelling becomes a legal requirement.
  • 2 Aug 2027: high-risk rules extend to AI embedded in products regulated under Annex I (Art. 6); the act is then fully applicable.

What you should do now

  1. Inventory your AI usage (1–2 hours' work unless you're a surprisingly heavy AI user).
  2. Classify by risk level — for 90% of SMBs the conclusion will be "minimal risk + some transparency requirements".
  3. Create a short internal AI policy (1–2 pages) covering what staff may do with AI tools.
  4. Set up chatbot labelling if you have AI communicating with customers.

What we don't help you with directly: AI Act compliance isn't a part of CompliantHQ today. We'll add an AI Act module later this year — until then it's manual review or a legal consultant for your specific situation. If you'd like a check-up of your cookies and accessibility (where we have tools ready) you can scan your site free below.

Run a free scan of your site →