EU AI Act compliance checker for AI systems and AI agents
A practical readiness page for teams shipping AI into the EU. Check scope, roles, controls, evidence, and oversight, then fix the gaps before they turn into a messy scramble.
If three or more of the ten checks below are unclear, you probably need a proper review.
Technical audit, evidence gaps, workflow controls, logging, approvals, agent governance, and a fix list your team can actually implement.
UK or non-EU teams selling, deploying, or supporting AI systems used in the EU
Product, ops, compliance, and engineering leads who need a tighter view of obligations
Teams using AI agents, workflow automation, or general-purpose AI in customer-facing or operational processes
Businesses that need an implementation plan, not a vague policy deck
10-point EU AI Act readiness checker
This is not a legal determination. It is a practical triage pass to help you spot where classification, governance, evidence, or operating controls still look thin.
System inventory and scope
Can you name every AI system or agent workflow in scope, its purpose, and where it touches EU users, workers, or customers?
You cannot classify or govern what you have not inventoried properly.
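If you have nothing yet, even a minimal structured record per system beats a spreadsheet tab nobody owns. A rough Python sketch, where every field name is an illustrative assumption rather than a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One inventory entry per AI system or agent workflow (illustrative fields)."""
    name: str                 # e.g. "support-triage-agent"
    purpose: str              # plain-language description of what it does
    eu_touchpoints: list[str] = field(default_factory=list)  # EU users, workers, or customers it reaches
    owner: str = ""           # the accountable person or team
    models_used: list[str] = field(default_factory=list)     # underlying models and vendors

inventory = [
    AISystemRecord(
        name="support-triage-agent",
        purpose="Drafts and routes replies to customer support tickets",
        eu_touchpoints=["EU customers via the support portal"],
        owner="ops-lead",
        models_used=["third-party LLM API"],
    ),
]
```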
Role clarity
Do you know whether you are acting as a provider, deployer, importer, distributor, or a mix depending on the workflow?
Obligations can change substantially when your role does, especially if you customise or repackage systems.
Prohibited use screening
Have you checked whether any use case could drift into prohibited practices or a clearly unacceptable-risk pattern?
This is a front-end triage step, not something to discover after launch.
Risk classification
Have you assessed whether any system could be limited-risk, high-risk, GPAI-related, or tied into regulated product or safety obligations?
Classification drives documentation, controls, and timelines.
Human oversight
Is there a real human checkpoint for sensitive outputs, approvals, escalations, and overrides?
For AI agents especially, oversight cannot just exist on paper.
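What a real checkpoint looks like in practice: sensitive agent actions block until a named human signs off. A minimal sketch, assuming a simple allowlist of sensitive actions (all names here are hypothetical):

```python
SENSITIVE_ACTIONS = {"send_customer_email", "issue_refund", "change_account_data"}

def execute_action(action: str, payload: dict, approved_by: str | None = None) -> dict:
    """Run an agent action, forcing a human sign-off for sensitive ones."""
    if action in SENSITIVE_ACTIONS and approved_by is None:
        # Block and escalate instead of letting the agent act autonomously
        raise PermissionError(f"'{action}' requires human approval before execution")
    # ... perform the action here, then record who approved it ...
    return {"action": action, "payload": payload, "approved_by": approved_by}
```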
Logging and evidence
Can you show logs, prompt or tool traces, approval history, incidents, and policy changes in a reviewable way?
If the evidence is weak, your governance is weak.
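Reviewable usually means structured and append-only, not scattered console output. A sketch of one record per agent step, written as JSON lines (the field names are assumptions, not a standard):

```python
import json
import time

def log_agent_step(run_id: str, step: str, detail: dict, path: str = "agent_audit.jsonl") -> None:
    """Append one timestamped record per prompt, tool call, approval, or override."""
    entry = {"ts": time.time(), "run_id": run_id, "step": step, "detail": detail}
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

# One reviewable line per event, in order, with nothing overwritten
log_agent_step("run-42", "tool_call", {"tool": "crm_lookup", "args": {"customer_id": "c-123"}})
log_agent_step("run-42", "approval", {"action": "send_customer_email", "approved_by": "ops-lead"})
```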
Data and instructions
Do you know what data enters the system, which instructions govern it, and how outputs are constrained?
Good controls depend on clear inputs, source handling, and boundaries.
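Boundaries can start as simple, explicit checks on what enters and leaves the system. A hedged sketch; the rules shown are illustrative assumptions, not a complete guardrail:

```python
ALLOWED_SOURCES = {"crm", "knowledge_base"}  # illustrative: only approved data sources feed the system

def check_input(source: str, text: str) -> str:
    """Reject data from sources that were never approved for this workflow."""
    if source not in ALLOWED_SOURCES:
        raise ValueError(f"Source '{source}' is not approved for this workflow")
    return text

def check_output(text: str, max_length: int = 2000) -> str:
    """Constrain outputs before they reach users; oversized ones go to human review."""
    if len(text) > max_length:
        raise ValueError("Output exceeds the permitted length; route to human review")
    return text
```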
Third-party model and vendor chain
Have you mapped model providers, API vendors, platform dependencies, and contract gaps?
A lot of compliance exposure sits in the supplier chain, not just your own app layer.
AI literacy and internal operating model
Have relevant staff been trained on how the system works, when to intervene, and what is not allowed?
AI literacy is not just awareness. It affects daily control quality.
Remediation plan
If gaps were found today, do you already know the owner, priority, and fix path for each one?
A checker is only useful if it leads to practical remediation.
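Each gap should become a tracked item with an owner, a priority, and a concrete fix path from day one. A minimal sketch with illustrative fields:

```python
from dataclasses import dataclass

@dataclass
class RemediationItem:
    """One tracked fix per gap the checker surfaces (illustrative fields)."""
    gap: str       # what failed the check
    owner: str     # a single accountable person
    priority: str  # e.g. "P1" for the highest-exposure workflows
    fix_path: str  # a concrete next action, not a policy statement
    due: str       # target date

backlog = [
    RemediationItem(
        gap="No approval trail for agent-sent emails",
        owner="eng-lead",
        priority="P1",
        fix_path="Add an approval checkpoint and audit log to the email tool",
        due="2026-06-30",
    ),
]
```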
This is about controls and delivery, not just policy wording
OpenClaw Consultant helps teams turn fuzzy compliance anxiety into an ordered delivery plan. That usually means sorting the system inventory, clarifying roles, tightening agent permissions, improving oversight, and making the evidence trail easier to review.
Legal timing, carefully stated, as at 14 April 2026
The EU AI Act applies in phases. The dates below are useful planning anchors, but timing and interpretation should still be handled carefully. The European Commission proposed simplification changes on 19 November 2025, so do not overstate certainty where your obligations depend on final interpretation, classification, or linked product rules.
AI Act entered into force (1 August 2024)
The framework formally entered into force, starting the phased application timetable.
Prohibited practices, definitions, and AI literacy (from 2 February 2025)
Early provisions started to bite, including banned practices, core definitions, and AI literacy requirements.
Governance rules and GPAI obligations (from 2 August 2025)
Governance structures and obligations for general-purpose AI models began to apply.
Most remaining obligations (from 2 August 2026)
Most of the remaining obligations are due to apply from this point, depending on role and use case.
Some Annex I-linked and product safety obligations (from 2 August 2027)
Certain obligations tied to product safety legislation and Annex I pathways land later.
What this means in practice
If you are still working out what is in scope, who owns which obligation, or how your agent workflows are supervised, now is the time to tighten that up. Waiting until August 2026 to start collecting evidence is the wrong way round.
A sensible next move is a scoped readiness review, then a remediation sprint focused on the workflows with the highest exposure first.
Common questions
Is this legal advice?
No. This page is about technical readiness, evidence, controls, and implementation support. Formal legal interpretation should sit with qualified counsel where needed.
Does the EU AI Act matter if our business is based in the UK?
It can. If your AI system or AI-enabled service is placed on the market, put into service, or used in ways that fall within the Act’s scope in the EU, you need to assess it properly.
Do AI agents change the compliance picture?
Often yes. Agent workflows add operational questions around autonomy, tool access, approvals, logging, and human intervention that need to be designed deliberately.
What happens if we are not ready yet?
That is normal. The useful next step is a scoped gap review, then a practical remediation plan rather than trying to solve everything at once.
Book a focused EU AI Act gap review
Tell us what you are shipping, where AI agents or models sit in the workflow, and where you are unsure. We will help you turn that into a practical audit and remediation path.