EU AI Act Readiness

EU AI Act compliance checker
for AI systems and AI agents

A practical readiness page for teams shipping AI into the EU. Check scope, roles, controls, evidence, and oversight, then fix the gaps before they turn into a messy scramble.

Updated April 2026 · Implementation-led support · Not legal advice
Quick signal check

If 3 or more of these are unclear, you probably need a proper review.

We know which AI systems and agent workflows are in scope
We know our likely provider or deployer role per workflow
We can evidence human oversight and operational controls
We have logs, approvals, and supplier visibility where it matters
What OCC helps with

Technical audit, evidence gaps, workflow controls, logging, approvals, agent governance, and a fix list your team can actually implement.

Who this is for

UK or non-EU teams selling, deploying, or supporting AI systems used in the EU

Product, ops, compliance, and engineering leads who need a tighter view of obligations

Teams using AI agents, workflow automation, or general-purpose AI in customer-facing or operational processes

Businesses that need an implementation plan, not a vague policy deck

10-point EU AI Act readiness checker

This is not a legal determination. It is a practical triage pass to help you spot where classification, governance, evidence, or operating controls still look thin.

Check 1

System inventory and scope

Yes / Partly / No

Can you name every AI system or agent workflow in scope, its purpose, and where it touches EU users, workers, or customers?

You cannot classify or govern what you have not inventoried properly.

Check 2

Role clarity

Yes / Partly / No

Do you know whether you are acting as a provider, deployer, importer, distributor, or a mix depending on the workflow?

Obligations shift quickly once your role does, especially when you customise or repackage systems.

Check 3

Prohibited use screening

Yes / Partly / No

Have you checked whether any use case could drift into prohibited practices or a clearly unacceptable-risk pattern?

This is a front-end triage step, not something to discover after launch.

Check 4

Risk classification

Yes / Partly / No

Have you assessed whether any system could be limited-risk, high-risk, GPAI-related, or tied into regulated product or safety obligations?

Classification drives documentation, controls, and timelines.

Check 5

Human oversight

Yes / Partly / No

Is there a real human checkpoint for sensitive outputs, approvals, escalations, and overrides?

For AI agents especially, oversight cannot just exist on paper.

Check 6

Logging and evidence

Yes / Partly / No

Can you show logs, prompt or tool traces, approval history, incidents, and policy changes in a reviewable way?

If the evidence is weak, your governance is weak.
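One way to make agent activity reviewable is a structured, append-only audit trail rather than scattered application logs. The sketch below is a hypothetical illustration, not a prescribed format: one JSON line per action, with each entry hash-chained to its predecessor so tampering with earlier entries is detectable at review time.

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical sketch: one JSON line per agent or human action,
# hash-chained so reviewers get an ordered, tamper-evident trail.

def append_audit_entry(log: list[str], actor: str, action: str, detail: dict) -> str:
    prev_hash = hashlib.sha256(log[-1].encode()).hexdigest() if log else "genesis"
    entry = json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "actor": actor,     # agent or human identity
        "action": action,   # e.g. "tool_call", "approval", "override"
        "detail": detail,   # prompt/tool trace, decision, policy version
        "prev": prev_hash,  # links this entry to the one before it
    }, sort_keys=True)
    log.append(entry)
    return entry

log: list[str] = []
append_audit_entry(log, "agent:billing", "tool_call", {"tool": "send_invoice"})
append_audit_entry(log, "human:j.smith", "approval", {"decision": "approved"})
```

Whatever format you use, the review questions stay the same: who acted, what they did, what evidence links the entries together, and whether the trail can be handed to a reviewer as-is.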

Check 7

Data and instructions

Yes / Partly / No

Do you know what data enters the system, which instructions govern it, and how outputs are constrained?

Good controls depend on clear inputs, source handling, and boundaries.

Check 8

Third-party model and vendor chain

Yes / Partly / No

Have you mapped model providers, API vendors, platform dependencies, and contract gaps?

A lot of compliance exposure sits in the supplier chain, not just your own app layer.

Check 9

AI literacy and internal operating model

Yes / Partly / No

Have relevant staff been trained on how the system works, when to intervene, and what is not allowed?

AI literacy is not just awareness. It affects daily control quality.

Check 10

Remediation plan

Yes / Partly / No

If gaps were found today, do you already know the owner, priority, and fix path for each one?

A checker is only useful if it leads to practical remediation.

Implementation-led support

This is about controls and delivery, not just policy wording

OpenClaw Consultant helps teams turn fuzzy compliance anxiety into an ordered delivery plan. That usually means sorting the system inventory, clarifying roles, tightening agent permissions, improving oversight, and making the evidence trail easier to review.

Inventory and workflow map
Role and classification review
Controls and oversight gap list
Evidence and logging recommendations
Prioritised remediation actions

Typical rough edges we see

AI agents with tool access but weak approvals or incomplete logs
Unclear boundary between provider and deployer responsibilities
Supplier stack risks buried in product or procurement decisions
Teams trying to do classification work without a clean inventory first
Policy documents that do not match how the workflow actually behaves

Legal timing, carefully stated, as of 14 April 2026

The EU AI Act applies in phases. The dates below are useful planning anchors, but timing and interpretation should still be handled carefully. The European Commission proposed simplification changes on 19 November 2025, so do not overstate certainty where your obligations depend on final interpretation, classification, or linked product rules.

1 August 2024

AI Act entered into force

The framework formally entered into force, starting the phased application timetable.

2 February 2025

Prohibited practices, definitions, and AI literacy

Early provisions started to bite, including banned practices, core definitions, and AI literacy requirements.

2 August 2025

Governance rules and GPAI obligations

Governance provisions and obligations for general-purpose AI models began to apply from this date.

2 August 2026

Most remaining obligations

Most of the remaining obligations are due to apply from this point, depending on role and use case.

2 August 2027

Some Annex I-linked and product safety obligations

Certain obligations tied to product safety legislation and Annex I pathways land later.

What this means in practice

If you are still working out what is in scope, who owns which obligation, or how your agent workflows are supervised, now is the time to tighten that up. Waiting until August 2026 to start collecting evidence is the wrong way round.

A sensible next move is a scoped readiness review, then a remediation sprint focused on the workflows with the highest exposure first.

Common questions

Is this legal advice?

No. This page is about technical readiness, evidence, controls, and implementation support. Formal legal interpretation should sit with qualified counsel where needed.

Does the EU AI Act matter if our business is based in the UK?

It can. If your AI system or AI-enabled service is placed on the market, put into service, or used in ways that fall within the Act’s scope in the EU, you need to assess it properly.

Do AI agents change the compliance picture?

Often yes. Agent workflows add operational questions around autonomy, tool access, approvals, logging, and human intervention that need to be designed deliberately.

What happens if we are not ready yet?

That is normal. The useful next step is a scoped gap review, then a practical remediation plan rather than trying to solve everything at once.


Book a focused EU AI Act gap review

Tell us what you are shipping, where AI agents or models sit in the workflow, and where you are unsure. We will help you turn that into a practical audit and remediation path.

Practical triage, not generic AI theatre
Clear view of controls, evidence, and highest-priority fixes
Helpful for AI systems, AI agents, and mixed automation workflows

No obligation. We'll reply within 24 hours.