Apr 22, 2026
The EU AI Act 100-Day Countdown: What SMEs Must Do Before August 2, 2026
EU AI Act SME compliance in 102 days: the 5 things SMEs need in place before the 2 August 2026 high-risk provisions go live — and why it's an opportunity.
AUTHOR
Team Logicfox

On 2 August 2026, the high-risk provisions of the EU AI Act become fully enforceable. Fines top out at €35 million or 7% of global turnover — whichever is greater. That's 102 days from the time of writing.
Before any SME founder assumes this is an enterprise problem, the scope clause is worth reading twice. If any EU resident meaningfully interacts with an AI tool a business operates, that business is in scope. A UK accounting firm with three clients in Dublin. A London D2C brand that ships to Paris. A small SaaS with a free-tier user in Madrid. All in scope.
Most of what the Act asks for is what good AI engineering looks like anyway. The deadline is less a burden than a forcing function.
Most SMEs are somewhere between "we heard about this" and "we'll deal with it later." EU AI Act SME compliance doesn't have to mean twenty pages of legalese or a £40k consulting engagement. It means five things a business needs in place, a clear reason each one matters, and a path that gets there inside 100 days without grinding product velocity to a halt. More on that below.
1. What Actually Changes on 2 August 2026
The Act has been rolling out in phases. Prohibitions and foundational AI literacy requirements went live in February 2025. What switches on in August 2026 is the meat of the regime: high-risk system obligations, conformity assessments, post-market monitoring, and the full penalty framework.
What's Already Live vs What's Coming
| | Feb 2025 (live now) | Aug 2026 (in 102 days) |
|---|---|---|
| Scope | Prohibited AI practices, foundational AI literacy | Full high-risk regime across deployers and providers |
| Obligations | Remove/disallow banned practices, train staff on AI basics | Risk classification, transparency, human oversight, impact assessment, post-market monitoring, incident reporting |
| Documentation | Minimal | Technical file, conformity declaration, oversight evidence |
| Penalties | Up to €35M / 7% turnover for prohibited practices | Up to €35M / 7% for prohibited; €15M / 3% for other infringements; €7.5M / 1% for misleading info |
| Who's exposed | Everyone using AI in the EU | Any business where an EU resident interacts meaningfully with its AI |
"High-risk" is broader than most SME founders realise. It includes AI used in recruitment and HR decisions, credit scoring, employee management, critical infrastructure, and customer-facing systems that influence consequential decisions. A chatbot that qualifies leads and routes them to a sales pipeline isn't obviously high-risk. A chatbot that evaluates loan applicants is. The line between the two is where most SMEs will find themselves standing, squinting.
The other thing people miss: the Act applies extraterritorially. An EU office is not required for liability. Exposure is defined by where users, employees, or the people affected by an AI's outputs sit — not where the company is incorporated. For a London business with European customers, this is not theoretical.
➡️ The EU's "Digital Omnibus" reforms in late 2025 did carve out lighter-touch requirements for SMEs and small mid-caps: simplified technical documentation, reduced post-market monitoring templates, and lower penalty thresholds. Not the same engineering standard as OpenAI — but a standard, and higher than most SME teams assume.
2. The Five Obligations Every SME Needs Ready
If a business does nothing else in the next 100 days, these five items are the floor.
2.1 An AI Literacy Policy
The single cheapest item on the list and the one most SMEs ignore. Every employee who uses or is affected by an AI system must have "a sufficient level of AI literacy." In practice that means a written policy, a training record, and a defensible story about how staff are kept up to date.
It does not mean a 60-minute e-learning course no one watches. A one-page policy, a 20-minute onboarding session, and a Slack channel where staff can ask questions are enough for a small team — provided it can be proven to have happened.
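The "provided it can be proven" part is the piece teams skip. Any durable, timestamped record will do; as one minimal sketch (the file name and field names here are hypothetical, not anything the Act prescribes):

```python
import csv
from datetime import date
from pathlib import Path

# Hypothetical file name; any durable, timestamped store clears the bar.
# The point is a record you can produce on request a year from now.
RECORD_FILE = Path("ai_literacy_training_log.csv")

def log_training(session: str, attendees: list[str], trainer: str) -> None:
    """Append one training session to the attendance log."""
    new_file = not RECORD_FILE.exists()
    with RECORD_FILE.open("a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["date", "session", "attendee", "trainer"])
        for person in attendees:
            writer.writerow([date.today().isoformat(), session, person, trainer])

log_training(
    session="AI basics onboarding (20 min)",
    attendees=["a.khan", "j.smith", "m.oconnor"],
    trainer="ops.lead",
)
```

A shared spreadsheet does the same job; what matters is that attendance is recorded at the time, not reconstructed later.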
2.2 Transparency Disclosures
Anywhere AI interacts with a human, a disclosure is required. For chatbots, a line of interface copy. For AI-generated content, a disclosure in the output itself. For AI-influenced decisions (a CV screened by a model, a loan application scored by an agent), a clear notice to the affected person.
The bar here is clarity, not legalese. "This conversation is with an AI assistant. A human can take over at any time — just ask" clears the bar. A 900-word privacy-policy bolt-on does not.
2.3 Human Oversight Procedures
Where the regulation and the state of the art collide in a useful way. For any high-risk use, a business must demonstrate meaningful human oversight — the ability for a trained person to intervene, override, or halt the AI system before it causes harm.
Human-in-the-loop (HITL) has long been a "nice to have" bolted on after the fact. Under the Act it becomes a design requirement — the regulation reaches directly into the architecture. Not "someone can check the logs later," but a defined checkpoint, a routed approval, a time-boxed decision window, and an audit log of every intervention. Regulators will ask to see the oversight pattern, not just the policy.
2.4 Impact Assessments
Before a high-risk AI system is deployed, a fundamental rights and conformity assessment is required — a structured document identifying risks to users, the measures taken to mitigate them, and the residual risks that remain.
For SMEs, the Digital Omnibus allows a "simplified" version. A 3–5 page document covering: what the system does, who it affects, the main risks (discrimination, error, opacity, misuse), the controls in place, and the monitoring plan. Done once per system, refreshed annually, this is entirely tractable.
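The five sections above can even be scaffolded programmatically, one skeleton per system. The headings below simply mirror the structure described in this section — they are illustrative, not the Act's official template:

```python
# Hypothetical section list mirroring the simplified assessment described
# above; the headings are illustrative, not an official template.
SECTIONS = [
    ("What the system does", "Purpose, inputs, outputs, who operates it."),
    ("Who it affects", "Users, employees, third parties touched by outputs."),
    ("Main risks", "Discrimination, error, opacity, misuse."),
    ("Controls in place", "Oversight checkpoints, logging, fallback behaviour."),
    ("Monitoring plan", "Metrics watched, review cadence, owner."),
]

def assessment_skeleton(system_name: str) -> str:
    """Emit a markdown skeleton to fill in, one per high-risk system."""
    lines = [f"# Simplified impact assessment: {system_name}",
             "_Refresh annually; last reviewed: <date>_", ""]
    for title, prompt in SECTIONS:
        lines += [f"## {title}", f"<!-- {prompt} -->", ""]
    return "\n".join(lines)

print(assessment_skeleton("Lead-qualification chatbot"))
```

Keeping the skeletons identical across systems makes the annual refresh a diff exercise rather than a rewrite.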
2.5 Incident Response
If an AI does something harmful — or is suspected to have — a defined process is required for detecting it, containing it, notifying the relevant authorities within tight statutory deadlines (as short as two days for the most serious incidents), and documenting the fix.
This is adjacent to what any competent ops team already has in place for outages and security incidents. The difference is that AI incident response must explicitly include behavioural failures (the model started giving biased outputs) not just technical ones (the server went down).
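A minimal incident record makes that distinction explicit and computes the notification deadline from the moment of detection. The severity labels and reporting windows below are illustrative placeholders — confirm the actual windows for each incident class with counsel before relying on them:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Illustrative reporting windows — confirm the Act's actual deadlines
# for your incident class with counsel before relying on these.
REPORTING_WINDOWS = {
    "widespread_infringement": timedelta(days=2),
    "serious_incident": timedelta(days=15),
}

@dataclass
class AIIncident:
    description: str
    kind: str          # "technical" (server down) or "behavioural" (biased outputs)
    severity: str      # key into REPORTING_WINDOWS
    detected_at: datetime

    def notify_by(self) -> datetime:
        """The clock starts at detection, not at root-cause confirmation."""
        return self.detected_at + REPORTING_WINDOWS[self.severity]

incident = AIIncident(
    description="Screening agent began rejecting all applicants over 50",
    kind="behavioural",
    severity="widespread_infringement",
    detected_at=datetime(2026, 8, 10, tzinfo=timezone.utc),
)
print(incident.notify_by())  # 2026-08-12 00:00:00+00:00
```

The `kind` field is the point: a playbook that only pages on-call when a server falls over will never catch the behavioural failures the Act cares about.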
3. Why This Is Actually an Opportunity for SMEs
Here is the counterintuitive part. A widely cited March 2026 survey of 650 enterprise technology leaders found that 78% have AI agent pilots running, but fewer than 15% reach production. The gap is almost never about model quality. It is about the exact things the EU AI Act is now mandating: oversight, reliability, documentation, incident response.
AI projects fail in production for the same reasons they fail compliance audits. Fix one, and most of the other is already fixed.
The businesses that clear the compliance bar aren't the ones being slowed down — they are the ones who finally ship. For an SME, this is a rare moment where regulation and commercial reality point in the same direction. The work required to meet August 2 is almost identical to the work required to go from "impressive demo" to "reliable revenue-generating system." Businesses that invest in this now come out of the year with:
Agents that don't silently fail in front of customers
Clear cost controls and usage visibility (because it had to be documented anyway)
Defensible audit trails that double as sales assets when enterprise buyers ask "how do you handle governance?"
A compliance story that lets them close deals with EU clients competitors can't touch
SMEs that treat the Act as an opportunity to build the infrastructure they always needed tend to come out the other side moving faster, not slower.
4. A 100-Day SME Compliance Roadmap
The roadmap below assumes one part-time owner for this work plus access to whoever builds or buys the AI tools. That is enough if the programme starts now. It is not enough if it starts in July.
Days 1–30 — Inventory and literacy. List every AI tool, API, or workflow currently in use — including the shadow ones adopted without a purchase order. Score each one against the high-risk criteria. Draft and distribute the AI literacy policy. Run the first training session and record attendance. Unglamorous, and the single biggest lever on the whole programme.
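The inventory scoring in days 1–30 can be as simple as a yes/no screen per tool. The questions below are distilled from the high-risk categories in section 1 and are an illustrative assumption, not the Act's classification test — a "yes" on any flags the tool for a proper legal read, nothing more:

```python
# Hypothetical screening questions distilled from the high-risk categories
# in section 1; a "yes" on any flags the tool for a closer legal read.
HIGH_RISK_FLAGS = {
    "influences_hiring_or_hr": "Used in recruitment or employee management?",
    "scores_credit_or_eligibility": "Scores credit, loans, or eligibility?",
    "affects_consequential_decisions": "Shapes consequential customer decisions?",
    "touches_critical_infrastructure": "Runs within critical infrastructure?",
}

def screen(tool: str, answers: dict[str, bool]) -> tuple[str, list[str]]:
    """Return a triage verdict plus the specific flags that triggered it."""
    hits = [flag for flag, yes in answers.items()
            if yes and flag in HIGH_RISK_FLAGS]
    verdict = "review-as-high-risk" if hits else "likely-lower-risk"
    return verdict, hits

verdict, hits = screen("lead-qualifying chatbot", {
    "influences_hiring_or_hr": False,
    "scores_credit_or_eligibility": False,
    "affects_consequential_decisions": False,
    "touches_critical_infrastructure": False,
})
print(verdict)  # likely-lower-risk — a loan-scoring bot would flag at once
```

Run it against every tool on the list, including the shadow ones; the flagged subset is the workload for days 31–90.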
Days 31–60 — Oversight and transparency. For every system flagged as high-risk or adjacent, add disclosure copy and a human-oversight checkpoint. Map the approval routes: who decides, within what window, with what fallback if they don't respond. Wire up logging for every intervention. For custom agents this often means retrofitting a HITL gate — easier than it sounds if the agent was built with the pattern in mind, and painful if it wasn't.
Days 61–90 — Assessments and documentation. Write the simplified impact assessments for each high-risk system. Document the model supply chain (whose models, under what terms, with what data). Stand up the incident response playbook and run one tabletop exercise against it.
Days 91–102 — Rehearsal and handover. Walk the whole package through with a second pair of eyes. Stress-test the oversight paths. Confirm the literacy policy has actually been read, not just sent. Archive everything somewhere it can be found in a year when the first post-market monitoring review lands.
None of these steps are individually hard. The failure mode is trying to do them all in the final two weeks of July — precisely when every other SME will also be doing it.
5. The Mistakes SMEs Are Making Right Now
Three patterns show up in almost every inbound call this quarter.
Mistake one: assuming UK-based means out of scope. The Act's extraterritorial reach is the single most misunderstood part of the regime. If any EU resident interacts with a business's AI, that business is in scope — full stop.
Mistake two: treating compliance as a legal problem, not an engineering one. Policy documents do not stop an AI agent from taking a bad action. Architecture does. Oversight, logging, and fail-safe behaviour need to live in the code, not just in a Word doc. If legal leads this workstream alone, the result is a binder that does not match what the software actually does — and that gap is exactly what regulators look for.
Mistake three: waiting for a Big Four template. A £40k compliance engagement is available for any SME that wants one. Most SMEs do not need it. What is needed is a small team that understands both the regulation and how the AI systems actually work, embedded alongside the builders.
6. Where LogicFox Fits
LogicFox is a London-based AI automation agency built specifically for SMEs. The agents we ship already include HITL approval gates, cost tracking, audit logs, and self-healing behaviour as part of every engagement — not because the EU AI Act forced us to, but because that is what production AI actually requires. The August deadline simply makes it official.
For an independent read on where a business's AI stack stands against the Act — what's high-risk, what's in scope, and what needs to change inside 100 days — the shortest path is our AI Strategy & Consulting engagement. For a team mid-build on an agent that needs HITL and oversight retrofitted before August, AI Agent Development is the faster route. And for any business that has inherited a sprawl of automations over the last two years and does not know what half of them do, Process Optimisation is where we'd start.
102 days is a tight window but it is a real one. The SMEs that use it well will end the year with AI systems that are faster, safer, and easier to sell to EU buyers than those of the competitors who waited.
👉 Book a 30-minute EU AI Act readiness call →
Further Reading
The Agentic Value Chain: How Google Is Closing the Loop From Intent to Execution — the strategic context for why agents now need governance, not just capability.
2026 Automation Stack: Building Smarter Businesses with Modular AI Workflows — the modular architecture that makes compliance retrofits tractable.
➡️ EU AI Act Chapter III (high-risk AI systems) — primary source for the obligations landing on 2 August 2026.
This article is general guidance, not legal advice. Specific obligations depend on how AI is used, who the users are, and the nature of the decisions a system influences. Qualified legal counsel alongside an engineering read is recommended for high-risk use cases.