146 Days Until EU AI Act Enforcement - Is Your AI Ready?

On August 2, 2026, the world's first comprehensive AI regulation goes into full enforcement. The EU AI Act will apply to every AI system serving users in the European Union, regardless of where the company behind it is headquartered. That is 146 days from today.

Most AI companies are not ready. According to a 2025 Stanford HAI survey, fewer than 20% of AI startups have begun any formal compliance planning for the EU AI Act. If you are building, shipping, or selling AI products, this is your wake-up call.

What is the EU AI Act?

Think of it as GDPR, but for artificial intelligence.

The General Data Protection Regulation (GDPR) reshaped how every tech company handles personal data. The EU AI Act does the same thing for AI systems. It establishes rules for how AI can be built, deployed, and monitored - with significant penalties for companies that do not comply.

The regulation entered into force in August 2024, with a phased rollout. The first wave of prohibitions on certain AI practices took effect in February 2025. Full enforcement, covering high-risk AI systems, transparency obligations, and documentation requirements, begins on August 2, 2026.

This is not a draft. It is not a proposal. It is law.

Who does it affect?

If your AI system is used by anyone in the EU, the Act applies to you. Full stop.

This catches a lot of companies off guard. The EU AI Act has the same extraterritorial reach as GDPR. Here are a few scenarios where it applies:

  • A US-based SaaS company whose customers include European businesses. If your product uses AI for recommendations, scoring, or decision-making, you are in scope.
  • An Indian fintech offering credit assessment tools to users in Germany or France. Your AI system is classified as high-risk under the Act.
  • A Canadian hiring platform using AI to screen resumes for companies with EU employees. That is a high-risk AI system under the regulation.
  • A startup anywhere in the world that deploys a chatbot accessible to EU visitors. You have transparency obligations.

The key principle: if the output of your AI system affects people in the EU, you need to comply.

The four risk tiers

The EU AI Act does not treat all AI the same. It classifies AI systems into four risk tiers, and your obligations depend on which tier your system falls into.

Unacceptable risk (banned outright)

These AI applications are prohibited entirely within the EU:

  • Social scoring by governments or private companies (ranking citizens based on behavior)
  • Manipulative AI that exploits vulnerabilities of specific groups (children, elderly, disabled individuals)
  • Real-time biometric surveillance in public spaces (with narrow exceptions for law enforcement)
  • Emotion recognition in workplaces and educational institutions

If your product does any of these things, it cannot be offered in the EU. Period.

High risk (strict obligations)

This is where most of the regulatory weight falls. High-risk AI systems include:

  • Hiring and recruitment tools that screen, rank, or evaluate candidates
  • Credit scoring and financial risk assessment
  • Medical diagnosis and clinical decision support
  • Educational assessment and student evaluation
  • Immigration and border control systems
  • Law enforcement prediction and profiling tools
  • Critical infrastructure management

If your AI system falls into this category, you must meet extensive documentation, testing, monitoring, and transparency requirements before you can deploy it in the EU.

Limited risk (disclosure required)

These systems have lighter obligations, primarily around transparency:

  • Chatbots and conversational AI - users must be told they are interacting with AI
  • Content generation tools - AI-generated content must be labeled
  • Deepfakes - AI-generated or manipulated images, audio, and video must be clearly labeled as such

The main requirement here is honesty. Users need to know when they are dealing with AI.

Minimal risk (no specific obligations)

The vast majority of AI systems fall here:

  • Spam filters
  • Recommendation engines for content or products
  • AI-powered search
  • Video game AI

These systems can operate without additional regulatory obligations under the Act.

What are the fines?

The penalties are designed to hurt. They follow the same model as GDPR but with higher ceilings.

  • Prohibited AI practices: up to EUR 35 million or 7% of global annual revenue, whichever is higher
  • High-risk system violations: up to EUR 15 million or 3% of global annual revenue, whichever is higher
  • Supplying incorrect information to regulators: up to EUR 7.5 million or 1% of global annual revenue, whichever is higher

To put this in context: in 2023, Meta was fined EUR 1.2 billion under GDPR for data transfer violations. The EU AI Act's maximum penalties are even steeper. For a company with EUR 1 billion in annual revenue, a prohibited-practices violation could cost EUR 70 million.
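
To make the math concrete, here is the "whichever is higher" rule as a few lines of Python (an illustration of the arithmetic, not legal advice):

```python
def max_fine(flat_cap_eur: float, revenue_pct: float, annual_revenue_eur: float) -> float:
    """EU AI Act fines are capped at a flat amount or a percentage of
    global annual revenue, whichever is HIGHER (lower for SMEs)."""
    return max(flat_cap_eur, revenue_pct * annual_revenue_eur)

# Prohibited-practices tier for a company with EUR 1 billion in revenue:
fine = max_fine(35_000_000, 0.07, 1_000_000_000)
print(f"EUR {fine:,.0f}")  # EUR 70,000,000
```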

These are not theoretical numbers. EU regulators have shown with GDPR that they are willing to enforce aggressively, and the newly established AI Office in Brussels is already staffing up.

What you need to do before August 2

Whether your AI system is high-risk or limited-risk, here is a practical checklist to start preparing.

1. Classify your AI systems

Go through every AI-powered feature in your product. Map each one to the risk tiers described above. Be honest about where your systems land. A customer support chatbot is limited risk. A loan approval algorithm is high risk. Misclassification will not protect you.
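
One way to force the issue is to keep the inventory as code, where every AI-powered feature must carry an explicit tier and a justification. A minimal sketch - the feature names and tier assignments below are illustrative examples, not legal determinations:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright in the EU"
    HIGH = "strict obligations before EU deployment"
    LIMITED = "transparency and disclosure obligations"
    MINIMAL = "no specific obligations under the Act"

# Inventory: every AI-powered feature gets an explicit tier and a one-line
# justification, reviewed by someone accountable for compliance.
inventory = {
    "support_chatbot": (RiskTier.LIMITED, "users must be told they are talking to AI"),
    "resume_screener": (RiskTier.HIGH, "hiring and recruitment is high-risk under Annex III"),
    "loan_approval":   (RiskTier.HIGH, "credit scoring is high-risk under Annex III"),
    "spam_filter":     (RiskTier.MINIMAL, "routine filtering, minimal risk"),
}

for feature, (tier, justification) in inventory.items():
    print(f"{feature}: {tier.name} - {justification}")
```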

2. Document your training data

High-risk systems require detailed documentation of the data used to train and validate your models. This includes data sources, preprocessing steps, known biases, and the measures you took to address them. If you cannot explain where your training data came from, you have a problem.

3. Document your model decisions

Regulators want to understand how your AI system makes decisions. This does not mean you need to open-source your models. It means you need clear technical documentation that describes what your system does, how it was tested, its known limitations, and its intended use cases.
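
Steps 2 and 3 are far easier to keep current when the documentation lives as structured data next to the code rather than in a slide deck. Here is one possible shape (the fields are illustrative, not a schema mandated by the Act):

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ModelDocumentation:
    model_name: str
    intended_use: str
    data_sources: list[str]          # where training data came from
    preprocessing_steps: list[str]   # cleaning, filtering, labeling
    known_biases: list[str]          # documented gaps and skews
    mitigations: list[str]           # what you did about them
    test_results: dict[str, float]   # accuracy, fairness metrics, etc.
    limitations: list[str]           # conditions where the model degrades

doc = ModelDocumentation(
    model_name="credit-scorer-v3",
    intended_use="Consumer credit risk assessment for EU retail lending",
    data_sources=["internal loan book 2018-2024", "licensed bureau data"],
    preprocessing_steps=["deduplication", "outlier removal", "feature scaling"],
    known_biases=["underrepresentation of thin-file applicants"],
    mitigations=["reweighting", "separate validation slice for thin-file group"],
    test_results={"auc": 0.81, "demographic_parity_gap": 0.03},
    limitations=["not validated for SME lending"],
)
print(json.dumps(asdict(doc), indent=2))  # version-control this output
```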

4. Implement human oversight

High-risk AI systems must have meaningful human oversight. This means a real person can intervene, override, or shut down the AI system when needed. "A human reviewed the output" is not sufficient. You need documented processes, clear escalation paths, and evidence that the oversight actually works.
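
In code, that usually means high-stakes outcomes cannot take effect without a named reviewer who can override the model. A minimal sketch of the pattern, with illustrative thresholds:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str
    decided_by: str   # "model" alone is never enough for an adverse outcome

def gate(model_outcome: str, confidence: float,
         reviewer: str | None = None, reviewer_outcome: str | None = None) -> Decision:
    """Adverse or low-confidence outcomes must reach a named human,
    whose decision takes precedence over the model's. Log every call
    to your audit trail (see step 5)."""
    needs_human = model_outcome == "deny" or confidence < 0.70
    if needs_human and reviewer is None:
        return Decision("escalated_for_review", "pending")   # no silent auto-denial
    if reviewer is not None:
        return Decision(reviewer_outcome or model_outcome, reviewer)  # human may override
    return Decision(model_outcome, "model (spot-checked)")

print(gate("deny", 0.91))                           # escalated_for_review
print(gate("deny", 0.91, reviewer="j.meyer"))       # deny, confirmed by a named human
print(gate("deny", 0.91, "j.meyer", "approve"))     # human override of the model
```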

5. Create an auditable compliance record

This is the part that trips up most companies. The EU AI Act requires that your compliance documentation be available for regulatory inspection. Not a slide deck. Not a one-time assessment. An ongoing, verifiable record that proves your system meets the requirements.
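
The starting point is an append-only event log with timestamps, so you can show what was assessed and when. A bare-bones sketch (a production system would also sign each entry - see the Attestix section below):

```python
import datetime, json

def record_compliance_event(path: str, system: str, event: str, details: dict) -> None:
    """Append one compliance event to a JSONL file. Never rewrite past entries."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "system": system,
        "event": event,       # e.g. "risk_classification", "reassessment"
        "details": details,
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

record_compliance_event("compliance.jsonl", "credit-scorer-v3",
                        "risk_classification", {"tier": "HIGH", "basis": "Annex III"})
```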

6. Set up post-market monitoring

Compliance is not a one-time event. You must monitor your AI system after deployment, track incidents, and report serious issues to regulators. This means logging, alerting, and regular reassessment of your risk classification.
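
A sketch of the minimum loop: log deployed behavior, watch for anomalies, and escalate anything that looks like a serious incident into your reporting process. Thresholds and field names here are illustrative:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("post_market_monitor")

OVERRIDE_RATE_THRESHOLD = 0.10   # illustrative: when human overrides spike, investigate

def monitor_batch(predictions: list[dict]) -> None:
    """Track deployed behavior; serious issues must reach your incident process."""
    overrides = [p for p in predictions if p.get("human_override")]
    rate = len(overrides) / max(len(predictions), 1)
    log.info("batch=%d override_rate=%.2f", len(predictions), rate)
    if rate > OVERRIDE_RATE_THRESHOLD:
        # In a real system this would open an incident ticket and, where the Act
        # requires it, start the serious-incident reporting process.
        log.warning("override rate %.0f%% exceeds threshold - escalate", rate * 100)

monitor_batch([{"id": 1, "human_override": True}, {"id": 2, "human_override": False}])
```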

The compliance tooling gap

Here is the uncomfortable truth: most existing compliance tools were not built for the EU AI Act.

Current AI governance platforms like Vanta, Credo AI, and Holistic AI do valuable work. They help organizations assess risk, document policies, and generate compliance reports. But their output is typically PDF reports, spreadsheet exports, and human-readable dashboards.

The EU AI Act requires something different. Article 11 mandates technical documentation that is "drawn up in such a way as to demonstrate" compliance. Article 12 requires automatic logging that enables post-market monitoring. Article 49 requires registration in an EU-wide database with structured data.

Regulators will not accept a PDF. They need machine-readable evidence that can be independently verified and audited at scale. As the number of AI systems subject to the Act grows into the hundreds of thousands, manual review of compliance documents simply will not scale.
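
For contrast, machine-readable evidence might look something like the record below. The shape is purely illustrative - the Act mandates content, not this particular schema:

```json
{
  "system": "credit-scorer-v3",
  "risk_tier": "high",
  "legal_basis": ["Article 11", "Article 12", "Article 49"],
  "assessed_at": "2026-03-09T00:00:00Z",
  "evidence": {
    "technical_documentation": "sha256:3f1a...",
    "logging_enabled": true,
    "eu_database_registration": "pending"
  },
  "signature": "ed25519:9c4b..."
}
```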

This creates a gap between what current tools produce and what regulators will actually need.

How Attestix fills the gap

Attestix is an open-source attestation infrastructure designed specifically for this problem. Instead of generating static reports, Attestix creates verifiable, machine-readable compliance records that regulators can independently validate.

Here is what that looks like in practice:

Automated risk classification - Attestix analyzes your AI system and maps it to the EU AI Act risk tiers. Instead of guessing, you get a structured classification backed by the actual regulatory text.

Machine-readable compliance documentation - Instead of PDFs, Attestix generates structured digital credentials that encode your compliance status. These are not just files. They are cryptographically signed records that anyone can verify without trusting the company that created them.

Verifiable proof - Every compliance record created by Attestix includes a cryptographic signature. This means a regulator, auditor, or business partner can verify that the record is authentic, unaltered, and was created by the entity claiming it. Think of it as a digital notary for your AI compliance.

Tamper-evident audit trail - Attestix maintains a chain of records that proves when compliance was established, when it was reassessed, and whether anything changed. If a regulator asks "were you compliant on June 15?", you can prove it.

Open source and free to start - Attestix is available on PyPI, works with any AI agent framework, and costs nothing to get started. There is no vendor lock-in and no black box.
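
To demystify the cryptography: the sketch below is not the Attestix API, just a minimal Python illustration (using the open-source cryptography package) of how signed, hash-chained records provide both authenticity and tamper-evidence:

```python
import hashlib, json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

def append_record(chain: list[dict], payload: dict) -> None:
    """Each record embeds the previous record's hash (tamper-evidence)
    and carries an Ed25519 signature (authenticity)."""
    prev_hash = hashlib.sha256(
        json.dumps(chain[-1], sort_keys=True).encode()
    ).hexdigest() if chain else "genesis"
    body = {"payload": payload, "prev_hash": prev_hash}
    signature = private_key.sign(json.dumps(body, sort_keys=True).encode())
    chain.append({**body, "signature": signature.hex()})

def verify(chain: list[dict]) -> bool:
    """Anyone holding the public key can check authenticity and ordering."""
    prev_hash = "genesis"
    for rec in chain:
        body = {"payload": rec["payload"], "prev_hash": rec["prev_hash"]}
        public_key.verify(bytes.fromhex(rec["signature"]),
                          json.dumps(body, sort_keys=True).encode())  # raises if forged
        if rec["prev_hash"] != prev_hash:
            return False
        prev_hash = hashlib.sha256(json.dumps(rec, sort_keys=True).encode()).hexdigest()
    return True

chain: list[dict] = []
append_record(chain, {"system": "credit-scorer-v3", "status": "compliant", "date": "2026-06-15"})
append_record(chain, {"system": "credit-scorer-v3", "status": "reassessed", "date": "2026-07-15"})
print(verify(chain))  # True
```

Any edit to an earlier record changes its hash and breaks verification for everything after it - which is what lets you answer "were you compliant on June 15?" with proof rather than a promise.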

Start now, not in July

146 days sounds like a lot of time. It is not.

GDPR taught us that companies that waited until the last minute faced the highest risk of fines and the most expensive scramble to comply. The organizations that started early had smoother transitions, lower costs, and fewer enforcement actions.

The EU AI Act will follow the same pattern. Start your risk classification today. Begin documenting your training data and model decisions this month. Set up compliance infrastructure before the summer rush.

If you want to see where your AI systems stand, try the compliance checker. It takes five minutes and gives you a clear picture of your risk classification and next steps.

For a deeper dive into building compliance into your AI pipeline, check out the getting started guide.

The clock is ticking. 146 days. Make them count.
