Matei Cananau

EU AI Act Technical Auditor

Your AI may already be blocking enterprise deals.

I classify your system under the EU AI Act so that investors, customers, and procurement teams say yes. Tailored for AI startups building products used in hiring, finance, healthcare, or decision-making systems.

EU AI Act Compliance

Most companies do not know if their AI systems are high-risk.

Getting your AI system's classification wrong costs you deals and forces expensive rework.

Most high-risk obligations apply from August 2026, but diligence pressure and remediation costs hit earlier.

Prohibited practices face the largest penalties. Classification mistakes still create procurement friction, investor concern, and expensive remediation work long before a fine arrives.

I map your product against Annex III of the EU AI Act and deliver a written risk position your CTO, legal team, and investors can use.

>70%

of companies say they are not prepared for the EU AI Act.

Source: Littler 2025 European Employer Survey Report

Do not spend €150,000 or more fixing compliance gaps discovered during due diligence or regulatory review. Early classification costs a fraction of that.

Technical Audit

Software cannot find your AI risk.

But a technical audit can.

Compliance software

  • Checks what you declare
  • Misses AI hidden in APIs and pipelines
  • Produces generic reports
  • Assumes you understand the law

Technical AI audit

  • Full architecture review
  • Discovery of undocumented AI components
  • Mapping of system intent to Annex III risk
  • Clear written classification for investors and regulators

Result: A written AI risk classification based on Article 5 and Annex III of the EU AI Act.

How the audit works:

I inspect your product architecture, models, data flows, and external APIs.

A technical classification of your AI system under the EU AI Act

01

Triage: Article 5 & Annex III

We review your AI architecture and product roadmap to determine whether your system falls under prohibited practices or high-risk classification.

02

Documentation: Article 11 & Annex IV

Your system architecture is translated into the technical documentation required by the EU AI Act. Every component is mapped to regulatory obligations.

03

Readiness: Investor and regulatory review

You receive a board-ready classification memo explaining whether your AI triggers prohibited or high-risk obligations.

Matei Cananau at Handelshögskolan SSE

Matei Cananau

EU AI Act technical auditor

You get a clear answer with proof: prohibited, high-risk, or safe.

Most compliance experts come from law or policy.
My background is machine learning and enterprise AI.

I audit your models, pipelines, and data flows against Articles 5, 11, and Annex III of the EU AI Act.
You receive a technical classification of your system that investors, legal teams, and regulators can understand.

3+ years of experience teaching and building AI solutions for enterprises
KTH MSc. in Machine Learning, thesis on AI alignment & architecture
EU AI Act specialization, AI literacy, technical documentation

Audience

This engagement is
executive-grade.

I provide technical audits of AI systems, along with Article 4 AI literacy workshops and documentation, for scale-ups and enterprises.

Perfect for

  • VC-backed scale-ups with AI inside the core product
  • Enterprises deploying high-risk AI systems
  • CTOs or Heads of AI needing technical EU AI Act classification
  • Legal teams preparing engineering for the 2026 conformity deadline
  • Companies needing Article 4 AI literacy workshops and documentation

Not for

  • Pre-revenue startups running LLM wrappers
  • Hobby projects with no real users or data risk
  • Companies that only want a compliance PDF to check a box
  • Teams unwilling to open their architecture for technical review

Pricing

Let us talk AI.

Flat, transparent, European pricing.

AI Risk Classification Memo

Know your AI risk before it blocks deals

€3,000

Best for: AI systems with unclear EU AI Act exposure

  • Clear classification: prohibited, high-risk, or safe
  • Annex III risk determination
  • Article 5 prohibited practice check
  • Short memo for investors and procurement
  • Clear next steps if action is required
  • Fee credited toward full audit
  • Timeline: 1 week
Get your AI risk classification

High-Risk Readiness Audit

Pass due diligence and unblock enterprise deals

€10,000

Best for: AI systems likely to be high-risk

  • Full system architecture review
  • Confirmed Annex III classification
  • Articles 9–15 gap analysis
  • Technical documentation requirements (Article 11)
  • Compliance roadmap with ownership
  • Audit memo for investors, legal, and procurement
  • Timeline: 2–3 weeks
Get your AI risk classification

AI Literacy & Documentation

Documented AI literacy under the EU AI Act

€3,000

Best for: compliance with Article 4 AI literacy

  • AI literacy workshop for technical and non-technical staff
  • Clear explanation of your EU AI Act obligations
  • Staff assessment and literacy testing framework
  • Written documentation proving workforce AI literacy
  • Evidence ready for regulators, investors, and procurement
  • Timeline: 90 minutes
Book your AI literacy workshop

Common Questions

Everything you need to know.

What are the penalties for non-compliance?

Prohibited AI practices under Article 5 can lead to fines of up to €35,000,000 or 7% of global annual turnover, whichever is higher. Other violations carry lower but still significant penalties. Companies deploying AI in the EU must demonstrate compliance with the Act's requirements.

Does the Act apply if we only use third-party AI APIs?

Yes. The Act applies to companies that develop, deploy, or integrate AI systems into their products. If your product uses external AI APIs such as OpenAI or other providers, you remain responsible for how that AI is deployed and used in your system.

What counts as a high-risk AI system?

High-risk systems are AI systems used in regulated use cases defined in Annex III, including hiring, finance, education, healthcare, biometric identification, and critical infrastructure. These systems must meet strict requirements, including documentation, risk management, human oversight, and conformity assessment.

Do we need AI literacy training?

Yes. Article 4 requires companies to ensure that employees interacting with AI systems have adequate AI literacy. Organizations must be able to demonstrate that staff understand the capabilities, risks, and legal obligations associated with AI use.

Why is compliance software not enough?

Most compliance software relies on questionnaires and self-reported inputs. It cannot analyze real system architecture, data pipelines, or hidden AI usage inside APIs and integrations. A technical audit reviews the actual system design to determine legal classification and risk.

Can our legal team handle this instead?

Legal teams interpret the law. They usually do not analyze model architecture, training data, or system pipelines. EU AI Act compliance depends on how the system is technically built. A technical audit connects the engineering reality with the legal requirements.

Do investors care about EU AI Act compliance?

Increasingly, yes. During due diligence, investors often review regulatory exposure related to AI systems. Clear classification and documentation reduce legal risk and prevent compliance issues from delaying funding rounds or acquisitions.