Compliance Framework

EU AI Act — govern the code that governs AI.

The EU AI Act creates binding requirements for high-risk AI systems — and those requirements translate to code-level obligations. MergeGuide includes policies that guide implementation of EU AI Act requirements at the point where AI system code is written, not just where it is documented.

Who the EU AI Act affects

Regulation (EU) 2024/1689 applies to providers and deployers of AI systems placed on the EU market, regardless of where those systems are developed. It applies extraterritorially: a US company is in scope if its AI system is placed on the EU market or its output is used in the EU.

High-risk AI systems — as defined in Annex III — face the most substantial obligations: mandatory conformity assessments, human oversight requirements, technical documentation, logging, and accuracy/robustness standards.

The developer connection: EU AI Act compliance isn't just legal and documentation work — it requires demonstrable technical controls in the AI system's code. Data quality controls, logging, bias detection, and human oversight mechanisms must be implemented and evidenced.

AI risk classification

Prohibited
Unacceptable risk AI

Social scoring, real-time biometric surveillance, subliminal manipulation. MergeGuide includes policy templates that flag code patterns associated with prohibited AI practices — these serve as review triggers, not definitive legal determinations.

High risk
Requires conformity assessment

Biometrics, critical infrastructure, education, employment, essential services, law enforcement, border control, justice. Full MergeGuide policy coverage applies.

Limited risk
Transparency obligations

Chatbots, deepfakes, emotion recognition. MergeGuide includes policies to guide implementation of disclosure requirements.

Minimal risk
No specific requirements

Most AI applications. Voluntary codes of conduct are encouraged.

High-risk AI system requirements — code-level detection

📊

Data quality (Art. 10)

High-risk AI systems must use training data meeting specific quality criteria. MergeGuide detects code patterns that bypass data validation, accept uncleaned inputs for model training, or lack appropriate data provenance tracking.
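As an illustration of the controls this detection targets, here is a minimal sketch of schema validation plus provenance tracking for training data. The schema, field names, and provenance record are hypothetical, not an Art. 10 mandate:

```python
import csv
import hashlib
import io
from datetime import datetime, timezone

# Hypothetical schema: the columns a compliant training pipeline expects.
REQUIRED_FIELDS = {"feature_a", "feature_b", "label"}

def load_training_data(raw: bytes, provenance: list) -> list:
    """Validate training rows against the expected schema and record
    provenance (content hash, timestamp, row count) before use."""
    reader = csv.DictReader(io.StringIO(raw.decode("utf-8")))
    rows = []
    for i, row in enumerate(reader):
        missing = REQUIRED_FIELDS - row.keys()
        if missing:
            # Reject uncleaned input instead of silently accepting it.
            raise ValueError(f"row {i}: missing fields {sorted(missing)}")
        rows.append(row)
    provenance.append({
        "sha256": hashlib.sha256(raw).hexdigest(),
        "loaded_at": datetime.now(timezone.utc).isoformat(),
        "row_count": len(rows),
    })
    return rows
```

Code that reads training data without a step like this, or that drops the provenance record, is the kind of pattern the policy flags for review.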

📝

Logging (Art. 12)

High-risk AI systems must automatically log events to enable investigation by national authorities. MergeGuide includes policies to guide implementation of required logging in AI inference code and detects disabled or insufficient logging.
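A minimal sketch of the structured inference logging such policies call for. The event fields are illustrative, not a normative Art. 12 schema:

```python
import json
import logging
from datetime import datetime, timezone

inference_log = logging.getLogger("ai_system.inference")

def log_inference_event(model_version: str, input_ref: str,
                        output: str, confidence: float) -> dict:
    """Build and emit one structured inference event as a JSON log line."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_ref": input_ref,   # a reference, not raw personal data
        "output": output,
        "confidence": confidence,
    }
    inference_log.info(json.dumps(event, sort_keys=True))
    return event
```

Logging a reference to the input rather than the input itself also keeps the record aligned with GDPR data minimization.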

👤

Human oversight (Art. 14)

High-risk systems must be designed to allow human oversight and intervention. MergeGuide detects fully automated decision paths in high-risk AI code that lack override or review mechanisms.
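A sketch of the kind of override mechanism whose absence this detection flags: an oversight gate that queues low-confidence decisions for human review. The threshold and field names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str
    confidence: float

def apply_decision(decision: Decision, auto_threshold: float = 0.95) -> dict:
    """Gate automated decisions: only high-confidence outcomes are applied
    without review, and every outcome stays reversible by a human."""
    if decision.confidence >= auto_threshold:
        return {"action": "auto_applied", "outcome": decision.outcome,
                "overridable": True}
    # Low-confidence path: no automated effect until a human signs off.
    return {"action": "queued_for_review", "outcome": None,
            "overridable": True}
```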

🎯

Accuracy & robustness (Art. 15)

High-risk AI systems must achieve appropriate accuracy levels and be resilient to errors. MergeGuide detects missing input validation, lack of confidence thresholds, and absence of fallback behavior.
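For example, a prediction wrapper with the three controls named above: input validation, a confidence threshold, and fallback behavior. The threshold value and the model's `(label, score)` interface are assumptions:

```python
import math

def robust_predict(model, features, min_confidence=0.7, fallback="abstain"):
    """Wrap a model call with input validation, a confidence threshold,
    and a safe fallback instead of acting on doubtful output."""
    # Input validation: reject empty, non-numeric, or non-finite features.
    if not features or not all(
        isinstance(x, (int, float)) and math.isfinite(x) for x in features
    ):
        return fallback
    label, confidence = model(features)  # assumed (label, score) interface
    # Confidence threshold: never act on low-certainty predictions.
    if confidence < min_confidence:
        return fallback
    return label
```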

🔐

Cybersecurity (Art. 15)

High-risk AI systems must be resilient against adversarial attacks. MergeGuide detects common robustness anti-patterns in AI system code — including unsafe deserialization of model data, command injection in AI pipelines, and weak integrity verification — as a starting point for organizations implementing Art. 15 requirements.
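One of the anti-patterns above, weak integrity verification, can be avoided with a pattern like this: verify the artifact hash with a constant-time comparison before any deserialization. A sketch, not a MergeGuide-mandated implementation:

```python
import hashlib
import hmac

def load_model_weights(raw: bytes, expected_sha256: str) -> bytes:
    """Verify a model artifact's hash before deserialization; unverified
    artifacts never reach a loader."""
    digest = hashlib.sha256(raw).hexdigest()
    # Constant-time comparison avoids leaking match position via timing.
    if not hmac.compare_digest(digest, expected_sha256):
        raise ValueError("model artifact failed integrity check")
    return raw  # hand verified bytes to a safe format loader, never pickle
```

Passing the verified bytes to a safe serialization format (rather than unpickling untrusted files) addresses the unsafe-deserialization anti-pattern in the same step.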

📋

Technical documentation (Art. 11)

OSCAL Component Definition export from MergeGuide provides the machine-readable technical documentation baseline required for Annex IV conformity assessment submission.
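For orientation, a skeleton of the OSCAL Component Definition model expressed as a Python dict. The field names follow the OSCAL JSON model, but every title, control ID, and source path here is a placeholder, not MergeGuide's actual export:

```python
import uuid
from datetime import datetime, timezone

def component_definition_skeleton() -> dict:
    """Minimal OSCAL Component Definition structure; all titles, IDs,
    and paths below are placeholders."""
    return {
        "component-definition": {
            "uuid": str(uuid.uuid4()),
            "metadata": {
                "title": "AI system technical documentation",  # placeholder
                "last-modified": datetime.now(timezone.utc).isoformat(),
                "version": "1.0",
                "oscal-version": "1.1.2",
            },
            "components": [{
                "uuid": str(uuid.uuid4()),
                "type": "software",
                "title": "High-risk AI system component",  # placeholder
                "description": "Inference service subject to EU AI Act",
                "control-implementations": [{
                    "uuid": str(uuid.uuid4()),
                    "source": "eu-ai-act-catalog.json",  # assumed catalog path
                    "description": "EU AI Act technical controls",
                    "implemented-requirements": [{
                        "uuid": str(uuid.uuid4()),
                        "control-id": "art-12",  # placeholder control ID
                        "description": "Structured inference event logging",
                    }],
                }],
            }],
        }
    }
```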

EU AI Act + GDPR: the common EU AI stack

EU AI system providers almost always operate under EU AI Act and GDPR obligations simultaneously — the two share significant overlap in data governance requirements. PolicyMerge handles both together.

Where EU AI Act and GDPR overlap

  • Data minimization — both require training/inference data limited to what's necessary
  • Purpose limitation — AI system use must align with data collection purpose
  • Data subject rights — automated decision-making rights under both
  • Privacy by design — technical controls required in both frameworks
  • Data protection impact assessments — required for high-risk AI and high-risk processing

PolicyMerge resolution

When EU AI Act Article 10 (data quality) and GDPR Article 5(1)(c) (data minimization) pull in different directions, PolicyMerge applies the stricter requirement — whichever demands more robust data controls. Single policy set. Single enforcement layer. Single audit trail.

  • Unified policy set across EU AI Act, GDPR, and SOC 2
  • Overlap matrix shows which framework drives each control
  • Evidence artifacts mapped to all applicable regulations
  • OSCAL export covers all frameworks simultaneously
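The "stricter requirement wins" resolution can be sketched as follows. This is an illustrative merge over hypothetical control attributes, not PolicyMerge's actual engine:

```python
def merge_controls(a: dict, b: dict) -> dict:
    """Merge overlapping controls from two frameworks, keeping the
    stricter value for each shared attribute (illustrative sketch)."""
    merged = {}
    for key in a.keys() & b.keys():
        if key == "max_retention_days":
            # Shorter retention is the stricter requirement.
            merged[key] = min(a[key], b[key])
        else:
            # For level-style attributes, higher is stricter (assumption).
            merged[key] = max(a[key], b[key])
    return merged
```

In this sketch, if an EU AI Act control allows 365 retention days and the corresponding GDPR control allows 180, the merged policy keeps 180.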

Building high-risk AI systems for the EU?

See how MergeGuide guides implementation of EU AI Act technical requirements alongside GDPR and your other compliance obligations.

Book a demo
Talk to sales