The EU AI Act creates binding requirements for high-risk AI systems, and those requirements translate into code-level obligations. MergeGuide includes policies that guide implementation of EU AI Act requirements at the point where AI system code is written, not just where compliance is documented.
Regulation (EU) 2024/1689 applies to providers and deployers of AI systems placed on the EU market, regardless of where those systems are developed. It applies extraterritorially: US companies building AI products used by EU residents are subject to it.
High-risk AI systems — as defined in Annex III — face the most substantial obligations: mandatory conformity assessments, human oversight requirements, technical documentation, logging, and accuracy/robustness standards.
The developer connection: EU AI Act compliance isn't just legal and documentation work — it requires demonstrable technical controls in the AI system's code. Data quality controls, logging, bias detection, and human oversight mechanisms must be implemented and evidenced.
Unacceptable risk (prohibited): social scoring, real-time biometric surveillance, subliminal manipulation. MergeGuide includes policy templates that flag code patterns associated with prohibited AI practices; these serve as review triggers, not definitive legal determinations.
High risk: biometrics, critical infrastructure, education, employment, essential services, law enforcement, border control, justice. Full MergeGuide policy coverage applies.
Limited risk: chatbots, deepfakes, emotion recognition. MergeGuide includes policies to guide implementation of the transparency and disclosure requirements.
Minimal risk: most AI applications. Voluntary codes of conduct apply.
Article 10 (data and data governance): high-risk AI systems must use training data meeting specific quality criteria. MergeGuide detects code patterns that bypass data validation, accept uncleaned inputs for model training, or lack appropriate data provenance tracking.
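The data-governance duty can be made concrete as a validation gate that runs before any training job. A minimal sketch, assuming an illustrative `DatasetRecord` schema and `validate_for_training` helper (both hypothetical, not MergeGuide APIs):

```python
from dataclasses import dataclass


@dataclass
class DatasetRecord:
    """A training record with provenance metadata (illustrative schema)."""
    features: dict
    label: str
    source: str        # where the record originated
    collected_at: str  # ISO-8601 collection timestamp


def validate_for_training(records: list[DatasetRecord]) -> list[DatasetRecord]:
    """Reject records lacking provenance; drop records with null features.

    Running a gate like this before every training job is one way to
    evidence Article 10 data-governance controls.
    """
    clean = []
    for r in records:
        if not r.source or not r.collected_at:
            raise ValueError(f"record missing provenance: {r!r}")
        if any(v is None for v in r.features.values()):
            continue  # drop incomplete records rather than train on them
        clean.append(r)
    return clean
```

The gate fails loudly on missing provenance (a compliance defect) but silently drops incomplete records (a data-quality defect), which keeps the two failure modes distinguishable in audit logs.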
Article 12 (record-keeping): high-risk AI systems must automatically log events to enable investigation by national authorities. MergeGuide includes policies to guide implementation of required logging in AI inference code and detects disabled or insufficient logging.
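One way to satisfy the automatic-logging requirement is to wrap every inference call in a structured audit event. A hedged sketch, assuming a `model` object exposing a `predict(features)` method (an illustrative interface, not a MergeGuide API):

```python
import json
import logging
import time
import uuid

logger = logging.getLogger("ai_inference_audit")


def logged_inference(model, features: dict) -> dict:
    """Run an inference and emit a structured audit event for it.

    Logs input field names rather than raw values, so the audit trail
    does not itself become a store of personal data.
    """
    event = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "input_keys": sorted(features),  # shape of the input, not its content
    }
    event["output"] = model.predict(features)
    logger.info(json.dumps(event))
    return event
```

Because the event carries a unique ID and timestamp, individual decisions can be traced later, which is the investigative capability the logging obligation is aimed at.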
Article 14 (human oversight): high-risk systems must be designed to allow human oversight and intervention. MergeGuide detects fully automated decision paths in high-risk AI code that lack override or review mechanisms.
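The override mechanism can be as simple as routing anything below a high-confidence threshold to a human review queue instead of auto-deciding. An illustrative sketch; `ReviewQueue`, `decide`, and the threshold values are assumptions, not a prescribed design:

```python
from dataclasses import dataclass, field


@dataclass
class ReviewQueue:
    """Holds machine proposals awaiting a human decision (illustrative)."""
    pending: list = field(default_factory=list)

    def submit(self, case_id: str, proposal: str) -> str:
        self.pending.append((case_id, proposal))
        return "PENDING_HUMAN_REVIEW"


def decide(case_id: str, score: float, queue: ReviewQueue,
           auto_threshold: float = 0.95) -> str:
    """Auto-approve only above a high-confidence threshold.

    Everything else is handed to a human reviewer with the model's
    proposal attached, preserving an intervention point.
    """
    if score >= auto_threshold:
        return "APPROVED"
    proposal = "APPROVED" if score >= 0.5 else "REJECTED"
    return queue.submit(case_id, proposal)
```

Note that the human path still records the model's proposal: reviewers can see what the system would have done, which supports meaningful rather than rubber-stamp oversight.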
Article 15 (accuracy and robustness): high-risk AI systems must achieve appropriate accuracy levels and be resilient to errors. MergeGuide detects missing input validation, lack of confidence thresholds, and absence of fallback behavior.
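Those three controls can be composed in a single guarded prediction path. A sketch under assumed interfaces: `model.predict` is taken to return a `(label, confidence)` pair, and the schema and threshold values are illustrative:

```python
def robust_predict(model, features: dict, schema: dict,
                   min_confidence: float = 0.7):
    """Validate inputs, apply a confidence threshold, and fall back safely.

    `schema` maps field name -> expected Python type. Rather than
    returning a low-confidence answer silently, the function degrades
    to an explicit NEEDS_REVIEW fallback.
    """
    # Control 1: input validation before the model ever sees the data
    for name, expected_type in schema.items():
        if name not in features:
            raise ValueError(f"missing input field: {name}")
        if not isinstance(features[name], expected_type):
            raise TypeError(f"{name} must be {expected_type.__name__}")

    label, confidence = model.predict(features)

    # Controls 2 and 3: confidence threshold with an explicit fallback
    if confidence < min_confidence:
        return ("NEEDS_REVIEW", confidence)
    return (label, confidence)
```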
High-risk AI systems must be resilient against adversarial attacks. MergeGuide detects common robustness anti-patterns in AI system code — including unsafe deserialization of model data, command injection in AI pipelines, and weak integrity verification — as a starting point for organizations implementing Art. 15 requirements.
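The weak-integrity-verification anti-pattern, for example, is avoidable with a pinned-hash check before any model artifact is loaded. A minimal sketch (the function name and surrounding workflow are illustrative); pairing this with a safe serialization format rather than unpickling untrusted files addresses the unsafe-deserialization anti-pattern as well:

```python
import hashlib
from pathlib import Path


def verify_and_read(path: str, expected_sha256: str) -> bytes:
    """Refuse to load a model artifact whose hash differs from a pinned value.

    The expected digest should come from a trusted channel (e.g. a signed
    release manifest), not from alongside the artifact itself.
    """
    data = Path(path).read_bytes()
    digest = hashlib.sha256(data).hexdigest()
    if digest != expected_sha256:
        raise RuntimeError(f"model integrity check failed for {path}")
    return data
```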
OSCAL Component Definition export from MergeGuide provides a machine-readable baseline for the Annex IV technical documentation that conformity assessment requires.
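For orientation, a skeletal OSCAL component definition has the shape below. This is an illustrative fragment of the NIST OSCAL structure, not actual MergeGuide output; the UUIDs, titles, and version strings are placeholders.

```json
{
  "component-definition": {
    "uuid": "00000000-0000-4000-8000-000000000000",
    "metadata": {
      "title": "Example AI system component",
      "last-modified": "2025-01-01T00:00:00Z",
      "version": "0.1",
      "oscal-version": "1.1.2"
    },
    "components": [
      {
        "uuid": "00000000-0000-4000-8000-000000000001",
        "type": "software",
        "title": "Credit-scoring inference service",
        "description": "High-risk AI component; implemented controls map to EU AI Act technical requirements."
      }
    ]
  }
}
```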
EU AI system providers are almost always subject to both EU AI Act and GDPR obligations simultaneously, and the two frameworks overlap significantly in their data governance requirements. PolicyMerge handles both together.
When EU AI Act Article 10 (data quality) and GDPR Article 5 (data minimization) conflict, PolicyMerge applies the stricter requirement — whichever demands more robust data controls. Single policy set. Single enforcement layer. Single audit trail.
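A strictest-wins merge can be sketched as a per-control maximum over overlapping policy sets. The control names and numeric strictness levels below are illustrative assumptions, not PolicyMerge's actual schema:

```python
def merge_policies(policies: list[dict]) -> dict:
    """Merge overlapping policies by keeping the stricter setting per control.

    Each policy maps control name -> required level, where a higher
    number is taken to mean a stricter requirement. When two frameworks
    set the same control, the maximum wins.
    """
    merged: dict = {}
    for policy in policies:
        for control, level in policy.items():
            merged[control] = max(merged.get(control, level), level)
    return merged
```

With a single merged set, enforcement and auditing run once against one baseline rather than twice against two partially overlapping ones.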
See how MergeGuide guides implementation of EU AI Act technical requirements alongside GDPR and your other compliance obligations.