The European Union’s landmark AI Act — the world’s first comprehensive legal framework for artificial intelligence — has officially entered into force, with violations of its most serious provisions carrying fines of up to €35 million or 7% of global annual turnover, whichever is higher. For the thousands of businesses that use or develop AI systems touching European users, compliance is no longer optional.
What the AI Act Actually Regulates
The EU AI Act takes a risk-based approach, categorizing AI systems into four tiers: unacceptable risk (banned outright), high risk (heavily regulated), limited risk (transparency requirements), and minimal risk (largely unregulated). Banned systems include social scoring by governments, real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions), and AI that manipulates users through subliminal techniques.
High-risk categories — which face the most substantial compliance obligations — include AI used in hiring decisions, credit scoring, medical diagnosis, critical infrastructure management, law enforcement, and education assessment. Companies deploying AI in any of these areas must maintain extensive documentation, conduct conformity assessments, register in an EU database, and ensure human oversight mechanisms.
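The tiered structure above can be sketched as a simple classification table. This is an illustrative sketch only: the tier names follow the Act, but the use-case labels are simplified stand-ins invented for this example, not the Act's official Annex III terminology.

```python
# Illustrative mapping of example AI use cases to the Act's four risk tiers.
# Use-case labels are simplified assumptions, not legal categories.
RISK_TIERS = {
    "social_scoring": "unacceptable",        # banned outright
    "subliminal_manipulation": "unacceptable",
    "hiring_decisions": "high",              # heavy compliance obligations
    "credit_scoring": "high",
    "medical_diagnosis": "high",
    "education_assessment": "high",
    "chatbot": "limited",                    # transparency requirements
    "spam_filter": "minimal",                # largely unregulated
}

def risk_tier(use_case: str) -> str:
    """Return the risk tier for a use case; unknown cases need legal review."""
    return RISK_TIERS.get(use_case, "unclassified - needs legal review")

print(risk_tier("hiring_decisions"))  # high
```

In practice, classification turns on the Act's legal definitions rather than a lookup table, which is why unrecognized use cases default to manual review here.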
Timeline for Compliance
The Act takes a phased approach to enforcement. Bans on prohibited AI practices took effect six months after the Act entered into force. Obligations for general-purpose AI models (like GPT-4 and Claude) apply twelve months after entry into force. High-risk AI system requirements phase in over 24 to 36 months — 24 months for most standalone high-risk systems, with a longer runway for AI embedded in products already covered by EU product-safety legislation — giving companies time to build compliance programs.
What Businesses Must Do Now
Legal experts advise companies to begin with a comprehensive AI inventory — documenting all AI systems in use, their functions, and which risk tier they fall into. Systems touching EU users or operating in the EU are in scope regardless of where the company is headquartered. This means US, UK, and Asian companies with EU operations or customers must comply.
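The inventory step above can be modeled as a simple record per system. This is a minimal sketch; the field names and example systems are hypothetical assumptions for illustration, not a prescribed format.

```python
# Sketch of an AI inventory record: document each system, its function,
# its risk tier, and whether it touches EU users (which determines scope
# regardless of where the company is headquartered).
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str
    function: str          # what the system does
    risk_tier: str         # unacceptable / high / limited / minimal
    touches_eu_users: bool # True if it serves EU users or operates in the EU

def in_scope(record: AISystemRecord) -> bool:
    """A system is in scope of the Act if it touches EU users or the EU market."""
    return record.touches_eu_users

# Hypothetical inventory entries
inventory = [
    AISystemRecord("resume-screener", "ranks job applicants", "high", True),
    AISystemRecord("log-anomaly-detector", "flags server errors", "minimal", False),
]
eu_scope = [r for r in inventory if in_scope(r)]
```

Filtering the inventory by EU exposure gives compliance teams the shortlist of systems that need risk-tier assessment first.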
Key obligations for high-risk systems include data governance policies, technical documentation of model design and training, automatic logging of operations, transparency measures for users, human oversight protocols, and cybersecurity robustness requirements.
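The obligations listed above lend themselves to a gap-analysis checklist. A minimal sketch follows; the obligation keys paraphrase this article's list, and a real compliance program would map each to the Act's specific articles.

```python
# Gap analysis against the high-risk obligations named above.
# Keys paraphrase the article's list; they are not official legal labels.
HIGH_RISK_OBLIGATIONS = [
    "data_governance_policy",
    "technical_documentation",
    "automatic_operation_logging",
    "user_transparency_measures",
    "human_oversight_protocol",
    "cybersecurity_robustness",
]

def compliance_gaps(completed: set[str]) -> list[str]:
    """Return the obligations not yet satisfied, in checklist order."""
    return [o for o in HIGH_RISK_OBLIGATIONS if o not in completed]

# Hypothetical example: two obligations done, four outstanding
gaps = compliance_gaps({"technical_documentation", "human_oversight_protocol"})
```

Running such a checklist per high-risk system turns the Act's requirements into a trackable work queue.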
Penalties and Enforcement
Enforcement will be handled by national market surveillance authorities in each EU member state, supported at EU level by the newly established European AI Office, which also oversees general-purpose AI models. Penalties are tiered by violation severity, with prohibited practices attracting the highest fines. Companies should note that the Act’s extraterritorial reach makes compliance necessary for any organization with EU market exposure, and several major companies have already designated EU AI compliance officers in anticipation.
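The headline penalty ceiling — up to €35 million or 7% of global annual turnover, whichever is higher — reduces to a one-line formula. The turnover figures below are hypothetical examples, not real companies.

```python
# Upper bound of the fine for the most severe (prohibited-practice) tier:
# EUR 35 million or 7% of global annual turnover, whichever is higher.
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

max_fine_eur(100_000_000)    # EUR 35 million: the fixed floor dominates
max_fine_eur(1_000_000_000)  # EUR 70 million: 7% of turnover dominates
```

The "whichever is higher" structure means large firms cannot treat the fixed amount as a cap, since the percentage component scales with revenue.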