The EU AI Act: what it means for your business
The EU AI Act is the world's first comprehensive law regulating artificial intelligence. It classifies AI systems by risk level, and each category comes with its own set of rules. The first bans took effect in February 2025. Here's what that means in practice, from risk classification to concrete compliance steps.
Why does this matter?
The Act doesn't ban AI. It sets boundaries for AI governance. Some uses are off-limits entirely, others need strict oversight, and most face transparency requirements at minimum. The logic is straightforward: the higher the risk to people, the tighter the rules.
For businesses, this means "we'll figure out compliance later" is no longer an option. The timelines are concrete, the obligations are real, and the EU AI Office is already operational.
The four risk tiers
The Act uses a risk classification framework with four tiers. Each comes with different obligations, from outright bans to basic transparency requirements. Where your AI system sits in this framework determines what you need to do.
Unacceptable risk (banned outright)
AI that manipulates behaviour, exploits vulnerabilities, or removes human agency entirely. These practices have been banned since February 2025.
- Social scoring systems, government or private
- Real-time remote biometric identification (face scanning) in public spaces, with narrow law-enforcement exceptions
- Emotion recognition in workplaces and schools
High risk (heavily regulated)
AI that makes decisions affecting people's lives. Requires risk assessments, human oversight, documentation, and ongoing monitoring.
- Hiring and recruitment, credit scoring, medical devices
- Law enforcement, border control, access to housing or insurance
In practice, this means designing the experience around human control: oversight built into the product from the start, as part of the architecture.
Limited risk (transparency obligations)
People need to know when they're dealing with AI. Disclosure and labelling required.
- Generative AI (ChatGPT, Copilot, custom LLMs) must disclose AI-generated content
- General-purpose (foundation) models carry separate obligations: safety evaluations, plus adversarial testing for models with systemic risk
- Those models must also have a copyright policy covering their training data
Minimal risk
Most AI applications fall here. No specific obligations, but the general principles apply: safety, non-discrimination, transparency.
Who needs to pay attention?
Any organisation using AI in customer-facing applications needs to understand where each system sits in the risk framework.
Transport, logistics and industrial. The Act classifies AI in logistics and manufacturing as high-risk when it functions as a safety component in critical infrastructure. A predictive maintenance system that decides whether a machine is safe to operate qualifies. A dashboard that suggests optimal delivery routes probably doesn't. The distinction matters, and it's not always obvious.
Public sector. Municipalities using AI for resource allocation or citizen services need to demonstrate fairness through documented logic and audit trails.
HR and social services. CV screening, candidate ranking, and benefits allocation are all explicitly listed as high-risk. Any AI that touches people's access to work or support falls under the strictest requirements.
SaaS. Enterprise and public sector buyers will start asking about compliance. Worth having answers ready before the first procurement questionnaire lands.
E-commerce sits lower on the risk scale, until AI starts making pricing decisions or profiling customers. Museums and cultural institutions are similar: low risk unless visitor data feeds into profiling.
Healthcare, financial services and education face comparable scrutiny wherever AI influences diagnostics, credit scoring, or grading. If AI plays a role in decisions that affect people's access or opportunities, the Act applies.
What does this mean in practice?
High-risk AI systems need documentation that can survive an audit: how the system works, what data it uses, what risks exist, and how they're mitigated.
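One way to keep that documentation honest is to treat it as structured data rather than a wiki page. A minimal sketch in TypeScript; the record shape and field names are our own illustration, not an official template from the Act:

```typescript
// One audit-ready record per AI system. Illustrative shape only,
// not terminology prescribed by the Act.
interface AISystemRecord {
  name: string;
  purpose: string;              // what the system decides or recommends
  riskTier: "unacceptable" | "high" | "limited" | "minimal";
  dataSources: string[];        // training and runtime inputs
  knownRisks: { risk: string; mitigation: string }[];
  humanOversight: string;       // who can review or override, and how
  lastReviewed: string;         // ISO date of the last review
}

const cvScreener: AISystemRecord = {
  name: "cv-screener",
  purpose: "Ranks incoming applications for recruiter review",
  riskTier: "high",             // recruitment is explicitly listed as high-risk
  dataSources: ["applicant CVs", "historical hiring outcomes"],
  knownRisks: [
    { risk: "Bias against career gaps", mitigation: "Quarterly fairness audit" },
  ],
  humanOversight: "A recruiter reviews every ranking before any rejection",
  lastReviewed: "2025-06-01",
};
```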
Human override is a legal requirement for any AI system that affects access to services, employment, education, or legal standing. A person needs to be able to review and override decisions. Fully automated decision-making in these areas requires explicit safeguards. This is where most organisations underestimate the work. The override needs to be designed into the workflow from day one.
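What "designed into the workflow" can look like, as a minimal sketch: decisions from a high-risk system are proposals that queue for human review, and only the reviewer's call takes effect. The names (`Decision`, `reviewQueue`) are our illustration, nothing prescribed by the Act:

```typescript
interface Decision {
  subjectId: string;
  outcome: "approve" | "reject";
  rationale: string;            // shown to the human reviewer
}

// Decisions never take effect directly: they queue for a human
// who can confirm or override them.
const reviewQueue: Decision[] = [];

function propose(decision: Decision): void {
  reviewQueue.push(decision);   // nothing happens to the subject yet
}

function review(decision: Decision, override?: "approve" | "reject"): Decision {
  // The reviewer's judgement wins; the model's outcome is only a proposal.
  return override ? { ...decision, outcome: override } : decision;
}
```

The point is structural: there is no code path where the model's output reaches the person affected without passing a human.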
Transparency matters too. People need to know when they're talking to AI, and AI-generated content needs labelling. For anyone already invested in customer experience, this should be a natural part of the design process.
Most coverage frames this as a legal story. We think it's a design story too. Building products around real user needs already covers a lot of what the Act asks for: understanding how decisions affect people, keeping humans in the loop, being transparent about what's automated. Organisations that have been doing human-centred design well will find that compliance mostly validates the approach they were already taking.
Five things to do this quarter
- Map AI usage. List every AI system in the organisation: internal tools, customer-facing features, third-party integrations. You can't assess risk without knowing what's running.
- Classify the risk. For each system, determine which category it falls into (a minimal triage sketch follows this list). Worth being rigorous here. "It's just a chatbot" sounds harmless until it turns out to be processing sensitive customer data.
- Audit vendors. Their compliance posture becomes your problem when their tools touch your customers.
- Review transparency practices. Are users aware when they're interacting with AI? Is AI-generated content labelled? If not, that's a design challenge worth starting now. Both a legal requirement and good CX practice.
- Make it cross-functional. This belongs with design, product, and operations as much as legal. Organisations that silo it will miss the design implications entirely.
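For the first two steps, even a rough first pass helps. A sketch of what that triage could look like; the questions mirror the Act's logic but are our simplification, not a legal test, so borderline systems still need a proper assessment:

```typescript
interface InventoryEntry {
  name: string;
  affectsAccess: boolean;   // work, credit, housing, education, benefits
  safetyComponent: boolean; // e.g. decides whether a machine may operate
  interactsWithPeople: boolean;
  generatesContent: boolean;
}

// First-pass tier suggestion; a simplification, not a legal test.
function suggestTier(e: InventoryEntry): "high" | "limited" | "minimal" {
  if (e.affectsAccess || e.safetyComponent) return "high";
  if (e.interactsWithPeople || e.generatesContent) return "limited";
  return "minimal";
}

// Even "just a chatbot" comes out with a transparency obligation:
console.log(suggestTier({
  name: "support-chatbot",
  affectsAccess: false,
  safetyComponent: false,
  interactsWithPeople: true,
  generatesContent: true,
})); // -> "limited"
```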
The bigger picture
Europe decided to regulate AI, but for those of us already building products around real people, the rules are mostly common sense.
The AI Act puts into law what good design practice already looks like: understand the impact of your systems, keep people in control. For organisations that already think in terms of customer experience and human-centred design, this is familiar ground.
Aiming simply for EU AI Act compliance gets you legal cover. Designing with these principles at the core gets you a product people trust. The question is whether you treat this as a checklist or as the design brief it actually is.