2 minute read - by Ferre Lambert

What is WebMCP: how AI agents make sense of your website

AI agents are starting to act on behalf of users. They search, compare products, submit forms, complete checkouts. From the outside this looks like a natural evolution of how people use the web. Under the surface, there's a structural problem that most websites aren't ready for.

How AI agents interact with websites today

Most AI agents navigate websites the same way a person would: by reading the screen. They interpret buttons, parse page structures, and try to follow the same flows you'd click through manually.

That approach is more fragile than it appears. A single layout change, a renamed button, or a redesigned page can break the entire flow, not because the underlying service changed, but because the surface did. What looks like intelligent automation is often guesswork layered on top of an interface built for human eyes.

This is the normal operating condition of most web automation today.


What WebMCP proposes

WebMCP is a W3C proposal, currently backed by Microsoft and Google, that takes a different approach. Rather than requiring agents to navigate interfaces, it allows websites to expose their core actions as callable JavaScript tools.

Instead of an agent trying to locate and click an "Add to cart" button, the website declares that adding to cart is an action that can be invoked directly. The agent calls it. The service executes it. The interaction no longer depends on the interface staying the same.
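To make that concrete, here is a minimal sketch of the idea. The WebMCP API surface is still a draft, so the registry below is a stand-in for illustration, not the real proposal; the function names, the schema shape, and the `add_to_cart` tool are all assumptions.

```javascript
// Illustrative sketch only: a site declares "add to cart" as a callable
// tool instead of relying on an agent finding and clicking a button.
// (Hypothetical API; WebMCP's actual surface is still being specified.)

const tools = new Map();

// The site registers a tool: a name, a description an agent can read,
// an input schema, and the function that executes the action.
function registerTool({ name, description, inputSchema, execute }) {
  tools.set(name, { description, inputSchema, execute });
}

registerTool({
  name: "add_to_cart",
  description: "Add a product to the shopping cart by product id.",
  inputSchema: { productId: "string", quantity: "number" },
  execute: async ({ productId, quantity }) => {
    // A real site would call the same backend the button uses.
    return { ok: true, productId, quantity };
  },
});

// An agent invokes the capability directly; no DOM is involved.
async function callTool(name, args) {
  const tool = tools.get(name);
  if (!tool) throw new Error(`Unknown tool: ${name}`);
  return tool.execute(args);
}
```

The point of the sketch: `callTool("add_to_cart", { productId: "sku-123", quantity: 2 })` keeps working whether or not the page's button is renamed, moved, or restyled.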

That's not a cosmetic difference. It's the difference between an agent that reads your website and one that understands what your website can do.

WebMCP is still in early stages and hasn't been adopted as a standard yet. But the direction it points is meaningful: move from interfaces that have to be interpreted to capabilities that can be called directly.


Why this connects to accessibility

This isn't a problem AI agents invented. Assistive technologies have navigated a version of it for a long time.

Screen readers and voice interfaces don't experience websites visually. They depend on semantic structure to function: properly labelled elements, defined roles, and explicit relationships between content. When that structure is missing or inconsistent, they have to guess, and when they guess, they fail in predictable ways: misread labels, skipped interactions, broken flows.
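A toy example shows the dependence. This is not the real accessible-name computation from the ARIA spec, and the plain objects below are stand-ins for DOM nodes; it only illustrates that a consumer can announce what was declared and nothing more.

```javascript
// Toy sketch: how an assistive consumer might resolve a control's name.
// (Simplified assumption, not the ARIA accessible-name algorithm.)
function accessibleName(el) {
  // An explicit label wins: the author said what this control is.
  if (el.attrs && el.attrs["aria-label"]) return el.attrs["aria-label"];
  // A real <button> exposes its text content as its name.
  if (el.tag === "button" && el.text) return el.text;
  // A clickable <div> with no label leaves the consumer guessing.
  return null;
}

const labelled = { tag: "button", text: "Add to cart", attrs: {} };
const unlabelled = { tag: "div", attrs: { onclick: "addToCart()" } };

accessibleName(labelled);   // "Add to cart"
accessibleName(unlabelled); // null: the screen reader has nothing to announce
```

Both controls may look identical on screen; only the first one is legible to anything that isn't a pair of human eyes.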

The parallel to AI agents is direct. Both are trying to interact with a service through an interface that wasn't designed with them in mind. Both benefit from the same underlying thing: capabilities defined explicitly rather than implied through design.

Accessibility practice has argued for making structure explicit for years. AI agents are arriving at the same conclusion from a different direction.

The websites that perform best will be the ones that don't just look good, but communicate clearly with machines and assistive technology.



Author

Ferre Lambert

Ferre is an accessibility engineer and developer who combines technical expertise with a strong focus on inclusive design. He ensures digital products are accessible, compliant and built to work seamlessly for every user.