Something subtle but seismic is happening in enterprise software. For years, SaaS thrived on UX—well-crafted interfaces designed to guide humans through increasingly complex workflows. But generative AI changes the equation. It no longer needs users to click, scroll, or select. Agents can now understand intent, access data directly, and take action—without ever showing a screen. And this, quietly, is terrifying legacy vendors.
What’s at stake is the interface itself. The defining feature of SaaS—the user interface—is becoming optional. Satya Nadella captured the anxiety when he spoke of “the end of software as we know it,” a world where the user interface dissolves and agents act autonomously. This isn’t just a UI shift—it’s an inversion of the human-machine relationship. We used to adapt to software. Now software adapts to us.
The ripple effects are brutal. If agents can perform actions directly via APIs or orchestration layers, the interface becomes a bottleneck—not a value driver. The entire premise of SaaS is challenged. What used to be the product—the interface—is now reduced to a thin layer between data and automation. The value shifts to what’s beneath: data structures, process logic, execution capacity. In this world, software that can’t be called by an agent risks irrelevance.
This explains the panic. Major players in enterprise software are pouring billions into repositioning themselves as agentic platforms. They don’t just want agents—they want to host and orchestrate them. The fear is simple: if agents live elsewhere, SaaS becomes commoditized middleware. That’s why one firm spent $2.85 billion to acquire a startup specializing in workplace automation. Why another committed $2 billion per year to AI investments. Why internal KPIs at a hyperscaler are now 80% weighted toward sales. It’s not about product maturity—it’s about narrative control and a land grab.
But for all the noise, most embedded agents remain unimpressive. They automate narrow tasks, inside single domains, under tightly scoped rules. They work—barely—because the scope is limited. As soon as complexity increases, or cross-system coordination is needed, the cracks appear. Most platforms weren’t designed for agents. They were designed for humans. So they struggle to scale agentic logic without expensive rewrites. This is why these former SaaS giants are now desperately trying to centralize data within their own environments, and position themselves as the connective tissue between all enterprise tools—hoping to stay relevant in a world that’s quickly outgrowing them.
And yet, nothing is settled. Just as legacy software vendors try to retrofit their architectures for agents, a new front is opening—not in the code, but in the experience. OpenAI just made its biggest acquisition to date, paying $6.5B for “io,” the stealth startup led by legendary designer Jony Ive. The move isn’t just about hardware—it’s about redefining how we interact with AI. Ive’s team includes ex-Apple industrial design leads, and their ambition is clear: to imagine entirely new form factors, new rituals, new interfaces for AI-native interaction. It’s not a pivot away from screens—it’s a reset of the way we relate to intelligence.
In a space crowded with patchwork integrations and retrofitted workflows, the biggest breakthroughs may come not from better orchestration—but from new touchpoints altogether. The game is wide open. And the next wave won’t just change how AI performs. It will change how it feels.
Learn more about the latest evolutions in AI by subscribing to our Gen AI newsletter.