Artificial intelligence may be advancing at a breakneck pace, but for many companies the promised productivity gains remain elusive. Despite massive investments and relentless hype, enterprise leaders are confronting a paradox: AI capabilities are exploding while measurable business impact often lags.
According to Ferose V R, Senior Vice President and Head of SAP Institute for Product and Engineering, the root cause isn’t a limitation of the technology. It’s human systems.
Speaking about the current state of enterprise AI adoption, Ferose framed the moment through a provocative metaphor: AI is in its adolescence.
The idea draws inspiration from an essay by Dario Amodei, co-founder of Anthropic, titled The Adolescence of Technology. The 20,000-word essay attempts to explain the current stage of artificial intelligence development. Ferose distills the thesis into four simple observations.
“AI is growing very quickly, it’s not very mature, it’s not very reliable, and it doesn’t understand human values,” he said.
That combination creates a volatile moment for businesses attempting to integrate AI at scale.
The Age of AI “Quantum Jumps”
Enterprise leaders have witnessed unprecedented technological acceleration since late 2022, when ChatGPT launched and brought generative AI into the mainstream.
Ferose described the past three years as an era of “quantum jumps”—sudden leaps in capability that historically might have taken a decade but now occur annually.
One example he cited involved the release of new model architectures that dramatically reduced computing requirements for AI systems, triggering massive market reactions across technology stocks.
More recently, the release of Claude Code demonstrated how autonomous AI agents can orchestrate complex software development tasks.
Ferose recounted the story of an Austrian developer who used a network of AI agents to build a compiler in just two weeks. “All he did was API calls,” Ferose explained. “In two weeks, he built something that would have taken 20 years.”
Such advances suggest the arrival of what he called the “third era of software programming,” in which humans orchestrate AI agents rather than writing code directly.
While the productivity implications appear enormous, many organizations aren’t seeing those gains.
The Productivity Paradox
Despite soaring enterprise spending on AI tools and platforms, recent research paints a more sobering picture.
An analysis reported by The New York Times found that while roughly 90% of companies are investing in AI, many are struggling to achieve measurable returns.
A survey conducted by the National Bureau of Economic Research found that roughly 80% of firms reported AI had produced little impact on productivity or employment.
Ferose called this gap between expectation and outcome “the elephant in the room.” “Everybody is saying productivity has increased dramatically,” he said. “But companies are saying they are not seeing the outcome.”
That disconnect has prompted a deeper look into what’s actually blocking progress.
The conclusion: the biggest barriers to AI adoption aren’t technological—they’re organizational.
The Human Bottleneck
AI systems evolve exponentially. Humans don’t.
“Human beings do not learn exponentially,” Ferose said. “We get overwhelmed, we get tired, we get confused.” This mismatch creates friction inside organizations attempting to deploy AI at scale.
Executives often assume that AI adoption is primarily a technical challenge—choosing the right models, platforms, or infrastructure. In reality, the harder problem lies in how people behave inside complex systems.
Ferose illustrated this with a seemingly mundane example: airport baggage systems.
On one trip, after landing, he waited hours for his luggage because a conveyor belt had malfunctioned. The problem wasn’t mechanical complexity—it was organizational paralysis. Everyone was waiting for someone else to make a decision.
“In a complex system, when things break down, a command-and-control system does not work,” he said. That insight has direct implications for enterprise AI initiatives.
Why Command and Control Fails in the AI Era
Traditional corporate hierarchies rely heavily on command-and-control management: leaders set direction, and employees execute.
But AI-driven environments move too quickly for centralized decision-making.
“If the person on the ground was empowered to make decisions, the problem could have been fixed very quickly,” Ferose explained.
Instead, organizations often claim to empower employees while continuing to operate through hierarchical approval structures. The result is stalled experimentation and slow implementation.
In smaller, more nimble organizations where teams can move independently, AI adoption tends to progress faster. Large enterprises, by contrast, struggle with coordination and bureaucracy.
Enthusiasm Without Readiness
Another barrier Ferose sees repeatedly is the gap between excitement and capability.
“People confuse enthusiasm with readiness,” he said.
Executives declare their companies are “all in” on AI, but employees often lack the training, skills, or psychological comfort to adopt new tools effectively.
That hesitation is understandable.
“If you are told that if you do AI very quickly you will become redundant,” Ferose noted, “do you think people will accept it?”
Workers naturally prioritize job security and stability before embracing transformative technologies.
Without addressing that social dynamic, adoption stalls.
When Failure Is Too Expensive
One of the most counterintuitive insights from enterprise AI adoption is the relationship between experimentation and risk.
Adoption accelerates when failure is inexpensive—and collapses when mistakes carry high stakes.
“If you say, ‘This is the most important thing we are going to do and if you fail you are in trouble,’ people freeze,” Ferose said.
But when experimentation is safe, innovation accelerates. That dynamic explains why startups often move faster than large enterprises: failure is expected rather than punished.
The Four Types of AI Employees
Ferose described a framework that maps employees along two axes: AI capability and AI commitment.
The result is four archetypes commonly found within organizations:
Committed champions possess both strong AI skills and high motivation. These individuals drive meaningful experimentation and innovation.
Motivated builders are enthusiastic but lack technical expertise. Their energy often produces numerous experiments that never reach production.
Unaligned accelerators have strong skills but pursue projects independently, creating fragmentation.
Disengaged observers remain skeptical or uninvolved.
Most companies assume the entire workforce belongs in the first category, but in reality, only a small minority does.
“When everybody jumps in, everybody goes in four different directions,” Ferose said. “You create nothing but chaos.”
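The two-axis framework can be pictured as a simple quadrant mapping. The sketch below is illustrative only: the 0–1 scores and the midpoint threshold are assumptions introduced here, not part of Ferose’s framework.

```python
def classify_employee(capability: float, commitment: float,
                      threshold: float = 0.5) -> str:
    """Map illustrative 0-1 scores for AI capability and AI commitment
    onto the four archetypes. Threshold and scales are assumptions."""
    if capability >= threshold and commitment >= threshold:
        return "Committed champion"      # skilled and motivated
    if capability < threshold and commitment >= threshold:
        return "Motivated builder"       # enthusiastic, lacks expertise
    if capability >= threshold and commitment < threshold:
        return "Unaligned accelerator"   # skilled but working solo
    return "Disengaged observer"         # skeptical or uninvolved
```

A team survey scored this way would make the point above concrete: most respondents land outside the top-right quadrant.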
The Use-Case Illusion
One of the most visible symptoms of this organizational confusion is the explosion of AI use cases. Many companies proudly report hundreds of AI initiatives underway. But those projects often produce little to no value.
“Every company has hundreds of use cases but zero adoption,” Ferose said. “You have to solve a customer business problem, not create an AI use case. When AI becomes the hammer, everything starts to look like a nail.”
The Rise of “Workslop”
Generative AI has also produced a new phenomenon called “workslop,” a flood of low-value content generated by machines.
Professional platforms are now filled with AI-written articles, reports, and marketing material. But research suggests the trend may actually reduce productivity rather than increase it.
“When everybody becomes a writer overnight, the signal disappears,” Ferose said.
Even OpenAI’s Sam Altman has acknowledged the paradox of AI-generated perfection. Today’s models can generate flawless images or text—but perfection itself can feel artificial.
Human imperfection, ironically, remains one of the most powerful differentiators.
The Social Barrier to AI
The deeper challenge facing enterprises is psychological rather than technical.
“People get it intellectually,” Ferose said. “But they get stalled socially.”
The broader public conversation around AI reflects this tension.
Recently, Microsoft CEO Satya Nadella noted that one of the biggest challenges facing the technology is public acceptance of its benefits.
Within organizations, similar anxieties surface privately, even if employees rarely voice them openly. Fear of job displacement, uncertainty about skill requirements, and confusion about strategy all contribute to slow adoption.
Innovation Requires Imbalance
Large corporations are designed for consistency and efficiency. Innovation, however, thrives on variation.
“Large companies are designed to eliminate variance,” Ferose said. “But innovation happens when there is variation.”
Corporate incentive systems often reward compliance and predictability—exactly the opposite of what experimentation requires.
This tension explains why organizations frequently fall into what Ferose called “innovation theater.”
During previous waves of corporate innovation, companies enthusiastically adopted frameworks like design thinking, which often focused more on workshops and sticky notes than on real customer problems. AI risks repeating that pattern.
“Let’s build 100 use cases,” he said. “That becomes innovation theater.”
How Enterprises Can Break Through
Despite these challenges, Ferose believes organizations can unlock AI’s potential by adopting a more deliberate strategy.
The first step is recognizing that innovation will never be evenly distributed.
“Some people love to experiment,” he said. “Most people don’t.”
Rather than forcing companywide transformation immediately, organizations should start with small groups of committed champions. Those teams should be given freedom, resources, and minimal bureaucracy.
“Start innovating at the edges,” Ferose advised. “Make it successful, make it big enough, and eventually the core moves to the edge.”
Another critical ingredient is psychological safety. Employees must feel secure enough to experiment without fearing job loss or punishment for failure.
Finally, companies must remove friction for their most capable innovators.
Once early successes demonstrate clear business value—such as significant revenue from a handful of AI products—momentum will follow naturally.
“People follow success stories,” he said. “Show them the first two use cases that create $100 million in revenue, and everybody will jump.”
The Joy of Imbalance
The irony of AI transformation is that progress often requires accepting imbalance.
Not everyone will innovate at the same pace. Not every department will adopt AI simultaneously.
But that uneven distribution is precisely what allows breakthroughs to emerge.
In Ferose’s view, the companies that succeed in the AI era won’t necessarily be those with the most advanced technology. They will be the ones that understand the human dynamics behind it.
“AI is very predictable,” he said. “Human beings are not.”
And for enterprise leaders navigating the next phase of the AI revolution, that may be the most important lesson of all.