ASUG News + Views
Five Strategies Enterprise Architects Can Implement to Redesign Work for the AI Era
Vadim Rizov Mar 17, 2026

At a moment when the volume of new information about AI can be overwhelming, Signal and Cipher’s CEO & Chief Futurist Ian Beacraft believes it’s important to find and maintain a sense of perspective. 

Speaking at the Next Generation SAP Enterprise Architect Learning Forum in February, Beacraft used the introduction of the steam engine during the Industrial Revolution as an analogy for our present moment, putting the audience in the position of a construction worker confronting a then-new tool for the first time and figuring out how to use it to gain professional advantage. “A crane or steam shovel doesn’t just move earth faster” on its own, he explained. “You have to actually redesign all the things that happen around it for it to be effective,” such as redesigning job sites and reallocating crew assignments. 

The moment we occupy now, Beacraft said, is “just like the Industrial Revolution, where our value went from our physical labor to our mental labor.” So, it’s not surprising that employees navigating the introduction of AI at their workplaces are experiencing similar growing pains; it’s during this early period that initial gains in efficiency can often be deceptive. “I might be able to work faster or at higher volume in certain areas, but just because I’m doing that doesn’t mean I’m getting that net gain,” Beacraft observed. Instead, potentially, “I’m pushing friction somewhere else in the system.”

How can employees actually realize the value promised by AI? Beacraft suggested five strategies:

  1. Continuously redesigning AI workflows to restore worker agency
  2. Leveraging human institutional knowledge
  3. Shifting from declarative intent prompts to imperative prompts
  4. Using architected information for organizational systems (AIOS) to create a governance layer
  5. Crafting new, more meaningful productivity metrics

Designing the Work

How best to re-engineer workflows affected by AI implementation? There is no single answer: workers don’t just adjust to AI, they also adjust AI to suit their workflows, acting as its operators, engineers, or architects. Beacraft stressed the distinction between the latter two roles: “Engineering is, ‘Can I design these workflows from first principles?’ Architecting is, ‘Are we optimizing—for the right metrics, the right KPIs, the right form of value—in the first place?’” 

Because that calibration and optimization work is necessary, Beacraft argued that generalized anxiety about job loss due to AI implementation is misplaced: “AI will be mostly focusing on doing the work and, to some extent, engineering workflows. This gives us the ability to look at it from a much higher angle.” 

He compared the adjustment process to the practice required to become a successful musician: “It takes hundreds of reps to ingrain it in you, but if you spend enough deliberate practice on it, you’re unstoppable.” Once workers adjust to agentic assistance, they can shift their focus to redesigning how the work itself gets done. “We are no longer responsible for staying in those silos. We individually have to be thinking about how the work needs to be redesigned at all times. Every single one of us is in R&D. There’s no one who’s left out of that remit.”

Human Institutional Knowledge

Before establishing agentic workflows, leveraging the human instincts and disciplines present within one's workforce is essential. “The most valuable intelligence you have in your organization is not in your data sets, it’s not in your standard operating procedures (SOPs),” Beacraft said. “It’s inside the heads of the people that live inside your organization. It’s the habits, the rules of thumb, the guardrails, the notes, Slack threads—all this data across your organization that basically tells your organization how it runs.” 

Focusing on the comparative merits of different LLMs is missing the larger picture, he said: “The real power comes when you start to dissect the X factor of your company. How do you articulate the way your teams work that is different than the way the standard operating procedures might actually indicate?” These granular, real-world learnings can create data sets that AI agents work with more effectively.

Declarative Intent Prompts Versus Imperative Prompts

Transferring that institutional knowledge is only the beginning. To begin thinking like an architect, Beacraft suggested starting with a simple question: What kind of bottlenecks in workflow handoffs still exist solely due to technological constraints that no longer apply? Specifically, “If AI could do 80% of the work that we do today, what would we be designing for our humans to do instead?” Then, once those questions generate new answers, how can workplaces adjust to the rapid changes that result? 

As Beacraft noted, with tools that improve every six months, AI’s impact on workplaces is accelerating: “How do we make sure that our roles, our governance, our decision-making, are keeping pace?”

One solution involves shifting from a declarative intent prompt model to an imperative prompt model. In the current landscape, users often give AI detailed declarative prompts, constraining its effectiveness by treating it like a junior developer with a narrow set of permitted approaches. “Oftentimes it would get pretty far, but then something would break,” Beacraft observed. “It would have to come to me, and I would be responsible for debugging. I would be responsible for course-correcting.” 

By contrast, as an example of a productive imperative prompt, Beacraft offered the following: “Create an integration design that maps sales order fields from Salesforce to SAP S/4HANA. Define the middleware routing logic and write the error-handling sequence.” For the AI agent, that prompt translates to: “Check your work. Now, generate test payloads for each edge case and validate against SAP S/4HANA posting logic until all [the payloads] pass. Do not come back to me until you run this over and over and all of those criteria have been successfully matched.” 

In this process, “I have now taken a machine that never gets tired, does not give up until success has been met, and I’ve given it conditions that aren’t so restrictive. It still knows what success looks like, how to avoid failure, and it can use all the tools at its disposal to get there.”
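The loop Beacraft describes, in which the agent is handed explicit success criteria and retries until every check passes, can be sketched in a few lines. This is a hypothetical illustration only: `run_until_success`, `toy_generate`, and `toy_validate` are invented stand-ins for an LLM call and SAP S/4HANA payload validation, not anything shown in the talk.

```python
# A minimal sketch of the "do not come back to me until it passes" loop.
# Everything here is a toy stand-in: in practice, `generate` would be an
# LLM call producing a Salesforce-to-S/4HANA field mapping, and `validate`
# would exercise real SAP posting logic with test payloads.

def run_until_success(generate, validate, test_cases, max_rounds=10):
    """Regenerate the artifact and re-validate every edge case until all pass."""
    feedback = None
    for attempt in range(1, max_rounds + 1):
        artifact = generate(feedback)                   # e.g. an LLM call
        failures = [c for c in test_cases if not validate(artifact, c)]
        if not failures:
            return artifact, attempt                    # success criteria met
        feedback = failures                             # let it course-correct
    raise RuntimeError("success criteria not met within the retry budget")


# Toy stand-ins: the "mapping" is just a dict that must cover every field.
def toy_generate(feedback):
    toy_generate.known.update(feedback or [])
    return dict.fromkeys(toy_generate.known, "mapped")

toy_generate.known = {"order_id"}          # deliberately incomplete at first

def toy_validate(mapping, field):
    return field in mapping

fields = ["order_id", "customer", "net_amount"]
mapping, rounds = run_until_success(toy_generate, toy_validate, fields)
print(f"converged after {rounds} rounds")  # prints "converged after 2 rounds"
```

The design choice that matters is that failures are fed back to the generator rather than to a person, matching Beacraft's point that the human should no longer be the debugger of first resort.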

Creating an AI Constitution

Another key component of successful AI implementation is creating a governance layer known as “architected information for organizational systems,” or AIOS. In Beacraft’s analogy, the AIOS is the constitution for your AI, and your agents are the cabinet: the constitution defines the boundaries while the agents get things done. Like a constitution, it changes slowly; it establishes the principles of what you can and cannot do, but it doesn’t dictate exactly what you have to say. 

As an example of an AIOS in practice, Beacraft offered a recent case study from GitHub, where one bot discovered that another one had accidentally disclosed a secret key. The error was immediately caught and addressed, then a new policy was proposed and integrated within seconds. “Nobody in our organization told it to do that,” Beacraft noted. “It was not pre-programmed into the system. There was no particular source logic for this process at all. The systems just knew to do this based on the fact that the governance layer had already provided all the logic for how to understand these types of scenarios. Something like this allows a very small team to work incredibly prolifically.” 
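A governance layer of this kind can be pictured as a small set of slow-changing principles that every agent action passes through before it takes effect. The sketch below is purely illustrative and not drawn from the GitHub incident itself: the rule name, the secret-detection pattern, and the proposed remedy are all invented for the example.

```python
import re

# Minimal sketch of an AIOS-style governance layer: slow-changing principles
# that every outgoing agent message is checked against before release.
PRINCIPLES = {
    # rule name -> predicate that flags a violation (hypothetical example)
    "no-secret-disclosure": lambda text: bool(
        re.search(r"(api[_-]?key|secret)\s*[:=]\s*\S+", text, re.I)
    ),
}

def govern(message, log):
    """Return a safe message; record violations and propose a follow-up policy."""
    for rule, violates in PRINCIPLES.items():
        if violates(message):
            log.append(f"blocked by {rule}")
            # the remedy is derived from the principles, not hard-coded per case
            log.append("proposed policy: rotate exposed credentials")
            return "[REDACTED]"
    return message

audit = []
print(govern("deploy notes: api_key=abc123", audit))   # prints "[REDACTED]"
print(govern("deploy notes: all tests green", audit))  # passes through unchanged
```

The point of the analogy survives even in this toy: no agent is told what to do in the specific incident; the principle alone is enough for the layer to block the disclosure and propose a remedy.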

As another case study, Beacraft cited a prototype built in a single afternoon with a group of regional banking executives, none of whom had any AI prompt training prior to the session. The team launched into a design session that researched and built a financial services prototype alongside a brand and communications plan, including a full year of podcast scripts and a brand design guidebook. A process that once might have taken six weeks could now be tested in an afternoon. 

“We’re now in a world where the prototype costs less than the meeting to think about it,” Beacraft observed. “One person with three or four hours banging on a prototype can come to the table and not just say, ‘Hey, I have an idea,’ but, ‘Here is the experience. Here it is in practice and application.’ At that point, your ideas come in contact with reality, and you’re dealing with data, not assumptions.”

Meaningful Metrics

Creating metrics that meaningfully measure AI’s impact requires thinking beyond metrics that gauge the efficiency and scale of well-defined, stable systems. “Every time someone has an AI implementation, they’re always focused on, ‘Where’s the ROI in five minutes? Why aren’t we 10 times faster right now?’” Beacraft noted. 

More meaningful metrics might look like considerations of prototyping rates: “How many prototypes are coming from an individual or an organization or a team? What’s your prototype-to-commercialization ratio? How many of those are actually being deployed? These are metrics that are leading indicators of progress down the road. Ultimately, if you build the systems, you show what’s valued, you show what behaviors are going to be rewarded, so that we can all move forward into this new terrain and collectively build that future.”

For more coverage of the Next Generation SAP Enterprise Architect Summit, subscribe to ASUG's First Five newsletter and stay tuned for more customer stories, expert interviews, guest perspectives, and session recaps in the weeks ahead.
