If the events of 2020 have highlighted anything for SAP environments, it’s the need for rapid-scale flexibility and reliable performance in the underlying infrastructure. While not a new concept, the operational shifts of the past several months have put the performance capabilities of even the most robust environments to the test. Unanticipated demand spikes, sudden shifts in access requirements, disruptions in supply chain procedures, and a heightened reliance on data access: all of these have placed unforeseen demands on SAP applications’ ability to keep pace and sustain the business’s competitive advantage.

In the first post of this five-part series, Brian Sturgis and Rob McLaughlin, SAP specialists at Dell Technologies, laid out the four big ideas dominating their interactions with SAP users as the effects of a global pandemic make their mark on everyday business operations. In the second post, we explored the heightened focus on the cloud and the need to build a cloud-first strategy.

In this third post, we check in with Brian to delve more deeply into the scale and performance conversation to understand how it’s evolving and how recent experiences can better prepare you for long-term SAP success.

In recent months, how has the scale and performance conversation changed for SAP users?

Brian: For years, we have discussed the importance of information lifecycle management (ILM) for SAP environments. ILM comprises many different facets, one of which is data archiving: removing data from the main database and storing it in a secondary, related database. Among other benefits, data archiving allows you to reduce the size of an in-memory SAP HANA database when migrating from a classic RDBMS-based SAP system to SAP HANA. Memory is expensive, so reducing database size helps minimize cost, particularly now, when SAP performance and access are so essential and cost concerns are at the forefront.

Brian Sturgis, SAP Infrastructure Specialist, Data-Centric Workloads, SAP Presales North America, Dell Technologies

While some of the SAP teams I deal with have employed data archiving, most have not. The main reason is typically pushback from the business and the SAP users within the organization, because archiving can be disruptive to implement and can alter the usual search/inquiry processes for less frequently accessed data once it has been archived. So, while IT is often a proponent of data archiving, business users often do not see the benefits as outweighing the costs.

For SAP S/4HANA, however, SAP has responded with an alternative approach known as Native Storage Extension (NSE). NSE allows you to keep more frequently accessed “hot” data in memory and less frequently accessed “warm” data in storage, all managed as a single, unified database. Data is designated as hot or warm at the table, partition, or column level (SAP HANA’s NSE Advisor can recommend candidates based on actual access patterns), and the database handles placement accordingly, paging warm data through a buffer cache as it is needed. The location of the data, be it in memory or in storage, is transparent to business users. And, unlike data archiving, search/inquiry processes are not affected.
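
To make this concrete, here is a minimal sketch, using SAP’s hdbcli Python driver, of how a table might be designated as warm under NSE. The host, credentials, and table name are hypothetical, and while the ALTER TABLE load-unit syntax shown follows SAP HANA 2.0 documentation, you should verify it against your own HANA revision.

```python
# Minimal NSE sketch. Assumptions: hypothetical host, credentials, and
# table name; verify the SQL against your SAP HANA revision.
from hdbcli import dbapi  # SAP's Python driver for SAP HANA

conn = dbapi.connect(
    address="hana-host.example.com",  # hypothetical host
    port=30015,
    user="SYSTEM",
    password="********",
)
cur = conn.cursor()

# Convert the table's load unit to PAGE LOADABLE, i.e., designate it as
# "warm" data that is paged into the buffer cache on demand rather than
# held fully in memory. CASCADE applies the setting to all partitions
# and columns of the table.
cur.execute('ALTER TABLE "SAPS4"."ZSALES_HISTORY" PAGE LOADABLE CASCADE')

# Reverting to fully in-memory ("hot") is the symmetric operation:
# cur.execute('ALTER TABLE "SAPS4"."ZSALES_HISTORY" COLUMN LOADABLE CASCADE')

cur.close()
conn.close()
```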

But how does this actually benefit the customer in terms of scale and cost reduction?

Brian: Here’s a real-life example. A Dell Technologies customer recently came to us with requirements for an 8TB SAP HANA database (data and work area combined) that would likely scale even larger over time. One option, and certainly the most expensive one, was to host that database completely in RAM on an 8-socket server with 12TB of RAM.

As an alternative, we presented the possibility of using a 4-socket, 6TB server with SAP NSE. With this option, only a little over 4TB of RAM (comprising the SAP HANA hot data, work area, and a buffer cache) would be needed, while the remaining 4TB of the 8TB SAP HANA database would reside in storage as warm data. And, as we explained, the SAP HANA database could then grow to nearly 12TB before the 6TB of RAM in the server would be maxed out.
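
To see how those numbers hang together, here is a back-of-the-envelope sketch of the sizing. It assumes the common rule of thumb that the NSE buffer cache is sized at roughly one-eighth of the warm data volume; actual sizing should always follow SAP’s guidance for the specific workload.

```python
# Back-of-the-envelope NSE sizing for the example above.
# Assumption: buffer cache ~= warm data / 8, a common rule of thumb;
# real sizing must follow SAP's NSE guidance for the actual workload.

def ram_needed_tb(hot_and_work_tb: float, warm_tb: float) -> float:
    """RAM = hot data + work area + buffer cache for the warm data."""
    buffer_cache_tb = warm_tb / 8
    return hot_and_work_tb + buffer_cache_tb

# Today: 8TB database (data + work area combined), split into 4TB of
# hot data/work area in RAM and 4TB of warm data in storage.
print(ram_needed_tb(4.0, 4.0))  # 4.5 -> "a little over 4TB of RAM"

# Growth headroom on the 6TB server: if warm data grows to ~8TB while
# hot data and work area stay near 4TB, the database totals ~12TB but
# RAM demand is only ~5TB, still under the 6TB ceiling.
print(ram_needed_tb(4.0, 8.0))  # 5.0 -> ~12TB database fits in 6TB RAM
```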

Does that reduce cost? Absolutely. For SAP HANA, think of RAM as the expensive, beachfront property and storage as the less expensive homes a couple blocks off the beach that don’t have quite the view and require a little walking to get down to the ocean. If part of the “group” for which you are making reservations will stay beachfront, and part is fine with being a few blocks off the beach, then this becomes an ideal solution with significant cost savings. These savings can then be invested into future vacations, better nightlife, or a rainy-day fund.

This is a great example of how innovative thinking can benefit a business during times of change and ultimately leave them in a stronger position for the long term.

OK, but what about those customers who need maximum performance for their SAP HANA databases but are still cost-sensitive?

Brian: Of course, NSE is not going to be right for everyone, but other innovative performance options exist. When we think of RAM, we typically think of DRAM (dynamic random-access memory). Recently, however, a new class of memory, known as Intel Optane persistent memory, or PMem, has been approved for SAP HANA configurations. PMem is true memory, so it is nearly as performant as DRAM, yet it is less expensive and, in many cases, allows for a larger total memory size than an all-DRAM configuration. It also enables faster SAP HANA node restart times.

Analysis of the specific SAP HANA workload is typically needed to determine the fit and feasibility of PMem, but it is not uncommon to use a 1:2 DRAM-to-PMem ratio for a HANA server. For example, in a 4-socket server, you could have 3TB of DRAM and 6TB of PMem, which yields a total of 9TB to host the SAP HANA database. Compared with a 9TB-plus all-DRAM server, this option is typically much less expensive, yet nearly as high performing.
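
In sizing terms, that 1:2 ratio works out as in this small sketch. The numbers are those from the example above; the DRAM/PMem combinations supported in practice depend on the server platform and SAP’s sizing rules.

```python
# DRAM-to-PMem sizing sketch for the 1:2 ratio example above.
# The workable ratio is workload-dependent and determined by SAP sizing
# analysis; 1:2 is simply the ratio used in this example.

def pmem_config_tb(dram_tb: float, pmem_per_dram: int = 2):
    """Return (pmem_tb, total_memory_tb) for a DRAM size and ratio."""
    pmem_tb = dram_tb * pmem_per_dram
    return pmem_tb, dram_tb + pmem_tb

pmem, total = pmem_config_tb(3.0)  # 4-socket server with 3TB of DRAM
print(pmem, total)  # 6.0TB PMem, 9.0TB total memory for the HANA database
```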

As an SAP specialist, how does this environment affect your solution approach? What are you doing differently to respond to scale and performance requirements?

Brian: One thing has continued to stay the same, for sure—individual business requirements drive the solution. Each SAP environment is different, and it is important to take into account all relevant variables, like current infrastructure, strategy, past experiences, company standards, database sizes, growth rates, cost and budget concerns, performance demands, and so on, before we engage heavily in solutioning. No two solution approaches are alike.

What is changing today is the technology evolution within and around SAP. New options, techniques, and the like continue to emerge. Some of this is driven by recent global events, and some is the normal course of innovation and business transformation. It is imperative that we as specialists stay informed, conversant, and adept at designing SAP infrastructure solutions accordingly. It is our job to ensure that SAP users are aware of the options available to them. Within the past few months, since starting this blog series, several of my contacts have reached out wanting to discuss SAP data tiering techniques using NSE, PMem, etc. This is an exciting trend, and I hope readers here will reach out if they have questions of their own.

ASUG members can register for the Executive Exchange Virtual Roundtable: Refreshed Strategy on Digital Transformation Initiatives on Nov. 19.
