Guest Column | July 25, 2022

Considerations For Refreshing Your Data Center

By Michael McNerney, Supermicro


With new technology arriving faster than ever, many IT leaders are left asking, “Do I need to refresh my data center equipment?”

The answer is almost always yes, for several reasons. IT departments need to satisfy increased user demands, support new applications, scale workloads, and reduce environmental impact. When the latest CPU and GPU technologies appear in servers and storage systems, applications should run faster, servers will use less electricity, and new software can become part of the IT offerings to users. To decide whether it is time for a refresh, keep the following considerations in mind as you work through the process.

Before a hardware refresh is attempted, it is important to know what needs to be refreshed. Someone familiar with the hardware and software in the data center should perform an audit to understand the existing infrastructure. By testing hardware capabilities and measuring which systems are being stressed, which are “zombies,” and which are at the end of their service life, you can prioritize your refresh needs. For example, the current systems may not perform as required because they are running more software or storing larger data sets than originally planned.
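As a rough illustration, the kind of triage such an audit produces could be sketched in a few lines of Python. The field names, utilization thresholds, and sample inventory below are hypothetical assumptions, not part of any particular audit tool:

```python
# Illustrative sketch only: triage servers from an inventory of utilization data.
# Field names, thresholds, and the sample inventory are hypothetical assumptions.
from datetime import date

inventory = [
    {"name": "db-01",  "avg_cpu_util": 0.82, "peak_mem_util": 0.93, "eol": date(2026, 6, 1)},
    {"name": "web-07", "avg_cpu_util": 0.03, "peak_mem_util": 0.10, "eol": date(2027, 1, 1)},
    {"name": "hpc-02", "avg_cpu_util": 0.65, "peak_mem_util": 0.70, "eol": date(2023, 3, 1)},
]

def triage(server, today=date(2022, 7, 25)):
    """Classify a server for refresh planning."""
    if server["eol"] <= today:
        return "end of service life: refresh first"
    if server["avg_cpu_util"] > 0.75 or server["peak_mem_util"] > 0.90:
        return "stressed: candidate for a faster replacement"
    if server["avg_cpu_util"] < 0.05:
        return "zombie: consolidate or retire"
    return "healthy: keep for now"

for s in inventory:
    print(f'{s["name"]}: {triage(s)}')
```

The exact thresholds matter less than having a consistent, measured basis for deciding which systems to replace first.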

One consideration when refreshing data center hardware is looking at server designs optimized for specific workloads. As CPU manufacturers offer an increasingly broad range of processors, server vendors are offering an equally broad range of system designs. For example, a server designed for HPC may require the fastest CPU performance but little internal storage. Another workload where I/O is critical may be able to use less performant CPUs but need more storage devices and faster I/O. Still other applications may require significant amounts of memory, possibly using an advanced tiered memory approach. As part of a data center refresh, opportunities exist to maximize application performance by selecting the exact match of servers and workloads.
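One informal way to think about this matching is as a simple scoring exercise over workload profiles. The server categories, capability scores, and weights below are invented purely for illustration:

```python
# Illustrative sketch: score hypothetical server configurations against a workload profile.
# All categories, capability scores, and weights are invented for illustration only.

servers = {
    "hpc-optimized":     {"cpu": 10, "storage": 2,  "memory": 6,  "io": 5},
    "storage-optimized": {"cpu": 5,  "storage": 10, "memory": 5,  "io": 9},
    "memory-optimized":  {"cpu": 6,  "storage": 3,  "memory": 10, "io": 6},
}

# A workload profile weights how much each resource matters (weights sum to 1).
workload = {"cpu": 0.6, "storage": 0.05, "memory": 0.25, "io": 0.1}

def fit(server_caps, profile):
    """Weighted score of how well a server's strengths match a workload profile."""
    return sum(server_caps[dim] * weight for dim, weight in profile.items())

best = max(servers, key=lambda name: fit(servers[name], workload))
print(f"Best match for this workload: {best}")
```

In practice the profile comes from the audit data gathered earlier, but the principle is the same: quantify what each workload actually needs before choosing a configuration.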

In many cases, the performance of an application may not need to be improved. Still, newer CPUs may allow more applications to run on the same server with similar, if not stronger, performance. Since newer data center CPUs typically have more cores and/or higher performance per core, an application may be able to share a single newer CPU with another application. When jumping a single generation, the per-core performance increase may not be significant, so it is important to examine the combined increase in core count and GHz per core compared with the older systems. For example, calculated as number of cores times base GHz, total performance is about 22% higher moving from 2nd Gen Intel Xeon Scalable processors to the third generation. In addition, newer systems have significantly increased memory capacity and performance. The latest servers support DDR4-3200 memory, which can perform up to 3.2 billion data transfers per second, roughly 20% faster than the previous generation.
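The back-of-envelope arithmetic behind these comparisons is straightforward. The core counts and base clocks below are illustrative placeholders rather than the specifications of any particular processor SKU, and the DDR4-2666 baseline for the memory comparison is an assumption:

```python
# Back-of-envelope comparison sketch. Core counts and base clocks are
# illustrative placeholders, not the specs of any particular processor SKU.

old_cores, old_base_ghz = 24, 2.4   # hypothetical older-generation CPU
new_cores, new_base_ghz = 32, 2.2   # hypothetical newer-generation CPU

old_throughput = old_cores * old_base_ghz   # 57.6 "core-GHz"
new_throughput = new_cores * new_base_ghz   # 70.4 "core-GHz"
gain = (new_throughput / old_throughput - 1) * 100
print(f"Aggregate core-GHz gain: {gain:.0f}%")   # ~22% in this made-up example

# Memory: DDR4-3200 performs up to 3200 million transfers per second per channel.
# The DDR4-2666 baseline below is an assumption for the comparison.
ddr4_3200_mts, ddr4_2666_mts = 3200, 2666
print(f"Transfer-rate gain: {(ddr4_3200_mts / ddr4_2666_mts - 1) * 100:.0f}%")  # ~20%
```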

Energy efficiency is also a critical reason to refresh a data center. Each generation of CPUs and GPUs performs more work per watt of electricity than the previous generation. This ratio can be measured by running benchmarks and recording the power used during the benchmark or application (the audit performed earlier may help here as well). When refreshing the hardware in a data center, higher performance per watt has many benefits. For instance, running the same workloads will consume less power, lowering the data center's overall energy use and cost. Alternatively, servers can be consolidated, since more applications can run on fewer servers as described above. This approach decreases the number of servers, and in some cases an older dual-socket system can be replaced with a more performant single-socket system. The data center will use significantly less electricity while still meeting its SLAs.
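A minimal sketch of that measurement, assuming made-up benchmark scores and wall-power readings, might look like this; real numbers would come from instrumenting actual systems under load:

```python
# Illustrative performance-per-watt sketch. Benchmark scores and wattages are
# made-up numbers; in practice they come from measuring real systems under load.

old_score, old_watts = 1000, 400   # hypothetical benchmark result, older server
new_score, new_watts = 1800, 450   # hypothetical benchmark result, newer server

old_perf_per_watt = old_score / old_watts    # 2.5
new_perf_per_watt = new_score / new_watts    # 4.0
print(f"Perf/watt improvement: {new_perf_per_watt / old_perf_per_watt:.1f}x")

# Rough consolidation estimate: how many old servers one new server could absorb.
consolidation_ratio = new_score / old_score
print(f"One new server can take on roughly {consolidation_ratio:.1f} old servers' work")
```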

Early access to upcoming systems is an essential step in the process. Once the types of systems best optimized for the required applications have been identified, they can be selected based on CPU capabilities, memory footprint, I/O performance and capacity, and networking speeds. Getting hands-on with hardware early can be an extremely valuable part of the process. For example, early access will give developers an understanding of compatibility with previous generations of CPUs and systems. In addition, benchmarks will give developers and IT administrators insight into how much faster applications will execute.
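A simple timing harness is often enough to quantify that speedup on early-access hardware. In the sketch below the workload is a stand-in and the next-generation timing is a placeholder, since it would be measured on the new system itself:

```python
# Illustrative benchmarking harness sketch. The workload is a stand-in; the
# next-generation timing is a placeholder until measured on early-access hardware.
import time

def sample_workload():
    """Stand-in for a representative application kernel or test job."""
    return sum(i * i for i in range(5_000_000))

def best_of(fn, repeats=3):
    """Return the fastest of several timed runs, in seconds."""
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        times.append(time.perf_counter() - start)
    return min(times)

current_gen_seconds = best_of(sample_workload)   # measured on today's server
next_gen_seconds = current_gen_seconds * 0.8     # placeholder until measured
print(f"Estimated speedup: {current_gen_seconds / next_gen_seconds:.2f}x")
```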

While a server refresh will bring immediate benefits, you need to find the right process for your IT program. Additional considerations include how quickly the new systems can be set up and running. Total IT solutions, a combination of hardware and software tuned and tested together, offer a more seamless way to integrate new hardware into an existing data center or to expand it. Understanding your workloads and how newer systems can reduce their environmental impact while still delivering performance will result in a better, more streamlined data center.

About The Author

Michael McNerney is VP of Marketing and Network Security at Supermicro. Michael has over two decades of experience working in the enterprise hardware industry, with a proven track record of leading product strategy and software design. Prior to Supermicro, he also held leadership roles at Sun Microsystems and Hewlett-Packard.