Guest Column | July 26, 2021

Versatility Is Vital For Data Center Design

By Michael McNerney, Supermicro


With large companies like Google, Microsoft, and Facebook building more of their own data centers annually, it's natural that they want to design their facilities to handle any storage or compute need they might have. But the tasks a modern data center is expected to perform for an enterprise vary widely and require vastly different resources. In addition, the rise of sophisticated cyber threats makes it increasingly important that the data center can secure sensitive information over the long haul.

These intersecting trends and priorities are leading these facilities to expand beyond what many think of as a traditional data center.

With the explosion of data being generated at or near the edge of almost everything, the definition of a "data center" is moving away from simply meaning a massive building that houses thousands of servers and storage systems. It now also incorporates smaller data center form factors that exist closer to the edge of the network – from a handful of servers in an SMB's ventilated closet to a single system housed in a cell tower that processes data from passing cell phones. Today's data centers can even live within a manufacturing plant, communicating in real time with robots and production lines.

This transformation in how data centers are utilized has also led to a change in their design. When planning new server and storage systems for a data center buildout or refresh, it is critical to understand the scale-up or scale-out nature of the applications involved. This step is vital to implementing adequate infrastructure for a smooth-running, productive data center suited to a business's specific needs.

Data centers located at the edge may need servers designed to withstand harsher environmental conditions, ranging from wide temperature swings and uncontrolled humidity to severe shock and vibration and even contaminated air. In many cases, the systems must be passively cooled (no fans) and have to run on limited amounts of power. Such servers, which collect and filter sensor data or sit within telco infrastructure, are very different from the types of systems installed in air-conditioned data centers.
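To make that power constraint concrete, here is a minimal sketch in Python that tallies assumed component draw against a hypothetical 90 W site budget; every figure is an illustrative placeholder, not a vendor specification.

```python
# Hypothetical power-budget check for a fanless edge server.
# All wattage figures are illustrative placeholders, not vendor specifications.

EDGE_POWER_BUDGET_W = 90  # assumed limit at a cell-tower or closet site

components_w = {
    "low-power CPU (TDP)": 45,
    "memory (2 DIMMs)": 8,
    "NVMe boot drive": 7,
    "ethernet/cellular NICs": 10,
    "sensor I/O board": 5,
}

total_w = sum(components_w.values())
headroom_w = EDGE_POWER_BUDGET_W - total_w

print(f"Configured draw: {total_w} W of a {EDGE_POWER_BUDGET_W} W budget")
if headroom_w < 0:
    print("Over budget: drop components or choose lower-TDP parts")
else:
    print(f"Headroom left for passive-cooling margin: {headroom_w} W")
```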

Servers that sit at the edge, compared to those in a large data center, contain no rotating mechanical parts such as fans or hard disk drives and offer fewer connectivity options. And larger systems located in mini data centers will need front serviceability due to space constraints.

While location is critical to the functionality and infrastructure of a data center, the system's performance is just as important. A data center designed to serve the backend of mobile apps will need systems optimized for low-latency responses. One interesting aspect of data center transformation has been the emphasis on limiting latency through edge computing architecture. While hyperscale data centers offer several significant advantages, some companies are better served by smaller facilities that can be located closer to end users, which cuts down on latency. In short, these systems' performance is measured by how quickly an application can respond to a network request, not by large amounts of memory or floating-point throughput. The time to respond to a request is paramount to keeping users' SLAs intact.
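As a rough illustration of that measurement, the sketch below checks a set of sample response times against an assumed 150 ms 99th-percentile SLA target; both the threshold and the latencies are made up for the example.

```python
# Minimal sketch: check measured response times against a latency SLA.
# The 150 ms p99 target and the sample latencies are assumptions for illustration.
import math
import statistics

sla_p99_ms = 150.0
latencies_ms = [12, 18, 22, 35, 41, 48, 60, 75, 90, 140, 210]

latencies_ms.sort()
rank = math.ceil(0.99 * len(latencies_ms))      # nearest-rank percentile
p99_ms = latencies_ms[rank - 1]

print(f"median = {statistics.median(latencies_ms)} ms, p99 = {p99_ms} ms")
print("SLA met" if p99_ms <= sla_p99_ms else "SLA missed: move compute closer to users")
```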

Research and engineering departments, which run intensive simulations, 3D renderings, or AI calculations, will want a completely different type of computing solution. These clusters will typically require faster cores, measured by clock speed (GHz) and floating-point performance. And since running more simulations produces more accurate results, large amounts of system memory are also essential for complex simulations and real-time analytics.
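A back-of-the-envelope sizing exercise shows why memory dominates these configurations; the grid size, variable count, and node count below are assumptions for illustration, not a benchmark of any particular workload.

```python
# Back-of-the-envelope memory sizing for a simulation working set.
# Grid size, variables per cell, and node count are assumed figures, not a benchmark.

cells = 2000 * 2000 * 2000          # 3D grid: 8 billion cells
variables_per_cell = 6              # e.g., pressure, temperature, density, 3 velocity components
bytes_per_value = 8                 # double-precision floating point

total_gib = cells * variables_per_cell * bytes_per_value / 2**30
nodes = 64                          # assumed cluster size
per_node_gib = total_gib / nodes

print(f"Working set: {total_gib:,.0f} GiB total, ~{per_node_gib:.0f} GiB per node")
print("Size DIMM capacity around the per-node figure, plus OS and solver overhead.")
```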

High-density computing systems, such as server blades specialized for HPC and data analytics workloads, are ideal for these applications. Systems that can accommodate multiple GPUs are excellent choices for artificial intelligence and machine learning. Conversely, a data center running enterprise applications (ERP, HR, CRM) will need systems that can quickly analyze large amounts of data. These setups don't require minimal latency, but correct and up-to-date data cannot be compromised. Systems with varying amounts of CPU, memory, storage, and networking options are required to maintain SLAs for organizations that depend on an internal data center to provide information.
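The sketch below condenses this workload-to-profile matching into a simple lookup; the categories echo the examples above, and the attributes are illustrative rather than tied to any specific product.

```python
# Illustrative mapping of the workload categories above to system profiles.
# The attributes are examples for discussion, not specific product configurations.

profiles = {
    "web/mobile backend": "low latency: fast response times, modest memory and FP needs",
    "HPC simulation": "compute: high-clock cores, floating-point throughput, large memory",
    "AI/ML training": "accelerators: multi-GPU systems with fast interconnects",
    "enterprise apps (ERP, HR, CRM)": "data integrity: balanced CPU, memory, storage, and networking",
}

def recommend(workload: str) -> str:
    """Return the matching profile, or suggest profiling unknown workloads first."""
    return profiles.get(workload, "unknown workload: profile it before choosing hardware")

for name in profiles:
    print(f"{name:<32} -> {recommend(name)}")
```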

As you can imagine, all of these different system needs and application requirements make for a complicated balance when designing a data center. And the solution is not as simple as "getting the best of all worlds." While under-provisioning leads to performance bottlenecks and limitations, over-provisioning means idling systems and wasted resources. A wide variety of systems purposefully chosen for specific needs is the best approach to optimal data center performance and efficiency. The workloads, and where bottlenecks will appear, should be understood up front, and the proper infrastructure chosen based on application profiles.
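To see how that balance plays out, the sketch below sizes a hypothetical cluster from an application profile and flags under- and over-provisioned configurations; the peak load, per-server throughput, and headroom values are assumptions for illustration.

```python
# Simple provisioning sketch: size a cluster from an application profile.
# Peak load, per-server throughput, and headroom are assumed figures.
import math

peak_rps = 12_000            # projected peak requests per second
per_server_rps = 900         # sustainable rate per server at the target latency
headroom = 0.30              # buffer for growth and failover

servers_needed = math.ceil(peak_rps * (1 + headroom) / per_server_rps)

for deployed in (servers_needed - 6, servers_needed, servers_needed + 10):
    peak_utilization = peak_rps / (deployed * per_server_rps)
    if peak_utilization > 1.0:
        verdict = "under-provisioned: bottleneck at peak"
    elif peak_utilization < 0.5:
        verdict = "over-provisioned: idle systems and wasted spend"
    else:
        verdict = "balanced for this profile"
    print(f"{deployed:>3} servers -> {peak_utilization:.0%} peak utilization ({verdict})")
```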

While the term "data center" usually evokes a physical location, the real question is where to place the necessary processing power so that data can be acquired and processed most effectively. Large-scale corporate or cloud data centers can process massive amounts of data, but how close the processing sits to the data affects upstream and downstream communications and overall responsiveness. Different workloads for different companies require different types of data centers.
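To put the distance argument in numbers, the sketch below estimates round-trip fiber propagation delay for a few hypothetical placements; actual latency adds routing, queuing, and processing time on top of this physical floor.

```python
# Rough round-trip propagation delay over fiber for a few hypothetical placements.
# Light in fiber travels at roughly 200,000 km/s; real latency adds routing,
# queuing, and processing time on top of this floor.

SPEED_IN_FIBER_KM_PER_MS = 200.0

placements_km = {
    "on-premises / edge site": 5,
    "regional data center": 400,
    "distant cloud region": 4_000,
}

for name, distance_km in placements_km.items():
    rtt_ms = 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS
    print(f"{name:<26} ~{rtt_ms:6.2f} ms round trip from propagation alone")
```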

Trying to approach data center design with a "one size fits all" mindset will ultimately lead to inefficient operations or wasted money – both unappealing options.

About The Author

Michael McNerney is Vice President of Marketing and Network Security at Supermicro.