Data Centers: Powering the Backbone of Our Digital World

By Matt Mazdeh | May 18, 2021

Innovative connector solutions are needed to address the space and heat limitations of high-density data centers as digital growth continues to escalate. More power and higher speeds are needed, but keeping heat and energy use down is a pressing priority.

The COVID-19 global pandemic has dramatically altered the way we live, work, and connect. Across the planet, in-person interactions, events, and work functions are being replaced by new digital experiences. Although data growth was rising steadily before the pandemic, individuals, businesses, organizations, and governments worldwide are now generating and consuming staggering amounts of data thanks to increased use of IoT technologies, e-commerce, social media, telehealth, digital learning, video streaming, gaming, and AI. As a result of this new and expanded adoption of technology, millions of additional digital transactions are being conducted every minute, requiring more data centers worldwide.

This rapidly multiplying digital traffic is driving exponential expansion of the global datasphere. In 2018, International Data Corporation (IDC) predicted that the global datasphere would grow to 175 zettabytes by 2025. A report by Sensor Tower found that global data consumption from cellphone use alone grew by 52% in the first quarter of 2020. This is sending demand for computing capacity — and the energy to power it — sky-high.

To boost current and future computing performance, massive numbers of specialized application-specific integrated circuits (ASICs), graphics processing units (GPUs), field-programmable gate arrays (FPGAs), and classic central processing units (CPUs) are being deployed. Today’s data centers contain anywhere from tens to hundreds of thousands of servers, supported by a network of switches, routers, and cooling equipment, all of which require substantial electricity. In fact, extremely large hyperscale facilities have power draws ranging from 10MW to 70MW.

More Power Density, Same Space

Prior to the pandemic, IDC reported that energy consumption per server grew by 9% annually across the globe. In the United States, data center power consumption is projected to double every five years. Yet, even as the power density per square foot of data center floor climbs, the space allotted for both the power supply and the critical output connector remains unchanged.
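
Those two figures describe different things (per-server growth versus total consumption), but the same compound-growth arithmetic connects a growth rate to a doubling period. A minimal sketch, using only the rates quoted above:

```python
import math

def doubling_time_years(annual_growth: float) -> float:
    """Years for consumption to double at a constant annual growth rate."""
    return math.log(2) / math.log(1 + annual_growth)

# 9% per-server growth doubles in roughly eight years
print(f"9% annual growth -> doubling in ~{doubling_time_years(0.09):.1f} years")

# a five-year doubling of total consumption implies roughly 15% annual growth
implied_rate = 2 ** (1 / 5) - 1
print(f"5-year doubling -> ~{implied_rate:.1%} annual growth")
```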

Back in the pioneering days of data centers, server system infrastructure requirements called for 400W to 600W power supplies with inputs and outputs (I/Os) using four to six power blades rated at 30A per blade. To meet the needs of today’s hyperscale and telecommunications environments, power supply manufacturers now need to deliver roughly triple that power while staying within the same space. Solutions are needed that incorporate six to eight power blades, each capable of handling 70A to 80A, while generating no more than a 30°C rise in temperature. As a result, connector companies face increased pressure to provide power I/Os capable of carrying triple the current in the same space.
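
The jump is easy to quantify. A rough, back-of-the-envelope sketch of the aggregate current through the power I/O, using only the blade counts and ratings cited above (illustrative arithmetic, not Molex specifications):

```python
# Aggregate current through the power I/O, then versus now.
legacy_total_a = (4 * 30, 6 * 30)    # 4-6 blades at 30A each -> 120-180A
modern_total_a = (6 * 70, 8 * 80)    # 6-8 blades at 70-80A each -> 420-640A

print(f"legacy I/O: {legacy_total_a[0]}-{legacy_total_a[1]}A total")
print(f"modern I/O: {modern_total_a[0]}-{modern_total_a[1]}A total")
print(f"increase: ~{modern_total_a[0] / legacy_total_a[0]:.1f}x to "
      f"~{modern_total_a[1] / legacy_total_a[1]:.1f}x in the same footprint")
```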

Power is delivered to data centers via the same grid that supplies homes and businesses. However, while homes in the U.S. generally receive power at 120/240V, data centers typically take service at 10kV or higher to accommodate the massive amount of power required to run the processors that power the internet.
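
The reason for the higher service voltage is simple Ohm’s-law arithmetic: for a given power draw, current falls as voltage rises, and with it the required conductor size and distribution losses. A simplified single-line sketch (the 30MW figure is an assumption within the hyperscale range cited earlier; real feeds are three-phase AC):

```python
def service_current_a(power_w: float, volts: float) -> float:
    """Approximate line current for a given power draw and service voltage."""
    return power_w / volts

facility_w = 30e6   # assumed 30MW facility, within the 10MW-70MW range cited earlier
for volts in (240, 10_000, 35_000):
    print(f"{facility_w / 1e6:.0f}MW at {volts:>6}V -> "
          f"{service_current_a(facility_w, volts):>9,.0f}A")
```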

Typically, bulk power is delivered to racks containing 30–50 1U servers. Increasingly, these servers are being powered by 3kW power supply units (PSUs) in rack-scale architectures. Power shelves aggregate these supplies to meet the needs of the components inside the servers and switches in the rack.
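
To see how those pieces fit together, consider a simple rack power budget. The per-server draw and the redundancy scheme below are assumptions for illustration, not figures from the article:

```python
servers_per_rack = 40        # article cites racks of 30-50 1U servers
watts_per_server = 500       # assumed average draw per 1U server
psu_rating_w = 3000          # 3kW PSUs, per the article

rack_load_w = servers_per_rack * watts_per_server
psus_needed = -(-rack_load_w // psu_rating_w)    # ceiling division
shelf_size = psus_needed + 1                     # assume a simple N+1 power shelf

print(f"rack load: {rack_load_w / 1000:.0f}kW")
print(f"power shelf: {psus_needed} PSUs plus 1 spare = {shelf_size} x 3kW")
```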

The Challenge Heats Up 

Figuring out how to optimize power supplies within tightly controlled spaces creates one challenge; containing the additional heat generated by the increase in power density poses another. Several sources generate heat in data centers. Heat occurs as a natural part of converting power from AC to DC and vice versa. Even in smaller data centers, let alone hyperscale facilities, the servers, routers, switches, and other rack-mounted components all generate heat. The design of a PSU’s printed circuit board (PCB), including the copper layers, the thickness of those layers, and the footprint size, can also contribute heat, as do all the fans required to cool components. Obviously, no connector supplier wants its connector to serve as a heat sink. Frequently, however, that is precisely what thermal evaluations reveal is happening when PCBs transfer heat to the connector.
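
The conversion losses alone are significant: a power stage delivering P_out at efficiency η dissipates P_out × (1/η − 1) as heat. A minimal sketch, with the efficiency figure assumed for illustration:

```python
def conversion_heat_w(p_out_w: float, efficiency: float) -> float:
    """Heat dissipated by an AC-DC (or DC-DC) conversion stage, in watts."""
    return p_out_w * (1.0 / efficiency - 1.0)

# e.g., a 3kW PSU at an assumed 94% efficiency sheds roughly 190W into the rack
print(f"{conversion_heat_w(3000, 0.94):.0f}W of heat per PSU")
```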

Minimizing Energy Consumption and Cost 

Energy costs are a major expense for data centers of any size, so it’s no surprise that minimizing energy consumption in the face of exploding computing demand is a top priority. The design and implementation of cost-efficient heating and cooling systems are proving increasingly critical to both sustainability and market competitiveness.

The space and heat constraints complicate the challenge of developing viable solutions for the data center ecosystem, but the potential benefits are well worth the effort. After all, recent results showed that millions of dollars can be saved with an approximately 7% improvement in energy efficiency.
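
The scale of those savings follows directly from the energy bill. A rough sketch with entirely hypothetical inputs (the facility size and electricity price below are assumptions, not figures from the article):

```python
facility_draw_mw = 70        # assumed facility at the upper end of the range cited earlier
hours_per_year = 8760
usd_per_kwh = 0.07           # assumed industrial electricity rate
efficiency_gain = 0.07       # the ~7% improvement cited above

annual_kwh = facility_draw_mw * 1000 * hours_per_year
annual_cost_usd = annual_kwh * usd_per_kwh
savings_usd = annual_cost_usd * efficiency_gain

print(f"annual energy bill: ${annual_cost_usd / 1e6:.1f}M")
print(f"~7% efficiency gain: ${savings_usd / 1e6:.1f}M saved per year")
```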

A variety of approaches are being considered or introduced. While they enable incremental improvements, most still have drawbacks in terms of scale or reliability. For example, some manufacturers have begun integrating vents into rack housings. The vents combat overheating by allowing heat to escape but are insufficient in high-density environments. Advances in copper alloy materials have increased conductivity, but they still lag behind rising power requirements. Improvements in contact design help alleviate power loss, but they are not a reliable solution for meeting high-density requirements. In addition, connector designers are now getting requests to decrease the centerline spacing between power contacts, which creates new mutual heating and Joule heating issues and adds yet another layer of complexity to the data center power challenge.
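
The Joule-heating concern is straightforward physics: contact heating scales with the square of the current (P = I²R), so pushing more current through each blade raises the heat at every contact much faster than the current itself. A minimal sketch, with the contact resistance assumed for illustration (not a Molex specification):

```python
def contact_heat_w(current_a: float, contact_res_ohm: float = 0.0005) -> float:
    """Joule heating at a single power contact: P = I^2 * R."""
    return current_a ** 2 * contact_res_ohm

# assumed 0.5 milliohm contact resistance
for amps in (30, 70, 80):
    print(f"{amps}A per blade -> {contact_heat_w(amps):.2f}W of heat per contact")
```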

Molex and other industry leaders continue to explore innovative solutions as the need to address space and heat limitations for high-density data centers takes on greater importance in our digitally driven world.

For more information, visit Molex online.

Like this article? Check out our other Connection Basics, Networking and Data Centers articles, our Datacom/Telecom Market Page, and our 2021 and 2020 Article Archives.
