Open Compute Project Initiative Drives Efficiency in New Data Center Designs
The OCP has grown into a collaborative community in which interconnect suppliers work with technology innovators to drive efficiency in data center infrastructure through open-source hardware.
The Open Compute Project (OCP) is an initiative that aims to design and share open-source hardware technologies, including interconnects, to help create more efficient data centers. The need is urgent to refine the complex IT infrastructure needed to process, store, and transfer the massive and growing volumes of data used for artificial intelligence, streaming media, cloud computing, industrial automation, transportation, digital finance, and other computing activities. Thousands of concrete fortresses around the world already house servers, routers, and storage devices, and humanity’s increasing reliance on data-intensive activities has led to a global boom in data center development. Supersized facilities called hyperscale data centers now account for 37% of data center capacity, a share that continues to grow. Bloomberg estimates the data center industry will expand by an average of 15% per year from 2022, reaching a valuation of $89.3 billion by 2030.
While these facilities support the development of exciting new technologies, they are a growing environmental threat. Data centers consume enormous amounts of land, resources, and infrastructure. In 2021, Google’s data centers alone consumed 4.3 billion gallons of water. Agricultural land is being converted to warehouse parks. The energy use of data centers is staggering; an analysis of just one product, NVIDIA’s H100 AI GPU, deployed in data centers around the world (with another 3.5 million units expected to ship in 2024), estimates an annual global energy consumption of 13,000 GWh of electricity, “greater than the power consumption of some entire countries.”
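A back-of-envelope calculation shows how an estimate of this scale arises. The per-GPU power figure and the assumption of continuous operation below are illustrative inputs, not values from the cited analysis:

```python
# Rough sanity check on fleet-level GPU energy use.
# Assumed inputs (not from the article): ~700 W thermal design power
# per H100, 3.5 million units, 8,760 hours in a year.
TDP_W = 700          # per-GPU thermal design power, watts (assumed)
UNITS = 3_500_000    # projected 2024 shipments
HOURS = 8_760        # hours in a year

# Energy at 100% utilization, converted from watt-hours to GWh.
full_load_gwh = TDP_W * UNITS * HOURS / 1e9
print(f"{full_load_gwh:,.0f} GWh/year at full load")

# A published estimate of ~13,000 GWh therefore implies average
# utilization of roughly:
print(f"~{13_000 / full_load_gwh:.0%} average utilization")
```

Under these assumptions the fleet would draw about 21,500 GWh per year at full load, so the 13,000 GWh figure corresponds to roughly 60% average utilization.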
In 2022, Meta’s electricity use surpassed 115,000 GWh (115 terawatt hours), a 22% year-over-year increase. Back in 2011, when the company (then known as Facebook) used 53 GWh of electricity, it founded the OCP as an industry-wide collaborative effort to improve data center efficiency, with a focus on optimizing the hardware used in its data centers. In the big picture, Meta’s energy use demonstrates that data center growth is far outpacing efforts to operate these facilities efficiently. However, it could be much worse; Meta’s OCP-designed data center prototype was nearly 40% more energy efficient, and 24% less costly to operate, than the company’s previous generation of data centers.
To reduce energy use, OCP focuses on bringing efficiency to servers, storage, networking, and facility design. Although it is not a standards body, its members collaborate on the development of specifications and designs that are openly shared with the community. Meta and other players in the tech industry are joined by research and academic partners. In January 2024, OCP announced an alliance with Infrastructure Masons, the global nonprofit dedicated to reducing the carbon impact of digital infrastructure. Hardware suppliers play a key role in developing the high-speed, high-density connectivity products needed to meet OCP’s goals. Interconnect suppliers such as Amphenol, Avnet, Bel, BizLink, HARTING, JPC Connectivity, Molex, Positronic, Phoenix Contact, Samtec, TE Connectivity, and others contribute expertise and offer OCP-compliant products.
OCP’s open model encourages a diverse ecosystem of hardware vendors, allowing data center designers to choose from a range of OCP-compliant products designed with a focus on energy efficiency, scalability, and cost-effectiveness. Products include servers, storage solutions, networking equipment, interconnects, and specialized hardware for AI and machine learning workloads. OCP’s influence extends beyond traditional data centers, with efforts to address edge computing and other emerging technologies.
The result of designing with OCP hardware is enhanced performance and scalability compared to traditional solutions. OCP products are designed to be modular and flexible, allowing for seamless expansion and adaptation to growing data demands. Cost savings can be significant: OCP hardware is designed to optimize efficiency, reduce power consumption, and minimize operational costs.
“Bel Fuse was a founding member of OCP, where our best-in-class power solutions were quickly adopted for data center deployment due to our leading efficiencies and power densities. For example, early in OCP V2 deployment we were solving power zone requirements exceeding 80 kW within a 5OU space,” said Ian Warner, director of business development for AC/DC products at Bel Fuse Inc. “As the community moves to ORv3, we continue leading the way with 8OU Titanium+ power solutions up to 192 kW. These achievements became possible because of our collaboration with leading data center integrators like Circle-B.”
Data centers designed to OCP standards can experience substantial cost savings while improving performance. Innovations surrounding interconnects in data center design focus on reducing the energy needed to mitigate the heat produced by high power/high speed architectures.
Hasan Ali, associate new power development manager at Molex, gave a presentation on pluggable optics solutions for data centers at the 2023 OCP Global Summit. Ali focused on power trends, noting that as networks and servers reach 112G and 224G data rates, energy consumption rises and the need to cool modules intensifies. Molex’s solutions focus on SMT connectors and BiPass cable solutions. “The good thing is that in doubling the bandwidth, the power use doesn’t double,” Ali said. “In the current approach, the system is taking input from the case temperature to determine what the thermal health of the module will be. If the module is running at 95 °C, the system takes that as an input to turn the fans up to cool the module better.”
The location of the temperature monitor within the system varies, as does the distribution of heat within the module; this aspect is not standardized. Ali notes that using the module case temperature as the proxy doesn’t take into consideration temperature variation near the internal components, which leaves some unused margin within modules while running fans at excessive levels. The position of the temperature monitor must be balanced against the heat sink location and EMI concerns, with the critical goal of protecting the connection at the I/O modules. As high-power modules reach 40 watts, utilizing that margin can unlock efficiencies and prevent failures without the need to overdesign systems. (See the thermal innovations white paper produced by Molex, Amphenol, TE Connectivity, and others.)
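The proxy-based control Ali describes can be sketched as a simple fan-speed curve keyed to module case temperature. The thresholds, ramp, and function name below are hypothetical illustrations, not a standardized algorithm:

```python
# Illustrative sketch of case-temperature-based fan control for a
# pluggable optical module. All thresholds and the linear ramp are
# assumptions; real platforms follow vendor-specific thermal policies.

CASE_TEMP_LIMIT_C = 95.0   # assumed maximum case temperature
RAMP_START_C = 60.0        # assumed temperature where ramp-up begins
FAN_MIN_PCT = 30.0         # assumed idle fan duty cycle
FAN_MAX_PCT = 100.0

def fan_duty_from_case_temp(case_temp_c: float) -> float:
    """Map module case temperature to a fan duty cycle in percent.

    Uses only the case temperature as a thermal proxy. Because the
    sensor cannot see hotter internal components, the limit must
    carry extra margin, which is the unused headroom Ali describes.
    """
    if case_temp_c >= CASE_TEMP_LIMIT_C:
        return FAN_MAX_PCT
    if case_temp_c <= RAMP_START_C:
        return FAN_MIN_PCT
    # Linear ramp between the start temperature and the limit.
    frac = (case_temp_c - RAMP_START_C) / (CASE_TEMP_LIMIT_C - RAMP_START_C)
    return FAN_MIN_PCT + frac * (FAN_MAX_PCT - FAN_MIN_PCT)

print(fan_duty_from_case_temp(55.0))   # below the ramp: minimum speed
print(fan_duty_from_case_temp(95.0))   # at the limit: full speed
```

Characterizing the real offset between case temperature and internal hot spots would let a controller raise the effective limit or soften the ramp, reclaiming the margin without risking the module.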
OCP-aligned working groups are already looking at yet higher-power modules in the 50+ watt range, considering liquid cooling solutions as well as traditional air cooling methods, and exploring development possibilities for optics. Nathan Tracy, TE Connectivity’s technologist, system architecture and manager of industry standards, said that the rise of AI and its extreme demand for computing power brings new challenges to data center design. As part of the OIF (Optical Internetworking Forum), TE focuses on optical solutions for temperature monitoring, co-packaging, management, power reduction, electrical interfaces, lower cost, and lower latency using co-packaged, near-packaged, and pluggable optical modules.
“To develop the solutions we need for next-generation optics, we are looking at seven specifications we have finished or are in the process of defining for 112G. What this does is allow the industry to use the interface that is most cost-effective for the link they are trying to implement, with the lowest power solution in terms of electrical interface. One of the projects we are working on now is a linear optics project. Linear optics are not new, but they are a challenging area,” said Tracy, noting that the project’s goals are to identify an electrical channel that can drive the optical budget in a way that lowers energy use, lowers costs, and improves latency. “If we are going to support the hyperscale industry, then we need to get to a broadly interoperable solution. It’s a challenging problem, and as we look forward to 200G, a common interface specification will allow the industry to put a common communications understanding on all the hosts that are built in the world to take the deployment challenge off the table, so the industry can unify around energy efficiency. We are getting closer and closer to where coherent optics are going to deliver that value.”