How to Increase Speed and Efficiency in Data Centers

By Contributed Article | July 14, 2015

Cloud computing and bandwidth-intensive applications have made the data center more important than ever, and managers want to squeeze every last bit of performance out of its architecture, even down to the connector level.

Today’s most advanced backplane connectors support next-generation speeds. (Photo courtesy of TE Connectivity)

In today’s environment, data centers are gaining importance as organizations outsource access to data through the cloud while continuously supporting bandwidth-intensive applications such as video. Data center managers want to squeeze every last bit of performance out of the data center architecture, even down to the connector level. Network equipment manufacturers need to consider five key criteria when choosing input/output (I/O) connectors that maximize speed and efficiency in data centers: flexibility, cost, thermal management, density, and electrical performance. They must also optimize these five criteria in their equipment’s backplane and power connectors.

Flexibility

The I/O connector should offer maximum flexibility in the choice of cable type needed for each application. For example, suppose there’s a rack of servers that all connect to a top-of-rack switch. Most of these connections are fairly short – typically three meters or less – so it’s less expensive to use copper cable. But some connections may be longer and require optical cable. By using a pluggable form factor connector such as SFP+, SFP28, QSFP+, or QSFP28, the manufacturer gives the data center operator the ability to choose the right cable to meet specific needs.
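
As a rough illustration, the selection logic reduces to a reach check. In this sketch the three-meter copper figure comes from the paragraph above, while the 30-meter crossover to active optical cable is an assumed, vendor-specific value:

```python
# Illustrative sketch: choosing a cable type for a pluggable port
# (SFP+/SFP28/QSFP+/QSFP28). The 3 m copper rule of thumb is from the
# article; the 30 m AOC crossover is an assumption -- check data sheets.

def choose_cable(length_m: float) -> str:
    """Return a suggested cable type for a pluggable I/O port."""
    if length_m <= 3.0:
        return "passive copper (DAC)"   # short in-rack runs: cheapest option
    elif length_m <= 30.0:              # assumed crossover point, vendor-specific
        return "active optical cable (AOC)"
    else:
        return "optical transceiver + structured fiber"

for reach in (1, 3, 10, 100):
    print(f"{reach:>4} m -> {choose_cable(reach)}")
```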

Cost

Based on industry trends, a server’s interconnect might be 1Gb/s, but servers in more demanding applications now support 10Gb/s or even 40Gb/s. 40Gb/s connections have been around for a couple of years, but the latest trend is to move to 25Gb/s. A 40Gb/s link implements four lanes of data at 10Gb/s each, so the manufacturer must build “intelligent” equipment that splits the data across four lanes and then reassembles the 40Gb/s stream. In contrast, 25Gb/s uses a single lane, so it has lower overhead and is easier to implement in the server and the switch.
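
The lane math is simple enough to sketch in a few lines; the lane counts and per-lane rates below are the ones cited above:

```python
# Back-of-the-envelope lane math from the paragraph above: a 40Gb/s link
# stripes data across four 10Gb/s lanes and must reassemble the stream,
# while 25Gb/s runs on a single lane with no striping overhead.

links = {
    "40G (QSFP+)": {"lanes": 4, "gbps_per_lane": 10},
    "25G (SFP28)": {"lanes": 1, "gbps_per_lane": 25},
}

for name, link in links.items():
    total = link["lanes"] * link["gbps_per_lane"]
    print(f"{name}: {link['lanes']} lane(s) x {link['gbps_per_lane']}Gb/s = {total}Gb/s")
```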

Thermal Management

When you take a copper cable assembly and replace it with an optical module, the signal is converted from electrical to optical, so the module is now dissipating power. This may be less critical on a server where there are only one or two interconnects, but it’s a significant factor on a switch where there might be up to 48 interconnects. Thermal management becomes critically important because now the equipment has 48 little heaters adding to the heat already generated from internal components.
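
A back-of-the-envelope estimate shows why this matters. Only the 48-port count comes from the text; the per-module dissipation is an assumed typical figure that should be checked against the module data sheet:

```python
# Rough heat-load estimate for the 48-port switch scenario above.
# Optical modules often dissipate on the order of 1.5-3.5 W each,
# depending on type; 2.5 W is an assumed midpoint for illustration.

PORTS = 48
WATTS_PER_MODULE = 2.5   # assumed; check the module data sheet

added_heat_w = PORTS * WATTS_PER_MODULE
print(f"Added thermal load: {PORTS} modules x {WATTS_PER_MODULE} W = {added_heat_w} W")
```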

With optical interconnects, manufacturers need to optimize for a new set of dynamics, and they need optical modules that dissipate less power and I/O connectors that can help to manage that thermal load.

Density

On switches, connectors must be as small as possible to provide the highest I/O density while still accommodating optical modules with the above-mentioned thermal loads. Customers want 24, 48, or even more connections in a 1RU chassis. One way the industry has responded is with the new μQSFP (micro QSFP) connector. An industry consortium is now defining this standard to deliver not only higher density but also better thermal management, enabling up to 72 ports per 1RU chassis.
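
These density figures translate directly into aggregate front-panel bandwidth. The port counts below come from the article; the 100Gb/s-per-port rate assumes QSFP28-class modules:

```python
# Front-panel math for the density figures above: ports per 1RU and the
# resulting aggregate bandwidth. 72 ports is the uQSFP target cited in
# the text; 100Gb/s per port is an assumption (QSFP28-class modules).

for ports in (24, 48, 72):
    aggregate_tbps = ports * 100 / 1000
    print(f"{ports} ports/RU -> {aggregate_tbps:.1f} Tb/s per rack unit")
```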

Electrical Performance

Although standards dictate the overall performance of an interconnect channel (loss of host + connector + cable assembly, etc.), connector manufacturers also differentiate their products by delivering enhanced signal integrity performance. For example, a better-performing connector or cable assembly gives the equipment designer more design margin, enabling longer channel reaches or lower-cost PCB materials. Connectors with multiple 25Gb/s pairs are shipping today for 25, 100, and 400Gb/s applications, and connectors with 50Gb/s pairs are in development or shipping now as well.
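
A hypothetical budget calculation illustrates the point. All of the dB figures below are assumptions for the sketch rather than values from any particular standard; what matters is how a lower-loss connector returns margin to the designer:

```python
# Illustrative channel-budget arithmetic: total loss = host PCB +
# connector + cable assembly, as noted above. All dB values here are
# assumed for the sketch; a lower-loss connector frees margin the
# designer can spend on reach or cheaper PCB material.

BUDGET_DB = 35.0                  # assumed end-to-end insertion-loss budget
host_db, cable_db = 10.0, 18.0    # assumed host-PCB and cable-assembly losses

for connector_db in (5.0, 3.0):   # standard vs. enhanced connector (assumed)
    margin = BUDGET_DB - (host_db + connector_db + cable_db)
    print(f"connector loss {connector_db} dB -> design margin {margin:.1f} dB")
```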

Backplane Connectors

As equipment supports higher densities of I/O performance, its backplane must also support the increasing aggregate data rate. A line card with 24 or 48 100-Gigabit ports needs a backplane connector with adequate capacity. Equipment manufacturers need next-generation backplane connectors that support 10, 25, and 50Gb/s and beyond per differential pair.
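
The aggregate math makes the requirement concrete. The port counts and per-pair rates below are the ones cited above:

```python
# Aggregate backplane math for the line card described above: 24 or 48
# 100-Gigabit ports, and the differential pairs needed to carry that
# traffic at different per-pair signaling rates.

for ports in (24, 48):
    aggregate_gbps = ports * 100
    print(f"{ports} x 100Gb/s ports = {aggregate_gbps / 1000:.1f} Tb/s aggregate")
    for gbps_per_pair in (10, 25, 50):
        pairs = aggregate_gbps // gbps_per_pair
        print(f"  at {gbps_per_pair}Gb/s per pair: {pairs} differential pairs")
```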

In fact, the backplane is the first thing equipment designers think about. They’re going to sell this equipment to large network providers, who want that equipment to last for as many years as possible. If they can design a backplane chassis so it can support a first-generation line card at 10Gb/s, and a second-generation line card can plug into the same chassis at 25Gb/s, then 50Gb/s, then 100Gb/s, the same equipment can be retained in that data center for a long time – only the line cards need to be replaced.

Power Architectures

The equipment development engineer is also focused on the power delivery architecture. As discussed, higher bandwidth and higher I/O density lead to higher power requirements. Connector suppliers enable these power architectures with higher-density, lower-loss (voltage drop) power connector systems for bus bar, backplane, or cabled power delivery architectures.
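
A simple I²R estimate shows why contact resistance (and the resulting voltage drop) matters. The bus voltage, load power, and milliohm figures below are assumptions for illustration only:

```python
# Why contact resistance matters in power connectors: a basic Ohm's-law
# estimate. The 12 V bus, 600 W load, and milliohm values are assumed
# for the sketch; the article only notes that lower-loss (voltage-drop)
# power connectors matter as I/O density pushes power requirements up.

BUS_V = 12.0
LOAD_W = 600.0
current_a = LOAD_W / BUS_V   # ~50 A drawn through the power connector

for contact_mohm in (1.0, 0.5):
    r = contact_mohm / 1000.0
    drop_v = current_a * r           # voltage drop across the contact
    loss_w = current_a ** 2 * r      # heat dissipated in the contact
    print(f"{contact_mohm} mOhm contact: drop {drop_v * 1000:.0f} mV, loss {loss_w:.1f} W")
```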

Connectors matter in data center equipment designs. By using the above criteria, network equipment makers can have a significant impact on their products’ efficiency and performance. The newest generation of electrical connectors allows equipment developers to keep up with the challenging demands of our highly connected world.

Author Nathan Tracy has more than 30 years of experience in technology development, marketing, sales, and business development for TE Connectivity. Currently, he is a technologist on the system architecture team and the manager of industry standards, driving standards activities and working with key customers on new system architectures for the data communications market. 
