Big Data, Big Challenges

By Contributed Article | June 08, 2016

Increasing demands of growing data volumes force IT professionals to explore better connection options.



Ask five people to define “Big Data” and you are likely to get five different definitions – ranging from broad references to analyzing numerous data sets for patterns and trends, down to very precise specifications for data set size or compute cycles.

Whatever your perception, true Big Data analysis puts huge demands on interconnections among the computing and storage resources required to support it. Anyone involved in implementing or managing server or internet connectivity at any level has a vested interest in understanding the 3 V’s of Big Data – volume, variety, and velocity.

Evolving and Emerging Options

“As systems handle more traffic – for Big Data, streaming video, or whatever reason – systems designers want ever-faster, smaller, and cheaper interconnection solutions,” says Bob Hult, director of product technology at Bishop & Associates Inc., a market research firm specializing in the global electronic connector market. “But along with that desire for higher speed comes added concerns about factors that can impact signal integrity, such as noise, crosstalk, skew, attenuation, EMC/EMI.”

To that end, major connector suppliers such as TE Connectivity, Molex, and Amphenol are addressing such needs with a variety of options for higher-speed backplane connectors up to 56Gb/s and I/O connectors up to 100Gb/s. These leading-edge solutions enhance capabilities for high-performance computing, data center networking, and routers, as well as the networked storage systems and external storage systems so important to Big Data analysis.

The backplane connectors incorporate special contact and grounding features to protect signal integrity despite the higher transmission speeds. Most also offer multiple packaging styles for flexible mounting options – traditional, co-planar, orthogonal mid-plane, orthogonal direct-mate, etc. Some even offer the option to switch from a solid PCB backplane to a cable backplane solution to improve airflow or extend the distance between connections.

Quad small form-factor pluggable (QSFP) and microQSFP I/O form factors, and the connectors used in them, offer space-saving options for server-to-switch connections operating at up to 100Gb/s (four lanes x 25Gb/s). Copper and fiber optic cable formats let users balance cost against cable-length requirements.

The recently released microQSFP Specification v2.0 for direct-attach copper cable assemblies, optical modules, and active optical cable assemblies provides 33% higher density than standard QSFP devices – up to 72 ports in a 1RU line card. It also includes enhanced thermal management features to accommodate the effects of higher transmission rates and volumes. According to the microQSFP Multi-Source Agreement working group, the solution enables up to 7.2 Terabits per second (Tb/s) per 1RU line card (72 four-channel ports operating at 25Gb/s per channel).
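The 7.2Tb/s figure follows directly from the port math cited by the MSA group. A quick sketch of the arithmetic (no vendor data beyond the figures above is assumed):

```python
# Aggregate bandwidth of a 1RU microQSFP line card, per the MSA figures above.
CHANNELS_PER_PORT = 4    # QSFP/microQSFP are four-lane interfaces
GBPS_PER_CHANNEL = 25    # 25Gb/s per lane
PORTS_PER_1RU = 72       # microQSFP density cited by the MSA working group

port_gbps = CHANNELS_PER_PORT * GBPS_PER_CHANNEL   # 100Gb/s per port
card_tbps = PORTS_PER_1RU * port_gbps / 1000       # Gb/s -> Tb/s

print(port_gbps)   # 100
print(card_tbps)   # 7.2
```

The same per-port arithmetic (4 x 25Gb/s = 100Gb/s) underlies the standard QSFP figure quoted earlier; microQSFP's contribution is the higher port count per rack unit.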

Even though 100Gb/s capabilities are now being deployed with QSFP and microQSFP, higher-speed solutions are already in the works. The recently formed QSFP-DD Multi-Source Agreement (MSA) Group is focused on developing a double-density QSFP-DD interface capable of speeds up to 400Gb/s. The CFP MSA Group is also working on a 400Gb/s CFP8 form factor proposal.

Copper vs. Fiber

Until now, cost has been a big reason to live with the physical limitations of copper interconnections in the data center. But that could be changing for IT environments scaling up to handle Big Data, as fiber optic solutions become more affordable, both in installation cost and in the power they consume during operation.

According to Lisa Huff, senior analyst at Discerning Analytics LLC and telecom director for Bishop & Associates Inc., the cost differential between copper wiring and fiber optic connections within the data center continues to drop toward the point where it is easier to justify a switch to fiber. While the cost per port for short-reach fiber was nearly double the cost of copper five years ago, that differential is now down to only 1.3 times the cost of copper.


Figure 1: Cost Differential Between LOMF and Copper Channels
The differential in the total installed cost per port for short-range laser-optimized multimode fiber versus copper channels has dropped significantly in recent years, from just over $3,000 to about $260.
(Image courtesy of Discerning Analytics, LLC)

Factoring in the optical solution’s lower power and cooling costs narrows that gap even more. In a hypothetical comparison Huff developed for a 192-port rack configuration, the copper solution requires 460W more per rack than a comparable short-reach optical solution.
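Huff’s 460W-per-rack penalty translates into a recurring operating cost. A rough annual estimate can be sketched as follows; note that the electricity rate and the cooling overhead (PUE) below are illustrative assumptions, not figures from the article:

```python
# Rough annual operating-cost impact of Huff's 460W-per-rack copper penalty.
# USD_PER_KWH and PUE are illustrative assumptions, not figures from the article.
EXTRA_WATTS_PER_RACK = 460   # from the 192-port comparison above
HOURS_PER_YEAR = 8760
USD_PER_KWH = 0.10           # assumed utility rate
PUE = 1.8                    # assumed power usage effectiveness (cooling overhead)

extra_kwh = EXTRA_WATTS_PER_RACK / 1000 * HOURS_PER_YEAR * PUE
annual_cost = extra_kwh * USD_PER_KWH

print(round(annual_cost, 2))   # 725.33 -> roughly $725 per rack per year
```

Under these assumptions the copper penalty costs on the order of $700 per rack per year, which is why the operating-cost side of the comparison matters at data-center scale.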

In fact, Huff notes that among a group of recently surveyed data center managers, the percentage of copper cabling in their data centers has dropped from 67% to 59% over the past year.

Speed vs. Structure

The trend of cloud computing growth – up 28% annually for 2015, according to data from Synergy Research Group – will certainly maintain the pressure on cloud providers and hardware suppliers to keep pace with the volume and velocity of Big Data. “But the cloud market already tends to be on the leading edge anyway,” says Huff. “It’s the enterprise IT personnel who are going to need to look at not only higher data rates to handle Big Data, but also different types of equipment and networks.”

Huff suggests that one of the storage system changes needed to support the demands of Big Data is a transition from hierarchical networks to a leaf-and-spine architecture, a type of switch fabric approach. “While leaf-and-spine is not a full mesh topology with direct connections from every resource to every other resource,” she explains, “its partial mesh topology provides direct access from every leaf switch controlling multiple resources to every spine switch on the network. This approach offers a practical way to make many more connections among switches and from highly virtualized servers to the switches.”
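The cabling tradeoff Huff describes between a full mesh and a leaf-and-spine partial mesh can be made concrete with a link count. The switch counts below are hypothetical, chosen only to illustrate the comparison:

```python
# Link-count comparison: full mesh among all switches vs. a leaf-and-spine
# partial mesh. The switch counts are hypothetical, for illustration only.
def full_mesh_links(n_switches):
    # every switch cabled directly to every other switch
    return n_switches * (n_switches - 1) // 2

def leaf_spine_links(leaves, spines):
    # every leaf connects to every spine; leaves never connect to each other
    return leaves * spines

# 20 leaf switches + 4 spine switches = 24 switches total
print(full_mesh_links(24))       # 276 links for a full mesh
print(leaf_spine_links(20, 4))   # 80 links for leaf-and-spine
```

The partial mesh keeps every leaf within two hops of every other leaf (leaf-spine-leaf) at a fraction of the cabling, which is what makes it practical for highly virtualized server farms.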

Author Peter Antoniewicz is a veteran freelance writer who covers high-tech and business-to-business topics for a variety of corporate, agency, and editorial clients. He has written on subjects ranging from embedded computing, electronics, and software to ag-chem, medical imaging, engineered solutions, industrial products, and business services. Email him at [email protected].
