High-Performance Computers Offer Interconnect Opportunities

By Robert Hult | March 04, 2013

The evolution of supercomputers has drawn the roadmap of computing performance that has filtered down to desktop, laptop, and even tablet devices today. In the 1970s, Seymour Cray began designing machines optimized specifically for immense number-crunching applications by pushing the existing limits of system design, components, and software. Early on, he recognized the twin challenges of thermal management and signal skew, and established the principles of signal integrity that have become essential design elements of today’s high-speed systems.

Supercomputers entered mainstream recognition when the IBM Watson computer trounced the human champions of the “Jeopardy!” game show in 2011. Watson is a cluster of 90 IBM Power 750 servers in 10 racks, with a total of 2,880 POWER7 processor cores and 16 terabytes of RAM. It can process 500 gigabytes of data (the equivalent of about a million books) per second, with an estimated hardware cost of about $3 million.

According to Encyclopedia Britannica Online, the term supercomputer refers to any of a class of extremely powerful computers and is commonly applied to the fastest high-performance systems available at any given time. Such computers are used primarily for scientific and engineering work requiring exceedingly high-speed computations. Performance is measured by running standardized benchmark programs to determine the maximum number of floating-point operations per second (“flops”) a machine can sustain. Today’s top 500 global supercomputers each perform at least 76.5 Tflops, and this entry threshold rises every year. The Watson machine, at 80 Tflops, would place 94th on the top 500 list.
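
For a rough sense of what a flops measurement looks like in practice, the sketch below times a dense matrix multiply and divides the operation count by the elapsed time. This is a toy illustration only; the Top 500 ranking is based on the formal LINPACK benchmark, and the matrix size here is an arbitrary choice.

```python
# Illustrative micro-benchmark: estimate floating-point throughput from a
# dense matrix multiply. A toy stand-in for formal benchmarks such as
# LINPACK, not a substitute for them.
import time
import numpy as np

n = 2048
a = np.random.rand(n, n)
b = np.random.rand(n, n)

start = time.perf_counter()
c = a @ b
elapsed = time.perf_counter() - start

# A dense n x n matrix multiply performs roughly 2 * n**3 floating-point
# operations (n multiplies and n - 1 adds per output element).
flops = 2 * n**3 / elapsed
print(f"~{flops / 1e9:.1f} Gflops on this machine")
```

Even this toy example leans on the same idea as the real benchmarks: saturate the floating-point units and count operations completed per second.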

Supercomputers have become a key research tool in advanced applications where a large number of variables or probabilities will influence the result. These include molecular dynamics, seismic analysis, high-velocity financial trading, fluid dynamics, medical imaging analysis, physical cosmology, and thermonuclear device simulation. Supercomputers have been used in climate model simulation to predict the degree of human influence on the environment.

A good example of a high-end supercomputer is the Cray Jaguar XT5-HE, installed at Oak Ridge National Laboratory in Tennessee. It comprises 224,162 processor cores, built from six-core 2.6 GHz AMD x86_64 Opteron processors, and is capable of running at 1,759 Tflops (1,759 trillion calculations per second).

The IBM Sequoia supercomputer, housed at Lawrence Livermore National Laboratory in California, currently holds the title of fastest computer in the world. It can crunch 16.32 quadrillion calculations per second (16.32 petaflops). This level of supercomputing power is used by the US National Nuclear Security Administration to simulate nuclear weapon performance, as well as the level of deterioration of obsolete nuclear devices.

Intense competition between countries continues to push high-performance computer technology to new levels, as advanced machines can quickly fall from the leading edge to obsolescence in as little as three years.

The top 10 supercomputers in the world today include:

  • Sequoia at Lawrence Livermore National Laboratory in California (US)
  • K Computer at RIKEN Advanced Institute for Computational Science (Japan)
  • Mira at Argonne National Laboratory in Illinois (US)
  • SuperMUC at Leibniz Supercomputing Centre in Garching (Germany)
  • Tianhe-1A at National Supercomputing Center in Tianjin (China)
  • Jaguar at Oak Ridge National Laboratory in Tennessee (US)
  • Fermi at CINECA in Bologna (Italy)
  • JuQueen at Forschungszentrum Jülich in Jülich (Germany)
  • Curie thin nodes at CEA/TGCC-GENCI in Bruyères-le-Châtel (France)
  • Nebulae at National Supercomputing Center in Shenzhen (China)

The new Titan supercomputer from Cray at Oak Ridge National Laboratory will replace Jaguar. Once fully operational, it will crank out 20 petaflops (20 quadrillion floating-point operations per second) from 18,688 nodes, each with a 16-core AMD Opteron 6274 processor, for a total of 299,008 cores. It will also have access to 700 terabytes of memory and draw 8,209 kW of power. China is in the process of building the Tianhe-2 machine, which is expected to operate at up to 100 petaflops when it becomes operational in 2015.

Advances in chip technology, system architecture, and the degree of parallelism drive the incredible computing power of these machines. These highly scalable systems grow performance by increasing the number of operations per cycle per processor, the number of processors per node, and the total number of nodes in the system. Parallel execution allows the system to partition immense calculation problems into much smaller individual tasks and assign them to multiple processors; the results of these simultaneous computations are then reassembled into a final solution. More and faster processors working in parallel yield greater computing power. Needless to say, developing the algorithms capable of managing this process is a major factor in achieving the intended results.
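
The sketch below illustrates this partition/compute/reassemble pattern on a single machine, using Python’s multiprocessing pool as a stand-in for a real HPC scheduler. The workload (a simple numerical integration) and the chunk count are arbitrary choices for illustration.

```python
# Minimal sketch of partition/compute/reassemble: the "immense calculation"
# here is just a numerical integration of f(x) = x**2 over [0, 1], split
# into independent sub-intervals, one per worker.
from multiprocessing import Pool

def integrate_chunk(bounds, steps=1_000_000):
    """Midpoint-rule sum over one sub-interval; each worker gets its own chunk."""
    lo, hi = bounds
    dx = (hi - lo) / steps
    return sum((lo + (i + 0.5) * dx) ** 2 for i in range(steps)) * dx

if __name__ == "__main__":
    workers = 8
    # Partition the problem into one sub-interval per worker...
    chunks = [(i / workers, (i + 1) / workers) for i in range(workers)]
    with Pool(workers) as pool:
        partials = pool.map(integrate_chunk, chunks)  # ...compute in parallel...
    print(sum(partials))  # ...and reassemble the final solution (~1/3).
```

In a real supercomputer, the same pattern plays out across thousands of nodes connected by the high-speed fabrics discussed below.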

So what are the challenges that face designers of supercomputers today? Not surprisingly, many are very similar to those that impact commercial computing equipment, including packaging density, increasing bandwidth, power, and thermal management.

Supercomputers are typically very large installations, often consisting of many refrigerator-sized racks. These clustered machines, frequently built from commercial off-the-shelf servers running the Linux operating system, may be located at one site or dispersed and interconnected by high-speed optical links. Reducing the physical size of these machines improves the productivity of the facility; it also matters electrically, because as system speeds increase, longer signal paths introduce loss, distortion, and skew.

The early Cray machines were circular, specifically to minimize the distance between nodes.

The energy consumed by a large supercomputer can dwarf the demands of all but the largest data centers. Large installations may consist of dozens of cabinets covering a basketball-court-sized equipment floor. An individual rack can draw up to 100 kilowatts, which mandates exotic cooling systems that add to equipment size as well as total operational cost. Cooling strategies, ranging from chilled air to liquid piped across the backplane interface to structures that maintain proper junction temperatures at each processor, keep the system from overheating. One evolving trend is “green” supercomputers that run more efficiently and consume less energy; reducing the enormous and costly amount of energy required to operate these machines has become a major factor influencing their design.

Enabling these machines to achieve ever-higher performance means solving several additional problems (a rough quantitative sketch follows the list):

  • Latency: Waiting for access to memory or other parts of the machine
  • Overhead: Machine resources required to efficiently manage the parallelism process, which drains resources from problem solving
  • Starvation: Insufficient work being done due to an imbalance of parallel resources
  • Contention: Delays due to priority access to shared resources, including limitations of network bandwidth
  • I/O bandwidth: Limits on how quickly data can move between processors, memory, and storage
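
A back-of-the-envelope model makes these limits concrete. The sketch below extends Amdahl’s law with a linear per-node overhead term; the serial fraction and overhead figures are invented for illustration, but the pattern of diminishing, and eventually negative, returns from adding nodes is the point.

```python
# Toy model of how the issues above erode parallel efficiency. The
# serial_fraction and overhead_per_node values are invented for
# illustration; real values are workload-specific.
def speedup(nodes, serial_fraction=0.01, overhead_per_node=1e-4):
    """Amdahl's law plus a linear communication/management overhead term."""
    parallel_time = serial_fraction + (1 - serial_fraction) / nodes
    return 1 / (parallel_time + overhead_per_node * nodes)

for n in (10, 100, 1_000, 10_000):
    print(f"{n:>6} nodes -> speedup ~{speedup(n):,.1f}x")
```

Beyond a certain scale, the communication and management overhead swamps the gains from added nodes, which is one reason interconnect performance matters so much in these systems.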

The connectors used in these incredible machines must support the solutions to these unique challenges.

High-speed communication among the racks and to the cloud is essential. As bandwidth increases, connector pin counts go up, putting upward pressure on signal density. I/O panel space is limited, making high-density interfaces a priority, but higher density raises questions about crosstalk and mechanical robustness. SFP+, QSFP+, and CXP have become common I/O interfaces on these machines, and Mini SAS HD is another candidate for I/O applications. Optical transceivers and cable assemblies, as well as active optical cables, are assuming a greater role in the interconnect scheme of HPC equipment.

InfiniBand is currently the most commonly used interconnect, with gigabit Ethernet running a close second. 10G Ethernet is being rapidly adopted in this industry segment.

Access to memory is a critical requirement for these advanced machines, which often combine standard disk with emerging flash storage. DDR3 will transition to DDR4 over the next few years. High-speed, parallel-access memory technology can keep pace with the demands of thousands of computing cores, increasing the efficiency of the machine.

Supercomputers typically consist of modular collections of many smaller computing elements that must communicate with as little delay or distortion as possible.

These requirements make high-speed backplane connectors that feature high pin counts with tightly controlled impedance essential in rack and blade server applications. Many of these connectors feature integrated low-speed, high-speed, and power contacts.

Liquid cooling of daughtercards is becoming more common, which requires “no-drip” liquid interfaces that allow blind mating of the daughtercard into the backplane. These hybrid connectors must combine fluid and electrical paths in one mating action without compromising the performance of either.

Connectors are used at several levels of power distribution, ranging from primary 440 V AC feeds to low-voltage, high-current DC interfaces. These interfaces must offer a low profile to maintain system packaging density and minimize obstruction of cooling airflow while supporting increased current loads. The extreme operating cost of these machines requires connectors designed for maximum reliability.

Proving that supercomputers are no longer a small niche in the world of computing devices, the recent SC12 supercomputer show in Salt Lake City, Utah, attracted 334 exhibitors and more than 9,600 attendees from 54 countries. Although the cost of these machines runs into the millions of dollars, their usefulness and value have created a growing market for computing hardware, software, support equipment, and connectors. While supercomputers represent a relatively small part of the global connector market, they provide valuable insight on emerging performance and packaging trends, as well as a test bed for next-generation interfaces.

Robert Hult