How Will Connectors Deliver Terabit Speeds?
As the IoT drives the demand for faster networks, the need for one-terabit speeds is just around the corner.
The march to ever-faster data transfer has been an assumed objective since the development of the first computer. Information transmitted at the rate of kilobits per second (kb/s) evolved to megabits per second (Mb/s), which defines the transfer rate of many communication and computing devices today. Each major transition to higher speeds generated warnings that the domination of copper interconnects was coming to an end and that fiber optics would soon rule the world. The laws of physics seemed to indicate that beyond a few Gb/s, channels longer than a few inches of copper would be attenuated and distorted to the point of being useless. It didn't quite work out that way.
As signal speeds increased, engineers continued to find ways to extend the life of copper, confounding the experts. Similar to most industries, designers and manufacturers of electronic equipment do all they can to reduce risk. In many cases, that includes staying with a known technology as long as possible. The performance and manufacturing processes associated with copper interconnects, which range from cable assemblies to foil traces embedded into a printed circuit board (PCB), have been highly refined and thoroughly documented over many years. The desire to continue the use of copper over an alternative, which could introduce a new set of unknowns, provided a strong incentive to stay with the devil engineers knew.
Circuit designers recognized that, starting at about 1Gb/s, circuits behave as transmission lines rather than following simple Ohm's-law assumptions. This realization ushered in several design changes. Circuits began to be matched to a controlled impedance. Single-ended signaling gave way to low-voltage differential signaling. Much greater attention was given to the routing of signal lines and the ground plane in PCB design. More layers were dedicated to signal isolation and power distribution. Plated-through holes became smaller and were back-drilled to minimize stubs. Standard FR-4 epoxy board materials were replaced with higher-performance, higher-cost laminates, and features such as the surface roughness of copper traces, as well as the moisture absorption of laminates, became hot topics at industry seminars.
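The controlled-impedance matching described above is typically handled by field-solver tools, but a rough feel for the geometry involved can be had from the widely cited IPC-2141 microstrip approximation. The sketch below is illustrative only; the trace dimensions are assumed values, not figures from this article, and the formula is valid only over a limited range of geometries.

```python
import math

def microstrip_z0(h_mm: float, w_mm: float, t_mm: float, er: float) -> float:
    """Approximate characteristic impedance (ohms) of a surface microstrip
    per the IPC-2141 closed-form estimate: h = dielectric height,
    w = trace width, t = trace thickness, er = relative permittivity."""
    return (87.0 / math.sqrt(er + 1.41)) * math.log(5.98 * h_mm / (0.8 * w_mm + t_mm))

# Illustrative FR-4 geometry (assumed, for demonstration only)
z0 = microstrip_z0(h_mm=0.2, w_mm=0.35, t_mm=0.035, er=4.4)
print(f"Z0 ~ {z0:.1f} ohms")  # lands near the common 50-ohm single-ended target
```

In practice, designers tune trace width against dielectric height to hit the target impedance, then verify with a 2D/3D solver rather than a closed-form estimate.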
Semiconductor manufacturers contributed major improvements to enable ever-faster transfer rates. Chips began to integrate signal conditioning features such as compensation and equalization. Retimers and forward error correction (FEC) greatly extended the length and fidelity of copper high-speed channels. Eye diagrams defined acceptable channel performance, while S-Parameter data became a critical requirement to accurately simulate high-speed circuits. All of these innovations pushed the practical bandwidth of copper channels to 50+Gb/s. In response, engineers stopped trying to forecast the demise of copper.
So, where does the industry go from here? There is little doubt about continuing demand for even faster channels. Supercomputers are obvious candidates for faster speeds, but high-speed communications networks in telecom and datacenter applications represent the largest markets. Global annual IP traffic has already exceeded one zettabyte (i.e., one sextillion, or 10²¹, bytes), and will only continue to grow. A combination of streaming HD video, cloud computing, and the millions of new devices that will connect on the Internet of Things will demand faster networks. In fact, 100Gb Ethernet (GbE) is already evolving to 200 and 400GbE, while the Ethernet roadmap projects one terabit Ethernet sometime after 2020.
In the short term, the transition from non-return-to-zero (NRZ) signaling to PAM4 signaling will allow designers to stay in their comfort zone and provide more time to learn what it takes to design reliable 50+Gb/s NRZ signals. In the future, 100Gb NRZ signaling may be a possibility, but there is no clear consensus at this time. Designers who must deliver 100Gb/s today are using aggregated channels to achieve these levels.
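The appeal of PAM4 comes down to simple arithmetic: it encodes two bits per symbol versus one for NRZ, so a lane doubles its data rate without raising its symbol rate. The short sketch below works through the lane-aggregation math for a 100Gb/s channel; the function name is illustrative, not an industry term.

```python
import math

def lanes_needed(target_gbps: float, symbol_rate_gbaud: float, bits_per_symbol: int) -> int:
    """Number of aggregated lanes needed to reach a target data rate.
    NRZ carries 1 bit per symbol; PAM4 carries 2 bits per symbol."""
    lane_rate_gbps = symbol_rate_gbaud * bits_per_symbol
    return math.ceil(target_gbps / lane_rate_gbps)

print(lanes_needed(100, 25, 1))  # NRZ at 25Gbaud: 4 lanes, as in 100GbE today
print(lanes_needed(100, 25, 2))  # PAM4 at the same 25Gbaud: 2 lanes
```

This is why PAM4 keeps designers "in their comfort zone": the channel still sees roughly the same Nyquist frequency, even though the data rate doubles.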
Flagship backplane and mezzanine connectors from several leading suppliers have demonstrated the ability to operate at 56Gb/s using both PAM4 and NRZ. Comments made at the recent DesignCon 2017 conference indicated that these manufacturers expect at least one more significant upgrade in current backplane connector technology.
Pluggable I/O continues to be a focus of attention due to demand for ever-faster data transfer rates in smaller faceplates. Suppliers are responding with extensions and modifications of existing pluggable I/O, such as SFP and QSFP. QSFP28 (4 x 28Gb/s), for instance, is a logical selection to achieve 100Gb/s Ethernet today. TE Connectivity has tooled their microQSFP, which packs four 28Gb/s channels in a package slightly larger than an SFP connector, to achieve greater packaging density. Additionally, a new double-density QSFP sports eight 25Gb/s NRZ channels for 200Gb/s applications, or eight 50Gb/s PAM4 channels to achieve an aggregated 400Gb/s. The CDFP pluggable is a 16-lane by 25Gb/s connector that delivers 400Gb/s and is compatible with direct copper, as well as single- and multi-mode fiber interfaces.
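The lane counts and per-lane rates above can be summarized as a quick aggregation table. This is a minimal sketch using only the figures cited in this article; note that raw lane rates such as 28Gb/s include encoding overhead, which is why 4 x 28G is marketed as 100Gb Ethernet.

```python
# (lanes, Gb/s per lane) for the pluggable form factors discussed above
form_factors = {
    "QSFP28":    (4, 28),   # raw 112G; 100GbE payload after encoding overhead
    "microQSFP": (4, 28),   # same lanes in a near-SFP footprint
    "QSFP-DD":   (8, 25),   # NRZ; 8 x 50G PAM4 reaches 400G in the same shell
    "CDFP":      (16, 25),  # 400G via 16 lanes
}

for name, (lanes, rate) in form_factors.items():
    print(f"{name}: {lanes} x {rate}Gb/s = {lanes * rate}Gb/s aggregate")
```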
Thermal challenges associated with packing high-speed circuitry in smaller envelopes introduce additional design challenges. Pluggable connector manufacturers are responding with thermally enhanced PCB cages that feature integrated heat sinks and vented housings.
Suppliers are constantly pushing the perceived limits of copper. The recently introduced OSFP pluggable provides eight channels of 50Gb/s to achieve an aggregated 400Gb/s. The reduced form factor enables the mounting of up to 32 OSFP ports on a standard 1U panel. The result is a total I/O capability of 12.8Tb/s. That may satisfy demand for at least the next one or two generations of equipment. Beyond that, fiber may be the only practical solution.
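The 12.8Tb/s faceplate figure follows directly from the numbers above, as a two-line check confirms:

```python
# 32 OSFP ports per 1U panel, 8 lanes per port, 50Gb/s per lane
ports, lanes, lane_gbps = 32, 8, 50
panel_tbps = ports * lanes * lane_gbps / 1000
print(panel_tbps)  # 12.8 Tb/s of aggregate I/O per 1U panel
```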
Optical transmission will likely become the solution of choice as we move past 100Gb/s channels. The CFP8 pluggable optoelectronic transceiver module has already been demonstrated to deliver 400Gb/s PAM4. In addition to greater signal integrity, optical signals can be propagated much further than electrical signals. The cable diameter of fiber is much smaller than that of an equivalent copper cable, which is an important attribute in large datacenters where cable trays are overflowing. Signal latency, crosstalk, and skew also become less-significant factors in optical channels.
Terabit data transmission is coming. Recently announced interconnect technology can get there by aggregating multiple channels to support evolving Ethernet, Infiniband, and INCITS standards. The future may eventually demand single-Tb/s channels. If so, research in materials, advanced software, silicon photonics, and signal conditioning will make it happen, and connector manufacturers will play an integral role in bringing this technology into reality.