As data rates increase, designers are encountering performance limits, signal loss, higher costs, and congested PCBs. High-speed internal cable assemblies offer a potential solution.
The increasing complexity of enterprise computing equipment is one of many factors that make today's computing environment challenging. The problem is compounded by the performance limits designers encounter in standard build materials as data rates increase.
Increasing data rates require faster rise times, which incur more loss on high-speed signals. Higher-performance laminate materials can compensate, but they carry a significant cost penalty, and the improvement may still not be enough. Newer processors and motherboards handle more and more I/O, driving up PCB trace and layout density. PCI Express (PCIe) channel counts have increased, and some processors now provide as many as 40 bi-directional PCIe channels. In addition to PCIe, onboard storage applications may need to route high-speed SAS. These PCIe/SAS I/O signals can travel much farther within a server chassis, incurring the associated losses. PCIe retimers are an option, but they add cost and complexity, and because they increase latency in the signal path, they are not an adequate solution for high-performance computing.
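The trade-off described above can be framed as a simple insertion-loss budget: per-inch trace or cable loss times routing length, plus connector losses, compared against the channel's total allowance. The sketch below illustrates the arithmetic; the per-inch loss figures and connector loss are hypothetical placeholders, not measured values for any particular material, and the ~28 dB budget is only an approximation of the PCIe 4.0 end-to-end channel allowance.

```python
# Illustrative channel loss-budget check. The dB/inch values are
# hypothetical placeholders chosen for the example, not vendor data.
STANDARD_LAMINATE_DB_PER_IN = 1.0   # assumed loss near the Nyquist frequency
LOW_LOSS_LAMINATE_DB_PER_IN = 0.5   # assumed
TWINAX_CABLE_DB_PER_IN = 0.1        # assumed

def channel_loss(length_in: float, db_per_in: float,
                 connector_db: float = 0.0) -> float:
    """Total insertion loss: per-inch loss times length, plus connectors."""
    return length_in * db_per_in + connector_db

def within_budget(loss_db: float, budget_db: float = 28.0) -> bool:
    # ~28 dB approximates the PCIe 4.0 total channel loss allowance.
    return loss_db <= budget_db

if __name__ == "__main__":
    for name, per_in in [("standard laminate", STANDARD_LAMINATE_DB_PER_IN),
                         ("low-loss laminate", LOW_LOSS_LAMINATE_DB_PER_IN),
                         ("twinax cable", TWINAX_CABLE_DB_PER_IN)]:
        loss = channel_loss(12.0, per_in, connector_db=1.5)
        print(f"{name}: {loss:.1f} dB, within budget: {within_budget(loss)}")
```

Even with these rough numbers, the pattern matches the article's point: a long run on standard laminate can consume the budget on its own, while a cable assembly of the same length leaves ample margin.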
These competing needs can add a lot of congestion to a PCB design. Increasing the layer count can help with routing, but it also raises cost and may degrade performance. Decreasing trace width is not a good option, as it further increases loss.