A Guide to Connector Specsmanship

By Dr. Bob Mroczkowski | November 04, 2008

I opened my last article, “To Fail or Not to Fail, That is the Question,” with the following statement:

“It is probably not necessary to say that the opinions Max Peel and I express in our articles for ConnectorSupplier.com are our personal opinions, but opinions based on our many years of experience in dealing with the design, materials, testing, and failure analysis of connectors.”

That statement is an important context for this article as well.

Product specifications are documents that describe the performance capabilities of a product with reference to a specific testing protocol. This protocol is intended to simulate an application—in this case, a connector application. To flesh out that claim, let me revisit some of the comments made in the first article in this series on connector testing, “Connector Test Programs.” In that article I identified three components of a test program—conditioning, exposure, and measurement. Each of these components must be taken into consideration in any attempt to simulate a connector application. This article will provide a general discussion of such considerations.

Conditioning is a procedure intended to put the connector into a state representative of a particular stage of the life of the application. The most common and most straightforward conditioning procedure is mating and unmating the connector. The basic reason for using a connector is the need for separability, the mating and unmating of the connector for a variety of reasons. Separability may be required for manufacturing reasons, for testing of various components or subsystems prior to the assembly of a complete system, or for facilitating upgrades or repairs of systems. Such applications will generally require a low number of mating cycles. Separability may also be required for products that are portable. A notebook computer that travels between home and the office and uses different printers and peripherals will require hundreds of mating cycles.

Mating and unmating a connector simulates at least two application conditions: durability and mating/unmating force as a function of product lifetime. Durability, the effect of mating and unmating on the contact finish due to wear processes, is arguably the more significant of the two simulations. Simulation of an application under this guideline requires a determination of how many mating cycles are representative of the application; a hundred or a few hundred cycles is typical. That number determines the range of applications that fall within the scope of the specification.

Selection of exposure conditions is much more complex and includes two major aspects: the conditions of exposure, which relate to the application “environment,” and the duration of the exposure, which relates to the intended product lifetime. The issue with conditions of exposure is the appropriateness of the simulation of the application environment. The concern with duration is the level of knowledge available to allow for definition of a relationship between time in test and lifetime in the field; in other words, an acceleration factor.

Simulation of application thermal conditions is relatively straightforward with respect to both environment and duration. Increasing the test temperature over that of the application ambient provides an acceleration of the process. An acceleration factor can be derived using some form of the Arrhenius equation, which relates the rate of a thermally activated process to an activation energy characteristic of the process, the temperature, and the time.
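As a rough illustration of the idea, and not a prescription from any particular specification, a thermal acceleration factor is often written as AF = exp[(Ea/k)(1/T_use − 1/T_test)], with temperatures in kelvin. The short sketch below computes such a factor; the activation energy and temperatures shown are placeholder values chosen purely for illustration.

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

def arrhenius_acceleration_factor(ea_ev, t_use_c, t_test_c):
    """Acceleration factor of a thermally activated process.

    ea_ev    : activation energy in electron volts (process dependent)
    t_use_c  : application ambient temperature, degrees C
    t_test_c : elevated test temperature, degrees C
    """
    t_use_k = t_use_c + 273.15
    t_test_k = t_test_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_use_k - 1.0 / t_test_k))

# Illustrative values only: a 0.7 eV process, 40 C ambient, 105 C test.
af = arrhenius_acceleration_factor(0.7, 40.0, 105.0)
print(f"Acceleration factor: {af:.0f}x")
print(f"1000 h in test ~ {1000 * af / 8760:.1f} years at ambient (same assumptions)")
```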

Simulation of corrosion processes is far more complex. Clearly the corrosion mechanisms that are active in a given application environment depend on the composition of the environment, the temperature, and the humidity. Fortunately, based on work begun in the ‘70s by many researchers from a variety of companies, there are corrosion exposures that are taken as correctly simulating application environments for noble metal finished connectors. The extensive database developed by these workers has also allowed determination of acceleration factors for those environments.

Exposures to assess mechanical stability, however, are not as well defined. Shock, either mechanical or thermal, and vibration, the two major exposures employed, are not well characterized with respect to simulation of application environments. This limitation, of course, means that acceleration factors are not available.

The limitations discussed with respect to exposures do not mean that meaningful testing protocols cannot be developed. It simply means that the effects of the limitations must be considered in determining the value of the testing results. As mentioned in the first article in this series, EIA 364D test protocols are commonly used for connector testing in the United States. The EIA protocols are, for the most part, consistent with IEC (International Electrotechnical Commission) protocols.

The most common, and arguably most important, measurement in connector testing protocols is contact resistance; in particular, Low Level Contact Resistance (LLCR). In essence, LLCR measurements are made at a sufficiently low open circuit voltage, 20 millivolts, that electrical disruption of any surface films cannot take place. Thus, the resistance measurement is sensitive to any effects of surface films and contaminants. Two approaches to using resistance measurements are an allowable change in contact resistance during testing and a “failure criterion” for maximum contact resistance after testing. Connector testing protocols use the allowable change, Delta R, methodology, due to a general lack of knowledge of appropriate failure criteria for contact resistance. A common Delta R criterion for a wide range of connector applications (other than power/high-current) is 10 milliohms maximum.
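As a simple illustration of the Delta R methodology just described, the sketch below flags samples whose contact resistance has risen by more than the allowable change during testing. The 10 milliohm limit is the criterion cited above; the before/after readings are invented for illustration only.

```python
# Minimal sketch of a Delta R pass/fail screen (values in milliohms).
# The 10 mOhm allowable change is the common criterion cited in the article;
# the before/after readings below are made up purely for illustration.

DELTA_R_LIMIT_MOHM = 10.0

def delta_r_failures(initial_mohm, final_mohm, limit=DELTA_R_LIMIT_MOHM):
    """Return indices of contacts whose resistance rise exceeds the limit."""
    failures = []
    for i, (r0, r1) in enumerate(zip(initial_mohm, final_mohm)):
        if (r1 - r0) > limit:
            failures.append(i)
    return failures

initial = [12.1, 11.8, 13.0, 12.5]   # LLCR before exposure
final   = [13.4, 24.2, 13.9, 12.6]   # LLCR after exposure

print("Failing contacts:", delta_r_failures(initial, final))  # -> [1]
```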

With this background, attention turns to how the results of product specification testing can be used. Two possible uses are comparative evaluation and performance assessment. Both uses require an evaluation of the test protocols used, but at different levels.

Comparative evaluation, that is, comparison of two different products or “equivalent” products from different manufacturers, is the simplest and most common application. The concern, as in any comparison process, is to ensure that the test protocols and results are equivalent. That is, are the test protocols consistent for all products being compared? For example, in conditioning, do all the protocols call out the same number of conditioning mating/unmating cycles? Are the corrosion exposures and durations the same for all protocols, i.e., for noble metal finished connectors, are the exposures all the same class, say Class III FMG, and the same exposure duration, for example, 10 days, equivalent to five years in the field? Does this consistency apply over all the test groups? Finally, are the Delta R criteria the same, say 10 milliohms at the end of the test sequence for each relevant test group? If the answer to all of these questions is yes, the integrity of the testing laboratory is verified, and the products meet all requirements, then the products can be considered equivalent. This does not mean, however, that they are appropriate for the intended application; that question is the subject of performance assessment.
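The questions above amount to a field-by-field comparison of the protocols behind each product’s specification. A minimal sketch of that comparison follows; the field names and example values are assumptions chosen for illustration, not taken from any actual specification.

```python
# Sketch of a comparative-evaluation check: two protocols are only directly
# comparable if conditioning, exposure, and failure criteria all match.
# Field names and example values are illustrative assumptions.

FIELDS = ["mating_cycles", "corrosion_class", "corrosion_days", "delta_r_mohm"]

def protocol_mismatches(protocol_a, protocol_b, fields=FIELDS):
    """Return the list of fields on which the two protocols differ."""
    return [f for f in fields if protocol_a.get(f) != protocol_b.get(f)]

spec_a = {"mating_cycles": 100, "corrosion_class": "III", "corrosion_days": 10, "delta_r_mohm": 10}
spec_b = {"mating_cycles": 100, "corrosion_class": "II",  "corrosion_days": 5,  "delta_r_mohm": 10}

mismatches = protocol_mismatches(spec_a, spec_b)
if mismatches:
    print("Not directly comparable; protocols differ on:", mismatches)
else:
    print("Protocols are consistent; test results can be compared.")
```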

Performance assessment requires a somewhat more sophisticated evaluation of the test protocols. For example, if the intended number of product mating cycles over the product life is 500, and the test specification conditioning calls out 100 cycles, no decision can be made. On the other hand, if the protocol calls out 500 or 1,000 cycles, the testing supports a positive performance assessment with respect to durability.

Similarly, if the application environment is known to be Class III for a 20-year life in the field, and the test protocols call out Class II for five days, no decision can be made. A Class III exposure for 10 days would, however, validate performance if the Delta R criterion used in the protocol is known to be acceptable for the intended application.
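Put another way, performance assessment asks whether the protocol’s conditioning and exposure meet or exceed the requirements of the application; anything less yields “no decision.” The sketch below encodes that logic for the two examples above (mating cycles and corrosion class/duration); the structure, field names, and values are assumptions for illustration only.

```python
# Sketch of the performance-assessment logic from the two examples above.
# A protocol "covers" the application only if it meets or exceeds each
# requirement; otherwise no decision can be made. Values are illustrative.
# Here "corrosion_days" for the requirement means the test duration taken
# to represent the intended field life (e.g., 10 days for the 20-year case).

CLASS_SEVERITY = {"I": 1, "II": 2, "III": 3, "IV": 4}

def assess(requirement, protocol):
    verdicts = {}
    verdicts["durability"] = (
        "validated" if protocol["mating_cycles"] >= requirement["mating_cycles"]
        else "no decision")
    corrosion_ok = (
        CLASS_SEVERITY[protocol["corrosion_class"]] >= CLASS_SEVERITY[requirement["corrosion_class"]]
        and protocol["corrosion_days"] >= requirement["corrosion_days"])
    verdicts["corrosion"] = "validated" if corrosion_ok else "no decision"
    return verdicts

application = {"mating_cycles": 500,  "corrosion_class": "III", "corrosion_days": 10}
protocol    = {"mating_cycles": 1000, "corrosion_class": "III", "corrosion_days": 10}

print(assess(application, protocol))  # both requirements covered -> validated
```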

Connector testing programs are too expensive and time-consuming to miss taking full advantage of the information such a program can provide. To be of the most value to connector users, testing protocols should cover as wide a range of connector applications as possible; for example, test to Class III for 10 days rather than Class II for five days. An additional benefit would be to provide contact resistance data distributions, rather than only a maximum Delta R, to allow a more definitive statistical assessment of contact resistance stability.
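As a small example of what a distribution adds over a single maximum, the sketch below summarizes a set of Delta R readings with a mean, standard deviation, and maximum, giving a more definitive statistical picture of contact resistance stability; the readings themselves are invented for illustration.

```python
import statistics

# Illustrative Delta R readings (milliohms) for one test group; made-up data.
delta_r = [0.8, 1.1, 0.6, 2.4, 1.0, 0.9, 3.1, 1.3, 0.7, 1.5]

mean = statistics.mean(delta_r)
stdev = statistics.stdev(delta_r)
worst = max(delta_r)

print(f"mean  Delta R: {mean:.2f} mOhm")
print(f"stdev Delta R: {stdev:.2f} mOhm")
print(f"max   Delta R: {worst:.2f} mOhm")
# Reporting the whole distribution, not just a maximum against a 10 mOhm
# limit, lets a user judge margin and spread rather than merely pass/fail.
```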

The final selection of a connector product can be determined by the tests and resulting spec sheet the connector manufacturer provides. Know your “specsmanship” and you’ll be able to read between the lines to make the most informed and insightful decision and ultimately get the best performance from your product.

Dr. Bob Mroczkowski