
SFP, SFP-DD, QSFP, QSFP-DD & OSFP Pluggable interconnects – a 2021 update

The 21-year-old Pluggable interconnect family is now being used for new accelerator, AV, camera, machine vision, medical, memory, NVM, and proprietary interface links. Expansion into new market segments seems likely to continue, with link shipments rising and healthy unit forecasts. New higher-speed 106G and 112G per lane pluggable products will come out this year, supporting traditional network and fabric systems such as IEEE 802.3ck Ethernet and InfiniBand NDR. For HPC, OIF is pioneering 224G per lane CEI specifications, and early product development is underway.

New copper DAC and optical AOC products are still expanding, supporting new applications with 25G, 56G, 100G, and developing 200G per lane links. Internal IO twin-axial cables are used in many types of modules, switches, and servers in place of PCB traces at 106G, 112G, 128G, and 224G per lane rates. These chip-to-bulkhead internal copper jumper cables and cabled backplanes are harmonized and optimized with external copper DACs to meet critical electrical parameter budgets per standard and corporate specs.

Various new and established electronic market segments, standards bodies, and consortia are employing new technology devices and products to enable the dramatic rise and steady use of internet traffic. Different market segments sometimes borrow expeditiously from each other with successful connector families like the Pluggable multi-lane types: SFP, SFP-DD, QSFP, QSFP-DD, and OSFP. These connectors support 1, 2, 4, 8, and 8 lanes, respectively, and a double-stacked 8-over-8 cage provides 16-lane links, in various topologies with various reach requirements. They support new 400G, 800G, 1T, 1.6T, and 3.2T links.
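To make the lane math concrete, here is a minimal sketch of how lane count and per-lane rate combine into the marketed link rates above. The lane counts come from this article; the convention that a nominal 112G PAM4 lane carries 100G of payload (so 8 lanes are marketed as 800G) is standard practice but stated here as background, not from the article itself.

```python
# Minimal sketch of the lane math: aggregate link rate equals
# lane count times per-lane payload rate (a nominal 112G PAM4
# lane carries 100G of payload, so 8 lanes are marketed as 800G).
LANES = {"SFP": 1, "SFP-DD": 2, "QSFP": 4, "QSFP-DD": 8, "OSFP": 8}

def aggregate_gbps(form_factor: str, payload_per_lane_gbps: int) -> int:
    """Aggregate link rate = lane count x per-lane payload rate."""
    return LANES[form_factor] * payload_per_lane_gbps

for ff, lanes in LANES.items():
    print(f"{ff}: {lanes} x 100G = {aggregate_gbps(ff, 100)}G")
# A double-stacked 8-over-8 cage doubles that again: 16 x 100G = 1.6T
```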

SFP-DD112 PAM4 single-stack 1×1 cage & port receptacle connector and a two-lane 224G link DAC cable. (Image: Mellanox)

Some newer medical, AV, memory, storage, and accelerator applications

Developers of new medical instruments, such as advanced MRI machines, often use QSFP25 copper cables. For example, some are developing higher-speed IO links using 56G per lane SerDes transmission with QSFP56 copper cables, and 112G per lane interconnects are planned. Most of these links are short reach and used at intervals, but they are well shielded and use non-magnetic packaging.

There are also new industrial automation and government applications that regularly use QSFP56 interconnects.

The 100G GigE Vision interface and standard supports advanced and complex machine vision installations, which employ specified four-lane QSFP28 connectors and cables. Additionally, the popular CoaXPress 2.0 audio-video IO standard has added a new link media option using 4 × 28G = 100G QSFP pluggable copper and optical cable assemblies, supporting industrial camera vision network links. Another AV interface, the popular Camera Link 2/3 specification, has used four-lane QSFP28 cables for several years and is now including the new QSFP56 for next-generation development.

Developers of new memory systems, such as those using the Gen-Z IO interface and others, are using QSFP-DD and OSFP for 8-lane, intra-rack, and intra-POD external short-reach 25 and 56 Gbps per lane applications. This group is planning to release its 112G per lane spec in 2021.

Native NVMe-oF drive interfaces use 4 lanes of 25G each, including QSFP connectivity options for external intra-rack and inter-switch reaches. There is also a SNIA Technical Native NVMe-oF Drive Specification v1.01 that ties into the NVM Express specification and IEEE 802.3 Ethernet.

There are several new standard and proprietary accelerator GPU-based add-in modules and sources. These products have been using 4-lane QSFP28 and QSFP56 outboard connectivity as well as other lane-count pluggable types like SFP56. It seems Nvidia's GPU accelerator interface, NVLink, can use QSFP-DD or OSFP 8-lane single cable assembly link options for intra-rack and inter-switch links.

Four QSFP-DD112 PAM4 1×1 cage & receptacle connectors for four 800G ports (Image: Bittware)

Newer telecom applications

Telecom central office and optical transport system switches have long used SFP, SFP-DD, QSFP, QSFP-DD, and OSFP cables based on the OIF CEI-25, CEI-56, and CEI-112G specifications, and OIF is pioneering the new 224G per lane CEI spec. This market also uses the established CFP module form factors and interconnects, which can be reviewed separately.

Enterprise datacenter applications are mature, but networks and campuses are still building out with QSFP28 and QSFP56 interconnect business for Ethernet and Fibre Channel SAN, cool, and cold storage networks, partially driven by installed-base demand.

SAS continues to use HD miniSAS interconnects and, increasingly, QSFP connectors for new 48G per lane applications. Many leading network equipment companies have 100G per lane product development well along, as the IEEE 802.3ck 106G spec goes through final comment processing ahead of its 2021 release. New IEEE 802.3xx study groups on 212G per lane and beyond are readying the organization for full specification development and release.

There has been discussion on the merits of using 1-meter-long 24/26 AWG twin-axial QSFP224 cables for short passive reach, and active copper cables are getting a strong look for achieving longer reaches. Recently, Spectra7 announced the GC1122, a tiny active PAM4 112G per lane reach-extender chip intended to be embedded in QSFP112, QSFP-DD112, and OSFP112 cable plugs, supporting 400G and 800G four-meter links at 200mW per lane. Active copper 112G cables may grow to 50% of the TAM versus passive or AOC types.
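As a back-of-the-envelope sketch of what that 200mW per lane figure implies for cable power, consider the following. Note that the assumption of a reach-extender chip sitting in the plug at each end of the cable is mine for illustration; the article only quotes the per-lane figure.

```python
# Rough active-copper power estimate from the 200 mW-per-lane figure
# quoted above. Assumption (mine, not from the article): one reach-
# extender sits in the plug at *each* end of the cable.
MW_PER_LANE = 200

def cable_power_w(lanes: int, retimed_ends: int = 2) -> float:
    """Total cable power in watts across all retimed plug ends."""
    return lanes * MW_PER_LANE * retimed_ends / 1000.0

print(cable_power_w(4))  # 400G QSFP112: 4 lanes -> 1.6 W per cable
print(cable_power_w(8))  # 800G OSFP112: 8 lanes -> 3.2 W per cable
```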

Evolving cloud datacenter applications

Nvidia/Mellanox's newly announced InfiniBand NDR 100G per lane HPC fabric network interconnect products were shown in live demos at the recent SC2020 virtual HPC conference. Primary applications are 1×100G SFP, 4×100G = 400G QSFP, and 8×100G = 800G QSFP-DD and OSFP 8-lane links. The new product family includes passive and active copper cables, AOCs, and optical module options supporting different datacenter topologies, applications, and reaches. Production ramp-up is planned to start later in 2021.

OSFP112 PAM4 ER 80km optical module. (Image: Fujitsu)

It appears that the IBTA spec group is adopting some inputs and clauses from the OIF CEI 112G per lane electrical interface specification and the IEEE 802.3ck 106G per lane Ethernet copper specification, all being finalized in 2021. IBTA is likely to hold 112G plugfests this year, and its roadmap now includes the newest GDR 200G per lane interface and performance target. The combined IBTA and RoCE performance compliance and interoperability plugfest testing is key for validating how real products work together. Maybe this will be expanded to include some accelerator IO interfaces? Key parameters like COM, BER, IL, ERL, and sometimes FEC always need to be balanced, harmonized, and optimized, with latency in mind.

Next-gen leadership HPC interfaces like Slingshot, NumaLink, and dozens of other 8-lane IO interfaces are also vetting and using 112G per lane QSFP-DD and OSFP interconnects, as these support 8-lane links. It appears SNIA is supporting the development of new pluggable SFF connector interconnect and form-factor documents.

Fortunately, practically all QSFP and other pluggable cable assembly PCB plugs have EEPROM chips mounted on them. These are actually smart cable assemblies: the EEPROM's memory map identifies the assembly's link type, data rate, interoperability, application type, assembly ID number, cable length, wire gauge or fiber type, cable manufacturer, build date, and several other management interface items per the SFF-8636 specification standard. So even if you mis-plug the wrong cable assembly into a port, the smart cable will likely declare itself so it can be removed and replaced with the correct assembly type or an upgraded revision level.
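As a minimal sketch of what host software sees when it reads that memory map, the following decodes a few identity fields from a QSFP management-memory dump. The byte offsets follow my reading of the SFF-8636 upper page 00h layout and should be verified against the current spec revision; the vendor data shown is synthetic, for illustration only.

```python
# Minimal sketch: decode selected identity fields from a 256-byte QSFP
# management-memory dump (lower + upper page 00h) per SFF-8636.
# Offsets should be checked against the current SFF-8636 revision.

IDENTIFIERS = {0x0C: "QSFP", 0x0D: "QSFP+", 0x11: "QSFP28"}

def decode_qsfp_eeprom(dump: bytes) -> dict:
    """Return selected identity fields from a QSFP EEPROM dump."""
    if len(dump) < 256:
        raise ValueError("expected a full 256-byte page 00h dump")

    def text(start: int, end: int) -> str:
        # ASCII fields are space-padded per SFF-8636
        return dump[start:end].decode("ascii", errors="replace").strip()

    return {
        "identifier": IDENTIFIERS.get(dump[128], f"unknown (0x{dump[128]:02x})"),
        "cable_length_m": dump[146],  # length field for copper/active cables
        "vendor_name": text(148, 164),
        "vendor_part_number": text(168, 184),
        "vendor_serial_number": text(196, 212),
        "date_code": text(212, 220),
    }

# Example with a synthetic dump (real data would come from the module's
# two-wire management interface; names below are hypothetical):
dump = bytearray(256)
dump[128] = 0x11                         # QSFP28
dump[146] = 3                            # 3 m DAC
dump[148:164] = b"ACME CABLES     "
dump[168:184] = b"QS28-DAC-3M     "
dump[196:212] = b"SN0000012345    "
dump[212:220] = b"20210315"
print(decode_qsfp_eeprom(bytes(dump)))
```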

Some observations

Developers and users in new market segments benefit from having sources of Pluggable interconnects that have already been well engineered and turned into volume production offerings by global suppliers. However, these suppliers need to watch and support market segments that may shift further toward external passive optical interconnects to the bulkhead, with internal fiber running to the module or chip, so that no pluggable interconnects are needed. AOC switch-radix and row-to-row external cables are still expensive interconnects compared with passive external optical links connecting to internal fiber-optic modules and chips.

It seems that the DSFP pluggable is preferred over SFP-DD in moderate-volume telecom applications for 2-lane links. The pluggable micro-QSFP28 four-lane connector appears to remain in moderate use, though it still seems to have thermal issues on high-density 72-port faceplates and other form-factor applications at the 56G and 112G per lane level. It also seems micro-SFP remains a nascent product design.

It seems that HPC technology and interconnects are becoming more commonly used within enterprise, industrial, AV, medical, and workstation active equipment boxes and networks.

More component, assembly, and device orders from new market segments, standards, and applications help suppliers smooth and extend their product family life curves. They also help achieve ROI on manufacturing CAPEX and testing CAPEX/OPEX, as well as other forward-leaning application development costs. It is good to have new industry business during a challenging market time.

As a long-time industry co-leader on the SFP, QSFP, and other versions of the Pluggable interconnect family, it is satisfying to see the wide extent of its usefulness. This will last a while, but we can see the need for, and shift toward, more passive optical and all-optical IO systems. Adoption of co-packaged optical modules seems likely to hasten the reduced use of the established Pluggable interconnects. Systems and standards are already planning to use optical chiplet devices with internal optical ribbon cables connecting to the bulkhead, using the newer and smaller CS, SN, and MDC connectors more than the older, larger LC and MPO types.
