Contents

  • Acronyms
  • Storage Networking Interface Comparison Table
  • Transfer Rate, Bits vs. Bytes, and Encoding Schemes
  • History
  • Roadmaps
  • Cables: Fiber Optics and Copper
  • Connector Types
  • PCI Express® (PCIe®) 

 


 

Acronyms

  • FC — Fibre Channel
  • FCoE — Fibre Channel over Ethernet
  • IB — InfiniBand
  • iSCSI — Internet Small Computer System Interface
  • NVMe — NVM Express
  • PCIe — PCI Express
  • SAS — Serial Attached SCSI
  • SATA — Serial ATA
  • USB — Universal Serial Bus

 

  • 10GbE — 10 Gigabit Ethernet
  • CNA — Converged Network Adapter (used with FCoE)
  • HBA — Host Bus Adapter (used with FC, iSCSI, SAS, SATA)
  • HCA — Host Channel Adapter (used with IB)
  • NIC — Network Interface Controller or Network Interface Card (used with FCoE, iSCSI)

 

  • ISL — Inter-Switch Link
  • SAN — Storage Area Network

 

  • Gb — Gigabit
  • GB — Gigabyte
  • Mb — Megabit
  • MB — Megabyte

 

  • Gb/s — Gigabits per second
  • Gbit/s — Gigabits per second
  • Gbps — Gigabits per second
  • GB/s — Gigabytes per second
  • GBps — Gigabytes per second
  • Mb/s — Megabits per second
  • Mbit/s — Megabits per second
  • Mbps — Megabits per second
  • MB/s — Megabytes per second
  • MBps — Megabytes per second

 

  • HDD — Hard Disk Drive
  • SSD — Solid State Drive
  • SSHD — Solid State Hybrid Drive

 

  • SDR — Single Data Rate (InfiniBand)
  • DDR — Double Data Rate (InfiniBand)
  • QDR — Quad Data Rate (InfiniBand)
  • FDR — Fourteen Data Rate (InfiniBand)
  • EDR — Enhanced Data Rate (InfiniBand)

Network throughput rates are generally measured in bits per second. Storage throughput rates are generally measured in bytes per second.

 


 

Storage Networking Interface Comparison Table

  Interface | Number of Devices | Maximum Distance (m) | Cable Type | Interface Device | Transfer Rate (MB/s) | Interface Attributes
  FC | 16M | 10 (copper), 10 km+ (optical) | Copper, Optical | HBA | 100, 200, 400, 800, 1600 | Dual Port
  FCoE | 16M | 10 (copper), very long (optical) | Copper, Optical | CNA, 10GbE NIC | 1150 | Dual Port
  IB | 48M | 15 (copper), very long (optical) | Copper, Optical | HCA | 1000, 2000, 4000, 7000 | Full Duplex, Dual Port
  iSCSI | Many | Ethernet cable distance | Copper, Optical | NIC, HBA | 100, 1000 | —
  SAS (passive) | 16K | 10 | Copper | Onboard, HBA | 300, 600, 1200 | Full Duplex, Dual Port
  SAS (active) | 16K | 20 | Copper | Onboard, HBA | 300, 600, 1200 | Full Duplex, Dual Port
  SAS (active) | 16K | 100 | Optical | Onboard, HBA | 300, 600, 1200 | Full Duplex, Dual Port
  SATA | 1 | 1 | Copper | Onboard, HBA | 150, 300, 600 | Half Duplex, Single Port
  USB | 127 | 5 | Copper, Wireless | Onboard, Adapter card | 0.15, 1.5, 48, 500 | Single Port

PCIe data rates are provided in the PCI Express section below.

 


 

Transfer Rate

 

Transfer rate, sometimes known as transfer speed, is the maximum rate at which data can be transferred across the interface. This is not to be confused with the transfer rate of individual devices that may be connected to this interface. Some interfaces may not be able to transfer data at the maximum possible transfer rate due to processing overhead inherent with that interface. Some interface adapters provide hardware offload to improve performance, manageability and/or reliability of the data transmission across the respective interface. The transfer rates listed are across a single port at half duplex.

 

Bits vs. Bytes and Encoding Schemes

Transfer rates for storage interfaces and devices are generally listed as MB/sec or MBps (megabytes per second), which is generally calculated as megabits per second (Mbps) divided by 10. Many of these interfaces use “8b/10b” encoding, which maps 8-bit bytes into 10-bit symbols for transmission on the wire, with the extra bits used for command and control purposes. With 8b/10b encoding, dividing the raw bit rate by ten to convert bits to bytes is exactly correct: 8b/10b imposes a 20 percent overhead, (10−8)/10, on the raw bit rate.
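To make the arithmetic concrete, here is a minimal sketch (illustrative only, not from any specification) of the general conversion: payload rate = raw bit rate × (data bits ÷ total bits), and bytes per second = payload bits per second ÷ 8. For 8b/10b, (8 ÷ 10) ÷ 8 = 1 ÷ 10, which is why dividing by ten is exact.

    # Sketch: convert a raw line rate to payload MB/s for a given encoding.
    def payload_MBps(line_rate_mbps, data_bits=8, total_bits=10):
        """Payload megabytes per second from a raw line rate in megabits per second."""
        return line_rate_mbps * (data_bits / total_bits) / 8

    # 8b/10b: (8/10)/8 == 1/10, so "divide Mb/s by 10" is exact.
    print(payload_MBps(3000))             # 3Gb/s SAS -> 300.0 MB/s
    print(payload_MBps(10312.5, 64, 66))  # 10GbE     -> 1250.0 MB/s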

 

Beginning with 10GbE and 10Gb FC (for ISLs), and continuing with the newer speeds emerging in 2010 and beyond, a newer “64b/66b” encoding scheme is being used to improve data transfer efficiency. 64b/66b is the encoding scheme for 16Gb FC and is planned for higher data rates for IB. 64b/66b encoding is not directly compatible with 8b/10b, but the technologies that implement it will be built so that they can work with the older encoding scheme. 16Gb Fibre Channel uses a line rate of 14.025 GBaud; with 64b/66b encoding this doubles the throughput of 8Gb Fibre Channel, which uses a line rate of 8.5 GBaud with 8b/10b encoding.

64b/66b encoding results in an overhead of about 3 percent, (66−64)/66, on the raw bit rate.

 

PCIe versions 1.x and 2.x use 8b/10b encoding. PCIe version 3 uses 128b/130b encoding, resulting in a 1.5 percent overhead on the raw bit rate. Additional PCIe information is provided in the PCI Express section below.

 

USB 3.1 will use 128b/132b encoding. See Roadmaps section below.

 

 

Encoding Scheme Table

  Encoding | Overhead | Applications
  8b/10b | 20% | 1GbE, FC (up to 8Gb), IB (SDR, DDR & QDR), PCIe (1.0 & 2.0), SAS, SATA, USB (up to 3.0)
  64b/66b | 3% | 10GbE, 100GbE, FC (10Gb & 16Gb), FCoE, IB (FDR & EDR)
  128b/130b | 1.5% | PCIe 3.0
  128b/132b | 3% | USB 3.1 (10 Gbps, see Roadmaps section below)

  

Fibre Channel Speed Table

  Speed | Throughput (MBps) | Line Rate (GBaud) | Encoding | Host Adapter Requirements (dual-port cards)
  1GFC | 100 | 1.0625 | 8b/10b | PCI-X
  2GFC | 200 | 2.125 | 8b/10b | PCI-X
  4GFC | 400 | 4.25 | 8b/10b | PCI-X 2.0 or PCIe 1.0 x4
  8GFC | 800 | 8.5 | 8b/10b | PCIe 1.0 x8 or PCIe 2.0 x4
  16GFC | 1600 | 14.025 | 64b/66b | PCIe 2.0 x8 or PCIe 3.0 x4
  32GFC | 3200 | 28.05 | 64b/66b | PCIe 3.0 x8
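As a rough cross-check of the table above (a sketch; the quoted MBps figures are the industry's nominal values, which round down slightly from the computed payload rates):

    # (generation, line rate in GBaud, encoding data/total bits, nominal MBps)
    fc = [("1GFC", 1.0625, 8, 10, 100),    ("2GFC", 2.125, 8, 10, 200),
          ("4GFC", 4.25, 8, 10, 400),      ("8GFC", 8.5, 8, 10, 800),
          ("16GFC", 14.025, 64, 66, 1600), ("32GFC", 28.05, 64, 66, 3200)]

    for name, gbaud, d, t, nominal in fc:
        computed = gbaud * 1000 * (d / t) / 8   # payload MB/s
        # e.g. 16GFC: 14.025 GBaud * 64/66 / 8 bits = ~1700 MB/s vs. nominal 1600
        print(f"{name}: computed {computed:.0f} MB/s, nominal {nominal} MB/s")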

  

InfiniBand Speed Table

  Rate | 1X Data Rate | 4X Data Rate | 12X Data Rate | Encoding | Host Adapter Requirements (dual-port cards)
  SDR | 2 Gb/s | 8 Gb/s | 24 Gb/s | 8b/10b | PCIe 1.0 x8
  DDR | 4 Gb/s | 16 Gb/s | 48 Gb/s | 8b/10b | PCIe 1.0 x16 or PCIe 2.0 x8
  QDR | 8 Gb/s | 32 Gb/s | 96 Gb/s | 8b/10b | PCIe 2.0 x8
  FDR-10 ¹ | 10.3125 Gb/s | 41.25 Gb/s | 123.75 Gb/s | 64b/66b | PCIe 3.0 x8
  FDR | 13.64 Gb/s | 54.55 Gb/s | 163.64 Gb/s | 64b/66b | PCIe 3.0 x8
  EDR | 25 Gb/s | 100 Gb/s | 300 Gb/s | 64b/66b | PCIe 3.0 x16

  ¹ Mellanox only

 

InfiniBand connections can be aggregated into 4x (4 lanes) and 12x (12 lanes), depending on the application and connector. QSFP and QSFP+ connectors are used for 4x connections, and CXP connectors are typically used for 12x connections. See the Connector Types section below for more details on the connector types.
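The table values follow directly from the per-lane signaling rate, the encoding efficiency and the lane count. A minimal sketch (per-lane signaling rates of 10 GBaud for QDR and 14.0625 GBaud for FDR, per the InfiniBand specifications):

    # Usable InfiniBand data rate = per-lane signaling rate x encoding efficiency x lanes
    def ib_gbps(signal_gbaud, data_bits, total_bits, lanes):
        return signal_gbaud * (data_bits / total_bits) * lanes

    for lanes in (1, 4, 12):
        print(f"QDR {lanes}X: {ib_gbps(10, 8, 10, lanes):g} Gb/s")          # 8, 32, 96
        print(f"FDR {lanes}X: {ib_gbps(14.0625, 64, 66, lanes):.2f} Gb/s")  # 13.64, 54.55, 163.64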

 


 

History

 

Products became available with the interface speeds listed during these years. Newer interface speeds are often available in switches and adapters long before they are available in storage devices and storage systems.

 

  • FC — 1Gb/s in 1997, 2Gb/s in 2001, 4Gb/s in 2005, 8Gb/s in 2008, 10Gb/s (ISL only) in 2004, 16Gb/s in 2011 (FC is generally backward compatible with the previous two generations)
  • FCoE — FC at 4Gb/s over 10Gb/s Ethernet in 2008, full 10Gb/s in 2009 (FC-BB-5 was approved in June 2009; INCITS 462-2010 was approved in Spring 2010)
  • IB — 10Gb/s in 2002, 20Gb/s in 2005, 40Gb/s in 2008, 56Gb/s in 2011
  • iSCSI — 1Gb/s in 2003, 10Gb/s in 2007 (basic 10GbE first appeared in 2002)
  • NVMe — Version 1.0 specification published in March 2011. Version 1.1 specification finalized in October 2012.
  • SAS — 3Gb/s in 2005, 6Gb/s in 2009, 12Gb/s in 2H 2013
  • SATA — 1.5Gb/s in 2003, 3Gb/s in 2005, 6Gb/s in 2010 (traditional SATA is not expected to extend beyond 6Gb/s, see Roadmaps section below)
    • SATA μSSD was introduced in August 2011 as a new implementation of SATA for embedded SSDs. These devices do not have the traditional SATA interface connector but use a single ball grid array (BGA) package that can be surface mounted directly on a system motherboard. These SATA μSSD devices are intended for mobile platforms such as tablets and ultrabooks, and consume less electric power than traditional SATA interface devices.
    • SATA Revision 3.2 has been ratified and the announcement was made in August 2013. This revision includes new SATA form factors, details for SATA Express, power management enhancements and optimizations for solid state hybrid drives (SSHDs). One of the new SATA form factors is M.2 that enables small form-factor SATA SSDs suitable for thin devices such as tablets and notebooks. M.2 was formerly known as NGFF, is defined by the PCI-SIG and supports a variety of applications including WiFi, WWAN, USB, PCIe and SATA. The v3.2 specification standardizes the SATA M.2 connector pin layout. SATA v3.2 also introduces USM Slim, which reduces the thickness of the module from 14.5mm to 9mm.
  • USB — 1.5Mb/s in 1997?, 12Mb/s in 1999?, 480Mb/s in 2001?, 5Gb/s in 2009

 

PCIe history is provided in the PCI Express section below.

 


 

Roadmaps

 

These roadmaps include the estimated calendar years in which higher speeds may become available. They are based on our industry research and are subject to change. History indicates that several of these interfaces are on a three- or four-year development cycle for the next improvement in speed, and it is reasonable to expect that pace to continue.

It should be noted that it typically takes several months after the specification is complete before products are generally available in the marketplace. Widespread adoption of those new products takes additional time, sometimes years. 

Some of the standards groups are now working on “Energy Efficient” versions of these interfaces to indicate additions to their respective standards to reduce power consumption. 

See the Connector Types section below for additional roadmap information.

  • FC
    • 32Gb/s FC (“32GFC”) — Work on the 32Gb/s FC (“32GFC”) standard, FC-PI-6, began in early 2010. In December 2013, the Fibre Channel Industry Association (FCIA) announced the completion of the FC-PI-6 specification. 32GFC products are expected to become available by 2015 or 2016. 32GFC is expected to use the 25/28G SFP+ connector technology as described in the Connector Types section below.
      • A multi-lane 128GFC interface, known as 128GFCp (parallel, four-lane), is based on the 32GFC work and has been added to the official Fibre Channel roadmap. The T11 committee has accepted this as a project known as FC-PI-6P. This specification is expected to be completed in late 2014 or early 2015, with products possibly available in 2015 or 2016. 128GFCp will probably use QSFP+ connectors and may also support CFP2 or CFP4 connectors.
      • Some vendors refer to 32GFC and 128GFCp as “Gen 6” Fibre Channel, since this version of Fibre Channel supports two different speeds, in two different configurations (serial and parallel).
    • 64Gb/s FC (“64GFC”) — Work has not yet begun in the T11 committee for developing the single-lane 64GFC specifications, but 64GFC is on the FCIA speed roadmap. Each FC revision is expected to be backwards compatible with at least the two previous generations.
    • SAN interface — FC will remain a viable SAN interface for the foreseeable future. There has been a huge investment (billions of US dollars) in FC infrastructure over the years, primarily in enterprise datacenters, and that infrastructure is likely to remain deployed for many years.
    • Disk drive interface — FC has reached end-of-life as a disk drive interface, as the disk drive and SSD manufacturers have moved to 6Gb/s and 12Gb/s SAS as the interface for enterprise drives. We expect the FC interface on 3.5-inch disk drives to live on for a while to maintain spare parts availability, given the relatively large number of 3.5-inch FC disk drives in enterprise disk subsystems. We expect to see relatively few 2.5-inch enterprise disk drives with an FC interface.
  • FCoE
    • FC-BB-6 — As of April 2013, good progress is being made on FC-BB-6 in the T11 committee. FC-BB-6 has completed letter ballot and is in comment resolution. Several hundred comments remain to be resolved but this is normal for a standard of this size. FC-BB-6 standardizes the VN to VN (VN2VN) architecture and introduces the cFCF/FDF topology. VN2VN is a way to directly connect FCoE end-nodes (Virtual N_Ports) without the requirement of FC or FCoE switches, also known as FC Forwarders (FCFs). This concept is also sometimes known as “Ethernet Only” FCoE. The cFCF/FDF topology includes traditional FCFs and FCoE Data Forwarders (FDF). An FDF acts as a layer 2 switch, similar to a network access or edge device. The term “FCoE aware bridge” has also been used to describe an FDF.
    • 40Gb/s and 100Gb/s — 40Gb/s is a year or two away, possibly in the same time period as 32Gb FC. The IEEE 802.3ba 40Gb/s and 100Gb/s Ethernet standards were ratified in June 2010. Products are expected to follow over time. It is expected that 40Gb FCoE and 100Gb FCoE based on the 2010 standards will be used initially for Inter-Switch Link (ISL) cores, thereby maintaining 10Gb FCoE as the predominant FCoE edge connection through at least 2013. It is expected that future versions of 100Gb FCoE cables and connectors will be available in 10x10 configurations and later in 4x25 configurations. See the Connector Types section below for discussion of 40Gb and 100Gb connectors.
  • IB — 100Gb/s (Enhanced Data Rate or EDR) is expected to become available by the end of 2014. EDR will use the same 25/28G technology that will be used by other interfaces such as Ethernet and Fibre Channel. See the Connector section below regarding the 25/28G technology. InfiniBand High Data Rate (HDR), supporting twice the speed of EDR, is expected in 2017.
  • iSCSI — follows Ethernet roadmap (see FCoE roadmap above).
  • NVMe — The NVMe workgroup is continuing to develop the specification and is expecting the next revision to be completed approximately mid-year 2014. Some early NVMe SSDs were expected to ship by the end of 2013, but further engineering work was needed. We expect to see the first wave of end-user NVMe SSDs to become available during 2014. The UEFI 2.4 specification contains updates for NVMe, and full BIOS support of NVMe devices is expected to appear in products beginning in 2014. 
  • SAS
    • 12 Gb/s SAS — The SAS 3 specification, that includes 12Gb/s SAS, was submitted to INCITS in Q4 2013. End-user 12Gb/s SAS products began to appear in the second half of 2013, including SAS-interface SSDs, SAS HBAs and RAID controllers. 12Gb/s SAS is required to take full advantage of a PCIe 3.0 bus.
    • 24 Gb/s SAS — Development is currently underway on the 24 Gb/s SAS specification. Early estimates suggest that 24 Gb/s SAS components will begin to appear in 2016 or 2017, but these are estimates and are subject to change. 24 Gb/s SAS is expected to be backward compatible with 12 Gb/s and 6 Gb/s SAS. 24 Gb/s SAS may use a different encoding scheme than previous versions. Drive connectors are expected in Q1 2014.
    • SCSI Express — SCSI Express provides the well-known SCSI protocol over the PCI Express (PCIe) interface, taking advantage of the low latency of PCIe. SCSI Express is designed to meet the increased performance of SSDs. SCSI Express uses SCSI over PCIe (SOP) and the PCIe Queueing Interface (PQI) to form the SOP-PQI protocol. SCSI Express controllers connect to devices via the Express Bay SFF-8639 multifunction connector, which supports multiple protocols and interfaces, such as PCIe, SAS and SATA. SCSI Express handles PCIe devices that use up to x4 lanes. The SCSI Express specification may be published in late 2013 or early 2014. SCSI Express devices and controllers are estimated to become available in 2H 2014. See the Express Bay Connector Backplane diagram below.
    • SAS Advanced Connectivity — New SAS cabling options are offering longer distances by using active copper (powered signal) and optical cables. The Mini SAS HD connector can be used for 6 Gb/s SAS and will be used for 12 Gb/s SAS connections. See the Connector Types section below for discussion of Mini SAS and Mini SAS HD connectors.
  • SATA
    • SATA Express is included in SATA Revision 3.2 (see the History section above). SATA Express enables client SATA and PCI Express (PCIe) solutions to coexist. SATA Express will support increased interface speeds up to 2 lanes of PCIe (2GB/s for PCIe 3.0, 1GB/s for PCIe 2.0), compared with current 0.6GB/s (6Gb/s) SATA technology. These increased speeds are suitable for SSD and SSHD technology, while traditional HDD technology can continue to use today’s SATA interface. The SATA Express device connector pins are multiplexed, meaning that only PCIe or SATA (but not both) can be active for a device at one time. A separate signal, driven by the device, tells the host whether the device is SATA or PCIe. SATA Express products may become available in 2014 or 2015. See the SATA Express Connector Mating Matrix below.
  • USB
    • USB data rates — The USB 3.0 Promoter Group announced at the end of July 2013 that the USB 3.1 specification had been completed. USB 3.1 enables USB to operate at 10 Gbps and is backward compatible with existing USB 3.0 and 2.0 hubs and devices. USB 3.1 uses a 128b/132b encoding scheme, with four bits used for command and control of the protocol and the cable management (a payload-rate sketch follows this list). The first public prototype demonstration of USB running at 10 Gbps was made in September 2013. USB 3.1 is expected to support USB Power Delivery (described below). SuperSpeed USB and the xHCI controller were designed to scale to data rates of 25Gb/s and beyond in the future.
    • USB Power Delivery — USB is becoming a power delivery interface, with an increasing number of devices charging or receiving power via USB ports in computers or wall sockets and power strips. The USB Power Delivery (PD) Specification, version 1.0, was introduced in July 2012 to allow an increased amount of power to be carried via USB. This specification proposes to raise the limit from 7.5 watts up to 100 watts of power, depending on cable and connector types. Devices negotiate with each other to determine voltage and current levels for the power transmission, and power can flow in either direction. Devices can adjust their power charging rates while transmitting data. Prototypes began to appear in late 2013. USB PD is expected to be included with the USB 3.1 specification (see above).
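As a quick sketch of what these encoding schemes mean for USB payload rates (illustrative arithmetic only):

    # USB payload rate = raw rate x encoding efficiency / 8 bits per byte
    def usb_MBps(raw_gbps, data_bits, total_bits):
        return raw_gbps * 1000 * (data_bits / total_bits) / 8

    print(usb_MBps(5, 8, 10))      # USB 3.0 (8b/10b):    500.0 MB/s
    print(usb_MBps(10, 128, 132))  # USB 3.1 (128b/132b): ~1212 MB/s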

 

SAS-SATA Connector Compatibility

[Diagram: SAS-SATA connector compatibility. Source: SCSI Trade Association]

 

Express Bay Connector Backplane

[Diagram: Express Bay connector backplane. Source: SCSI Trade Association]

 

SATA Express Connector Mating Matrix

[Diagram: SATA Express connector mating matrix. Source: SATA-IO]

 


 

Cables: Fiber Optics and Copper

 

 

As interface speeds increase, expect increased usage of fiber optic cables and connectors for most interfaces. At higher Gigabit speeds (10Gb+), copper cables and interconnects generally have too much amplitude loss except for short distances, such as within a rack or to a nearby rack. This amplitude loss is sometimes called a poor signal-to-noise ratio or simply “too noisy”.

 

Single-mode fiber vs. Multi-mode fiber

There are two general types of fiber optic cables available: single-mode fiber and multi-mode fiber.

  • Single-mode fiber (SMF), typically with an optical core of approximately 9 µm (microns), has lower modal dispersion than multi-mode fiber and can support distances of 80-100 km (kilometers) or more, depending on transmission speed, transceivers and the buffer credits allocated in the switches.
  • Multi-mode fiber (MMF), with optical core of either 50 µm or 62.5 µm, supports distances up to 600 meters, depending on transmission speeds and transceivers.

Meter-for-meter, single-mode and multi-mode cables are similarly priced. However, some of the other components used in single-mode links are more expensive than their multi-mode equivalents.

 

 

When planning datacenter cabling requirements, be sure to consider that a service life of 15 to 20 years can be expected for fiber optic cabling, so the choices made today need to support legacy, current and emerging data rates. Also note that deploying large amounts of new cable in a datacenter can be labor-intensive, especially in existing environments.

 

There are different designations for fiber optic cables depending on the bandwidth supported.

  • Multi-mode: OM1, OM2, OM3, OM4
  • Single-mode: OS1 (there is a proposed OS2 standard)

 

OM3 and OM4 are newer multi-mode cables that are “laser optimized” (LOMMF) and support 10 Gigabit Ethernet applications. OM3 and OM4 cables are also the only multi-mode fibers included in the IEEE 802.3ba 40G/100G Ethernet standard that was ratified in June 2010. The 40G and 100G speeds are currently achieved by bundling multiple channels together in parallel with special multi-channel (or multi-lane) connector types. This standard defines an expected operating range of up to 100m for OM3 and up to 150m for OM4 for 40 Gigabit Ethernet and 100 Gigabit Ethernet. These are estimates of distance only and supported distances may differ when 40GbE and 100GbE products become available in the coming years. See the Connector Types section below for additional detail. OM4 cabling is expected to support 32GFC up to 100 meters.

 

Newer multi-mode OM2, OM3 and OM4 (50 µm) and single-mode OS1 (9 µm) fiber optic cables have been introduced that can handle tight corners and turns. These are known as “bend optimized,” “bend insensitive,” or have “enhanced bend performance.” These fiber optic cables can have a very small turn or bend radius with minimal signal loss or “bending loss.” The term “bend optimized” multi-mode fiber (BOMMF) is sometimes used.

 

OS1 and OS2 single-mode fiber optics are used for long distances, up to 10,000m (6.2 miles) with the standard transceivers and have been known to work at much longer distances with special transceivers and switching infrastructure.

 

Each of the multi-mode and single-mode fiber optic cable types includes two wavelengths. The higher wavelengths are used for longer-distance connections.

Update: 24 April 2012 — The Telecommunications Industry Association (TIA) Engineering Committee TR-42 Telecommunications Cabling Systems has approved the publication of TIA-942-A, the revised Telecommunications Infrastructure Standard for Data Centers. A number of changes were made to update the specification with respect to higher transmission speeds, energy efficiency and harmonizing with international standards. For backbone and horizontal cabling and connectors, the following are some of the important updates:

  • Copper cabling — Cat 6 is the minimum requirement, Cat 6a recommended
  • Fiber optic cabling — OM3 is the minimum requirement, OM4 is recommended
  • Fiber optic connectors — LC is the standard for one or two fiber connectors

 

10Gb Ethernet Fiber-Optic Cables

  • 10GBASE-SR — Currently, the most common type of fiber-optic 10GbE cable is the 10GBASE-SR cable that supports an SFP+ connector with an optical transceiver rated for 10Gb transmission speed. These are also known as “short reach” fiber-optic cables.
  • 10GBASE-LR — These are the “long reach” fiber optic cables that support single-mode fiber optic cables and connectors.

 

 

Indoor vs. Outdoor cabling

Indoor fiber-optic cables are suitable for indoor building applications. Outdoor cables, also known as outside plant (OSP) cables, are suitable for outdoor applications and are resistant to water (liquid and frozen) and ultraviolet light. Indoor/outdoor cables provide the protections of outdoor cables with a fire-retardant jacket that allows these cables to be deployed inside the building entrance beyond the OSP maximum distance, which can reduce the number of transition splices and connections needed.

 

 

Fiber Optic Cable Characteristics

  Type | Mode | Core Diameter | Wavelengths | Modal Bandwidth (MHz·km) | Cable Jacket Color
  OM1 | multi-mode | 62.5 µm | 850 nm, 1300 nm | 200 | Orange
  OM2 | multi-mode | 50 µm | 850 nm, 1300 nm | 500 | Orange
  OM3 | multi-mode | 50 µm | 850 nm, 1300 nm | 2000 | Aqua
  OM4 | multi-mode | 50 µm | 850 nm, 1300 nm | 4700 | Aqua
  OS1 | single-mode | 9 µm | 1310 nm, 1550 nm | — | Yellow

  

Fiber Optic Cable by Distance and Speed

  Speed | OM1 | OM2 | OM3 | OM4
  1 Gb/s | 300m | 500m | 860m | —
  2 Gb/s | 150m | 300m | 500m | —
  4 Gb/s | 70m | 150m | 380m | 400m
  8 Gb/s | 21m | 50m | 150m | 190m
  10 Gb/s | 33m | 82m | up to 300m | up to 400m ¹
  16 Gb/s | 15m ¹ | 35m ¹ | 100m ¹ | 125m ¹

 ¹ OM1 cable is not recommended for 16Gb/s FC, but is expected to operate up to 15m. 


Distances supported in actual configurations are generally less than the distance supported by the raw fiber optic cable. The distances shown above are for 850 nm wavelength multi-mode cables. The 1300 nm wavelength multi-mode cables can support longer distances.

 

Active Copper vs. Passive Copper

Passive copper connections are common with many interfaces. The industry is finding that as the transfer rates increase, passive copper does not provide the distance needed and takes up too much physical space. The industry is moving towards an active copper type of interface for higher speed connections, such as 6Gb/s SAS. Active copper connections include components that boost the signal, reduce the noise and work with smaller-gauge cables, improving signal distance, cable flexibility and airflow. These active copper components are expected to be less expensive and consume less electric power than the equivalent components used with fiber optic cables.

 

Copper: 10GBASE-T and 1000BASE-T

1000BASE-T cabling is commonly used for 1Gb Ethernet traffic in general, and 1Gb iSCSI for storage connections. This is the familiar four pair copper cable with the RJ45 connectors. Cables used for 1000BASE-T are known as Cat5e (Category 5 enhanced) or Cat6 (Category 6) cables.

 

10GBASE-T cabling supports 10Gb Ethernet traffic, including 10Gb iSCSI storage traffic. The cables and connectors are similar to, but not the same as the cables used for 1000BASE-T. 10GBASE-T cables are Cat6a (Category 6 augmented), also known as Class EA cables. These support the higher frequencies required for 10Gb transmission up to 100 meters (330 feet). Cables must be certified to at least 500MHz to ensure 10GBASE-T compliance. Cat7 (Category 7, Class F) cable is also certified for 10GBASE-T compliance, and is typically deployed in Europe. Cat6 cables may work in 10GBASE-T deployments up to 55m, but should be tested first. 10GBASE-T cabling is not expected to be deployed for FCoE applications in the near future. Some newer 10GbE switches support 10GBASE-T (RJ45) connectors.

 

10GBASE-CR — Currently, the most common type of copper 10GbE cable is the 10GBASE-CR cable that uses an attached SFP+ connector, also known as a Direct Attach Copper (DAC). This fits into the same form factor connector and housing as the fiber optic cables with SFP+ connectors. Many 10GbE switches accept cables with SFP+ connectors, which support both copper and fiber optic cables. These cables are available in 1m, 3m, 5m, 7m, 8.5m and longer distances. The most commonly deployed distances are 3m and 5m.

 

10GBASE-CX4 — These cables are older and not very common. This type of cable and connector is similar to cables used for InfiniBand technology.

 


 

Connector Types

  

Several types of connectors are available with cables used for storage interfaces. This is not an exhaustive list but is intended to show the more common types. Each of the connector types includes the number of lanes (or channels) and the rated speed.

 

As of early 2011, the fastest generally available connector speeds supported were 10 Gbps per lane. Significantly higher speeds are currently achieved by bundling multiple lanes in parallel, such as 4x10 (40 Gbps), 10x10 (100 Gbps), 12x10 (120 Gbps), etc. Most of the current implementations of 40GbE and 100GbE use multiple lanes of 10GbE and are considered “channel bonded” solutions.

 

14 Gbps per lane connectors appeared in the last half of 2011. These connectors support 16Gb Fibre Channel (single-lane) and 56Gb (FDR) InfiniBand (multi-lane).

 

25 Gbps per lane connectors are expected to become available in 2012 or 2013. Once 25 Gbps per lane connectors are available, higher speeds such as 100 Gbps can be achieved by bundling four of these lanes together. Other bundles of multiple 25 Gbps lanes may be possible, such as 10x25 (250 Gbps), 12x25 (300 Gbps) or 16x25 (400 Gbps). It is expected that the 25 Gbps (actually 28 Gbps) connectors will support 32Gb Fibre Channel in single-lane configurations and higher speeds for Ethernet and InfiniBand in multi-lane configurations.

 

In calendar Q1 2012, several fiber-optic connector manufacturers demonstrated working prototypes of the “25/28G” connectors. These connectors support speeds up to 28 Gbps per lane and will be used for 100 Gbps Ethernet (100GbE) in a 4x25 configuration. These connector technologies will also be used for other high-speed applications such as the next higher speeds of Fibre Channel (32GFC) and InfiniBand. End-user products with these higher speed technologies are estimated to become available in 2013 or 2014.
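The lane-bundling arithmetic is simple multiplication; a trivial sketch (the configuration labels are common industry shorthand, not formal standard names):

    # Aggregate bandwidth = lanes x per-lane rate (Gbps)
    bundles = {"4x10 (40GbE, QSFP+)": (4, 10), "10x10 (100GbE, CFP/CXP)": (10, 10),
               "12x10 (CXP)": (12, 10),        "4x25 (100GbE)": (4, 25),
               "12x25": (12, 25),              "16x25": (16, 25)}

    for name, (lanes, per_lane) in bundles.items():
        print(f"{name}: {lanes * per_lane} Gb/s")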

 

Two of the popular fiber-optic cable connectors are SFP+ and QSFP+ (see diagrams below). SFP+ is used for single-lane high-speed connections, and QSFP+ is used for four-lane high-speed connections. Many in the industry use the four-lane (“quad”) interface to provide increased bandwidth. Currently, the single-lane SFP+ is used for 10Gb Ethernet and 8Gb and 16Gb Fibre Channel. The four-lane QSFP+ is used for 40Gb Ethernet and 40Gb (QDR) and 56Gb (FDR) InfiniBand. The Fibre Channel technical committee is now officially discussing a single-lane and four-lane (“quad”) solution with the 32GFC technology (4x32) for a 128 Gb/sec connection. See the Roadmaps section above. 

 

SFP+ QSFP+ Connector/Interface Table

  Interface | SFP | SFP+ | QSFP+
  Ethernet | 1GbE | 10GbE | 40GbE
  Fibre Channel | 1GFC, 2GFC, 4GFC | 8GFC, 16GFC | —
  InfiniBand | — | — | QDR, FDR

See the encoding scheme discussion above for additional detail on the speeds available for various connector and cable combinations.

 

Connector Table

  Name | Type | Lanes | Max. Speed per Lane (Gbps) | Max. Speed Total (Gbps) | Cable Type | Usage
  Mini SAS | SAS | 4 | 6 | 24 | Copper | 3Gb, 6Gb SAS
  Mini SAS HD | SAS | 4, 8 | 12 | 48, 96 | Copper | 6Gb, 12Gb SAS
  Copper CX4 | CX4 | 4 | 5 | 20 | Copper | 10Gb Ethernet; SDR and DDR InfiniBand
  Small Form-factor Pluggable | SFP | 1 | 4 | 4 | Copper, Optical | 1Gb Ethernet; 1, 2, 4Gb Fibre Channel
  Small Form-factor Pluggable enhanced | SFP+ | 1 | 16 | 16 | Copper, Optical | 10Gb Ethernet; 8Gb & 16Gb Fibre Channel; 10Gb FCoE
  Quad Small Form-factor Pluggable | QSFP | 4 | 5 | 20 | Copper, Optical | Various
  Quad Small Form-factor Pluggable enhanced | QSFP+ | 4 | 16 | 64 | Copper, Optical | 40Gb Ethernet; DDR, QDR & FDR InfiniBand; 64Gb Fibre Channel
  CXP | CXP | 10, 12 | 10 | 100, 120 | Copper | 100Gb Ethernet; 120Gb other
  CFP | CFP | 10 | 10 | 100 | Optical | 100Gb Ethernet

PCIe data rates and connector types are provided in the PCI Express section.

InfiniBand Data Rates

SDR: Single Data Rate, DDR: Double Data Rate, QDR: Quad Data Rate, FDR: Fourteen Data Rate, EDR: Enhanced Data Rate

  

Connector Diagrams

[Diagrams: Mini SAS, Mini SAS HD, Copper CX4, SFP/SFP+, and QSFP/QSFP+ connectors]

PCIe connector types are provided in the PCI Express section.

 

Mini SFP

[Photo: mini-SFP vs. standard SFP connector. Source: Demartek]

In the second half of 2010, a new variant of the SFP/SFP+ connector was introduced to accommodate the Fibre Channel backbone with 64-port blades and the planned increased density Ethernet core switches. This new connector, known as mSFP, mini-SFP or mini-LC SFP, narrows the optical centerline of a conventional SFP/SFP+ connector from 6.25 mm to 5.25 mm. Although this connector looks very much like a standard SFP style connector, it is narrower and is required for the higher-density devices. The photo at the right shows the difference between mini-SFP and the standard size.

 

CXP and CFP

The CXP (copper) and CFP (optical) connectors are expected to be used initially for switch-to-switch connections, primarily for Ethernet, and may also be used for InfiniBand. CFP connectors currently support 10 lanes of 10 Gbps connections (10x10) and consume approximately 35-40 watts. CFP2 is a single-board, smaller version of CFP that also supports 10x10 but uses less power than CFP. During 2013, much of the development activity focused on CFP2. A future CFP4 connector is in the planning stages; it is expected to use the 25/28G technology, support 4x25, and handle long-range fiber optic distances.

 

Mini SAS and Mini SAS HD

The Mini SAS connector is the familiar 4-lane connector available on most SAS cables today. The Mini SAS HD connector provides twice the density of the Mini SAS connector, and is available in 4-lane and 8-lane configurations. The Mini SAS HD connector is the same connector for passive copper, active copper and optical SAS cables. The diagrams below compare these two types of SAS connectors.


[Diagram: Mini SAS HD receptacle comparison. Source: SCSI Trade Association]

 


 

PCI Express (PCIe) 

 

PCI Express, also known as PCIe, stands for Peripheral Component Interconnect Express and is the computer industry's standard I/O bus for systems introduced in the last few years. The first version of the PCIe specification, 1.0a, was introduced in 2003. Version 2.0 was introduced in 2007 and version 3.0 was introduced in 2010. These versions are often identified by their generation (“gen 1”, “gen 2”, etc.). It can take a year or two between the time a specification version is introduced and general availability of computer systems and devices using that version. The PCIe specifications are developed and maintained by the PCI-SIG (PCI Special Interest Group). PCI Express and PCIe are registered trademarks of the PCI-SIG.

 

Data rates for different versions of PCIe are shown in the table below. PCIe data rates are expressed in Gigatransfers per second (GT/s) and are a function of the number of lanes in the connection. The number of lanes is expressed with an “x” before the number of lanes, and is often spoken as “by 1”, “by 4”, etc. PCIe supports full-duplex (traffic in both directions). The data rates shown below are in each direction. Note the explanation of encoding schemes described above.

 

PCIe Data Rate Table

  Version | GT/s | Encoding | x1 | x2 | x4 | x8 | x16
  PCIe 1.x | 2.5 | 8b/10b | 250 MB/s | 500 MB/s | 1 GB/s | 2 GB/s | 4 GB/s
  PCIe 2.x | 5 | 8b/10b | 500 MB/s | 1 GB/s | 2 GB/s | 4 GB/s | 8 GB/s
  PCIe 3.x | 8 | 128b/130b | 1 GB/s | 2 GB/s | 4 GB/s | 8 GB/s | 16 GB/s
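The table values can be reproduced from the transfer rate, encoding and lane count (a sketch; PCIe 3.0 x16 computes to about 15.75 GB/s, which the table rounds to 16 GB/s):

    # Per-direction PCIe bandwidth = GT/s x encoding efficiency x lanes / 8 bits per byte
    def pcie_GBps(gt_s, data_bits, total_bits, lanes):
        return gt_s * (data_bits / total_bits) * lanes / 8

    print(pcie_GBps(2.5, 8, 10, 1))      # PCIe 1.x x1  -> 0.25 GB/s
    print(pcie_GBps(5.0, 8, 10, 8))      # PCIe 2.x x8  -> 4.0 GB/s
    print(pcie_GBps(8.0, 128, 130, 16))  # PCIe 3.x x16 -> ~15.75 GB/s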

 

Efforts are underway to enable SATA and SAS to be carried over PCIe connections. See the Roadmaps section above.

 

Mini-PCIe — PCI Express cards are also available in a mini PCIe form factor. This is a special form factor for PCIe that is approximately 30mm x 51mm or 30mm x 26.5mm, designed for laptop and notebook computers, and equivalent to a single-lane (x1) PCIe slot. A variety of devices including WiFi modules, WAN modules, video/audio decoders, SSDs and other devices are available in this form factor.

 

PCIe 2.0 — Servers that have PCIe 2.0 x8 slots can support two ports of 10GbE or two ports of 16GFC on one adapter.

 

PCIe 3.0 — On 6 March 2012, the major server vendors announced their next generation servers that support PCIe 3.0, which, among other things, doubles the I/O throughput rate from the previous generation. These servers also provide up to 40 PCIe 3.0 lanes per processor socket, which is also at least double from the previous server generation. Workstation and desktop computer motherboards that support PCIe 3.0 first appeared in late 2011. PCIe 3.0 graphics cards appeared in late 2011. We expect to see networking and storage I/O adapters that are PCIe 3.0 capable in 2012 and in 2013.

 

PCIe 4.0 — In November 2011, the PCI-SIG announced the approval of 16 gigatransfers per second as the bit rate for the next generation of PCIe architecture, known as PCIe 4.0. After technical analysis, it was determined that 16 GT/s can be manufactured and deployed with known technologies. Several other aspects of the 4.0 specification are yet to be decided. The final PCIe 4.0 specifications are expected to be available in late 2015. PCIe 4.0 is expected to be backward compatible with PCIe 1.x, 2.x and 3.x.

 

I/O Virtualization — In 2008, the PCI-SIG announced the completion of its I/O Virtualization (IOV) suite of specifications including single-root IOV (SR-IOV) and multi-root IOV (MR-IOV). These technologies can work with system virtualization technologies and can allow multiple operating systems to natively share PCIe devices. SR-IOV is currently supported with several 10GbE NICs and hypervisors. 

 

The concept of sharing PCIe devices or providing access to PCIe devices that may be physically larger than some smaller form-factor systems can accommodate has led to the development of external connections to some PCIe devices. Cables have been developed for extending the PCIe bus outside of the chassis holding the PCIe slots. These cables are specified by indicating the number of PCIe lanes (x4, x8, etc.) supported. Cables are typically available for x4, x8 and x16 lane configurations. Common cable lengths are 1m and 3m. The photo below shows some PCIe cables and connectors. PCIe can also be carried over fiber-optic cables for longer distances.

 

[Photo: PCIe cables and connectors]