Ethernet: Example of a Gargantuan Appetite for Data Centers

December 10th, 2018

By Lynnette Reese, Editor-in-Chief, Embedded Systems Engineering

Ethernet technology is stretching to meet the growing demands of data centers for a data-centric society.

Data centers are reaching phenomenal sizes. A 3.5 million square foot data center in Las Vegas, Nevada “provides fiber optic-speed information retrieval to over 50 million customers in the USA,” according to Gigabit Magazine.[i] Another 6.3 million square foot facility in Langfang, China covers an area similar in size to the U.S. Pentagon. The Kolos Data Center in Norway is projected to be around 6.46 million square feet and run entirely from renewable energy sources.[ii]

The era of personal computers gave way to the age of the internet, then mobile devices. Now, a massive amount of data is flowing through data centers, driven by growth in mobile applications and content, the ubiquitous use of smartphones in everyday life, Artificial Intelligence (AI), and the dream of autonomous vehicles. In this new data economy, wireless 5G is a hot topic, but data centers are just as critical as “the last mile” that 5G is expected to deliver.

IHS Markit’s latest report tracking the Ethernet switching market forecasts a low- to mid-single-digit increase in demand from 2019 to 2022. The 10 Gigabit Ethernet (10GbE), 25GbE, 100GbE, 200GbE, and 400GbE segments are projected to show exceptional growth over the next few years. As of June 2018, the Ethernet switching market had grown 12 percent over the previous year, mainly due to increasing data demands and subsequent infrastructure upgrades. “The market enjoyed its strongest growth in seven years in 2017, and the momentum continued into 2018, fueled by continuing data center upgrades and expansion, as well as growing demand for campus gear due to improving economic conditions. The transition to 25/100GE architectures in the data center is in full swing, driving strong gains in 25GE, 100GE and white box shipments,” per IHS Markit.[iii]

Figure 1: Total Ethernet revenue per switch by quarter. (Source: IHS Markit)

Although 100GbE networks are in wide use today, the next big step for data centers in accommodating an ever-increasing need for bandwidth is physical layer connectivity at an Ethernet speed of 400Gb/s. The 200Gb/s and 400Gb/s Ethernet (IEEE P802.3bs) standard was approved and fully ratified by the IEEE-SA Standards Board in late 2017 after about four and a half years of effort.[iv] Whereas 200GbE doubles the speed of 100GbE, 400GbE quadruples it while also offering denser configurations and proportionate cost savings per unit of throughput.

Standards are created by industry players working together toward the best common solution, although some Multi-Source Agreements (MSAs) have been formed by consortiums of companies. These companies, anticipating market demand, did not wait for the official standard and launched proprietary transceivers ahead of the ratified specification. Although the MSA companies largely followed IEEE specifications in the design and construction of the transceivers they manufactured, they are not part of the IEEE. Some of the work done by MSA companies did, however, make its way into the IEEE specifications.[v]

Manufacturers are aggressively pursuing 400GbE transceivers, optical modules, and Ethernet switches. Technology at 100GbE alone has several architectures. For instance, “for 100G connectivity, there are currently 18 optical physical media dependent (PMD) architectures—either standardized by IEEE or under various multi-source agreements amongst a select group of companies,” according to Nexans’ 400 Gb/s Landscape Potential Interfaces and Architectures. The report also states that the trend toward proprietary MSA devices is continuing at 400GbE, with various architectures that are not fully compliant with the IEEE standard.[vi] The demand for faster, bigger pipes for the era of big data arose sooner than the official IEEE 400GbE standard, and the market has responded.

400GbE Means Multiplexing
To achieve speeds higher than 25Gb/s for LAN applications, optical signals must be multiplexed. The IEEE standard also specifies clock and data recovery (CDR) retiming in its criteria for physical media access. Multiplexing lower-speed signals to achieve higher throughput is accomplished with Wavelength Division Multiplexing (WDM), which transmits multiple wavelengths on a single fiber, and by aggregating additional fibers through parallel optics. 400GbE is much more complicated and challenges data center network managers in deciding between multimode and singlemode fiber options, among other choices. Again, per Nexans, “For 40G and 100G (except for 100GBASE-SR10) data rates, the signaling rate and its encoding scheme was the same for all its respective PMD architectures, and the only variables were number of WDM channels and fibers per connector. At 400GbE, the landscape consists of at least four different signaling rates with both NRZ and PAM-4 encoding schemes; up to eight lanes of WDM; and up to 32 fibers per connector…in some cases, a combination of more than one multiplexing scheme is required to achieve 400Gb/s.”
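To make the lane arithmetic behind these multiplexing schemes concrete, here is a minimal sketch (hypothetical Python, not drawn from the IEEE text): the aggregate rate is simply lanes multiplied by symbol rate multiplied by bits per symbol, where NRZ carries one bit per symbol and PAM-4 carries two. The symbol rates shown are rounded nominal values that ignore FEC and coding overhead.

def aggregate_rate_gbps(lanes: int, symbol_rate_gbd: float, bits_per_symbol: int) -> float:
    """Nominal aggregate data rate in Gb/s: lanes x symbol rate x bits per symbol."""
    return lanes * symbol_rate_gbd * bits_per_symbol

# NRZ carries 1 bit per symbol; PAM-4 carries 2 bits per symbol.
# Symbol rates below are rounded nominal figures; real PMDs signal slightly
# faster to carry FEC/coding overhead.
examples = {
    "16 lanes x ~25 GBd NRZ   (400GBASE-SR16)": aggregate_rate_gbps(16, 25.0, 1),
    "8 lanes x ~25 GBd PAM-4  (400GBASE-FR8)":  aggregate_rate_gbps(8, 25.0, 2),
    "4 lanes x ~50 GBd PAM-4  (400GBASE-DR4)":  aggregate_rate_gbps(4, 50.0, 2),
}

for name, rate in examples.items():
    print(f"{name}: ~{rate:.0f} Gb/s")  # each combination lands at roughly 400 Gb/s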

The 400GbE transceiver can have as many as eight channels and uses the PAM-4 encoding scheme to reach 50Gb/s per channel. Several 400GbE interface standards for transceivers have been released, including 400GBASE-SR16, 400GBASE-FR8, 400GBASE-LR8, and 400GBASE-DR4. Organized by distance covered (a quick-reference sketch follows this list):

100 m: The 400GBASE-SR16 interface standard has 32 fibers (16 transmit and 16 receive) carrying 25Gb/s each over a range of at least 100 m. But at 32 fibers, the cabling needed to support 400Gb/s gets unwieldy.

500 m: Extending for at least 500 meters, the 400GBASE-DR4 standard runs over single-mode fiber. It delivers a 4x100Gb/s PMD built with PAM-4 modulation on four parallel fibers running 100Gb/s in each direction. Operating at 400Gb/s is therefore possible with just eight fibers, a fiber count already in common use for parallel 40G and 100G links.

2 km: The 400GBASE-FR8 specification employs 8x50G PAM-4 WDM for distances of at least two km with a single-mode fiber in each direction. The eight output signals are multiplexed onto one fiber transmitting at 400Gb/s; the receiver then de-multiplexes the signal back onto eight optical channels at 50Gb/s each.

Figure 2: Block diagram for a 400Gb/s singlemode optical PMD architecture. (Source: nexansdatacenter.com)

10 km: The 400GBASE-LR8 specification is similar to 400GBASE-FR8 above but reaches a longer distance of at least 10 km over single-mode fiber.
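For quick reference, the sketch below (again hypothetical Python, compiled only from the list above) collects the nominal reach, fiber type, fiber count, and lane structure of these four PMDs.

# Quick-reference summary of the 400GbE PMDs discussed above; all figures
# are nominal values taken from the list in this article.
PMDS = [
    # (name,           reach,   fiber type,   fibers, lanes,      modulation)
    ("400GBASE-SR16", "100 m", "multimode",   32, "16 x 25G", "NRZ"),
    ("400GBASE-DR4",  "500 m", "single-mode",  8, "4 x 100G", "PAM-4"),
    ("400GBASE-FR8",  "2 km",  "single-mode",  2, "8 x 50G",  "PAM-4 WDM"),
    ("400GBASE-LR8",  "10 km", "single-mode",  2, "8 x 50G",  "PAM-4 WDM"),
]

for name, reach, fiber, count, lanes, mod in PMDS:
    print(f"{name:<14} reach >= {reach:<6} {fiber:<12} fibers: {count:<3} lanes: {lanes:<9} {mod}")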


Lynnette Reese is Editor-in-Chief, Embedded Intel Solutions and Embedded Systems Engineering, and has been working in various roles as an electrical engineer for over two decades. She is interested in open source software and hardware, the maker movement, and in increasing the number of women working in STEM so she has a greater chance of talking about something other than football at the water cooler.

[i] https://www.gigabitmagazine.com/top10/top-10-biggest-data-centres-world

[ii] http://kolos.com/

[iii] https://technology.ihs.com/550593

[iv] http://www.ieee802.org/3/400GSG/email/msg01519.html

[v] https://www.cablinginstall.com/articles/print/volume-26/issue-6/features/technology/multimode-and-singlemode-cabling-options-for-data-centers.html

[vi] https://nexansdatacenter.com/wp-content/uploads/2018/06/400G-Ethernet-Landscape-WP_FINAL.pdf