By David Gross
Intel, Google, Verizon, and a group of other companies recently announced that they were forming a research initiative to promote the advancement of Terabit Ethernet. With 100 Gigabit having just been standardized, and 120 Gigabit InfiniBand on its way, this seems like a logical step. But it's not a linear one.
The 40 Gigabit port types coming to market now, like 40GBASE-SR4 and 40GBASE-CR4, combine four 10 Gigabit lanes to get to 40 Gigs. Part of the problem is that no one has figured out how to do serial transmission above 10 Gigabit without creating a switch port that costs the same as a house in Silicon Valley.
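The lane math is straightforward, as a rough sketch shows. The lane counts below reflect the 802.3ba-era port structures; the script itself is only illustrative:

```python
# Aggregate rate = number of lanes x per-lane rate for 802.3ba-era port types.
PORT_LANES = {
    "40GBASE-SR4":   (4, 10),    # 4 parallel 10G optical lanes
    "40GBASE-CR4":   (4, 10),    # 4 parallel 10G copper lanes
    "100GBASE-SR10": (10, 10),   # 10 parallel 10G optical lanes
    "100GBASE-LR4":  (4, 25),    # 4 x 25G wavelengths over single-mode fiber
}

for port, (lanes, rate_gbps) in PORT_LANES.items():
    print(f"{port}: {lanes} lanes x {rate_gbps}G = {lanes * rate_gbps}G aggregate")
```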
Serial transmission above 10 Gigabit exists, including in a few OC-768 ports in AT&T's network, so it's not a technology mystery. But serial transmission above 10 Gigabit that does not require expensive SerDes, dispersion compensation, and optical components is an economic problem no one has solved yet. It is quite possible that 10 Gigabit will be the ceiling for serial transmission, just as a few thousand baud was the economic ceiling for dial modem symbol rates, which forced modems to pack multiple bits into each symbol to reach 14.4 kbps, 28.8 kbps, and beyond.
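To make the modem analogy concrete: bit rate is simply symbol rate times bits per symbol. The figures below are standard V.32bis and V.34 values, used here only as an illustration:

```python
# Dial modems raised bit rates by packing more bits into each symbol,
# not by pushing the symbol (baud) rate much beyond a few thousand per second.
def bit_rate(symbols_per_sec, data_bits_per_symbol):
    return symbols_per_sec * data_bits_per_symbol

print(bit_rate(2_400, 6))   # V.32bis: 2,400 baud x 6 bits = 14,400 bps
print(bit_rate(3_200, 9))   # V.34:    3,200 baud x 9 bits = 28,800 bps
```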
Even at short reaches, some kind of 1000TBASE-SR100 standard, with 100 channels of 10G, would be a cabling and multiplexing disaster. Ultimately, just as modems had to start using QPSK and QAM to mux up to higher rates, some kind of new modulation is likely to be needed, in addition to CWDM or DWDM, to get up to Terabit. But the first step to Terabit is not to do 10x 100 Gigabit Ethernet, but to figure out how to deliver a single serial channel running at 25 billion bits per second without asking customers to spend millions.
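One back-of-the-envelope way the Terabit arithmetic could work out, purely as a sketch, since the wavelength count, symbol rate, and modulation below are hypothetical choices rather than anything the standards bodies have settled on:

```python
# One of many possible combinations that reaches 1 Terabit without 100 lanes.
wavelengths       = 10    # CWDM/DWDM channels
symbol_rate_gbaud = 25    # per-wavelength symbol rate
bits_per_symbol   = 2     # e.g. QPSK
polarizations     = 2     # dual-polarization doubles the payload

total_gbps = wavelengths * symbol_rate_gbaud * bits_per_symbol * polarizations
print(f"{total_gbps} Gbps")   # 1000 Gbps, without a 100-channel cabling mess
```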