By David Gross
ZDNet has a good article out on the decision to upgrade to 40G. But for all the talk and excitement over the new standard, the reality is that line rate is not as important a measure of network capacity as it used to be, especially now that multi-gigabit port prices have more to do with transceiver reach, cabling options, and port densities than with framing protocol. 10GBASE-SR, the 850nm multimode 10 Gigabit Ethernet standard, for example, has a lot more in common with 40GBASE-SR4 than it does with 10GBASE-EW, the 1550nm singlemode WAN PHY standard. Moreover, a 40 Gigabit port can cost as little as $350 if it's configured on a high-density Voltaire InfiniBand switch, but over $600,000 if it's configured to run Packet-over-SONET on a Cisco CRS.
Within the data center, which line rate you choose increasingly matters less than which transceiver you choose, what type of cabling, whether to aggregate at end-of-row or top-of-rack, and so forth. Moreover, as I wrote a few weeks ago, the most power-efficient data center network is often not the most capital-efficient. So rather than considering when 40 Gigabit Ethernet upgrades will occur, I think it's more important to monitor what's happening with average link lengths, the ratio of installed singlemode, multimode, and copper ports, cross-connect densities in public data centers, the rate of transition to IPv6 peering, which can require power-hungry TCAMs in core routers, and especially whether price ratios among 850, 1310, and 1550 nanometer transceivers are growing or shrinking. And rather than wondering when 40G will achieve a 3x price ratio to 10G, it's just as important to ask whether 10G 1550nm transceivers will ever fall below 10x the price of 10G 850nm transceivers.
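To put numbers on that, here's a quick back-of-the-envelope sketch in Python. The 40G port prices are the figures cited above; the 10G transceiver prices are round placeholders I've plugged in just to show the ratio math, not actual quotes:

    # Rough cost-per-gigabit comparison. The 40G port prices are the
    # figures cited above; the 10G transceiver prices are illustrative
    # placeholders, not quotes.
    ports = {
        "40G InfiniBand (high-density switch port)": (350, 40),
        "40G Packet-over-SONET (core router port)": (600_000, 40),
        "10G Ethernet, 850nm (SR)": (1_000, 10),
        "10G Ethernet, 1550nm (ER/EW)": (10_000, 10),
    }

    for name, (price_usd, gbps) in ports.items():
        print(f"{name}: ${price_usd / gbps:,.0f} per Gbit/s")

    # The ratio that matters as much as any line-rate upgrade:
    ratio = ports["10G Ethernet, 1550nm (ER/EW)"][0] / ports["10G Ethernet, 850nm (SR)"][0]
    print(f"1550nm vs. 850nm 10G transceiver price ratio: {ratio:.0f}x")

Run with placeholder prices like these, the cheapest and most expensive 40 Gigabit ports differ by orders of magnitude per delivered gigabit, which is the point: the framing protocol and line rate tell you very little about cost per bit on their own.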
Line rate used to be far more important within this discussion. When 802.3z (the optical GigE standard) came out in 1998, what mattered to data centers wasn't that it was 3x the price of Fast Ethernet, but that it was cheaper than 100 Meg FDDI, which was the leading networking standard for the first public web hosting centers. The wholesale and rapid replacement of FDDI with GigE was largely a result of line rate - more bits for less money - and an economic result of the low price of high-volume Ethernet framers. But above a gigabit, production volume of framers gives way to the technical challenge of clocking with sub-nanosecond bit intervals, and silicon vendors have had the same signaling and jitter challenges with Ethernet that they've had with InfiniBand and Fibre Channel. This is a major issue economically, because widely-used LVDS on-chip signaling is not Ethernet-specific, and therefore does not allow Ethernet to create the same price/performance gains over other framing protocols that it did at a gigabit and below.
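To see why, a quick unit-interval calculation helps - nothing vendor-specific, just the time budget per bit on a single serial lane:

    # Time per bit (unit interval) on a single serial lane,
    # ignoring line-coding overhead such as 8b/10b or 64b/66b.
    for label, gbps in [("1 Gigabit", 1), ("10 Gigabit", 10), ("40 Gigabit serial", 40)]:
        unit_interval_ps = 1e3 / gbps   # picoseconds per bit (1000 ps = 1 ns)
        print(f"{label:>18}: {unit_interval_ps:6.1f} ps per bit")

    # 1G  -> 1000 ps (a full nanosecond)
    # 10G ->  100 ps
    # 40G ->   25 ps on a single lane - which is why 40G in the data
    # center is built from multiple 10 Gigabit lanes instead.

Past a gigabit, the whole timing budget for a bit drops well under a nanosecond, and that is where the signaling and jitter problems, not framer volume, start setting the price.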
Another factor to look at is that all the 40 Gigabit protocols showing any signs of hope within the data center run at serial rates of 10 Gigabit, whether InfiniBand- or Ethernet-framed, because no one has come up with a way to run more than 10 Gigabit on a serial lane economically, even though it's been technically possible and commercially available for years on OC-768 Packet-over-SONET line cards. In addition to the high cost of dispersion compensation on longer-reach optical links, transmitting each bit less than a tenth of a nanosecond after the last one has proven to be a major economic challenge for silicon developers, and likely will be for years.
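For reference, here's the lane math behind that point. The lane rates and coding efficiencies (64b/66b for 40GBASE-SR4, 8b/10b for QDR InfiniBand) are my assumptions for illustration, but the pattern is the same either way - the aggregate comes from lane count, not from a faster serial clock:

    # 40 Gigabit links built from 10 Gigabit-class serial lanes.
    # Lane rates and coding efficiencies are assumptions for illustration.
    links = [
        # name,               lanes, lane rate (GBd), coding efficiency
        ("40GBASE-SR4",           4, 10.3125,         64 / 66),
        ("QDR InfiniBand 4x",     4, 10.0,            8 / 10),
    ]

    for name, lanes, lane_gbd, coding in links:
        data_gbps = lanes * lane_gbd * coding
        print(f"{name}: {lanes} x {lane_gbd} GBd per lane -> {data_gbps:.0f} Gbit/s of data")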
So as we look beyond 10 Gigabit in the data center, we also need to advance the public discussion beyond the very late-90s/early-2000s emphasis on line rates, and look further into how other factors now play a larger role in achieving the lowest cost per bit in data center networks.