By Lisa Huff
The Peripheral Component Interconnect (PCI) is the bus in computers that connects everything back to the processor. It has been around for as long as I can remember having workstations, but in recent years it has been morphing in response to the need to connect to the processor at higher data rates.
PCIe over cable implementations were first defined in 2007, based on PCIe GEN1. Molex was instrumental in helping to define this, and until now it has been a purely copper solution using Molex's iPass™ connection system. This system has been used mainly for “inside-the-box” applications, first at 2.5G (GEN1) and then at 5G (GEN2). The adoption rate for PCIe over cable has been slow; it is mainly used in high-end multi-chassis applications, including I/O expansion, disk array subsystems, high-speed video and audio editing equipment, and medical imaging systems.
PCIe GEN3 runs at 8G, and some physical-layer component vendors are looking to replace the current copper cable with an optical solution while also trying to move PCIe into a true I/O technology for data center connections: servers to switches, and eventually storage. While component vendors are excited about these applications, mainstream OEMs do not seem interested in supporting them. I believe that is because they see PCIe I/O as a threat to their Ethernet equipment revenue.
CXP AOCs seem to be a perfect fit for this GEN3 version of PCIe, but neither equipment manufacturers nor component suppliers believe they will reach the price level needed for this low-cost system; the optical interconnect is expected to cost tens of dollars, not hundreds. CXP AOCs may still be used for PCIe GEN3 prototype testing as a proof of concept, but if the first demonstrations are any indication, even this will not be the case. PCIe GEN3 over optical cable was recently shown by PLX Technology using just a one-channel optical engine next to its GEN3 chip with standard LC jumpers. PLX and other vendors are looking toward using optical engines with standard MPO patch cords to extend this to 4x and 8x implementations.
Columbia University and McGill University also demonstrated PCIe GEN3, but with eight lanes over a WDM optical interconnect. This is obviously much more expensive than even the CXP AOCs and is not expected to get any traction in real networks.
Another factor working against PCIe as an I/O is the end user. In a data center, there are typically three types of support personnel: networking (switch) managers, storage/server managers, and data center managers. While the server managers are familiar with PCIe from an “inside-the-box” perspective, I'm not sure they are ready to replace their Ethernet connections outside the box. The others may have heard of PCIe, but probably aren't open to changing their Ethernet connections either. They can run 10-Gigabit on their Ethernet connections today, so they really don't see a need to learn an entirely new type of interconnect in their data center. In fact, they are all leaning toward consolidation instead: getting to one network throughout the data center with something like FCoE. But, as I've stated in previous posts, that won't happen until it is shown to be more cost effective than simply using 10GigE to connect their LANs and SANs. The fact that PCIe I/O could be cheaper than Ethernet may not be enough, because Ethernet is pretty cost-effective itself and has the luxury of being the installed base.