By David Gross
I'd rather read the Internal Revenue Code than a typical sell-side research note. At least the I.R.S. is specific, and doesn't coin buzzwords and catchphrases like "first mover", "secular growth", and "low hanging fruit". And even though we've had a series of tax reforms through the years, no one's ever referred to a "next generation" tax code. The tax collector is also far clearer about what it wants you to do next: "You owe us" leaves far less doubt about where to send your money than an analyst note informing you that a stock is a "near-term accumulate".
In the tradition of vague analyst-speak, Zacks recently announced it had upgraded Equinix from neutral to outperform. Its reasoning - the company beat consensus revenue by 0.4%, issued encouraging guidance, and is still expanding. No wait, that's not quite right. It was "continuous efforts to expand the current facilities". Not sure how to interpret this. I mean, I've been to the Ashburn campus in the last few days and any effort - continuous or not - to expand DC2 will push them through the pine trees and into DFT's ACC4 building.
The note then goes through Equinix's financial ratios, remarks how it's well-positioned or something, but I can't tell you what it said after that because I received an e-mail about a Nigerian prince leaving me the sum of exactly $1,307,465.27, which was a lot more interesting. And specific!
Equinix is up - or, in Wall Street language, "moving in positive territory" - by 57 cents this afternoon, on very light volume of 400,000 shares.
Thursday, December 30, 2010
Wednesday, December 29, 2010
Microsoft Gets Final Approval for West Des Moines Data Center
By David Gross
This summer, Microsoft announced that it was resuming construction on the Iowa data center it had postponed completing during the recession. Now The Des Moines Register is reporting that Microsoft has received final approval for the West Des Moines data center, and that the company is obligated to complete construction on the facility by December 2012.
The City of West Des Moines is kicking in $8 million worth of roads and water main extensions to serve the facility, an investment it will finance through bonds secured by the site's property taxes. The project will cost $200 million to complete, which suggests this will be only phase 1, or a scaled-down version of the initially proposed 500,000 square foot site.
The data center has been a high-profile economic development project for the State of Iowa, which also hosts a Google facility two hours west in Council Bluffs.
Labels:
Microsoft
Monday, December 27, 2010
PAETEC Opens 5th Data Center in Milwaukee
By David Gross
With the Cincinnati Bell-Cyrus One and Windstream-Hosted Solutions deals of the past year, we've seen growing interest in the data center market among independent telcos. But many CLECs aren't just providing connectivity into facilities, they're expanding their own regional data center offerings. Paetec, which at $1.5 billion in annual revenue is one of the larger remaining CLECs, recently announced it had opened its 5th data center, a 92,000 square foot project in Milwaukee.
The new facility is Paetec's first in the Midwest, and is targeted at businesses throughout the region, from Chicago to St. Louis to Minneapolis. The company's existing buildings are in Pennsylvania, Massachusetts, and Texas, and it plans to expand to Arizona next year.
XO Connects with Baltimore Technology Park
By David Gross
XO announced last week that it is providing connectivity to Baltimore Technology Park, a carrier-neutral co-lo facility located in that city's downtown. BTP and its sister site, the Philadelphia Technology Park, offer regional versions of what providers like Equinix and CoreSite offer in the big data center markets, allowing local businesses to cross-connect and colocate without having to reach Northern New Jersey or Northern Virginia. By bringing in XO and the same selection of carriers available at an Equinix site, albeit without the massive peering exchange, these data centers offer regional Fortune 500 companies, hospitals, universities, and other local businesses a service comparable to what financial traders and major websites get at Equinix sites in the larger markets.
In September, Lisa and I visited the Philadelphia Technology Park, which is located within the Philadelphia Navy Yard. More information on that site, as well as the one in Baltimore, is available at http://www.philadelphiatechnologypark.com/ and http://www.baltimoretechnologypark.com.
Wednesday, December 22, 2010
Everything Saves Energy Costs
By David Gross
It's always amusing to see what techniques are used to sell products in this industry. For many years, the made-up "TCO" metric, developed in sales and marketing, not finance, has made its way into all kinds of places. The term "ROI" has also been used and abused, with very little mention of the metric that really matters for any capital expenditure: IRR. When technology, sales, and financial measurement have met, the results haven't been pretty.
The latest financial tag being attached to products is saving energy costs. And I'm not just talking about new CRACs (Computer Room A/C), UPS systems, or PDUs - cabling, networks, really anything that is physically near a data center can be sold as a device to cut your energy bill. As I pointed out a few weeks ago, there are some tremendously energy efficient network products which make little sense to deploy unless you want to re-design your network. A Voltaire 4036 InfiniBand switch, for example, has a nameplate capacity of 0.18 Watts per Gbps, less than a tenth of a typical Ethernet switch. The only problem is that deploying an InfiniBand cluster doesn't make financial or operational sense for many data centers.
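To put those nameplate figures in context, here is a rough sketch of the energy arithmetic. Only the 0.18 W/Gbps number comes from the comparison above; the Ethernet figure, the switch capacity, and the electricity rate are illustrative assumptions, not vendor data.

```python
# Rough energy-cost sketch for the nameplate comparison above. Only the
# 0.18 W/Gbps InfiniBand figure is from the post; everything else is an
# illustrative assumption.
HOURS_PER_YEAR = 8760

def annual_energy_cost(watts_per_gbps, capacity_gbps, usd_per_kwh=0.10):
    """Annual electricity cost of running a switch at its nameplate load."""
    watts = watts_per_gbps * capacity_gbps
    return watts * HOURS_PER_YEAR / 1000.0 * usd_per_kwh

capacity = 1440  # hypothetical: 36 ports x 40 Gbps

ib_cost = annual_energy_cost(0.18, capacity)   # InfiniBand nameplate
eth_cost = annual_energy_cost(2.00, capacity)  # assumed "typical" Ethernet

print(f"InfiniBand: ${ib_cost:,.0f}/yr   Ethernet: ${eth_cost:,.0f}/yr")
# The gap looks dramatic in isolation, which is exactly why it has to be
# weighed against the capital and operational cost of an InfiniBand redesign.
```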
My favorite recent example comes from a Processor article, where a Cisco exec claims you should deploy Fibre Channel-over-Ethernet because it reduces energy costs by 30%. Yes, deploy a network that's likely to degrade the performance of your SAN and LAN, and increase the capital costs of both by forcing you to buy expensive switches and CNAs (Converged Network Adapters). This made me laugh because it was EXACTLY the argument used for the failed "God Boxes" of the early 2000s. Buy one big monster instead of multiple smaller devices, and save power because it's one piece of hardware, not seven or eight. The capital returns on doing this were atrocious, and the market performance of those products reflected it. Moreover, the power savings are theoretical, not based on operating networks.
It's no secret that energy efficiency is important to any data center. But like anything else, it's a trade-off. You can have high response times, a 100 Meg network, and lightly loaded racks and use very little energy.
The Processor article goes on to say that locating your data center at a renewable power source is a great way to reduce your carbon footprint. This comes from a RackForce Networks exec. Economically, it also cuts your variable power cost down to almost zero, especially with wind power, which has remarkably low O&M costs. However, this does not mean everyone will follow Google, Microsoft, Yahoo, and Verizon to Lake Erie or the Columbia River Valley. The trade-off is that you also have to put more capital into fiber and network capacity than you do in Santa Clara or Ashburn, not to mention the building itself. For this reason, it makes little sense to talk about energy savings generically; instead, look at how the trade-offs change when you go from Equinix or DLR to your own building, and vice versa.
Labels:
Power and Cooling
The 10X10 MSA: Niche, Distraction or the Right Answer? (Continued)
By Lisa Huff
While Vipul has a point that this new MSA is probably a distraction, it is difficult to deny that there is a market for cost-effective devices with optical reaches between 100m and 10km. In fact, 100m to 300m is the market that multi-mode fiber has served so well for the last 20 years. And, 300m to 2km has been a niche for lower-cost 1310nm single mode products like 1000BASE-LX. So I have a slightly different opinion about this 10x10 MSA and whether it’s a niche, distraction or the right answer.

In a recent article written on Optical Reflection, Pauline Rigby quotes Google’s senior network architect, Bikash Koley. About 100GBASE-SR10, he says 100m isn’t long enough for Google – that it won’t even cover room-to-room connections and that “ribbon fibres are hard to deploy, hard to manage, hard to terminate and hard to connect. We don’t like them.” There is an answer for this ribbon-fiber problem – don’t use it. There are many optical fiber manufacturers that now provide round multi-fiber cables that are only “ribbonized” at the ends for use with the 12-position MPO connector and are much easier to install – Berk-Tek, A Nexans Company, AFL and even Corning have released products that address this concern. But the 100m optical reach is another matter.
I have to agree with Google about one other thing – 4x25G QSFP+ solutions are at least four years away from reality (and I would say probably even longer). This solution will eventually have the low cost, low power and high density Google requires, but not quickly enough. I think something needs to be done to address Google’s and others’ requirements between 300m and 2km in the short term, but I also believe that it needs to be standardized. There is no IEEE variant that would currently cover a 10x10G single mode device. However, there is an effort currently going on in the IEEE for 40G over SMF up to 2km. Perhaps the members of the MSA should look to work with this group to expand its work, or start a new related project to cover 100G for 2km as well. I know this was thrown out of the IEEE before, but so were 1000BASE-T and 10GBASE-T initially.
So what I'm saying is that the market is more than a niche - hundreds of millions of dollars of LOMF sales at 1G and 10G would attest to that. And it's more than a distraction because there is a need. But I don't think it's entirely the right answer without an IEEE variant to back it up.
Let us know what you think.
Labels:
Optical Components
Tuesday, December 21, 2010
CoreSite Declares Dividend, Yield Just Under 4%
By David Gross
CoreSite declared a dividend of 13 cents this week, giving the company an annualized yield of just under 4% based on today's closing price of $13.60. The stock has risen nearly ten percent since yesterday morning, when the dividend was announced. Nonetheless, it remains below the $16 level at which it IPO'd a couple of months ago.
Overall, the last few weeks have been rough for data center REITs. In addition to CoreSite struggling to get back to its IPO price, Digital Realty is down more than 4% over the last month, and DuPont Fabros is down more than 7%. DLR's yield is up to 4.33% as a result of its weak performance during autumn. It began the season over $61, and is down to $48.94 as we head into winter.
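For anyone checking the math, here is a minimal sketch of the yield calculation, assuming the declared 13-cent dividend is a quarterly payout:

```python
# Annualized yield from the declared dividend, assuming it is quarterly.
quarterly_dividend = 0.13
closing_price = 13.60
annualized_yield = quarterly_dividend * 4 / closing_price
print(f"{annualized_yield:.2%}")  # ~3.82%, i.e. just under 4%
```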
Labels:
COR,
Data Center REITs
The 10X10 MSA: Niche, Distraction or the Right Answer?
By Vipul Bhatt, Guest Blogger
{For today’s blog, our guest author is Vipul Bhatt. Lisa has known Vipul for several years, since his days as Director of High Speed Optical Subsystems at Finisar. He has served as the Chair of the Optical PMD Subgroup of IEEE 802.3ah Ethernet in the First Mile (EFM), and the Chair of the Equalization Ad Hoc of IEEE 802.3ae 10G Ethernet. He can be reached at vjb@SignalOptics.com.}
If you are interested in guest blogging here, please contact us at mail at datacenterstocks.com
Last week, Google, JDSU, Brocade and Santur Corp announced the 10X10 Multi-Source Agreement (MSA) to establish sources of 100G transceivers. It will have 10 optical lanes of 10G each. Their focus is on using single mode fiber to achieve a link length of up to 2 km. The key idea is that a transceiver based on 10 lanes of 10G will have lower power consumption and cost because it doesn’t need the 10:4 gearbox and 25G components. But is this a good idea? What is the tradeoff? Based on my conversations with colleagues in the industry, it seems there are three different opinions emerging about how this will play out. I will label them as niche, distraction, or the right answer. Here is a paraphrasing of those three opinions.
It’s a niche: It’s a solution optimized for giant data centers – we’re talking about a minority of data centers (a) that are [already] rich in single mode fiber, (b) where the 100-meter reach of multi-mode 100GBASE-SR10 is inadequate, and (c) where the need for enormous bandwidth is so urgent that the density of 10G ports is not enough, and 100G ports can be consumed in respectable quantities in 2011.
It’s a distraction: Why create another MSA that is less comprehensive in scope than CFP, when the CFP has sufficient support and momentum already? Ethernet addresses various needs – large campuses, metro links, etc. – with specifications like the LR4 that need to support link lengths of well beyond 2 km over one pair of fiber. We [do] need an MSA that implements LR4, and the SR10 meets the needs of a vast majority of data centers, so why not go with CFP that can implement both LR4 and SR10? As for reducing power consumption and cost, the CFP folks are already working on it. And it’s not like we don’t have time – the 10G volume curve hasn’t peaked yet, and may not even peak in 2011. Question: What is the surest way to slow down the decisions of Ethernet switch vendors? Answer: Have one MSA too many.
It’s the right answer: What is the point of having a standard if we can’t implement it for two years? The CFP just isn’t at the right price-performance point today. The 10X10 MSA can be the “here and now” solution because it will be built with 10G components that have already traversed the experience curve. It can be built with power, density and cost figures that will excite the switch vendors, which may accelerate the adoption of 100G Ethernet, not distract it. As for 1-pair vs. 10-pairs of fiber, the first swelling of 100G demand will be in data centers where it’s easier to lay more fiber, if there isn’t plenty installed already. The 2-km length is sufficient to serve small campuses and large urban buildings as well.
Okay, so what do I think? I think the distraction argument is the most persuasive. An implementation that is neither SR10-compliant nor LR4-compliant is going to have a tough time winning the commitment of Ethernet switch vendors, even if it’s cheaper and cooler than the CFP in the short term.
Labels:
Optical Components
Friday, December 17, 2010
Google Moving Out, Small Businesses Moving In
By David Gross
David Chernicoff over at ZDNet has a good article out on data center planning, where he notes that many of the small to mid-size businesses he's spoken to are planning to outsource some of their operations. This is similar to the experience Lisa and I have had talking to data center managers who run internal centers and are hitting capacity limits. It's also an important point for investors to consider, many of whom are still fretting about the data center services industry as Google, Facebook, and other brand-name tenants invest so heavily in their own buildings.
One of the factors to consider with this developing market segment is that these small businesses are not going to be buying a powered base building sort of service, nor are they likely to hit up Equinix for a few cabinets. More realistically, they'll go to IBM, Horizon Data Centers, a hosting provider, or even someone like Rackspace, and start handing over applications slowly. Additionally, connectivity is a major concern once these small businesses move beyond simple e-mail outsourcing; a data center with dedicated links to other facilities closer to the customer lets that customer cross-connect closer to the office and avoid high dedicated-circuit costs from a telco.
Economically, an internal data center for Google, Apple, or Facebook produces a financial return by turning an operating cost for a building lease into a capital cost, while an outsourcing arrangement for a small business turns a capital cost for servers into an operating cost. As a result, the heaviest users are hitting a point where outsourcing makes less sense, while the lightest users are hitting a point where outsourcing makes more sense. The result is that the public data center of the future will have a tenant roster that looks less like what you might find in an office building in Santa Clara, and more like what you'd see in a typical suburban office park.
Thursday, December 16, 2010
AboveNet Expanding Services at Data Centers
By David Gross
For years, facility-based CLECs have struggled to fill many of the optical links they've run to corporate office buildings. With 5-10 tenants in some locations, it can be a struggle for a provider to generate enough revenue to get a good return on the capital invested in the fiber lateral that hits the building. The data center has provided a great opportunity to overcome this challenge by offering so many corporate customers in one physical location. And few providers have seized this opportunity as well as AboveNet. This is one factor behind the company's 16% net margins - actual profit, not EBITDA - the highest I've ever seen for a bandwidth provider.
Earlier this week, AboveNet announced a new sales initiative to provide optical connectivity services at over 400 data centers across the country. Its footprint follows many of the major public data center markets, including DC, New York, and Silicon Valley. A more detailed map of the company's data center POPs is available here.
Labels:
ABVT
Wednesday, December 15, 2010
DAC Report
By David Gross
We're happy to announce that our latest report, Direct Attach Copper Cable Assemblies for 10, 40, and 100 Gigabit Networks, is now available. We've posted a table of contents on the "DAC Report" page if you are interested in learning more.
Tuesday, December 14, 2010
Should You Increase CRAC Set Points to Save Energy Costs?
By David Gross
Energy management for data centers has been lighting up the press wire lately. The fundamental economic premise behind most of the stories is that by monitoring temperature, air flow, and humidity at more places and more closely, a data center will get a great financial return by reducing energy costs. But I'm finding that some of the vendor presentations describe the savings at a very generic level, and while they might have a good story to tell, the suppliers need more detailed financial analysis, and more sensitivity analysis in their estimates, especially to highlight how the financial paybacks vary at different power densities.
Recently, consulting firm Data Center Resources LLC put out a press release claiming that by increasing CRAC (Computer Room Air Conditioner) set points, a data center could get a "six month" ROI on its investment in sensors, aisle containment systems, and airstrips that augment existing blanking panels. Of course, there is no such thing as a six month ROI, but I'll grant them the point that they really mean a six month payback period. However, as I've said many times, ROI is a meaningless metric; instead, data center managers should be using IRR and incorporating the time value of money into all such calculations.
Once these new systems are installed, Data Center Resources argues that you can start increasing the temperature set point on the CRACs (they did not mention anything about humidity) and reduce energy costs. The firm claims each degree increase in the CRAC set point cuts 4-5% in annual energy expenses. But given the wide discrepancies in data center power densities, the actual savings are going to vary dramatically, and before estimating an IRR, a data center manager would need to perform a sensitivity analysis based on growing server, power, and cooling capacities at different rates; otherwise, this is all just a generic argument for hot aisle/cold aisle containment.
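To make the payback-versus-IRR distinction concrete, here is a minimal sketch. The six-month payback mirrors the press release's claim; the investment size, the savings figures, and their decay are hypothetical placeholders, not Data Center Resources' numbers.

```python
# Minimal sketch: payback period vs. IRR for an efficiency retrofit.
# All cash flows are hypothetical; the point is the method, not the numbers.

def npv(rate, cash_flows):
    """Net present value of annual cash flows, year 0 first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-6):
    """Internal rate of return by bisection (assumes NPV falls as rate rises)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical: $50k on sensors and containment, $100k of first-year energy
# savings that shrink as densities and loads change over a 3-year horizon.
flows = [-50_000, 100_000, 60_000, 30_000]

payback_months = 50_000 / 100_000 * 12
print(f"Simple payback: {payback_months:.0f} months")   # the claimed "six month ROI"
print(f"IRR over three years: {irr(flows):.0%}")
```

The payback number alone says nothing about what happens after the investment is recovered, which is exactly why a time-sensitive measure like IRR belongs in the analysis.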
Labels:
Power and Cooling
Monday, December 13, 2010
When to Upgrade to 40 Gigabit?
By David Gross
ZDNet has a good article out on the decision to upgrade to 40G. But for all the talk and excitement over the new standard, the reality is that line rate is not as important a measure of network capacity as it used to be, especially with multi-gigabit port prices having more to do with transceiver reach, cabling options, and port densities than framing protocol. 10GBASE-SR, the 850nm 10 Gigabit Ethernet multimode standard, for example, has a lot more in common with 40GBASE-SR4 than it does with 10GBASE-EW, the 1550nm singlemode WAN PHY standard. Moreover, a 40 Gigabit port can cost as little as $350 if it's configured on a high-density Voltaire InfiniBand switch, but over $600,000 if it's configured to run Packet-over-SONET on a Cisco CRS.
Within the data center, which line rate you choose increasingly matters less than which transceiver you choose, what type of cabling, whether to aggregate at end-of-row or top-of-rack, and so forth. Moreover, as I wrote a few weeks ago, the most power efficient data center network is often not the most capital efficient. So rather than considering when 40 Gigabit Ethernet upgrades will occur, I think it's more important to monitor what's happening with average link lengths, the ratio of installed singlemode/multimode/copper ports, cross-connect densities in public data centers, the rate of transition to IPv6 peering, which can require power-hungry TCAMs within core routers, and especially whether price ratios among 850, 1310, and 1550 nanometer transceivers are growing or shrinking. And rather than wondering when 40G will achieve a 3x price ratio to 10G, it's equally important to consider whether 10G 1550nm transceivers will ever fall below 10x the price of 10G 850nm transceivers.
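As a back-of-the-envelope illustration of why these ratios matter more than the line rate itself, here is a small sketch. The two 40G port prices are the ones quoted above; the transceiver prices are illustrative placeholders, not market quotes.

```python
# Cost per Gbps for the two 40G ports cited above, plus the transceiver
# price-ratio question. Transceiver prices are illustrative placeholders.
ports = {
    "40G InfiniBand, dense switch": 350,
    "40G Packet-over-SONET, core router": 600_000,
}
for name, price in ports.items():
    print(f"{name}: ${price / 40:,.0f} per Gbps")

# Does a 10G 1550nm optic ever fall below 10x the price of an 850nm one?
price_850nm, price_1550nm = 300, 4_000   # hypothetical list prices
print(f"1550nm / 850nm price ratio: {price_1550nm / price_850nm:.1f}x")
```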
Line rate used to be far more important within this discussion. When 802.3z (the optical GigE standard) came out in 1998, it wasn't that it was 3x the price of Fast Ethernet that mattered to data centers, but that it was cheaper than 100 Meg FDDI, which was the leading networking standard for the first public web hosting centers. The wholesale and rapid replacement of FDDI with GigE was largely a result of line rate - more bits for less money - and an economic result of the low price of high volume Ethernet framers. But over a gigabit, production volume of framers gives way to the technical challenge of clocking with sub-nanosecond bit intervals, and silicon vendors have had the same challenges with signaling and jitter with Ethernet that they've had with InfiniBand and Fibre Channel. This is a major issue economically, because widely-used LVDS on-chip signaling is not Ethernet-specific, and therefore does not allow Ethernet to create the same price/performance gains over other framing protocols it did at a gigabit and below.
Another factor to look at is that all the 40 Gigabit protocols showing any signs of hope within the data center run at serial rates of 10 Gigabit, whether InfiniBand or Ethernet-framed, because no one has come up with a way to run more than 10 Gigabit on a serial lane economically, even though it's been technically possible and commercially available for years on OC-768 Packet-over-SONET line cards. In addition to the high costs of dispersion compensation on longer reach optical links, transmitting a bit less than 1/10th of a nanosecond after the last bit was sent has proven to be a major economic challenge for silicon developers, and likely will be for years.
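The arithmetic behind that fraction-of-a-nanosecond point is worth spelling out; here is a quick sketch of raw bit periods at a few serial rates, ignoring line-coding overhead.

```python
# Raw bit period at a few serial line rates (ignoring 8b/10b or 64b/66b
# coding overhead), to show why serial rates above 10G are so hard.
for gbps in (1, 10, 25, 40):
    bit_period_ps = 1e12 / (gbps * 1e9)   # picoseconds per bit
    print(f"{gbps:>2} Gbps serial: {bit_period_ps:.0f} ps per bit")
# 10 Gbps leaves 100 ps per bit, i.e. a tenth of a nanosecond; a 40G serial
# lane would leave only 25 ps, which is why today's 40G data center links
# run as multiple 10G lanes instead.
```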
So as we look beyond 10 Gigabit in the data center, we also need to advance the public discussion beyond the very late-90s/early 2000s emphasis on line rates, and look further into how other factors are now playing a large role in achieving the best cost per bit in data center networks.
Labels:
40 Gigabit
Friday, December 10, 2010
Is SFP+ the Optical RJ-45?
By Lisa Huff
For those of you who have been in the networking industry for what seems to be 100 years, but is really about 25, you know that the one “connector” that hasn’t changed much is the RJ45. While there have been improvements by adding compensation for the error that was made way back when AT&T developed the wiring pattern (splitting the pair, which causes major crosstalk issues), the connector itself has remained intact. By contrast, optical connectors for datacom applications have changed several times – ST to SC to MT-RJ to LC. The industry finally seems to have settled on the LC, and perhaps on a transceiver form factor – the SFP+. The SFP was originally introduced at 1G, was used for 2G and 4G, and with slight improvements has become the SFP+, the dominant form factor now used for 10G. Well, it is in the process of getting some slight improvements again and promises to make it all the way to 32G. That’s six generations of data rates – pretty impressive. But how?
The INCITS T11.2 Committee's Fibre Channel Physical Layer – 5 (FC-PI-5) standard was ratified in September. It specifies 16G Fibre Channel. Meanwhile, the top transceiver manufacturers have been demonstrating pre-standard 16G SFP+ SW devices. But wait a minute – short-wavelength VCSELs were supposed to be very unstable when modulated at data rates above 10G, right? Well, it seems that at least Avago and Finisar have figured this out. New microcontrollers and at least one clock and data recovery (CDR) device in the module to help clean up the signals have proven to be key. Both vendors believe it is possible to do this without adding too much cost to the modules. In fact, both also think that by adding electronic dispersion compensation (EDC) they can push the SFP+ to 32G as well – the next step for Fibre Channel – with possible stops at 20G and 25G to cover developments in Ethernet and InfiniBand.
And what about long wavelength devices? It has always been a challenge fitting the components needed to drive long distances into such a small package, mainly because the lasers need to be cooled. But not anymore – Opnext has figured it out. In fact, it was showing its 10km 16G FC SFP+ devices long before any of the SW ones were out (March 2010). Of course, this isn't surprising considering Opnext has already figured out 100G long haul as well.
These developments are important to datacom optical networking for a few reasons:
- They show that Fibre Channel is not dead.
- The optical connector and form factor "wars" seem to have subsided, so transceiver manufacturers and optical components vendors can focus on cooperation instead of positioning.
- They will impact the path other networking technologies are taking – Ethernet and InfiniBand are using parallel optics for speeds above 10G – will they switch back to serial?
F5 Added to the S&P 500
By David Gross
F5, which has nearly tripled over the last 12 months, will be added to the S&P 500 after the market close December 17th. The stock was up over 5% after hours yesterday on this news, topping $145 a share.
Netflix, Cablevision, and Newfield Exploration will be joining F5 as new members of the index. Office Depot, The New York Times, Eastman Kodak, and King Pharmaceuticals will be moving out.
Labels:
FFIV
Thursday, December 9, 2010
Savvis Reaffirms Guidance
By David Gross
At its investor day yesterday, Savvis reaffirmed its annual guidance of $1.03 billion to $1.06 billion of revenue, and Adjusted EBITDA of $265 million to $290 million. Wall Street was expecting $1.05 billion and $270 million.
The stock was one of the best performers among data center and hosting providers between July and the end of October, and has nearly doubled over the last five months. But it has fallen $1.54 over the last two days to $26.26 on heavy volume, after it was announced that one of its largest shareholders, Welsh, Carson, Anderson & Stowe, had cut its stake in the company by a third, to 10.3 million shares.
Savvis is one of those companies where I don't think EBITDA tells you a good story about its prospects. It is still net income and free cash flow negative due to high capex requirements. And its capex produces less revenue per dollar invested than rival Rackspace's - Rackspace's Revenue/PP&E is approximately 50% higher because it does not have to spread itself over such a wide product line. Savvis did the right thing selling its CDN to Level 3. At some point, it will need to re-examine why it's still in the bandwidth business.
Labels:
SVVS
Monday, December 6, 2010
Capital Usage Effectiveness vs. Power Usage Effectiveness
By David Gross
The Green Grid recently proposed that data center managers add Carbon Usage Effectiveness and Water Usage Effectiveness to the already widely used Power Usage Effectiveness metric. While I've never seen a data center that tracks too many operating metrics, I've seen plenty that lack appropriate financial measurements.
While I think some good could come out of additional energy and environmental metrics, including possible innovations in cooling architectures, they cannot overwhelm the metrics that matter to shareholders, many of which are not tracked at all. It's rare to find a data center operator who can't tell you the PUE of the building, by season, but it's also rare to find one who can tell you the IRR on the capital invested in the place. Haphazard upgrades are sometimes required operationally, but as an investor in a building, whether a self-administered or a leased facility, I'd want to know what the financial returns are on that capital investment, what the alternatives to those investments are, how much of the capital could be substituted by operating expenses, and what the return on doing so would be. But rarely can data center managers discuss these numbers the way they can their PUEs.
The problem with not tracking IRR is that the number of options to build or buy continues to expand in this industry, from the facility itself, to power, to cooling, to telecom capacity. How do you know you're making the best decision if you don't know the returns - and I don't mean the costs, but the financial returns - of shifting an operating cost to a capital outlay, and vice versa? And what about the timing of expansions? A time-sensitive financial measure like IRR, not ROI or TCO, is needed to handle this.
It's time for the industry to start tracking a new CUE - not Carbon Usage Effectiveness, but Capital Usage Effectiveness. Vendor TCO models, which generally originate in their marketing departments, not internal capital planning, are a poor substitute for doing this; in fact, they do harm because they're arbitrary and typically exclude the opportunity costs of alternative uses of capital, as well as the time value of money. Moreover, good capital planning can assist with good environmental planning by eliminating unnecessary costs and capital outlays. But it won't start happening until data center managers start tracking their capital output as closely as their environmental output.
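As a sketch of what a Capital Usage Effectiveness mindset looks like in practice, here is a toy build-versus-lease comparison at different discount rates. Every dollar figure and rate is a hypothetical placeholder, not a number from any operator; the point is that the ranking can flip once the time value of money is included.

```python
# Toy build-vs-lease comparison: a capex-heavy option against an opex-heavy
# option at several discount rates. A straight TCO sum is the 0% row.
# All figures are hypothetical placeholders.

def npv(rate, cash_flows):
    """Net present value of annual cash flows, year 0 first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

years = 10
build = [-30_000_000] + [-2_000_000] * years   # big capex, lower opex
lease = [0] + [-6_000_000] * years             # no capex, higher opex

for rate in (0.00, 0.08, 0.15):
    b, l = npv(rate, build), npv(rate, lease)
    winner = "build" if b > l else "lease"
    print(f"discount {rate:.0%}: build {b/1e6:.1f}M, lease {l/1e6:.1f}M -> {winner}")
# At 0% (the implicit TCO view) building wins; at a realistic cost of
# capital the lease can win, which is why the time value of money belongs
# in every one of these decisions.
```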
Telx Adds 12,500 Square Feet Outside of Digital Realty Buildings
By David Gross
Wall Street has been paying close attention to the relationship between GI Partners, Digital Realty (DLR), and Telx. An investor in both companies, GI Partners has enabled a close relationship between DLR and Telx, including a deal where DLR has granted Telx the exclusive right to operate the Meet Me Rooms in ten of its facilities. Telx operates in five additional buildings, and today announced it has expanded in two of them - 8435 Stemmons Freeway in Dallas, and 100 Delawanna Avenue in Northern New Jersey.
8435 North Stemmons Freeway is an office building in which Telx already had leased a floor. It sits just to the west of Love Field, and is four miles north of the massive Infomart building at 1950 North Stemmons Freeway, a major carrier hotel which serves as the Dallas equivalent to 111 8th Avenue or 60 Hudson.
The 100 Delawanna Avenue facility is located in Clifton, NJ, about three miles west of the Meadowlands Sports Complex, and a ten-mile direct shot down Route 3 to the Lincoln Tunnel. Adjacent to the New Jersey entrance to the tunnel is the 310,000 square foot 300 Boulevard East facility, owned by DLR and leased by Telx as well as many financial traders. (300 Boulevard East sits right next to the loop by the NJ entrance to the tunnel featured in the intro to The Sopranos.) 100 Delawanna provides connectivity into that building as well as the popular Manhattan carrier hotels, and in many respects is a backup site and additional POP for customers in Weehawken. Equinix has a competing site, NY4, in Secaucus, which sits just across the New Jersey Turnpike from 300 Boulevard East.
Telx has been in registration since March, but unfounded concerns about Equinix, as well as the mediocre performance of CoreSite in the aftermarket have kept it from coming out. The company reported $95 million in revenue for the first nine months of 2010, up over 30% from the prior year, with operating margins rising from -5% to 14%, and EBITDA margins increasing to 33%.
Labels:
TELX
Friday, December 3, 2010
F5 at a 52-Week High
By David Gross
In spite of a flat-to-down market today, F5 hit a 52-week high of $141.58 this morning, and has nearly tripled over the last 12 months. It's also up over 50% from Goldman's peculiar downgrade of the stock in October. While F5 is a great company that has done an excellent job staying focused on the L4-7 market, the stock is getting ahead of itself.
Since 2003, the company's top line has grown 26% annually, but its current y/y growth rate of 45% is near its post-dot-com-crash peak of 47%. It's no secret on Wall Street or in the data center industry that there's a lot of room for both load balancers and WAN Optimization devices to keep growing, but they're not going to keep growing at close to 50% per year, which is what the current enterprise value/earnings ratio of 55 suggests. That said, this is purely a short-term risk, not unlike the spring and summer of 2006 when the stock lost nearly half of its value as its revenue growth rate decelerated from the high 40s into the low 30s. But investors who held on through that volatility are being rewarded now, and anyone planning to benefit from future growth needs to be ready to handle a 2006-like drop with current revenue growth running so far above historical levels.
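As a rough way to see what a multiple like that bakes in, here's a hypothetical back-of-the-envelope check - not F5's guidance or any analyst's model. It assumes the multiple compresses to a more ordinary level over a five-year hold while the investor still demands a market-like annual return, and solves for the earnings growth needed; the exit multiple and required return are my own illustrative inputs.

```python
# Hypothetical sanity check: what earnings growth does a rich multiple imply
# if it compresses toward a more normal level while still paying the investor
# a market-like annual return? All inputs below are illustrative assumptions.

def implied_growth(current_multiple, exit_multiple, required_return, years):
    """Annual earnings growth needed for the stock to return `required_return`
    per year while its multiple drifts from current to exit over `years`."""
    total_growth = (current_multiple / exit_multiple) * (1 + required_return) ** years
    return total_growth ** (1.0 / years) - 1

# 55x today, a hypothetical 25x exit in five years, 10% per year required return
print(f"Implied earnings growth: {implied_growth(55, 25, 0.10, 5):.0%} per year")
```

Even with a generous 25x exit multiple, earnings have to compound faster than the company's 26% historical rate just to deliver a market return from here, and a mid-teens exit multiple pushes the required growth above 40%.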
Labels:
FFIV,
Layer 4-7 Hardware
Thursday, December 2, 2010
Windstream Completes Acquisition of Hosted Solutions
By David Gross
Windstream announced yesterday that it had closed its $310 million acquisition of Hosted Solutions. Windstream paid cash for the company, financing the purchase through its own cash and a revolving line of credit. Like Cincinnati Bell's purchase of CyrusOne, this deal marks a departure from the standard video and bandwidth offerings typical of an independent local phone company.
While CyrusOne is focused on big Texas markets, Hosted Solutions is focused on three markets that rank as tier 2 or 3 within the data center market: Charlotte, Raleigh/Durham, and Boston. The deal brings an additional 68,000 square feet of data center space and 600 new customers to Windstream. Not unlike Savvis, Hosted Solutions offers a mix of colocation and managed services.
Wednesday, December 1, 2010
Data Center Stocks Fall 0.6 Percent in November
By David Gross
After bouncing around in October following the Equinix warning, data center stocks stabilized in November, with our DataCenterStocks.com Services Index falling a modest 0.61% to end the month at 97.88. In October, the index was down 1.52%, but that included recovering most of the 11% drop incurred October 6th after the infamous Equinix warning. The index was launched October 1 with a value of 100.
The big losers for the month were the REITs, with Digital Realty, DuPont Fabros, and CoreSite all down over 10%. The big winners were Rackspace and Terremark, which both reported y/y top line growth over 20%. In the case of the REITs, one factor holding them back is their low yields. DuPont Fabros only recently started paying a dividend, CoreSite has not begun paying one, and Digital Realty's current yield of 4.04 percent is lower than the current yield on 30-year Treasuries. Additionally, the "bond bull" is losing steam, with TrimTabs Research reporting that bond funds and bond ETFs recently ended a streak of 99 consecutive weeks of cash inflows. If bond yields do continue to go higher as a result, that would put more pressure on the REITs to increase their dividend yields, and could further pressure their stock prices. In the last month, 10-year Treasury yields rose 19 basis points, from 2.60 to 2.79, while 30-year yields rose 13 basis points to 4.11.
Outside of the REITs, Savvis started to slow down, rising just over 4% for the month, after rising 13% in October and 43% in the 3rd Quarter. The company, which is not profitable, is trading at about 1.47x annualized revenue with 13% y/y top line growth, hardly enough to sustain such a big run-up. Nonetheless, Wall Street has completely fallen in love with Rackspace, which is now trading at 78x annualized earnings (74x net of its cash), on 23% y/y top line growth. It's a remarkably well-run hosting provider, but the stock is clearly ahead of itself. While the company has grown its bottom line 55% in the last year, this is primarily due to reductions in its SG&A/Revenue ratio, not its heavily hyped cloud services or other trendy topics that get Wall Street excited.
DataCenterStocks.com Services Index

Company | Ticker | Mkt Cap | Nov 30 Close | Nov 1 Open | Monthly Chg
Equinix | EQIX | $3,537,784,000 | 77.60 | 84.24 | -7.88%
Digital Realty | DLR | $4,584,996,000 | 52.52 | 59.73 | -12.07%
DuPont Fabros | DFT | $1,338,005,700 | 22.59 | 25.10 | -10.00%
Rackspace | RAX | $3,645,374,900 | 29.17 | 24.96 | 16.87%
Savvis | SVVS | $1,388,181,200 | 25.13 | 24.01 | 4.66%
Level 3 | LVLT | $1,660,000,000 | 1.00 | 0.97 | 3.09%
Akamai | AKAM | $9,478,225,900 | 52.19 | 51.67 | 1.01%
Navisite | NAVI | $133,387,200 | 3.54 | 3.83 | -7.57%
Terremark | TMRK | $786,548,700 | 11.97 | 9.99 | 19.82%
Limelight | LLNW | $698,356,000 | 7.10 | 6.79 | 4.57%
AboveNet | ABVT | $1,477,479,000 | 58.70 | 56.89 | 3.18%
CoreSite | COR | $220,376,800 | 12.88 | 15.06 | -14.48%
Internap | INAP | $271,489,300 | 5.23 | 5.00 | 4.60%
Total | | $29,220,204,700 | | |

Index Value October 1 | 100.00
Index Value November 1 | 98.48
Index Value December 1 Open | 97.88
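For readers wondering how an index level like this gets rolled forward, here's a simplified sketch. The table above doesn't spell out the weighting methodology, so the cap-weighting below is my own assumption for illustration, using a few of the constituents.

```python
# Hypothetical sketch of rolling a cap-weighted index level forward from the
# table above. The actual DataCenterStocks.com weighting methodology isn't
# described here, so treat this purely as an illustration.

constituents = {
    # ticker: (market_cap, nov_1_open, nov_30_close)
    "EQIX": (3_537_784_000, 84.24, 77.60),
    "DLR":  (4_584_996_000, 59.73, 52.52),
    "RAX":  (3_645_374_900, 24.96, 29.17),
    # ... remaining constituents omitted for brevity
}

def cap_weighted_return(members):
    total_cap = sum(cap for cap, _, _ in members.values())
    return sum((cap / total_cap) * (close / open_ - 1)
               for cap, open_, close in members.values())

prior_index = 98.48    # November 1 value from the table
new_index = prior_index * (1 + cap_weighted_return(constituents))
print(f"Rolled-forward index value: {new_index:.2f}")
```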
Jefferies Initiates Coverage of Digital Realty with a $67 Price Target
By David Gross
Wall Street loves vague MBA-speak, and Jefferies continued that tradition yesterday, initiating coverage on DLR with a price target of $67, or about 30% higher than the stock's current level in the low 50s.
In its research note, Jefferies stated that "DLR's dominant market position in the wholesale data center space gives it a major competitive advantage vs peers in regards to taking full advantage of strong fundamentals in the datacenter real estate market, which is poised to continue to experience positive demand/supply fundamentals for the next five years."
So here's the edge they have - they've figured out that DLR has high market share. Amazing insight. Moreover, they've determined that demand/supply conditions will be strong for the next five years. No wait, they didn't say that; they said it would be "positive". Either way, I doubt they have driven out this way in Northern Virginia, where there are cranes all over the place preparing for a new wave of data center clients.
Labels:
Data Center REITs,
DLR
Tuesday, November 30, 2010
Data Center TCO is a Meaningless Number
By David Gross
Few metrics are as overused yet as useless as TCO. Largely developed by sales and marketing to close deals, it has very little connection to financial reality, because it ignores the time value of money, offers little support of any kind for a buy vs. build decision, and typically pulls in a lot of costs that you are going to incur anyway.
With data centers, TCO numbers can get comical, because this is an automated industry, where personnel costs are often a very small share of total outlays. And while vendors love to talk about how they save power, which can also reduce operating costs, many of the products that save power require higher up front capital outlays, providing weak financial returns. So then how are you supposed to measure data center costs? How are you supposed to decide between more power consumption or more capital outlays?
As I wrote last week, data center expenditures should be set to minimize the NPV of cash outlays within operational constraints. This is very different from randomly tagging your PDUs, CRACs, or servers with allocated overhead costs, as TCO models typically do. Moreover, many of the financial decisions that take place involving a data center don't involve money, but time. If you're building your own facility, it's not really a decision to spend more or less, it's a decision to spend now. If you're renting additional space, you're not just deciding how much to spend, but when. And TCO really doesn't apply, because you don't own anything, and if you force some sort of TCO calculation, you need to place a discount rate on future rent payments, each of which occurs at a different point in time.
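Here's a minimal sketch of the discounting I'm talking about, with hypothetical figures - a day-one capital purchase alongside five years of monthly rent, discounted at two different rates to show how the capex share of present value moves with the cost of capital.

```python
# Illustrative only: discount future rent payments and compare them to a
# day-one capital outlay at two different rates. The outlay, rent, and term
# are hypothetical; the point is the sensitivity to the discount rate.

def present_value(annual_rate, monthly_payments):
    r = annual_rate / 12.0
    return sum(p / (1 + r) ** m for m, p in enumerate(monthly_payments, start=1))

capex_outlay = 6_000_000     # up-front purchase of owned equipment and fit-out
rent = [90_000] * 60         # $90k/month of rent for five years

for rate in (0.07, 0.12):
    pv_rent = present_value(rate, rent)
    capex_share = capex_outlay / (capex_outlay + pv_rent)
    print(f"at {rate:.0%}: PV of rent = ${pv_rent:,.0f}, "
          f"capex share of total PV = {capex_share:.0%}")
```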
Specifically, here are some things the industry can do to bring TCO models closer to the financial reality:
1. Stop Summing Costs from Different Time Periods and then Comparing Them to One Another.
This is a common tactic - claiming that half of the "expenses" are capital costs. The problem, of course, is that there is no such thing as a capital expense; there's depreciation of up-front capital outlays.
Moreover, don't say 50% of your "expenses" are capital outlays, say 60% of the present value of your cash outlays are capital expenditures at a 7% discount rate, but this rises to 70% at 12%, which is your corporate cost of capital. If it costs more to borrow in the future, you'll need to look at renting more. This is just one way in which financial data can be used to support important decisions, instead of just validating some vendor marketing department's claim about savings.
2. Stop Making Pie Charts with Cost Categories
Just about every aspect of a data center operation can be rented or bought. A key decision factor then isn't whether power is 15% of costs or 20%, but the fixed cost of turning a rented item into an owned one. In the case of the facility, this is obviously going to be very high, in the case of a blade server, it will be low. This ability-to-buy is far more important than assigning a percentage to facility rent or blade servers because you need both a building and servers regardless of what you decide financially. What matters is if your cash outlays are higher than peers or competitors because you're leasing when you should be buying, or building when you should be renting.
3. Constantly Monitor the Tradeoffs You've Made
TCO often does a poor job of capturing the trade-offs that underlie decision making. For example, if you buy a power-hungry, 9 watts per Gbps Ethernet switch because it has a low port price, you need to monitor prices for power, prices for higher line rate ports, alternate protocols, and alternate topologies, in addition to your corporate cost of capital. The cost justification for such a tradeoff could change, and it won't show up in any static TCO spreadsheet.
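To show what that monitoring looks like in practice, here's a made-up example - a switch that burns 9 watts per Gbps but carries a lower up-front price, versus a more efficient box. The capacity, price gap, electricity prices, and discount rate are all hypothetical, and cooling overhead is ignored to keep it short.

```python
# Hypothetical tradeoff check: does a cheaper, power-hungry switch still win
# once the present value of its extra electricity is counted? All figures are
# made up, and cooling overhead is ignored for simplicity.

def pv_power_cost(watts, price_per_kwh, annual_rate, years):
    annual_kwh = watts * 24 * 365 / 1000.0
    annual_cost = annual_kwh * price_per_kwh
    return sum(annual_cost / (1 + annual_rate) ** t for t in range(1, years + 1))

capacity_gbps = 480                      # e.g., a 48-port 10GbE top-of-rack switch
price_advantage = 8_000                  # cheap box costs $8k less up front
extra_watts = (9 - 5) * capacity_gbps    # 9 W/Gbps vs. a 5 W/Gbps alternative

for kwh_price in (0.08, 0.15):
    extra_power_pv = pv_power_cost(extra_watts, kwh_price, 0.10, 5)
    winner = "efficient box" if extra_power_pv > price_advantage else "cheap box"
    print(f"${kwh_price:.2f}/kWh: PV of extra power = ${extra_power_pv:,.0f} "
          f"-> {winner} wins")
```

With these made-up numbers the cheap box wins at $0.08/kWh and loses at $0.15/kWh - exactly the kind of flip a static TCO spreadsheet never revisits.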
Ultimately, as time passes, corporations will be spending more on renting, building out, and operating data centers. Those that move beyond TCO models stand to gain significant financial benefits over those who try to force numbers into convenient categories that have little to do with financial reality.
Labels:
TCO Models
Monday, November 29, 2010
Focus, not Cost "Synergies", Key to Mellanox-Voltaire Merger Success
By David Gross
InfiniBand IC supplier Mellanox announced today that it is acquiring long-time customer and fellow Israeli InfiniBand technology developer Voltaire. The acquisition price of $8.75 a share represents a more than 35% premium over Friday's close of $6.43, and is net of $42 million of cash held by Voltaire. Mellanox is financing the deal entirely out of its cash balance of $240 million.
Mellanox is down 4% on the news to $24 a share, while Voltaire is up 34% to $8.65, leaving very limited room for risk arbitrage on the deal.
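For readers who want the arithmetic behind that remark, it's straightforward:

```python
# Quick arithmetic behind the premium and the remaining arbitrage spread,
# using the per-share figures quoted above.

deal_price = 8.75        # cash offer per Voltaire share
market_price = 8.65      # where Voltaire traded after the announcement
prior_close = 6.43       # Friday's close

print(f"Offer premium over Friday's close: {deal_price / prior_close - 1:.1%}")
print(f"Remaining gross arb spread: {deal_price / market_price - 1:.1%}")
```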
While both companies have gotten into the Ethernet market over the last two years, the deal only makes sense in the context of InfiniBand, which as a niche technology, does not offer a chip supplier billions of ports over which to amortize development costs. Mellanox already offers both ICs and adapter cards. Moreover, and very importantly, InfiniBand switches are low cost, low memory, high performance boxes with stripped down operating systems and forwarding tables. The intellectual property router and Ethernet switch makers put into system design and network O/S is less valuable here as a result. The message acceleration software and management tools associated with InfiniBand devices require far less R&D than new ASICs or network operating systems for high-end, modular Ethernet switches.
What's likely to happen here is Wall Street will do its usual fretting over whether the proposed operating cost reductions, which in this case are $10 million, will be achieved, whether the price is reasonable, and what customers will think. Additionally, at least 30 hedge fund managers are likely to ask the same questions on the strategic impact of owning switches, InfiniBand vs. Ethernet, and will seek "more color" on how the integration is going. But none of this will really matter. The key to success here will be the extent to which the new company focuses on InfiniBand. Outside of bridging products and maybe 40G NICs, the new company needs to stay out of the Ethernet market, which already has enough suppliers, and treat Fibre Channel-over-Ethernet as the toxic technology it already has proven to be for Brocade.
Labels:
InfiniBand,
MLNX,
VOLT
Friday, November 26, 2010
Cisco and Brocade - Great Technology vs. Proprietary Technology
By David Gross
Was just reading through an article over at Investopedia on Brocade, and it was so off-base, I thought I would give a counterpoint here. The writer claimed that Brocade had great technology but poor ability to sell, that there was something wrong with posting non-GAAP earnings, that there was good reason to believe Brocade would take share, and that it would benefit from the market shift to Fibre Channel over Ethernet.
All four of these claims are either wrong or based on random speculation. But the strangest one is his point about great technology but poor ability to sell. If Brocade was so bad at selling, it wouldn't have OEM deals with IBM, HP, Hitachi, Oracle, Dell, and others. The company is fairly strong at sales. And its technology might be "great", but on the Ethernet side, it has to deal with proprietary Cisco technologies like VTP, ISL, and CDP which have long held Foundry/Brocade's Ethernet revenue in the $100-$150 million range per quarter, while Cisco's has soared past $3.5 billion.
As I've written here before, Wall Street simply doesn't seem to know about Cisco's proprietary routing and VLAN technologies, and has no idea how important they are to the business. It's understandable considering that Cisco IR and its executives rarely talk about them, and instead focus investor presentations on flashy marketing themes about e-learning, conferencing, "human" networks, and so forth, and I can't fault them if so many Wall Streeters are going to center their analysis on investor relations spin. Nonetheless, EIGRP, VTP, ISL, and other proprietary VLAN and routing protocols are to Cisco what Windows is to Microsoft.
While Cisco is setting itself up for mediocre returns by wasting capital on silly overdiversifications into conferencing and video, Brocade still hasn't been able to grow its Ethernet business as fast as Cisco has grown its own, even though Cisco is more than 20 times larger. Cisco posted a 25% y/y revenue increase for its switch business in its most recent quarter, while Brocade posted just a 9% increase, and has been selling its Ethernet products at gross margins 30 points lower than Cisco's.
Now, at the end of the Investopedia article the writer says that perhaps Wall Street thinks Brocade's SAN switches could become irrelevant. But for the last three years, Brocade has acted like it thinks its SAN switches could become irrelevant, and has told a fancy tale about "convergence" and Fibre Channel-over-Ethernet that has made it seem like the company has no faith in the SAN market it dominates. And for that, you can't blame Wall Street.
Labels:
BRCD,
CSCO,
Fibre Channel over Ethernet
Wednesday, November 24, 2010
Cisco Buyback Program Grows while its Market Share Drops
By David Gross
One of the questions I'm answering a lot these days is whether F5 and Riverbed are overvalued. But no one's wondering if the same is true for their "dominant" competitor Cisco. In response to its limp stock performance, Cisco recently announced that it will increase the cap on its share buyback program by $10 billion.
Share buybacks used to be seen as some sort of internal endorsement of the company. But that was before technology companies started piling up cash without paying much in the way of dividends. Now they're often used as gimmicks by cash rich companies to try to boost a stock that's treading water while smaller competitors are doing something far more important to their future stock price - taking market share.
While Cisco still owns the switching and routing markets, its overdiversifications into other areas have caught up with it, and now it's doubling down on these bad investments by throwing more shareholder capital at a buyback program that will do nothing to stop John Chambers' wild ride into conferencing, consumer devices, and a product that will make the Newton look like a success, the Cius. After years of tremendous success naming products after numbers, Cisco somehow decided to go with something that sounds like it was ripped out of a high school Latin textbook.
It's kind of interesting how the stock has gone nowhere since John Chambers got bored selling switches and routers. Over the last five years, Cisco has spent over $10 billion buying companies like Scientific-Atlanta, WebEx, Pure Digital, and others that have brought it closer to end users, and expanded its presence out of the guts of the network. Over that same period, the stock has gone up 14%, or about 2% per year. Juniper, meanwhile, which hasn't had the resources to buy all sorts of consumer device companies, has seen its stock rise a more respectable 42% over the same period. Riverbed and F5 are up over 300%. So how did Cisco become such an underperformer?
Any Idiot Could Run This Joint
Former Fidelity fund manager Peter Lynch once said that when evaluating companies, he likes to hear that "any idiot could run this joint, because someday, any idiot probably will". Cisco found a person who could fit that description in 1995, but it did eventually catch up with them.
For all his attempts to overdiversify the company, John Chambers still has to contend with the fact that nearly two-thirds of his product revenue comes from switches and routers. While those products aren't exciting enough for Chambers or Cisco IR to talk about these days, they're still growing in spite of their size, with switch revenue up 25% year-over-year. Moreover, these devices are built on routing technology Cisco built internally, and the switch business has grown out of two acquisitions, Crescendo and Kalpana, which were made before Chambers took over. The third major switch acquisition, Grand Junction Networks, was made in 1995 soon after he became CEO. The tens of billions spent on new companies since then have done remarkably little to add to the top line, although they've done a lot to make Chambers' speeches more interesting, because he wasn't about to go all over the world talking about enhancements to OSPF or spanning tree.
Before Cisco really lost its focus, it was a lot better about killing off products from bad acquisitions. In 2001, it realized its $500 million buyout of Monterey Networks was a mistake, that it was not going to compete strongly in optical switches, and it killed the product line. Yet its $6 billion acquisition of ArrowPoint created a product line, the CSS series, which is getting crushed today by F5 and Citrix's NetScaler. But it won't admit that its strength is in layer 2-3, not layer 4-7, regardless of how bad a beating it takes. Chambers used to say something about being 1 or 2 in a market, like GE traditionally was, but now market position is losing out to market hype, and it's remarkable how much is being written about the Cius and telepresence, and how little about the proprietary VLAN and routing protocols which are the foundation of Cisco's $100 billion-plus market cap.
Cisco can blame Federal spending levels, the economy, Chambers' receding hairline, or whatever it wants for its current struggles. But there's one thing the company can do to fix things, one thing that will stop the share losses to smaller competitors, one thing that will allow it to dominate for decades - focus on routers and switches. But this won't happen with the current CEO.
Instead of buying back shares, the board should be bringing in new leadership.
Labels:
CSCO
Tuesday, November 23, 2010
Brocade Guides Down 3%, Stock Drops 5%
By David Gross
Brocade fell 5% in after-hours trading Monday after guiding its revenue midpoint for next quarter down from $558 million to $542 million. The culprit behind the decline was the same as it was for Cisco - the government. The company said it could see a drop in Federal Ethernet revenue of $20-$25 million due to delays in government contracts. Now I remember when Foundry transformed its Reston, VA office from an AOL/Cable & Wireless/telecom focus to a Federal focus in 2001 and 2002. Almost seems like the tide is turning in the other direction now. Nonetheless, Brocade still gets 23% of its revenue from the Federal government.
Overall, revenue for its fiscal 4th quarter, which ended October 31st, was up 5% y/y to $550 million. Ethernet revenue was up slightly to 26% of corporate revenue, compared to 25% a year ago. The balance sheet remains unusually ugly for a network equipment manufacturer, with just over $330 million in cash and equivalents, but over $900 million in long-term debt, most of which came from financing the Foundry acquisition. Nonetheless, the company is producing free cash, and its cash balance was up $40 million sequentially. Moreover, gross profit was up 20% y/y to $325 million.
While I'm no fan of either Fibre Channel over Ethernet, or Brocade's all-things-to-everyone product strategy, this wasn't a bad quarter and there are plenty of other reasons to sell the stock besides a 3% drop in revenue guidance.
nlyte Software Raises $12 Million C Round
By David Gross
A month after Emerson announced its Trellis DCIM (Data Center Infrastructure Management) platform, nlyte Software, which makes capacity planning tools for data centers, announced it had raised a $12 million C round. Whatever the valuation was, investors surely must have been encouraged by Emerson's $1.2 billion acquisition of Avocent last year.
While often mixed together with feel good PR about making data centers more green, DCIM tools are increasingly being used to bring some order and structure to the often chaotic process of building out data centers. In nlyte's case, it holds a patent on a method of allocating servers into racks.
While I don't like all the greenwashing that's going on with DCIM products, they do have the potential to improve the IRR on corporate investments in data center assets, both through their ability to track assets and by enabling data center managers to plan more efficiently.
Monday, November 22, 2010
Data Center Cash Outlays vs. Data Center Costs
By David Gross
An odd statement that I hear a lot is that "power is the biggest cost in a data center". This idea has been repeated so frequently, it's become an assumption for some industry followers, especially in the press. Now while power is a major cost, it's often not the largest, especially because data center tenants really need to distinguish costs from cash outlays to get the best returns on their investments in power and space.
Part of the problem with some approaches to evaluating data center costs is that there's a bad tendency to force a number next to each category of expenses. Assign one number to servers, another to storage, one to power, and so forth. This distorts the true economic picture of a data center operation, because recurring personnel costs are extremely low, which means every data center owner or tenant has the choice of buying or renting just about every aspect of their data center operation. But if you try to boil everything down to an amortized cost, the true economics of the operation get lost in that forced calculation.
Actual Operating Costs vs. Accounting Operating Costs
Power can be self-generated with wind, solar, or backup generators, but for the most part it gets paid for on a monthly basis - either by the amp or by the kWh. If you look at a typical co-lo contract, it comes out to about 25-30% of total charges. In terms of public companies, CoreSite reports that about 25% of its revenue comes from power.
In addition to power, people, and rent (in the case of a colo/REIT customer), the other major operating cost is telecom circuits, which can become significant for a heavily cross-connected customer, a customer buying extended cross-connects, or one needing 100 Megabits or more out of the facility. Equinix gets 20% of its revenue in North America from telecom services, and that doesn't include what its customers pay third party transit providers. While telecom costs can be much lower than power expenses, they are also far more variable based on customer requirements.
But once you get past power, rent, and telecom services, there are all the servers, storage arrays, and network equipment boxes to buy. In one scenario, I estimated the loaded capital cost per server for all of this equipment, plus software licenses, to be about $8,000, or around $25,000 per square foot. Now the temptation here is to amortize this over 36 or 60 months and call it something-hundred per foot per month. The problem with doing this is that you can lease the equipment, finance it through low cost debt or high cost equity, and you can cluster purchases in a handful of months - making a straight-line depreciation figure a financial accounting abstraction that has nothing to do with your economic reality. The point is that these are fixed costs with a lot of financing and purchase options, and throwing one number out there to cover them buries the economics of owning (or leasing) these assets.
What I recommend is not to force a number in per month, but to aggregate the cash outlays, and then NPV them at the corporate cost of capital, as well as other interest rates to determine the sensitivity to your discount rate. This is the only way to get a true picture of the economic costs. Additionally, leasehold improvements can be incorporated into this analysis as well. Then the objective should be to minimize the NPV, not the amortized monthly cost, because no matter what you're paying for power, cross-connects, or bandwidth, your data center is an asset-heavy, not a people-heavy, operation.
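Here's a short sketch of that approach, with hypothetical numbers - the actual cash outlays laid out on a monthly timeline, including a clustered equipment refresh, then discounted at a few different rates to test sensitivity to the discount rate.

```python
# Hypothetical sketch: aggregate actual monthly cash outlays (with purchases
# clustered in specific months) and discount them at several rates, instead
# of spreading capital into a flat amortized monthly charge.

monthly_outlays = [0.0] * 61               # month 0 through month 60
monthly_outlays[0] += 2_400_000            # initial server/storage/network purchase
monthly_outlays[24] += 1_200_000           # refresh clustered in month 24
for m in range(1, 61):
    monthly_outlays[m] += 75_000 + 30_000  # monthly rent/power plus telecom circuits

def npv_of_outlays(annual_rate, outlays):
    r = annual_rate / 12.0
    return sum(cash / (1 + r) ** m for m, cash in enumerate(outlays))

for rate in (0.07, 0.10, 0.14):
    print(f"NPV of five-year cash outlays at {rate:.0%}: "
          f"${npv_of_outlays(rate, monthly_outlays):,.0f}")
```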
Friday, November 19, 2010
SC10 and Optical Components for the Data Center
By Lisa Huff
It's a beautiful time of year in New Orleans for the top supercomputing companies to show their wares. While I wouldn't consider SC10 exactly the place to sell optical components, there were a few new developments there. SCinet – the network that is always built at the top HPC conference – boasted 100G Ethernet as well as OTU4. Alcatel-Lucent, Ciena, Cisco, Force10 and Juniper, among others, donated equipment to build this network. Module vendors Avago Technologies, Finisar and Reflex Photonics contributed QSFP and CFP 40G and 100G devices to the cause.
Meanwhile, the Ethernet Alliance was showing two demonstrations in its booth – converged networks running FCoE and RoCE over 40GigE and 100GigE. Nineteen different vendors participated in the demos, which were run by the University of New Hampshire Interoperability Lab. Both CFPs and QSFPs were used.
Some of you may wonder why I would attend SC10. I keep my eye on the HPC market because it usually indicates where the broader data center market will be in a few years. And, in fact, even medium-sized businesses’ data centers with higher computational needs are starting to resemble small HPC centers with their server clusters using top-of-rack switching.
Most of the top optical transceiver vendors, and even some of the smaller ones, see this market as an opportunity as well. While InfiniBand still uses a lot of copper interconnects, this is changing at 40G and 120G. QSFP was the standout for 40G IB displays, and CXP AOCs were shown for 120G as well. Avago Technologies was the first to announce a CXP module at the show.
Some believe the CXP will be short-lived because progress is being made on 4x25 technologies – Luxtera announced its 25G receivers to go with the 25G transmitter it announced earlier this year. But it will still be a few years before all of the components for 25G are ready for systems developers to spec in. Tyco Electronics had a demonstration at its booth showing it is possible to run 28G over eight inches of a PCB, but this was still a prototype. And Xilinx has announced a chip for 28G electrical transceivers that can be used with this board design. But none of these devices are even being tested by equipment manufacturers yet, and the CXP has already been adopted by a few. So I think the CXP may have more life in it than some people think.
Equinix Expands in San Jose
By David Gross
One of the most important keys to success in the wholesale/co-lo market has been the ability to dominate certain geographies. While Wall Street is still going on with its vague, misguided approaches to understanding demand, businesspeople who don't need to ask for "a little more color" get that they need high share within the metro areas in which they operate, regardless of how big their competitors are in other cities. For this reason, it was good to see Equinix announce its newest data center, not in Singapore, Sydney, or Slough, but San Jose.
While some companies overdiversify by offering too many services, Equinix has done so by entering too many markets, especially those where it has limited share. To succeed in Los Angeles, it will have to invest a tremendous amount of cash in marketing in order to reach media companies and content distributors already in CoreSite buildings. Dallas will always be a tough market, because Equinix has no chance of catching up to the 2 million square feet owned by Digital Realty. Northern Virginia, a former Equinix stronghold, should have another 1 million square feet of Equinix-owned space right now, but expansions elsewhere diverted capital to other markets, and created a nice opening for Digital Realty to come in and develop its Devin Shafron properties.
Equinix's newest facility in San Jose is its eighth. Its first phase includes 1,098 cabinets, with a full build out of 2,600 and total space of 165,000 square feet. In its press release announcing the opening of the facility, the company proudly pointed to its legacy serving Silicon Valley companies, and to the cross-connect opportunities across its regional footprint. But both would be even greater if it wasn't stretching its capital budget to cover large markets already dominated by someone else.
Thursday, November 18, 2010
North Carolina Hype Getting Out of Hand
By David Gross
North Carolina recently added Indian IT Services provider Wipro to its growing list of private data center builders, and it's been off to the races for industry pundits, some of whom are hyping North Carolina as the next great location for the industry. I've written a couple articles about North Carolina, and agree that it's doing a good job recruiting brand name companies, but any comparisons to Silicon Valley and Northern Virginia are ridiculous.
The public data center market continues to be driven by bandwidth, while the private market is driven by power and tax incentives. Not surprisingly, they're heading into different locations as a result. Moreover, these fundamental attributes aren't changing much. No one is talking about building their own private data center in Santa Clara or Ashburn, and none of the public providers are heading to rural Oregon or the banks of Lake Ontario.
Investors need to get beyond the hype and conventional thinking that spreads quickly, and look instead at the factors that go into data center site decisions, which change far less frequently than many other aspects of the data center industry.
Labels:
North Carolina Data Centers
Wednesday, November 17, 2010
InfiniBand's Growth Slows in Supercomputing
By David Gross
The latest semi-annual Top500 survey is out this week, in conjunction with the SC10 show, and InfiniBand has posted fairly modest gains over the last six months, with implementations growing from 207 to 214 of the world's largest supercomputers.
In the June survey, InfiniBand showed major gains over the November 2009 tally, growing from 181 to 207 system interconnects, up from 151 in the June 2009 count. Five years ago, InfiniBand was used in just 27 systems, and trailed not just Ethernet, but the proprietary interconnect Myrinet. Back then, over 40% of the world's top 500 supercomputers used proprietary or custom interconnects, while today just 11% do. Ethernet has held steady over this period, dropping slightly from 250 to 228 of the top 500, and most of InfiniBand's gains have come at the expense of Myrinet, Quadrics, and other proprietary interconnects.
While Ethernet still has a slight lead in number of systems, the average InfiniBand-connected supercomputer has approximately 70% more processors than the average Ethernet connected supercomputer. With proprietary interconnects essentially wiped out, any future share gains for InfiniBand will now have to come at Ethernet's expense.
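For anyone who wants the share math spelled out, here's a quick back-of-the-envelope calculation in Python. The system counts are the ones cited above; the percentages are simply derived from the 500-system list.

# Interconnect share of the Top500 list, using the counts cited in this post.
TOP500 = 500
counts = {
    "InfiniBand, Nov 2010": 214,
    "InfiniBand, Jun 2010": 207,
    "InfiniBand, 2005": 27,
    "Ethernet, Nov 2010": 228,
}
for label, systems in counts.items():
    print(f"{label}: {systems} systems, {systems / TOP500:.0%} of the list")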
Labels:
InfiniBand
Tuesday, November 16, 2010
Lawsuit Filed Against Verizon's Upstate NY Data Center
By David Gross
Verizon recently won approval from the Town of Somerset to build a 1 million square foot data center on Lake Ontario, about 40 miles east of Niagara Falls. But now the Buffalo News is reporting that the owner of the farm across from the proposed site is suing the town, claiming it didn't go through the appropriate environmental, planning, and zoning procedures.
There's already a 675 MW coal-fired plant next door to the proposed location, so the site is not completely pristine. And living in the midst of Northern Virginia's many data centers, I think the farm's owner, Mary Rizzo, would benefit from seeing how these facilities often sit across the street from $1 million housing developments. Hastings Drive in Ashburn, for example, is filled with cranes and construction vehicles as DuPont Fabros builds out its ACC5 project. On one side of this data center are more data centers, two owned by DuPont Fabros, one by Digital Realty, and five owned by Equinix. And on the other are expensive houses, separated by nothing more than a short construction fence and a narrow road.
I don't know how much legal merit the suit has, but if buyers of upscale DC-area homes can learn to live next to data centers, I'm sure upstate NY farmers can as well.
Monday, November 15, 2010
Top-of-Rack Switching - a Power Saver?
By David Gross
In just about any survey about energy efficiency, data center managers reply that it's their number one concern, a major priority, or give some other indication that they are highly focused on saving energy. Then they go out and buy power-hungry, high-density GigE line cards for their switches and routers.
Many technology surveys ignore the fact that buyers often say one thing and do another, but investors can't. And when it comes to network equipment, the best ways to reduce Watts per Gbps include buying an OC-768 line card, which costs over half a million dollars, or an InfiniBand switch, which typically requires a Clos topology – still uncommon outside of supercomputing.
As an example, Voltaire's 4036E offers 1.36 Tbps of InfiniBand switching at just 240 Watts, or 0.18 Watts per Gbps. Meanwhile, the 32-port GigE/1-port 10GigE uplink card for the Nexus 7000 consumes 385 Watts, or 9.17 Watts per Gbps – roughly 50x more – and there are far more Nexus switches than Voltaire devices in most data centers.
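The Watts-per-Gbps arithmetic is simple enough to check yourself. Here's a small sketch using the power figures cited above; counting the Nexus card's usable throughput as 32 x 1 Gbps plus one 10 Gbps uplink is my assumption, not a vendor spec.

# Watts per Gbps for the devices cited above (power figures from the post).
def watts_per_gbps(watts, gbps):
    return watts / gbps

voltaire_4036e = watts_per_gbps(240, 1360)          # 1.36 Tbps InfiniBand switch
nexus_gige_card = watts_per_gbps(385, 32 * 1 + 10)  # 32-port GigE + 1-port 10GigE uplink card

print(f"Voltaire 4036E:  {voltaire_4036e:.2f} W/Gbps")    # ~0.18
print(f"Nexus GigE card: {nexus_gige_card:.2f} W/Gbps")   # ~9.17
print(f"Ratio: {nexus_gige_card / voltaire_4036e:.0f}x")  # roughly the 50x cited above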
There is often a trade-off between power consumption and features, because much of a high-speed router or switch's power is tied to TCAMs, packet processing, and the memory and forwarding requirements of large routing tables. However, with few of these sophisticated features, Top-of-Rack switches offer tremendous bandwidth with limited power. Force10's S4810 ToR switch, for example, requires just 0.44 Watts per Gbps, while its larger Exascale switch has a 10-port 10GigE line card that needs over 3 Watts per Gbps. Configuration flexibility has a price.
While many data center managers are still resisting ToR switches, there is roughly a 40% Watts-per-Gbps improvement just from upgrading from GigE to 10GigE. And while there is a significant cost penalty at 10GigE in going from short-reach 850nm optics to longer-reach 1310nm and 1550nm optics, there is no power penalty. Here again, the least capital-efficient networking options are the most power-efficient.
While the IRR on buying more expensive ports to save power is often negative, there are at least a couple options developing to reduce Watts/Gbps that don't require an entirely new topology, or a multi-million dollar router.
Labels:
Top-of-Rack
Akamai Leasing Another 16,000 Square Feet from CoreSite
By David Gross
While the Netflix-Akamai/Level 3 story is getting more air time now than "Panama" did in 1984, other events are still happening in the data center industry regarding CDNs, including news that Akamai has leased another 16,061 square feet from CoreSite, according to the data center provider's most recent SEC filing.
In its quarterly newsletter, CoreSite mentioned that Akamai was now peering at 40 Gbps on the provider's Any2 IP exchange, a hint that more space could have been on the way. With the additional capacity, Akamai will be leasing over 29,000 square feet from CoreSite, making the CDN supplier CoreSite's 4th largest customer by annualized rent. Facebook is still #1, and the I.R.S. is #2, with the tax collectors taking over 120,000 square feet of office space at the 55 South Market building in San Jose, where the successor to the MAE West exchange is located.
Either way, Akamai's growing presence is a major win for CoreSite against Equinix, which, as I mentioned in the CoreSite earnings preview, is struggling in LA against CoreSite's One Wilshire leasehold and its 256,000 square foot 900 North Alameda building - properties that, along with the company's New York City building, are making CoreSite a leading choice for media companies, including NBC Universal.
Even though Akamai's annualized rent will now top $3 million at CoreSite, it will still trail the I.R.S. in how much revenue it produces for the REIT. And no matter how much rent the tax collectors pay to CoreSite, it's nowhere near as exciting as talking about Netflix and Level 3 as frequently as Z-100 played "Panama" in 1984.
Labels:
COR,
Data Center REITs
Friday, November 12, 2010
CoreSite Earnings Preview
By David Gross
CoreSite has its first earnings call as a public company today. A few things to look out for:
- Employee Productivity - While incorporated as a REIT, CoreSite did an un-REIT-like $600k in revenue per employee in 2009, which is about what Equinix did, and about 75% lower than Digital Realty and DuPont Fabros. Part of this is because of size, and also because the company needs more ops staffing to accommodate its Any2 exchanges, including hiring for an Ops Support Center. At this point, investors cannot realistically apply REIT-like metrics such as NOI and cap rates to CoreSite the way they can to DLR and DFT, because CoreSite has been structured like a REIT that's operating as an interconnection company.
- Media Clients and Los Angeles - The company's customers include NBC Universal and Akamai, and in addition to its leasehold at One Wilshire, it has over a quarter million square feet at 900 North Alameda Street, making Los Angeles its largest market, with over 40% of its space. CoreSite is giving Equinix major problems in LA, especially with Equinix building away from the downtown carrier hotels and trying to get everyone to come out to its centers next to LAX.
- Performance of the non-Data Center Assets - About a third of the company's portfolio is office and light industrial space. Much of this comes from the over 200,000 square feet of office space in the MAE West building it owns at 55 South Market in San Jose, a 15-story property whose largest tenant is the I.R.S.
Labels:
COR,
Data Center REITs
Updated North Carolina Map with Facebook
By David Gross
A few weeks ago, I posted a story with a map of the T5, Apple, Google, and American Express data centers in North Carolina. I've just updated the map to reflect this week's news about Facebook's $450 million facility.
One odd take I've seen on the story is how "green" the center is. If the economics of the decision work, there shouldn't be a need to greenwash it with some odd PR about clean power. North Carolina is doing a great job attracting self-built data centers, but unlike upstate NY or the Pac Northwest, it has poor conditions for wind power, and no multi-Gigawatt hydro plants. But so what? Is everyone afraid of Greenpeace protesting at their site like they did in Prineville?
Where wind and hydro are available, they offer tremendous economic benefits, because of their low variable costs. Coal's biggest problem isn't Greenpeace, but the rapidly rising capital cost of constructing coal-fired plants, which has surpassed $2,500 a kW. The need for alternatives is an economic one, not just an environmental one.
View North Carolina Data Centers in a larger map
Wednesday, November 10, 2010
PCIe – An I/O Optical Interconnect Soon?
By Lisa Huff
The Peripheral Component Interconnect (PCI) is that bus in computers that connects everything back to the processor. It has been around for as long as I can remember having workstations. But in recent years, it has been morphing in response to the need to connect to the processor at higher data rates.
PCI Express (PCIe) GEN1 defined PCIe over cable implementations in 2007. Molex was instrumental in helping to define this, and up until now it has been a purely copper solution using the company's iPass™ connection system. This system has been used mainly for "inside-the-box" applications, first at 2.5G (GEN1) and then at 5G (GEN2). The adoption rate for PCIe over cable has been slow; it is mainly used for high-end multi-chassis applications including I/O expansion, disk array subsystems, high-speed video and audio editing equipment, and medical imaging systems.
PCIe GEN3 runs at 8G, and some physical-layer component vendors are looking to use an optical solution instead of the current copper cable, while also trying to move PCIe into a true I/O technology for data center connections – servers to switches, and eventually storage. While component vendors are excited about these applications, mainstream OEMs do not seem interested in supporting it. I believe that is because they see it as a threat to their Ethernet equipment revenue.
CXP AOCs seem to be a perfect fit for this GEN3 version of PCIe, but neither equipment manufacturers nor component suppliers believe they will reach the price level needed for this low-cost system - the expectation is that the optical interconnect should cost tens of dollars, not hundreds. CXP AOCs may still be used for PCIe GEN3 prototype testing as a proof of concept, though if the first demonstrations are any indication, even that may not happen. PCIe GEN3 over optical cable was recently shown by PLX Technology using just a one-channel optical engine next to its GEN3 chip with standard LC jumpers. PLX and other vendors are looking toward using optical engines with standard MPO patch cords to extend this to 4x and 8x implementations.
Columbia University and McGill University also demonstrated PCIe GEN3, but with eight lanes over a WDM optical interconnect. This is obviously much more expensive than even the CXP AOCs and is not expected to get any traction in real networks.
Another factor against PCIe as an I/O is the end user. In a data center, there are typically three types of support personnel – networking (switches), storage/server managers, and data center managers. While the server managers are familiar with PCIe from an "inside-the-box" perspective, I'm not sure they are ready to replace their Ethernet connections outside the box. And the others may have heard of PCIe, but probably aren't open to changing their Ethernet connections either. They can run 10-Gigabit on their Ethernet connections today, so they really don't see a need to learn an entirely new type of interconnect in their data center. In fact, they are all leaning toward consolidation instead – getting to one network throughout the data center, like FCoE. But, as I've stated in previous posts, this won't happen until it is shown to be more cost-effective than just using 10GigE to connect their LANs and SANs. The fact that PCIe I/O could be cheaper than Ethernet may not be enough, because Ethernet is pretty cost-effective itself and has the luxury of being the installed base.
Correction from Monday's Story on Digital Realty and the Brick S-house
By David Gross
On Monday, I attributed the initial comment about the "Brick S-house" at the IMN Forum on Data Center Investing to Digital Realty/GI Partners Managing Director Rick Magnuson. While he used the term, it was in fact coined by Jeff Moerdler of Mintz, Levin, Cohn, Ferris, Glovsky and Popeo during the pre-keynote banter, when I was still in the registration line along with the people attending the Distressed Hotels event. Some of this blog's readers who were already in the conference noticed that I credited Magnuson, and not Moerdler, with the term, so I wanted to publish a correction here, and let all of our readers know that neither I nor Lisa would ever want to incorrectly attribute any quote made at a conference about a "sh--house".
In all seriousness, it was a great event, with a lot of good detail on site selection issues, financing, pricing, and competition. I highly recommend hedge fund and mutual fund PMs take a look at the 2011 event if it continues with these themes.
Tuesday, November 9, 2010
Netflix Story is No Reason to Sell Akamai
By David Gross
Akamai is down nearly 5% today after CDN expert Dan Rayburn published a story in Seeking Alpha saying that Netflix is leaving Akamai, and handing over its business to Level 3 and Limelight. While he says he cannot precisely nail down how much revenue is associated with this, he estimates it's $10-$15 million. So Wall Street is knocking the company down 5% for losing what amounts to 1% of annual revenue. Yet another emotionally charged expectations multiple, although the 5 here (5%/1%) is lower than the double digits we've seen with Riverbed and Equinix.
The flip side is that not only is Limelight soaring, but Level 3 is out of the delisting zone, back over a dollar - an important development considering it got a warning from NASDAQ last week for its stock spending too much time trading under a buck, meaning it would need to reverse split or hold above a dollar to avoid being booted down to the OTC market.
Level 3 is up 14% on this news, which, like Akamai's, would amount to about a 1% shift in revenue based on Rayburn's estimates, which means it's getting an expectations multiple of 14, almost as high as Equinix's record 17 when it dropped 35% after guiding down 2%. What makes this even sillier is that Akamai's cost structure hasn't changed a bit, and it still spends half as much per dollar of revenue on bandwidth as Limelight, and generates more revenue from CDNs than Level 3 does from managed hosting, colocation, Ethernet, wavelengths, IP/VPNs, wireless backhaul services, managed WAN Optimization, SIP Trunking, Wholesale VoIP, PRIs, and CDNs combined.
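For anyone keeping score, the "expectations multiple" is just the percentage move in the stock divided by the percentage of revenue actually at stake. A quick sketch, using the rough revenue-impact estimates cited above:

# Expectations multiple: stock move (%) divided by revenue at stake (%).
def expectations_multiple(stock_move_pct, revenue_impact_pct):
    return stock_move_pct / revenue_impact_pct

print(expectations_multiple(5, 1))   # Akamai: ~5% drop on roughly 1% of revenue
print(expectations_multiple(14, 1))  # Level 3: ~14% gain on roughly 1% of revenue
print(expectations_multiple(35, 2))  # Equinix: 35% drop after guiding down 2% (~17)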
Part of the reason there is such an overreaction from the manic buyers and panic sellers on Wall Street is that the customer is Netflix, which is heavily followed, and whose executives can't go to the bathroom without some analyst pondering the greater meaning. If a less exciting Akamai customer, like the Food and Drug Administration, or the National Center for Missing and Exploited Children, were going over to Level 3 or Limelight, I seriously doubt we'd see this response, even if the revenue opportunity was the same.
Labels:
AKAM