It is our opinion that Google (GOOG) has designed and deployed home-grown 10GbE switches as part of a secret internal initiative, launched after it realized that commercial options could not meet the cost and power-consumption targets its data centers require.
This decision by Google, while small in terms of units purchased, is enormous in terms of the disruptive impact it should have on 10GbE switching equipment providers and their component supply chains. It is as if a MACHO had just arrived in the enterprise networking business and the orbits of the existing satellites had begun to shift without observers knowing why – until now.
We were watching shipments of SFP+ components for 10GbE in the market but simply couldn’t account for their end destination – sort of an optical component dark matter problem. After a great deal of investigation we have reached the following opinion:
Through conversations with multiple carrier, equipment, and component industry sources, we have confirmed that Google has designed, built, and deployed home-grown 10GbE switches to provide server interconnect within its data centers. This closely parallels Google’s effort to build its own server computers (excellent article here). Google realized that because its computing needs were very specific, it could design and build machines that were cheaper and lower power than off-the-shelf alternatives. That decision had a profound impact on server architecture and influenced the market’s move toward low-power, high-density solutions that Sun (JAVA), Intel (INTC), and AMD (AMD) now embrace.
It now appears that the process Google trailblazed in the server market will repeat itself in the enterprise switching market. Given the relative dearth of low-cost 10GbE switching solutions, it isn’t surprising to see Google revisit this approach.
We believe Google based its current switch design on Broadcom’s (BRCM) 20-port 10GbE switch silicon (BCM56800) with SFP+-based interconnect. It is likely that Broadcom’s 10GbE PHY is also being employed. This would be a repeat of the winner-take-all scenario that played out in 1GbE interconnect, and vendors of standalone 10GbE PHY silicon (AMCC (AMCC), Vitesse (VTSS.PK), NetLogic/Aeluros (NETL)) should take close note. Broadcom’s role in hollowing out equipment is something we previously profiled in depth (see Cisco’s Fear of a Broadcom Planet).
What is interesting about Google’s approach is that it has eschewed traditional 10GBASE optical standards and instead adopted off-standard solutions that better suit its needs for time-to-market, power, port density, and cost. While Google makes use of the SFP+ cage format, it does not use the receive-side electronic dispersion compensation (EDC) function typically associated with SFP+. Instead, Google appears to employ a combination of twinax cabling for short-reach (<10m) intra-rack links and a motley 850nm SR-like format. Off-the-shelf SR optical modules appear to work well up to 100m without receive equalization. Ironically, Finisar (FNSR) proposed such a solution several years ago.
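The reach-based split described above can be sketched as a simple selection rule. This is purely illustrative: the function name and thresholds are our own rendering of the distances cited (twinax below roughly 10m, SR-style optics without EDC up to roughly 100m), not Google’s actual design logic.

```python
# Hypothetical sketch of a low-cost 10GbE interconnect choice by cable reach,
# using the distance figures cited in the text. Names and thresholds are
# illustrative assumptions, not Google's real design rules.

TWINAX_MAX_M = 10       # SFP+ direct-attach copper, intra-rack reach
SR_NO_EDC_MAX_M = 100   # off-the-shelf SR optics without receive equalization

def pick_interconnect(distance_m: float) -> str:
    """Return the cheapest cited 10GbE link type for a given cable run."""
    if distance_m < TWINAX_MAX_M:
        return "twinax"          # short-reach copper inside the rack
    if distance_m <= SR_NO_EDC_MAX_M:
        return "sr-no-edc"       # 850nm SR-like optics, no EDC
    return "standard-optics"     # beyond 100m, standards-based modules

print(pick_interconnect(3))    # twinax
print(pick_interconnect(60))   # sr-no-edc
print(pick_interconnect(250))  # standard-optics
```

The point of the sketch is simply that dropping EDC is viable when every link in the data center falls under one of the first two branches.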
This non-standard, very low-cost optical format should prove just as attractive to other data-center customers. Given the delays in deploying production-grade EDC solutions, it is possible vendors will move forward with an SFP+ SR standard without EDC. This would be a boon to suppliers of SR-based SFP+ modules such as Finisar and Avago, as adoption of the SFP+ format will accelerate once decoupled from the complexity and cost of EDC. (see Five Misconceptions About the 10G Optical Market)
It is difficult to determine the precise volume of components Google is purchasing. Google is believed to operate in excess of 500,000 servers. Based on shipments of 10G SFP+ modules, our best guess puts Google’s current usage at approximately 5k ports of 10GbE per month, including both server-side SFP+ interconnect and the switch ports themselves. While the number is low, it is Google’s implementation, and its motivation for building its own switches, that will resonate through the equipment and component industries.
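For readers who want the arithmetic behind an estimate like this, here is a toy back-of-envelope sketch. The 5k-ports-per-month figure comes from our estimate above; the accounting that each server link consumes two SFP+ ports (server NIC plus switch side) and the uplink overhead factor are our own assumptions, purely for illustration.

```python
# Toy back-of-envelope: how many servers per month a given SFP+ port
# run-rate implies. The total-ports figure is the estimate cited in the
# text; the two-ports-per-link accounting and the uplink overhead factor
# are illustrative assumptions.

SFP_PORTS_PER_MONTH = 5_000  # cited estimate: server + switch sides combined

def implied_servers_per_month(total_ports: int,
                              uplink_overhead: float = 0.1) -> int:
    """Each server link burns two ports (server NIC + switch port);
    uplink_overhead adds a hypothetical share for inter-switch links."""
    ports_per_server = 2 * (1 + uplink_overhead)
    return int(total_ports / ports_per_server)

print(implied_servers_per_month(SFP_PORTS_PER_MONTH))  # 2272
```

Under these assumptions, 5k ports a month works out to roughly 2,200–2,300 servers getting 10GbE each month – a trickle against a 500,000-server fleet, which is why we call the unit volume small.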
At this time, other purveyors of large data centers like Yahoo, Microsoft, and Equinix do not appear to be following the same aggressive path with SFP+ optics. This is likely to change as new low-cost-per-port 10GbE equipment from Arastra, Woven, Force10, Cisco (CSCO), and Juniper (JNPR) that makes use of the new format comes into production.
To us, it is Arastra that is the most interesting company in the context of Google’s decision. Arastra is building a system that closely matches what Google appears to be doing in secret. A picture of Arastra’s 7148S system with 48x 10GbE ports is below.
Arastra promotes the pseudo-IEEE standard 10GBASE-CR, which appears to match the twinax approach Google is taking. Furthermore, Arastra was founded and funded by Andy Bechtolsheim, Chief Architect at Sun Microsystems, who is closely tied to Eric Schmidt, Google’s CEO and himself an ex-Sun executive. Bechtolsheim was also one of Google’s first investors. With these connections, Arastra may be the commercialization of Google’s technology and the ultimate supplier to Google itself.
Through our investigative research, Nyquist Capital reached the conclusion that roughly 12 months ago Google surveyed the state of the art in 10GbE switching equipment and decided it could do better. The reasons behind this decision will have a large impact on how the small but rapidly growing 10GbE equipment and component market evolves.
Author holds positions in Broadcom, Vitesse, Finisar and AMCC.