
Google’s Secret 10GbE Switch

It is our opinion that Google (GOOG) has designed and deployed home-grown 10GbE switches as part of a secret internal initiative, launched when it realized commercial options couldn’t meet the cost and power-consumption targets required for its data centers.

This decision by Google, while small in terms of units purchased, is enormous in terms of the disruptive impact it should have on 10GbE switching equipment providers and their component supply chains. It is as if a MACHO just arrived in the Enterprise networking business and the orbits of the existing satellites have begun to shift without observers knowing why – until now.


We were watching shipments of SFP+ components for 10GbE in the market but simply couldn’t account for their end destination – sort of an optical component dark matter problem. After a great deal of investigation we have reached the following opinion:

Through conversations with multiple carrier, equipment, and component industry sources we have confirmed that Google has designed, built, and deployed homebrewed 10GbE switches for providing server interconnect within its data centers. This is very similar to Google’s efforts to build its own server computers (excellent article here). Google realized that because its computing needs were very specific, it could design and build computers that were cheaper and lower power than off-the-shelf alternatives. The decision to do so had a profound impact on server architecture and influenced the market’s move to lower power density solutions that Sun (JAVA), Intel (INTC), and AMD (AMD) now embrace.

It now appears that the process Google trail blazed in the server computing market will repeat itself in the enterprise switching market. Given the relative dearth of low-cost 10GbE switching solutions, it isn’t surprising to see Google revisit this approach.

We believe Google based its current switch design on Broadcom’s (BRCM) 20-port 10GbE switch silicon (BCM56800) and SFP+-based interconnect. It is likely that Broadcom’s 10GbE PHY is also being employed. This would be a repeat of the winner-take-all scenario that played out in 1GbE interconnect. Vendors of standalone 10GbE PHY silicon (AMCC (AMCC), Vitesse (VTSS.PK), NetLogic/Aeluros (NETL)) should take close note. Broadcom’s role in hollowing out equipment is something we previously profiled in depth (see Cisco’s Fear of a Broadcom Planet).
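
To see why a single-chip part like this is attractive as a top-of-rack building block, here is a minimal back-of-envelope sketch. The split between server-facing ports and uplinks, and the oversubscription ratio, are our own illustrative assumptions, not confirmed details of Google’s design.

```python
# Back-of-envelope sketch: carving a 20-port 10GbE switch chip into a
# top-of-rack switch. The oversubscription ratio and per-rack server
# counts below are assumptions for illustration only.

CHIP_PORTS = 20  # BCM56800-class single-chip radix


def tor_split(servers_per_rack, oversubscription=4.0):
    """Return (server-facing ports, uplink ports) for one single-chip switch."""
    uplinks = max(1, round(servers_per_rack / oversubscription))
    if servers_per_rack + uplinks > CHIP_PORTS:
        raise ValueError("configuration does not fit in a single chip")
    return servers_per_rack, uplinks


if __name__ == "__main__":
    for n in (8, 12, 16):
        down, up = tor_split(n)
        print(f"{n:>2} servers/rack -> {down} server ports + {up} uplinks "
              f"({down + up}/{CHIP_PORTS} chip ports used)")
```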

What is interesting about Google’s approach is that it has eschewed traditional 10GBASE optical standards and instead adopted off-standard solutions that better suit its needs for time-to-market, power and port density, and cost. While Google makes use of the SFP+ cage format, it does not use the receive-side electronic dispersion compensation (EDC) function typically associated with SFP+. Instead, Google is looking to employ a combination of twinax cabling for short-reach (<10m) intra-rack links and a motley 850nm SR-like standard. Off-the-shelf SR optical modules appear to work well up to 100m without receive equalization. Ironically, Finisar (FNSR) proposed such a solution several years ago.
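
As a rough illustration of that cabling split, a per-link media choice driven purely by reach might look like the sketch below; the 10m and 100m thresholds come from the discussion above, while the function itself is ours.

```python
# Illustrative only: choose a 10GbE physical medium by link length,
# using the reach figures discussed above (<10m twinax, ~100m SR optics).

def pick_media(link_length_m: float) -> str:
    if link_length_m < 10:
        return "SFP+ direct-attach twinax (intra-rack)"
    elif link_length_m <= 100:
        return "850nm SR-style SFP+ optics, no EDC"
    return "beyond the reach this approach targets"


for length in (3, 25, 100, 300):
    print(f"{length:>3} m -> {pick_media(length)}")
```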

This non-standard and very low-cost optical format should prove just as attractive to other datacenter customers. Given the delays in deploying production-grade EDC solutions, it is possible vendors will move forward with an SFP+ SR standard without EDC. This would be a boon to suppliers of SR-based SFP+ modules such as Finisar and Avago, as adoption of the SFP+ standard will accelerate once decoupled from the complexity and cost of EDC. (see Five Misconceptions About the 10G Optical Market)

It is difficult to determine the precise number of components Google is purchasing. Google is believed to have in excess of 500,000 servers. Based on shipments of 10G SFP+ modules, our best guess puts Google’s current usage at approximately 5k ports of 10GbE a month. This would include both server-based SFP+ interconnect and the switches themselves. While the number is low, it is Google’s implementation and motivation for building their own switches that will resonate through the equipment and component industries.
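
To put that run-rate in perspective, here is a quick back-of-envelope calculation using the figures above; the even split between server-side and switch-side ports is our assumption for illustration.

```python
# Rough context for the figures quoted above (all numbers are estimates).

ports_per_month = 5_000   # estimated Google 10GbE SFP+ port consumption
servers = 500_000         # widely cited lower bound on Google's server count

annual_ports = ports_per_month * 12
# Assume roughly half the ports are server NICs and half are switch ports,
# since every server link terminates on a switch somewhere.
server_ports_per_year = annual_ports // 2

print(f"Implied run-rate: {annual_ports:,} ports/year")
print(f"Servers connected per year at this rate: {server_ports_per_year:,} "
      f"({server_ports_per_year / servers:.1%} of a 500k-server fleet)")
```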

At this time, other purveyors of large data centers like Yahoo, Microsoft, and Equinix do not appear to be following the same aggressive path with SFP+ optics. This is likely to change as new low-cost-per-port 10GbE equipment from Arastra, Woven, Force10, Cisco (CSCO), and Juniper (JNPR) comes into production that makes use of the new format.

To us, it is Arastra that is the most interesting company in the context of Google’s decision. Arastra is building a system that closely matches what Google appears to be doing in secret. A picture of Arastra’s 7148S system with 48x 10GbE ports is below.

[Image: Arastra 7148S, 48x 10GbE ports]

Arastra presents the pseudo-IEEE-standard 10GBASE-CR, which appears to match the twinax approach Google is taking. Furthermore, Arastra was founded and funded by Andy Bechtolsheim, Chief Architect at Sun Microsystems, who is closely tied to Eric Schmidt, the CEO of Google and an ex-Sun executive. Andy Bechtolsheim was also one of the first investors in Google. With these connections, Arastra may be the commercialization of Google’s technology and the ultimate supplier to Google itself.

Through our investigative research, Nyquist Capital reached the conclusion that 12 months ago Google took a look at the state of the art in 10GbE switching equipment and decided that it could do better. The reasons behind this decision will have a large impact on how the small but rapidly growing 10GbE equipment and component market evolves.

Author holds positions in Broadcom, Vitesse, Finisar and AMCC.

Discussion

Comments are disallowed for this post.

  1. * What a scoop :-)
    * Keep it up.
    * Makes sense, along the lines that most hardware vendors are building for telcos and Web companies don’t want that gear.


    Posted by Iain Verigin | November 16, 2007, 5:46 PM
  2. Andrew,

    Great dig but, unfortunately, Google’s ‘search’ missed a good one: Myricom (www.myricom.com). Samueli’s and Bechtolsheim’s best and brightest could not make it past the parking lot of Chuck Seitz’ little gem of a company. These guys have been interconnecting some of the world’s most powerful clusters without ever breaking a sweat. For more info on their switch and the underlying technology see the links below (note: going through their website requires patience and a GPS unit).

    http://www.myri.com/vlsi/
    http://www.myri.com/news/isc07/


    Posted by Bill Baker | November 16, 2007, 7:06 PM
  3. Andrew
    Good reporting, always a pleasure. It’s a substantive story about Google, which means it would ordinarily be picked up many places as news. It’s going to be interesting to see which reporters read your blog.

    db


    Posted by Dave Burstein | November 16, 2007, 7:24 PM
  4. The new Arastra box looks interesting — definitely changes the economics for deploying 10GE. They claim a unique software model as well, where they publish their API and let their customers develop features on top — something else Google should appreciate.

    Cost, density, latency, performance, SFP+, software extensibility…

    Seems a good fit for the high-growth data center.


    Posted by The OC | November 16, 2007, 8:18 PM
  5. Andrew,
    Neat stuff. But this effort seems more of a testament to the “we’re smarter than everyone else” attitude at Google than a new business model. With production lead times for components plus system integration engineering and testing times, it’s a rare commercial vendor who can do “custom” on demand for a variety of customers while still holding costs low or delivery times short. Google has a nice advantage of creating and supporting its own internal technology market. I haven’t seen any Google servers for sale at Costco, so the larger impact seems constrained. But I do agree this effort exposes a potential market gap at the low end for a high density, short reach, low power 10GbE switch.


    Posted by Threegsmd | November 17, 2007, 11:55 AM
  6. “Google is believed to have in excess of 500,000 servers.”

    They serve 300 million search queries a day, about 3,500 a second. Let’s assume a vast overkill of 1 server per query/second; that would put their search requirement at about 3,500 consumer-facing servers, and let’s assume another 50% for the crawl engine to give us a figure for search of, say, 5,000 servers. Say the same for their other services, YouTube etc. Advert serving is their biggest thing; I read they serve 3 billion plus ads, but the algorithms are simpler than search and they don’t really serve ads, they serve ad *blocks* (of 2, 3, 4, 6… ads), so I’m guessing another 5k for that.

    So something around the 15000 mark, say 20k with a safety margin.

    A double check: say there are 1 billion Internet users; that would give 1 server per 50k users, or just short of 2 seconds of exclusive server time a day, i.e. about twenty 0.1-second queries. Which is a little low, but then not all of the 1 billion users have broadband or use Google services.


    Posted by SminglePingle | November 17, 2007, 1:41 PM
  7. I guess all the hardware engineers that Google has been hiring from Cisco and other networking companies for years have been doing something.

    It’s not like this is a big secret; go look at Google’s job postings and you can get a sense of what they are up to. The real insight is who they are hiring that they do not post for.

    Myricom: Great insight but, “unfortunately, Myricom’s ’search’ missed a good one:” reality.
    1. It’s not engineers that are going to define the future of 10GbE switching. This is the way engineers think, and especially as of late we have seen this thinking is way off the mark.
    2. I would back the horse that defined 1GbE performance switching. Granite Systems’ (Andy’s) original architecture is still, today, the de facto low-cost, high-performance switch in the market. In that space, no one can touch its cost/performance point. As far as GbE port volume goes, BRCM. AndyB and HenryS.
    3. I am sure Seitz is a very smart man, but AndyB is one of the most successful VCs in the world, BRCM is the most successful communication silicon company in the world, and Google has some of the most sought-after sockets in the world. I am sure Seitz does not have their insight into the most revolutionary and innovative technology that exists today.
    4. What will define 10GbE switching for the likes of Google will not be home-spun ASICs. There will be some aspect that is revolutionary.
    5. One can make the analogy of BRCM to Intel, meaning that Google took Intel uPs and made their design around them. Unfortunately for BRCM, Google is a monster now, not a start-up (albeit with the likes of AndyB and Andy Grove behind them at that point). Also, networking is still not as mature an industry as computing was then. In other words, BRCM is not going to define 10GbE switching; they will come in behind it and clean it up, as they always smartly have.
    6. To confuse things even further: Is Arastra using BRCM’s switch? What is Cisco’s involvement in all this? How about Intel, who has been with Google since day one?

    Lastly, the most important feature needed in switching for Google is not mentioned. Guys, this is a huge miss.


    Posted by JohnB | November 17, 2007, 1:47 PM
  8. Smingle – Aside from the fact that the server count is well agreed on the internet, do you really think Google needs 15 buildings the size of two soccer fields to store 15k servers?

    JohnB – Arastra is using a switch from Fulcrum Micro. It’s a very interesting and well-run company.


    Posted by Andrew Schmitt | November 17, 2007, 2:29 PM
  9. Andrew,

    For info, both Fulcrum and Myricom trace their roots to Caltech, but Myricom’s silicon is far superior (and I have no interest in either). And I will be the first to highlight (as I have mentioned to Chuck) that Myricom lacks marketing muscle.

    John B,

    For info, I am NOT an engineer but some of the best in SoCal have worked for me in the past, so I do know the difference between above-average, excellent and superb. Not taking anything away from Andy B. and/or Henry S., only stating the fact that Myricom’s original engineering power has not been diluted by countless new hires that add little value. When Andy and/or Henry can deploy their switching solutions at MareNostrum (http://www.bsc.es), then we’ll talk, but until then Google missed one, period. And don’t knock engineering; it is what got us here, remember?


    Posted by Bill Baker | November 18, 2007, 12:32 AM
  10. 1300 servers per datacenter, with the associated data store, even that sounds reasonable to me (say their servers are 2U rack mounts similar to their search appliance, 13 per cabinet in 100 cabinets plus cooling power and office space).

    http://www.google.com/enterprise/gsa/product_models.html

    They say their 12-server one is good for 30 million documents. My guess is there are, say, 4 million websites of substance (yours is the 1.2 millionth most popular according to Alexa so this seems reasonable), say 1000 pages per site average; that’s 4 billion pages.

    i.e. 133 cabinets is good for 4 billion, which would put their data center size at 133 cabinets….

    However I estimate it, I come up with a much smaller number than is put around on the net.


    Posted by SminglePingle | November 18, 2007, 6:37 AM
  11. Urs Holzle of Google, in a minor part of a presentation at one of their engineering open houses back in 2005, talked about how then-current switch manufacturers weren’t making products appropriate to Google’s needs. That is, they were too general-purpose and thus consumed too much power and space to scale well for Google. He mentioned the possibility of building custom switches in an off-handed, casual way, in the midst of a lot of other topics. So this just feels like the other shoe dropping…


    Posted by guyal | November 18, 2007, 3:29 PM
  12. Bill Baker: if by far superior you mean an order of magnitude slower and much more expensive, then yes, Myrinet has a leg up.

    Take a closer look at Fulcrum: their latest silicon boasts 300 *nanosecond* switching latency. Read that again. 10x faster, 10x denser, 10x cheaper. This is the game-changer Google is hungry for.


    Posted by Jeremy Kemper | November 18, 2007, 5:02 PM
  13. Definitely *NOT* a scoop: I heard about this years ago: http://cbcg.net/talks/googleinternals/index.html


    Posted by Toby DiPasquale | November 18, 2007, 7:56 PM
  14. Smingle –

    So why does The Dalles Google site have 200,000 sq ft of floor space? That’s room for, very conservatively, 7,000 racks, each with 40 dual-core processors. That is a minimum of 500,000 cores.

    Don’t forget about Gmail, Gearth, Alexa and all the other services that Google provides as well as their internal research needs. MapReduce runs batch jobs. Got to run them somewhere.


    Posted by Robin Harris | November 18, 2007, 9:11 PM
  15. Jeremy,

    The guy that wrote (10x, 10x, 10x, etc.) used to work for me and, obviously, I taught him well (re: Mike Ziele). Here’s more Myricom ammo (see link); you think the boys at Argonne don’t know their VLSI/switch vendors? This is only a preview of what Google really needs.

    “Myricom’s economical, low-latency modular switches represent the heart of the ALCF’s data-management system. The nine-switch complex supports up to 2,048 connections, each of which simultaneously exchanges data at around 1 billion bytes per second.”

    http://www.webwire.com/ViewPressRel.asp?aId=52420


    Posted by Bill Baker | November 18, 2007, 11:21 PM
  16. @Smingle,

    Google backs up everything 3 times for their MapReduce, and their hardware is always breaking, so they need to maintain a three-backup minimum at all times. That would already bring your count to 45k, and then they have hundreds of other services. Also take into account all the images for Google Earth/Maps (which may not be that many servers, but they matter for latency and high availability). Then they have development machines and staging machines. Additionally, they keep all the mail for Gmail users.

    Then they need servers to expand their current products. Also they probably replicate more critical applications multiple times (>3 times) for lower latency.

    They probably keep servers exclusive for high performance research.

    I made a list of everything I could think of that would add to your back of the envelope calculation and I’m sure there’s some mistakes and something I’m missing.

    I could see their numbers in the 150k–200k range (±20%), but I’m in the same boat as you where I don’t see what they need 500k for. I won’t be surprised if they do have that many, but it’s incredibly high.


    Posted by dasickis | November 19, 2007, 2:04 AM
  17. Power, power, power…….


    Posted by JohnB | November 19, 2007, 7:54 PM
  18. Makes sense; after all, aren’t they also building mobile DCs into containers?


    Posted by /pd | November 19, 2007, 9:02 PM
  19. Now Tony (I mean Bill Baker), how did my name get dragged into this discussion? Although I may be tempted to set the record straight, I wouldn’t dare post a message about my own company — I’d be mortified if someone were to find out.

    The truth is that Fulcrum and Myricom aren’t even competitors. Fulcrum develops fully-standard, low-latency 10G Ethernet switch devices that are being leveraged by innovators like Arastra to redefine the data center interconnect (in terms of cost, scalability, interoperability, and performance). Myricom develops proprietary interconnect solutions that have historically competed with Infiniband and other proprietary fabrics in the highly-tuned clusters of research and academia. Even in that niche, if the Top500 list is any indicator, plain-old 1G Ethernet has captured the lion’s share of recent installations.

    I don’t think there’s any debate that Myricom makes good interconnect products. Comparing their silicon to Fulcrum’s, though, isn’t really relevant — at least not until they develop Ethernet switch silicon and make it available on the open market.

    The only real problem with Myricom’s Myrinet is that it’s not Ethernet. Oh, but I know where they can get a killer Ethernet switch chip and solve that problem…


    Posted by Mike Zeile | November 19, 2007, 9:59 PM
  20. Bill, if you are speaking of Myricom and not acknowledging a relationship with them, that is not being very nice to the rest of us. Please reply.


    Posted by Andrew Schmitt | November 19, 2007, 10:28 PM
  21. Andrew,

    1. You do a terrific job of reading through heaps of information to dig up gems like the one on Google, but I suggest you do the same when it comes to your own blog. In my post dated Nov. 18th 2007 (12:32 am) up above I made it clear that I have no interest in either Fulcrum or Myricom, nor am I lobbying for one. Bill runs his own show…

    2. The reason Mr. Ziele’s name was invoked (and not dragged, I assure you) has to do with Jeremy Kemper’s post regarding Fulcrum’s claims of ‘300ns latency and 10x faster, 10x cheaper and 10x denser than Myricom’ (posted on Nov. 18th at 5:02pm). These types of unsubstantiated claims originate somewhere in marketing, and as the VP of Marketing at Fulcrum, Mr. Ziele has the ultimate and final responsibility for such claims, which no one at Fulcrum denied or corrected. And yes, we all know they read this blog.

    3. Though technically Fulcrum and Myricom do not compete directly, Fulcrum’s chips are part of several switch vendors’ switching solutions that, in turn, compete with Myricom.

    4. Mike Ziele knows better than to call me a mouthpiece for anything other than the absolute facts. He also knows I know my stuff. That was a cheap shot; guess I did not teach him everything I know, just everything he knows. And finally, to set the record straight, Myricom has been developing their own Ethernet solutions for quite some time now. Here’s what Om Malik states about the same (but I strongly advise against questioning his motives): http://gigaom.com/2007/11/18/google-making-its-own-10gig-switches/

    Btw Mike, Tony is on vacation and totally oblivious to this…Best–bb


    Posted by Bill Baker | November 19, 2007, 11:16 PM
  22. Got it. I missed the post above.


    Posted by Andrew Schmitt | November 20, 2007, 8:12 AM
  23. Mr. Baker:

    It’s a bit of a stretch to suggest that I am responsible for monitoring and setting straight the claims that others make on blogs. As you point out, there are a number of switch systems that leverage our silicon; plenty of people know our virtues and have a vested interest in our success.

    You reference Om Malik’s 11/18 article on Google’s 10GE switch plans. If you haven’t seen his updated article, published this morning and included in Andrew’s link-list above, it’s worth checking out:

    http://gigaom.com/2007/11/20/more-details-about-googles-gigabit-switches/

    Anyway, no reason to sling mud; let’s take this conversation off line.


    Posted by Mike Zeile | November 20, 2007, 6:57 PM
  24. While you folks are arguing about 10X, note that the Fulcrum number is just the switching time for a single hop, while the Myricom number is an end-to-end latency. Too bad they don’t require that you pass an exam to add blog comments. It’s true that Myricom only makes 10GigE endpoints; their switch gear uses their proprietary protocol.

    I won’t ding anyone for not realizing that one of Myricom’s founders (Bob Felderman) works at Google. That’s a bit more obscure.


    Posted by Greg Lindahl | November 21, 2007, 12:06 AM
  25. Bottom line: If it wasn’t before, Google is now aware of all viable options at its disposal; therefore, the original scoop by AS served its purpose. As far as I am concerned, Google owes AS a favor as do the vendors mentioned in the posts/comments. Job well done by everyone…


    Posted by Bill Baker | November 21, 2007, 11:20 AM
  26. As much as I’d like everyone to write a check out to Nyquist Capital, something tells me Google doesn’t have trouble finding vendors to talk to.


    Posted by Andrew Schmitt | November 21, 2007, 12:13 PM
  27. Mike, I am back from vacation and want to set your record straight. Same for Greg. I suggest that you do more research on solutions and deployments before you comment about Myricom, or any other company. You look silly.

    I have two friends who have been in two Google data centers in the past year, and Juniper is a former client of mine and they have been there a few times too, right.

    No shot at BRCM, but Google is far from deciding anything. They love to look, test, do mind melds and ponder the future. They are very private on some fronts for a public company.

    Google’s recent announcements in wireless suggest that they want to do a lot of things. Time will judge if they have tried to do things they should not be doing.

    Last time I checked, Cisco gear that runs Ethernet is controlled by a legacy proprietary technology: IOS, IOX, etc. Proprietary = control over an item of property. Cisco has built a massive company making IOS a standard non-standard and has done quite the job controlling the enterprise and carrier network. While CSCO is not in this dance, others that are could have some control in their technology too. Gee, it works well for the rest of the industry.

    Mike, do not assume you know about technology, or who is even posting unless you want to be silly.

    Andrew, good research.
    Cheers to all.


    Posted by Tony Fisch | December 5, 2007, 2:45 PM
  28. Funny to see a mention of Cisco and Myricom on the same line, and even more so to read the funny response from Greg.

    If someone believes that Myricom will be able to beat Cisco on 10G switching, he needs to visit his doctor. What’s more, Myricom’s low latency only applies when using their proprietary protocol, something like doing IP over Myrinet. Doing true TCP/IP they are far behind. Check their website. Details… details…

    And last, there are not many 10G deployments now due to price, price, and price. When 10G is mature, there will be only 2-3 companies. Cisco will be there.


    Posted by Ben | December 6, 2007, 10:38 AM
  29. Dear Ben,

    Your technical knowledge of Myricom’s implementation is wrong on several points:
    * There is no IP-over-Myrinet protocol, it’s actually Ethernet-over-Myrinet. The Ethernet frames are encapsulated into a Myrinet packet. These frames could carry IP, but also any other protocols that can run on Ethernet.
    * The switching latency applies to all Myrinet packets, whether Ethernet-over-Myrinet packets or MX-over-Myrinet packets. There is a larger latency overhead in the 10G Myrinet/Ethernet bridges on the edges, but you can have as many Myrinet hops (32-port crossbar) as you want between the bridges. So for a single crossbar switch, yes, the latency in the Myricom solution would be higher than a single integrated Ethernet crossbar like Fulcrum’s. However, the end-to-end latency is barely different for very large switches in the Myricom solution.

    In this context, think about Myrinet as the internal switch network, like Cisco or other Ethernet switch vendors have their own proprietary internal switch networks.


    Posted by Patrick | December 6, 2007, 1:38 PM
  30. I do not want to close this thread to comments but please refrain from further back and forth about Myricom. I can assure you that only a small fraction of the readers here are interested in this.


    Posted by Andrew Schmitt | December 6, 2007, 1:44 PM
  31. For those disputing the Google server count… I think you are making the assumption that these servers are 100% utilized. Many large server farms run at about 10-20% CPU utilization. So take any number you come up with and multiply it by 7.
    This is why virtualization is growing so rapidly. Even though the hypervisor takes up 10% of the CPU, you end up getting another 10-20% use out of your server farm.


    Posted by KN | December 18, 2007, 7:36 PM
  32. Do you still think Arastra is a viable story after both Cisco and Juniper announced the next-generation Nexus and EX?
    It is not hardware but the features in software that decide success. It is core business for both big networking companies, and it would be very hard for small companies like Arastra to break into that, especially with a one-shot, someone-else’s-silicon-based approach.

    As for Andy, many companies he funded have gone out of business so that is hardly a plus.


    Posted by Andy | February 1, 2008, 1:22 AM

    Trackbacks / Pingbacks

  33. Data Center Knowledge | November 16, 2007, 3:41 PM
  34. Andrew@Nyquist muses on “Secret Goog 10GbE Switch” « Iain’s Chips & Tech | November 16, 2007, 5:52 PM
  35. Google Making Its Own 10Gig Switches « GigaOM | November 18, 2007, 10:55 AM
  36. Curious Cat Science and Engineering Blog » Google’s Secret 10GbE Switch | November 18, 2007, 5:47 PM
  37. Google Is Making Its Own 10-Gigabit Switches | Rob's Blog | November 18, 2007, 6:48 PM
  38. Storage Bits mobile edition | November 18, 2007, 11:31 PM
  39. Google’s Secret 10GbE Switch « Kevin Burton’s NEW FeedBlog | November 18, 2007, 11:58 PM
  40. CTI97:=(Creativitate,Tehnologie,Informatie) » Zvon: Google şi-a construit propriul switch 10GbE | November 19, 2007, 6:01 AM
  41. Illuminata Perspectives » Blog Archive » Google is Becoming a Computer Systems Company | November 19, 2007, 10:05 AM
  42. Google fabrica un switch 10GbE | El Blog de Jose Manuel Suárez | November 19, 2007, 11:19 AM
  43. Google’s Secret 10GbE Switch. A Game Changing Strategy « Hyper Passionate Entrepreneurs | November 19, 2007, 12:49 PM
  44. VentureBeat » Roundup: iPhone the spyPhone?, Google’s internet, and more | November 19, 2007, 2:56 PM
  45. Google eigene 10 GbE Switche (und iSCSI Performance) « augmented web | November 19, 2007, 4:22 PM
  46. xcke’s blog » Blog Archive » Saját 10Gib-es switchet tervezett a Google | November 20, 2007, 6:56 AM
  47. c0t0d0s0.org | November 20, 2007, 7:14 AM
  48. More Details About Google’s Gigabit Switches « GigaOM | November 20, 2007, 7:30 AM
  49. HB blog » Blog Archive » Google is producing their ows switches | November 20, 2007, 6:32 PM
  50. VentureBeat » Roundup: Alibaba’s ads, Flying cars don’t fly, Google’s switches, more | November 20, 2007, 6:54 PM
  51. El Mike’s Internet News Blog » Blog Archive » Is IBM Commoditizing IT? Or Kicking Off The Next Round Of IT Innovation? | November 21, 2007, 2:22 AM
  52. Running Large-Scale Apps With Massive Databases Over The Internet « GLORIAD Classroom | November 21, 2007, 4:29 PM
  53. InsideTrack: Google making its own 10GbE switches | insideHPC | November 21, 2007, 11:17 PM
  54. Bitacora del Destino » Google podría estar preparando sus propios conmutadores 10 Gigabit Ethernet | November 24, 2007, 12:46 AM
  55. Random Musing on Networking Semi « Iain’s Chips & Tech | November 27, 2007, 1:17 PM
  56. Network Jack » Blog Archive » A Gigabyte of Power | December 10, 2007, 7:02 PM
  57. 10GbE and SFP+ - This Time It’s Different | Nyquist Capital | December 21, 2007, 8:08 PM
  58. Future of computing: Forecast calls for partly cloudy | Bitcurrent | June 12, 2008, 12:11 AM
  59. De-centralized utilities and the case against Red Shift | June 30, 2008, 7:10 PM