
The Bandwidth Explosion Myth

Hardly a day passes without the mantra of a “Bandwidth Explosion” being used to justify aggressive financial forecasts for equipment and component companies, carrier backbone demand models, and even regulation or deregulation of the Internet.

Missing from these sweeping statements is any crisp, quantitative explanation of traffic growth. The absence of hard data supporting the bandwidth explosion has weighed heavily on us, particularly because we saw the damage that nebulous predictions of traffic growth caused in 1999-2001.

Remember the claims in 1999 that Internet traffic was doubling every 100 days (even more prescient examples here)? This was pure fiction, yet the political and investment communities accepted it because it was a useful tool for justifying the irrational activity underway. History does not repeat, it rhymes, and the “Video Bandwidth Explosion” sounds very similar to what was said during the Telecom bubble.

Using data from the Japanese Ministry of Internal Affairs and Communications (MIC), one can draw conclusions about the growth of Japanese Internet traffic on a per-subscriber basis. The conclusions are not what you would expect given the advanced state of broadband in Japan, and they are troubling when compared with the image created by the market.

Japanese Broadband Subscriber Growth

Japan is our favorite proxy for the future of broadband worldwide, given its aggressive deployment of broadband and, more recently, Fiber to the Home (FTTH). Japan leads the world in both absolute and per-capita FTTH deployment. It is the world’s de facto FTTH testing lab (see “The Proving Ground of NTT”).

Most notable about Japan is that all broadband subscriber growth in the past two years is a result of FTTH deployment.

Subscribers (M)   FTTH    DSL   Cable   Total
Nov-04             2.4   13.3     2.9    18.6
May-05             3.4   14.0     3.1    20.5
Nov-05             4.6   14.5     3.2    22.3
May-06             6.3   14.5     3.4    24.2
Nov-06             7.9   14.2     3.6    25.7
May-07             9.6   14.0     3.6    27.2

(May-07 values are Nyquist Estimates)

Japan has 127M people, 50M households, and just over 50% broadband penetration, in a country where 94% of the population has access to DSL, FTTH, or cable service in excess of 1 Mb/s. DSL growth went negative a year ago as conversion to FTTH accelerated. NTT’s stated goal is to have 30M FTTH connections within 5 years.

No other country in the world has such a widely deployed advanced residential broadband infrastructure.

Broadband Traffic Growth

The MIC released a white paper in August 2007 that estimated the total average bandwidth used by DSL and FTTH subscribers in Japan. The MIC monitored traffic to and from six large ISP nodes representing approximately 40% of all broadband traffic.

Gb/s     Upload   Download
Dec-04      276        323
Jun-05      320        425
Dec-05      349        468
Jun-06      412        524
Dec-06      463        637
Jun-07      517        722

These are raw traffic numbers representing the average Gb/s traffic rate over a given month. Peak rates in the evening are around 2.3x the trough rates in the early morning. In short, total Internet traffic increased just over 2x in the last 2.5 years, a CAGR of 38%. Not as high as you would expect, but not terrible either. However, these numbers need to be adjusted for the broadband subscriber growth in Japan shown above.
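The growth arithmetic above is easy to verify; a minimal sketch in Python using the table's Dec-04 and Jun-07 download figures:

```python
# Aggregate download traffic grew from 323 Gb/s (Dec-04) to 722 Gb/s (Jun-07).
start_gbps, end_gbps = 323.0, 722.0
years = 2.5

growth_multiple = end_gbps / start_gbps    # just over 2x in 2.5 years
cagr = growth_multiple ** (1 / years) - 1  # compound annual growth rate

print(f"{growth_multiple:.2f}x total, {cagr:.0%} CAGR")  # 2.24x total, 38% CAGR
```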

Not the Kind of Shock and Awe We Like

When the above numbers are examined on a per subscriber basis, the CAGR for download growth drops from 38% to only 18%.


If peak rather than average usage numbers are used, the CAGR for download bandwidth improves slightly to 22%. Japanese Internet use isn’t doubling every year. It isn’t even doubling every three years.
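The per-subscriber figure can be reproduced from the two tables above. A sketch under the assumption that the MIC traffic covers DSL and FTTH only (so cable subscribers are excluded), with the subscriber rows one month offset from the traffic rows; it lands within a point of the 18% quoted:

```python
# Divide aggregate download traffic (Gb/s) by the DSL + FTTH subscriber
# base (millions); Gb/s over millions of users works out to kb/s each.
per_sub_start = 323.0 / (13.3 + 2.4)  # Dec-04 traffic over Nov-04 subscribers
per_sub_end = 722.0 / (14.0 + 9.6)    # Jun-07 traffic over May-07 subscribers

years = 2.5
cagr = (per_sub_end / per_sub_start) ** (1 / years) - 1  # ~17%

print(f"{per_sub_start:.1f} -> {per_sub_end:.1f} kb/s per subscriber, {cagr:.0%} CAGR")
```

The small gap versus the article's 18% comes down to how the half-yearly subscriber counts are aligned with the traffic readings.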

Conclusions

The aggressive deployment of fiber to the home is not driving high bandwidth growth rates in Japan.

Verizon (VZ) and AT&T (T) have massive capex efforts underway to provide next-generation broadband services. AT&T has come under attack by the digerati (including us) for failing to anticipate the growth in broadband demand. What if the demand for massive increases in bandwidth outside a small circle of users simply does not exist? Where could the data be wrong?

One explanation is that the MIC broadband data is flawed – by a lot. However, even if the absolute measurements are incorrect, it is the rate of change that is most important, and it appears the same measurement methodology was used for the last two and one-half years.

It is entirely possible that the behavior of Japanese FTTH subscribers is a poor proxy for the behavior of users elsewhere in the world. No one has mentioned this before, and given the magnitude of investment premised on the assumption that a bandwidth boom is happening, it would be useful to see these differences in behavior precisely quantified.

Disclosure: Author is long NTT.

Appendix: Source Data Summary

The Japanese Ministry of Internal Affairs and Communications (MIC) released data illustrating traffic growth (full .pdf source here). A1 (in) represents raw upstream traffic and A1 (out) represents raw downstream traffic from a large sample of DSL/FTTH subscribers. A2 is corporate leased-line and dial-up traffic. We would expect the corporate upload/download ratio to be symmetrical, and indeed it is. It is also interesting to note that corporate data growth is far more rapid than consumer broadband growth.

A proportional mechanism was used to extrapolate this number to represent all DSL/FTTH subscribers. This graph shows an estimate of the average bandwidth use of all DSL/FTTH subscribers in Japan; 721.7 Gb/s is the average download rate for May 2007.
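The extrapolation step is presumably a simple proportional scaling; a hypothetical sketch, assuming the six monitored ISP nodes carry a fixed ~40% share of DSL/FTTH traffic (the MIC's actual weighting may be more elaborate):

```python
# Scale a rate measured at the sample nodes up to a national estimate
# by dividing by the sample's assumed share of total traffic.
def extrapolate_total(measured_gbps: float, sample_share: float = 0.40) -> float:
    return measured_gbps / sample_share

# Under this assumption, ~289 Gb/s measured at the six nodes would
# imply roughly 722 Gb/s of download traffic nationally.
print(extrapolate_total(288.7))
```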

Here is subscriber growth through the end of 2006 (full .pdf source here). Note that for the period for which bandwidth data is available, DSL and cable subscriber counts are essentially flat. This means that nearly all increases in bandwidth were due to new FTTH subscribers or changed online behavior among existing broadband subscribers. By factoring out growth in FTTH subscribers, one can clearly see the increase in traffic due to user behavior.
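The factoring-out step described above can be sketched as a simple normalization. The subscriber sums below pair each traffic reading with the subscriber-table row one month earlier, so treat the per-subscriber figures as approximate:

```python
# Factor out subscriber growth: divide each period's average download
# traffic (Gb/s) by the DSL + FTTH base (millions) to get kb/s per
# subscriber, then convert to MB/day (86,400 s/day, 8 bits per byte).
periods = ["Dec-04", "Jun-05", "Dec-05", "Jun-06", "Dec-06", "Jun-07"]
download_gbps = [323, 425, 468, 524, 637, 722]
subs_millions = [15.7, 17.4, 19.1, 20.8, 22.1, 23.6]  # DSL + FTTH, nearest row

for period, gbps, subs in zip(periods, download_gbps, subs_millions):
    kbps_per_sub = gbps / subs
    mb_per_day = kbps_per_sub * 86_400 / 8 / 1_000
    print(f"{period}: {kbps_per_sub:.1f} kb/s, {mb_per_day:.0f} MB/day per subscriber")
```

The series rises from roughly 21 to 31 kb/s per subscriber (a few hundred MB/day), consistent with the ~18% per-subscriber CAGR discussed in the article.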

Discussion


  1. You’re not alone in seeking clearer explanations and better descriptions of the methodologies used in assessing “bandwidth growth”. As a point of information, and hopefully a step in the right direction towards meeting these ends, Andrew Odlyzko today announced a new Web site devoted to this subject.

    New Web Site:

    Minnesota Internet Traffic Studies (MINTS)
    http://www.dtc.umn.edu/mints/

    Frank

    Posted by Frank A. Coluccio | September 10, 2007, 5:30 PM
  2. Andrew:
    I would say it is the lack of interesting applications that is the cause of such relatively slow growth in bandwidth usage. Application providers still have to target the “mainstream” of slowish broadband available in places like the US. As high-bandwidth infrastructure gets put in place worldwide, it will enable new classes of applications. Telepresence comes quickly to mind as one example.

    Sridhar

    Posted by Sridhar Vembu | September 10, 2007, 6:13 PM
  3. I have to wonder if there aren’t other bottlenecks in Japan. Despite FTTH, perhaps the carrier networks are overloaded and cannot handle that much bandwidth, just as they would be here in the US and as British ISPs are currently complaining about. Perhaps there still isn’t enough content to download because the movie companies haven’t allowed online downloads yet, again just as they haven’t here in the US. Your figures of 200-300 MB a day are equivalent to 6-9 GB a month, or 1-2 DVDs. If there is HD video online, much more than that will be downloaded over a month. So I have to wonder if the bottleneck in Japan is somewhere other than residential broadband equipment.

    Posted by Ajay | September 10, 2007, 7:50 PM
  4. Andrew:

    Very nice analysis. This blog post is a perfect example of the blogosphere at its best. A network with intelligent processing at all [heck, even at 2%] of its edges will outperform a centrally controlled network. The Internet beats the PSTN. The blogosphere outthinks the “mainstream media”.

    On the substance of your post, as someone who invests in the rate of network bandwidth growth and works in the field, I find that your analysis matches my observations and judgment.

    I also agree with Sridhar that sufficient distribution of true broad bandwidth will stimulate demand which will be chased by providers driving down prices to gain share. The price declines will stimulate more applications.

    Where does it net out? If the canonical Moore’s Law rate is doubling every two years, the actual CAGR is 41%. We will not see growth above that rate. I believe that your report of 22% CAGR for peak use will represent the lower bound.

    Now it’s also difficult to measure “bandwidth usage” since the prevalence of CDNs will only increase and the larger network providers with end customers [cable companies, telcos, etc.] will heavily utilize various forms of private CDNs. In addition to creating measurement difficulties, the proliferation of CDNs also represents an actual increase in the true end-to-end efficiency and utilization of the Internet.

    At any rate, putting this consideration aside, I predict that actual global bandwidth use will increase at 20% CAGR for the next two or three years and then accelerate to approximately the midpoint between the two numbers above: 30% CAGR.

    This will be a dynamic, but manageable number that will make all of us involved in this world pretty happy.

    Posted by Michael Cullina | September 10, 2007, 8:01 PM
  5. If readers didn’t notice, Sridhar works for Zoho, an online office-productivity suite that is outrageously cool. It is incredible what this one small company has accomplished. If not for my need for Excel plugins I would consider converting. If I ran a business I would seriously question the need to buy 50 copies of MS Office. Not that MS Office isn’t a good tool, but Zoho is a good tool and eliminates so much in the way of IT headaches.

    Anyway, I’m starting to sound like a Zoho bigot. It is a great example of an application that pushes the envelope of what is needed for connectivity and ultimately applications are what drive bandwidth deployment. I suspect the problem with Japan is no one has figured out what to do with the fat pipes.

    I’ve got an email out to see if the Japanese numbers factor in CDN traffic.

    Posted by Andrew Schmitt | September 10, 2007, 8:15 PM
  6. Not sure if Japan is a good example of a typical western society for understanding growth of Internet traffic – minimal teleworking, dominance of DoCoMo and i-mode applications, strict rules related to content copyright, etc. Still, the 18 or 22% CAGR does not show much growth – maybe we need to look at data from Hong Kong, France (Free), or Verizon FiOS subscribers.

    Posted by Sajith | September 10, 2007, 9:34 PM
  7. But Japan is a good model of widespread FTTH deployment. If the assumption is that fat pipes to the home drive bandwidth use, the data would indicate that assumption is wrong.

    Posted by Andrew Schmitt | September 11, 2007, 7:57 AM
  8. Being a neophyte:

    “If the assumption is that fat pipes to the home drive bandwidth use, the data would indicate that assumption is wrong.”

    I was under the assumption that, unlike the ’90s, new services like Netflix and others that were not available at the time will create the demand for bandwidth today?

    Posted by jmack | September 11, 2007, 8:30 AM
  9. While “fat pipes” do indeed support greater throughput, hence larger files per unit of time, there is another benefit that usually goes unspoken, and that is the greater degree of “headroom” that very-high bandwidth access lines afford in support of time-sensitive applications like voice, online gaming, video conferencing (assuming symmetry, or a more-symmetrical transmission profile existed, as well), etc.

    This is doubly important when other applications are running in the background (as most Skypesters will confirm). The latter types of applications would not fare as well over the High-Speed Internet allocations of most DSL and cable modem bundled offerings, which are both asymmetrical and capable of less bandwidth (and therefore less headroom, as well).

    An argument could certainly be made that less headroom is needed if QoS is employed (this is AT&T’s bet), but only marginally and up to a point, although I don’t want to go there at this time.

    A good question to ask is: How does the application mix in Japan differ from the mix being used here and elsewhere? Are those differences a function of channel headroom as opposed to channel fill? Some, I’m sure, are. The point being, headroom makes possible a wide variety of user-initiated, end-to-end, time-sensitive applications sans QoS.

    Frank

    Posted by Frank A. Coluccio | September 11, 2007, 9:22 AM
  10. Andrew

    How fast do you think bandwidth will grow if Apple puts a small camera on the face of its iPhone then allows users to see each other as they talk if their networks support the required BW?

    The next step would be to use Bluetooth to connect that iPhone to an iTV so you can use the iTV as a larger monitor.

    If Jobs has not thought of this, he needs a new thinking cap.

    As always, I enjoy your work

    Kirk

    Posted by Kirk Lindstrom | September 11, 2007, 10:40 AM
  11. Frank, thanks, and interesting points. If I understand this correctly, “headroom” = enough available bandwidth to run multiple, simultaneous, and time-sensitive apps without sacrificing QoS (quality of service).

    I think I understand your comment about AT&T (FTTN) vs. FTTH but won’t go there either.

    Application mix: in other words, hypothetically, the US demand for gaming, VOIP, and video conferencing could be ten times that of Japan, and therefore Japan would not be an accurate metric for comparing bandwidth requirements. Thanks again.

    Posted by jmack | September 11, 2007, 11:18 AM
  12. I totally agree with Sridhar above. It’s the Killer App deal.
    Napster single-handedly drove everyone I knew to broadband from dial-up.

    I can’t speak in aggregate terms, but my house has gone from KBs of emails, to MBs of MP3s, to now GBs of YouTube videos.

    My kids and I are quite frankly bored with YouTube now, and our household data usage is probably back down to the MBs from emails and surfing the web.

    So it really doesn’t matter how fat the pipes are, how many ONU’s/OLT’s or CO/CPE’s there are, or 10G this and 10G that, all that matters are the software applications, or the future Killer Apps.

    Posted by Ed Draker | September 11, 2007, 1:18 PM
  13. Yes, apps are the key. No one can really predict how things evolve.

    This discussion reminds me of what I like to call the Industrial Accident:

    http://www.nyquistcapital.com/2005/12/21/more-bandwidth-industrial-accident/

    Though the contemporary term ‘Black Swan’ is far better.

    Posted by Andrew Schmitt | September 11, 2007, 2:18 PM
  14. Andrew,
    22% may represent the CAGR on a per-subscriber basis; however, 38% CAGR represents the growth the network needs to support.

    Japan likely is not a representative case. Access b/w increases have been driven for the sake of b/w increases rather than by application need/development. What is special or leading about Japanese apps vs. the rest of the world?

    There is an interesting Cisco whitepaper, “The Exabyte Era”. It pegs IP traffic growth at 37% CAGR, aligning with the MIC report, with traffic growth primarily due to video apps.

    Posted by Jon Loewen | September 11, 2007, 4:59 PM
  15. Agreed – network grows at 38% – unless subscriber saturation takes place. You must admit that you didn’t think usage growth was this low.

    Regardless, it is a good place to anchor expectations.

    The positive way to look at all of this would be: in countries with low broadband penetration, the network needs to support the usage growth rate multiplied by the penetration growth rate. Places like China and India will simply explode (in a good way).

    All assumptions break once another Napster shows up, which one day will but no one knows when.

    If you have the Cisco link please post.

    Posted by Andrew Schmitt | September 11, 2007, 8:35 PM
  16. Andrew,

    Having seen “bandwidth explosion” in more than a few business plans, I agree that this phrase is often lacking in any quantitative data supporting it and the projections that are tied to it. Excellent job digging into this topic.

    Do you think that the demographics in Japan could play a role in the numbers? Perhaps an aging population is not tuned into YouTube, MP3s, and social networking?

    Eric

    Posted by Eric | September 12, 2007, 11:38 AM
  17. * Cisco whitepapers are referenced in the MINTS link provided by Frank in the 1st comment.

    * For those not wanting to scroll up — Again —
    Minnesota Internet Traffic Studies (MINTS)
    http://www.dtc.umn.edu/mints/

    Posted by Iain | September 12, 2007, 11:51 AM
  18. Yes, applications are a big part of what’s required to see higher growth. These applications, however, will come in at several tiers. Much of the technology in use today in any scale-out application came about because the required bandwidth was already present at the right cost point. Whether it is Google server farms or Facebook, technologies such as memcached, GFS/Hadoop, and MapReduce require cheap bandwidth between commodity machines. In fact, any distributed application and its data elements require bulk, cheap bandwidth. Most of it exists within the data centers, so a whole new app layer has already emerged. And I see no reason why innovations like these stop at the edges of the Internet data centers IF sufficient bandwidth is pervasive across the entire infrastructure - from a 1RU Google/Amazon/Yahoo/eBay/Microsoft server all the way across google-net, data backbones, peering/transit, and access to users. Already, applications built on top of these constituent technologies are appearing and influencing bit consumption. Applications in the consumer-facing tier may just be the last black swan to appear.

    Posted by rohit | September 12, 2007, 2:57 PM
  19. Responding to jmack’s earlier post, and then to Andrew’s follow-on note: Agreed, applications are indeed key.

    Exactly how one goes about mining all of this value of broader application usage is obviously an interesting question, although doing so would take one outside the direct realm of service providers and platform manufacturers. But it is an interesting observation, in itself, for it lends itself handsomely to the “freeconomics” arguments that are receiving renewed interest these days. Bill St. Arnaud of Canarie expands on this subject by citing Christian Anderson, Susan Crawford and others, here:

    http://lists.canarie.ca/pipermail/news/2007/000502.html

    Yesterday, CommsDay in its analysis of the coming transpacific submarine cable expansion questions whether we’re in for another bubble. The one line that struck me the most, which was cause for me to reflect on domestic backbones and local distribution networks as well, follows:

    From: http://www.commsday.com/comment/reply/180

    “Another factor driving the new builds is the cost-reward paradigm. A submarine cable is nothing but a network element and there is no law of economics that suggests it needs to be profitable in its own right– the margins can be made on downstream services.”

    Back on topic, the argument surrounding improved headroom - i.e., the amount of bandwidth that lies in wait to absorb spikes in line activity - is often lost to the ballistic nature of the metrics used to measure warhead production during an arms race. Sometimes (but not always, obviously), there is far less value in pushing the pedal to the metal (as though downloading a movie in 19 seconds is all one could ever wish for) than in an end user or appliance being able to do more things - and most importantly, more things that use or create value - simultaneously, which otherwise might require special conditioning (tiering, QoS, etc.) or not be usable at all.

    Frank

    Posted by Frank A. Coluccio | September 12, 2007, 3:50 PM
  20. Historical corollary: Nuclear power was supposed to be so cheap that there would be no need to meter use. Didn’t happen. I think (no data to support) that unlimited bandwidth to the home may be negatively impacting capex for the benefit of the applications guys.

    Posted by Andrew Schmitt | September 12, 2007, 4:38 PM
  21. Responding to Rohit’s comment that
    “Much of the technology in use today in any scale-out application came about because the required bandwidth was already present at the right cost point.”
    In general, I agree. Which is also why I don’t think Japan is a representative case. Their apps are not leading or unique in spite of the fact that they have significantly more b/w, cheap.
    It is possible that Japan b/w is simply ahead of global apps development and that global apps development is constrained by b/w (outside of Japan).

    Posted by Jon Loewen | September 12, 2007, 5:08 PM
  22. Capex is spent in many ways. If it is devoted mainly to OSS, billing systems, and make-believe value-added swidgetry that’s dependent on DPI and IMS-like “solutions” - as opposed to streamlining and improving transport capabilities, thereby mitigating future capex and opex demands - then I agree with you.

    There is no doubt that extreme tensions are mounting on many technological, ideological and user-preference fronts here, so your observation is most apt, and duly noted. The irony that I see here is that, even if the alternative streamlined approach I posited above were to be proved financially beneficial for service providers and users alike, it would still impact investors of certain software and systems in a negative way, while, of course, benefiting others, perhaps some of whom belong to other industry segments, commensurately.

    Posted by Frank A. Coluccio | September 12, 2007, 5:10 PM
  23. Hi,
    I found this a very interesting post, so I tried to redo the math to get a feeling for how the numbers were made.
    Under other assumptions:
    – As the DSL and cable markets have been mature since mid-2005 (rather than end of 2004, as there was growth in those two quarters), assume that their bandwidth consumption per user remains steady at the average of 20 kb/s peak, 223 MB/day.
    – All the growth in bandwidth since then is attributed to the FTTH users.

    Under those assumptions, the growth in bandwidth per FTTH user gets to around CAGR 34% on those two years:

    Period    FTTH subs (k)   kb/s per sub   MB/day per sub
    2005.5         3410            20.7           223.1
    2005.11        4640            22.0           237.3
    2006.5         6310            24.4           263.3
    2006.11        7940            33.8           365.5
    2007.5         9600            37.3           402.9

    Which is still far from exploding but more in line with Michael Cullina’s bet. And still rather low absolute usage, in my opinion.

    Posted by Carlos | September 13, 2007, 5:36 AM
  24. I thought about baselining the analysis as you said when I first looked at this. However, I don’t think it is reasonable to assume DSL users did not change their behavior in the last few years. I think it is possible the growth rate of FTTH bandwidth use is higher than DSL, but I have no way of reaching hard numbers on both.

    Posted by Andrew Schmitt | September 13, 2007, 7:41 AM
  25. Andrew,

    As one of the guys that was there at “The Big BroadBANG” I can tell you with a great deal of certainty that the numbers from Japan are absolutely spot on! Everyone incorrectly assumes that increased bandwidth is synonymous with increased usage, which is absolutely wrong. Think of bandwidth as money; most people that suddenly come into money usually keep the same burn rate. Few will indeed ‘go hog wild’ but most will not change a thing. At the end of the day we are all creatures of habit, so applying the technology adoption curve to bandwidth usage makes good sense (e.g.: early adopters, early majority, etc.). As far as bandwidth is concerned we are in the ‘Chasm’ phase and will remain there for the considerable future. Do note that Napster, for all its glory, only increased Internet bandwidth by 3%, though it seemed a lot higher then due to an access business model that was based on dial-up oversubscription at the time. Today, Napster wouldn’t even show up as a tiny radar blip.

    The problem that most proponents of increased bandwidth usage seem to forget is the following: however many devices are connected to the Net, we can only use one device at a time. Just look at your PC memory utilization; I bet you top Vegas odds that even though PC (RAM) memory has increased exponentially over the last decade (giving us another example of famous last words) the average user uses no more memory now than they did a decade ago. One application at a time, which is why hyper-connectivity is more hyper than connectivity. Usage will grow at 30% CAGR until we can achieve ‘omnipresence’. See you in the future :).

    Posted by Bill Baker | September 13, 2007, 8:12 PM
  26. The media is driven by advertising; that’s why TV is still on top. Most people are happy with the TV content they get, and choice gets boring. It’s like being let loose in the greatest library in the world – after some time you miss the pleasure of the four-month-old Newsweek at the barbershop.

    Meanwhile, the opportunity cost for choice starts with gasoline. For example, the drive down to the video store. Or catching a flight to visit a customer. Last week I couldn’t attend a wedding because of the travel, but I would have paid $20 to get my two minutes to congratulate the couple. Is this a viable business model? That’s a complex cultural question. Will my customer be happy if I don’t show up in person? Can I clinch that big deal by eyeballing a webcam? Dunno today, but the asymptote is a certainty. We’ll get there.

    Posted by Bandgap | September 15, 2007, 9:05 AM
  27. http://www.livescience.com/technology/070712_broadband_slowing.html

    Growth in Broadband Slows Dramatically

    The number of Americans with broadband Internet access rose 40 percent between 2005 and 2006, but only 12 percent between 2006 and 2007—although certain segments of the population did much better than that.

    The figures, gathered by the Pew Internet & American Life Project in Washington, DC, showed that 47 percent of American adults had high-speed Internet access at the start of 2007, up from 42 percent in 2006, and 30 percent in 2005.

    If broadband is headed to 75 percent saturation, various parts of the population are getting there at varying speeds—and some of them have already arrived. Among those with an annual income of more than $75,000, the rate of broadband penetration is already 76 percent. The rate falls off with each lower income bracket until the rate is 30 percent for those making less than $30,000.

    The group that showed the highest growth rate of broadband adoption last year was the under-$30,000 income segment, where the growth rate was 43 percent. It was also the only group that exceeded the growth rate it achieved between 2005 and 2006, when its use grew by 40 percent.

    Posted by d333gs | September 19, 2007, 3:21 AM
  28. Andrew, it’s always fun to read your stuff. I have spent a lot of time working with my colleagues in Japan on various issues over the years and observing trends there.

    I have been poking around following your posting of these figures and something everyone might want to consider about this Japanese market as a future predictor relating to broadband traffic:

    There is a cultural situation in play here. Simply put, the Japanese are addicted to television. Perhaps more than any other place in the world, the Internet and television are in direct competition for people’s disposable time and attention in Japan. I have discussed this at length with a variety of people, and they agree that television viewership is a pervasive time-eater within the free time Japanese workers have after their work day ends. This probably accounts for the very modest jump in Internet traffic seen in your numbers during what ought to be an explosive evening IP traffic increase when people get home – given the high rate of broadband connectivity available to most Japanese.

    There are certain programs on over there where it is actually considered uncool if you show up to work the next day and are not prepared to discuss what happened on the show the previous evening.

    For what it’s worth, some of these programs come from the best twisted genre of television ever – Japanese game shows, which are indeed addictive.

    Thought I’d toss this thinking into the mix. Cheers.

    Posted by John Harrington | October 23, 2007, 12:37 PM
  29. Interesting analysis. From my experience, let me say I’m not so much surprised by these growth rates: P2P file-sharing applications can be said to have reached maturity in 2005-2006, and during the last year I haven’t detected a killer application that can play a key role in the short term. The bandwidth impact of video streaming like YouTube is quite limited, although video applications and TV distribution are supposed to drive the change. Perhaps Joost could become a good example. Regarding CDNs, I agree with Rohit that the private CDN model will prevail.

    By the way, in the analysis of traffic growth in Japan, I miss information about the speeds/services sold in the period for each technology, so that any possible correlation between speed upgrades and traffic growth can be detected. Specifically, upload speed plays a key role in usage growth due to its effect on P2P/symmetrical applications.

    Alberto

    Posted by Alberto Arto | October 28, 2007, 7:23 PM
Trackbacks / Pingbacks

  1. Thoughts on Bandwidth Consumption - Don’t forget Data Caps. « Iain’s Chips & Tech | September 14, 2007, 2:37 PM
  2. MINTS — Great Internet Traffic Resource « Iain’s Chips & Tech | September 16, 2007, 5:27 PM
  3. Internet Traffic Growth Doesn’t Matter | Nyquist Capital | June 3, 2008, 3:48 PM
  4. Still No Japanese Exaflood in Sight | Nyquist Capital | January 13, 2009, 6:04 PM