There is an optimistic sense of inevitability among investors that the optical network will transition from 10Gb/s to 40Gb/s. This inevitability is misplaced: the rules that applied during the transition to 10Gb/s no longer apply today. There is little technological, architectural, or economic reason for a widespread move to 40Gb/s technology.
The technical barriers to deploying 40G are rapidly falling. DQPSK modulation, OTN multiplexing, and components for conditioning the optical link are available today. According to our sources, the cost per bit of 40G transmission is now lower than that of 10G on ultra-long-haul (>1,000 km) links.
The last transition, from 2.5G to 10G, is often used as a roadmap for how the transition to 40G will unfold. The key (and undocumented) statistic from that transition is that 10G volume grew rapidly once the cost of a 10Gb/s link fell to twice the cost of a 2.5Gb/s link. In other words, when the cost per bit of the more expensive technology (10G) reached half that of the legacy technology (2.5G), customers started switching.
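The switching rule above is simple arithmetic: four times the bits at twice the price means half the cost per bit. A minimal sketch, using an illustrative placeholder price rather than any real equipment quote:

```python
# Hypothetical link prices illustrating the 2.5G -> 10G switching rule.
# The $10,000 figure is an assumption for illustration, not market data.

def cost_per_bit(price_usd, gbps):
    """Dollars per bit per second of capacity."""
    return price_usd / (gbps * 1e9)

legacy_price = 10_000            # assumed price of a 2.5G link
new_price = 2 * legacy_price     # 10G link priced at twice the 2.5G link

legacy = cost_per_bit(legacy_price, 2.5)
new = cost_per_bit(new_price, 10.0)

# Four times the capacity at twice the price -> half the cost per bit.
assert new == legacy / 2
```

The same arithmetic frames the 40G question: a 40G link carries four times the bits of a 10G link, so it becomes attractive on cost grounds only if its price falls below four times the 10G price.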
But it isn’t 1999 and the same rules that applied then certainly do not apply now. A number of technical and economic changes have taken place since. Let’s look at what is different today from 10 years ago.
The investment environment of 1999 was a perfect catalyst to spark a transition to 10G. There was massive R&D investment in 10G electronics, optics, and equipment during the speculative frenzy of the Telecom bubble. Fifteen companies raced to win the IEEE lottery, each duplicating the others' efforts to enter a market that would ultimately support only a handful of vendors. In the end there were no winners among component and equipment makers, only different degrees of losers. The subsequent shakeout required writing down billions of dollars of R&D investment and shareholder value.
The net result was that the cost of 10G telecom technology rapidly hit the floor. Component and equipment prices were in free fall as suppliers struggled to recover whatever they could from sunk R&D costs. It didn't matter that a component that used to sell for $500 was now going for $100; so long as the price was above manufacturing cost, it was all upside. This led to a highly deflationary period that drove the cost of 10G optics down.
Today we have a cautious investment environment, with component makers looking for more immediate ROI and equipment makers unwilling to engage in speculative R&D. The deflationary environment that drove the cost of 10G down doesn't exist today, and as a result 40G component costs will not drop at the rate 10G costs did in 2002.
As for carriers, their barrier to rapid adoption in the 2001-2003 timeframe was not high cost but surplus capacity. There was little need to purchase new equipment when the capacity already deployed was not being used. Carriers priced this unused capacity aggressively and eventually created additional demand. When that demand appeared, 10G pricing wasn't substantially different from 2.5G pricing, and 10G took off.
Carriers continue to add capacity as needed and show little inclination towards speculative capacity investment; it is hard to envision a near-term future where this attitude changes. 40G is attractive to carriers only where 10G cannot get the job done, such as specific fiber routes where no additional wavelength capacity remains. Such routes exist, but they are not great in number.
Meanwhile, companies like Infinera continue to drive density through PIC technology, while Huawei uses homebuilt optics to cut costs. 10G is still getting cheaper; it is a moving target that 40G must chase.
Bottom line: 40G component costs won’t decline at the rate they did for 10G. There are few areas where 40G is the only solution carriers have.
Protocols and Traffic
In 1999 SONET/SDH was king. Any data protocol that needed to be sent through the network was adapted into a signal suitable for transport over SONET/SDH. Even though SONET was invented as a voice technology, its ability to combine separate signals into a single faster signal (Time Division Multiplexing, or TDM) extended its use to data transmission. A single T1 line could be combined with 27 others and inserted into a 2.5Gb/s payload (OC-48), occupying 1/48th of the total capacity.
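The multiplexing arithmetic behind that example can be spelled out in a few lines (the rates are the standard SONET figures):

```python
# SONET multiplexing arithmetic for the T1 -> OC-48 example above.
T1_MBPS = 1.544            # line rate of one T1
T1S_PER_STS1 = 28          # 28 T1s fill one DS3, which maps to one STS-1
STS1_SLOTS_PER_OC48 = 48   # an OC-48 frame carries 48 STS-1 slots

# A group of 28 T1s occupies exactly one slot, 1/48th of the OC-48 payload:
slots_for_t1_group = 1
assert slots_for_t1_group / STS1_SLOTS_PER_OC48 == 1 / 48

# Raw T1 traffic carried in that slot vs. the slot's own line rate:
payload_mbps = T1S_PER_STS1 * T1_MBPS   # about 43.2 Mbit/s of T1 traffic
sts1_mbps = 51.84                       # STS-1 line rate
```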
Once a 2.5G OC-48 link reached 75% capacity, it became difficult to pack the remaining free capacity efficiently. Imagine a busy restaurant with 36 diners, a capacity of 48, and a party of eight arriving to be seated. The remaining twelve seats are scattered across the room at tables of two and four, so the party of eight, which must sit together, cannot be accommodated. Moving from 2.5G to 10G was like quadrupling the size of the restaurant, allowing large parties to be seated again.
This is the stranded bandwidth problem, and billions of dollars of switching equipment, like Ciena's CoreDirector, were deployed to solve it. But often the cheapest way to solve it was to increase capacity (make the restaurant bigger), which helped drive the deployment of OC-192. This was 'throwing bandwidth at the problem', and given the dropping cost of OC-192 it was a popular solution.
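The restaurant analogy is a packing problem, and a toy model makes the stranding concrete. The slot counts below mirror the analogy (twelve free seats split across tables of two and four) and are illustrative only:

```python
# Toy model of stranded bandwidth: free capacity exists in aggregate,
# but no single link has enough room for a large circuit.
# Slot counts are illustrative, mirroring the restaurant analogy.

links_free_slots = [2, 4, 2, 4]      # leftover slots scattered across links
total_free = sum(links_free_slots)   # 12 slots free in aggregate

def place(circuit_size, links):
    """First-fit placement: a circuit must land entirely on one link."""
    for i, free in enumerate(links):
        if free >= circuit_size:
            links[i] -= circuit_size
            return i
    return None  # stranded: capacity exists, but nowhere it can be used

assert total_free == 12
assert place(8, links_free_slots) is None  # the 'party of eight' is turned away
assert place(4, links_free_slots) == 1     # smaller circuits still fit
```

Quadrupling link capacity (the OC-192 move) is the "bigger restaurant" fix: large circuits fit again without any cleverness in the packing.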
The nature of traffic has now completely changed. All traffic growth is now data, and increasingly it is carried as native Ethernet, not SONET/SDH. Five years ago, every DSLAM used a SONET port to uplink ATM traffic. Today, all non-legacy deployments use Ethernet.
This shift has large implications.
Bottom line: 40G is a SONET/SDH speed that adds no architectural value when used with the dominant source of traffic growth, Ethernet.
The promise of all-optical transport and switching is upon us (for real this time), with reconfigurable optical add/drop multiplexers (ROADMs) taking the place of their electronic equivalents, particularly in metro environments. The beauty of all-optical is that it eliminates electronics and the need to perform an optical-to-electrical-to-optical (OEO) conversion. The majority of the cost in optical transport equipment is in the optics and electronics.
A traditional electrical add/drop approach uses a single 10G wavelength carrying four unique traffic sources. They might be multiplexed using TDM or Ethernet, but the single wavelength is shared. In an all-optical approach, four 10G wavelengths share the same fiber, each one connected to a different source via ROADMs.
The individual 10G connections deployed are overkill from a bandwidth perspective, but the ROADM based architecture eliminates the electronics previously needed to pack and unpack traffic from a single wavelength.
Optics and ROADMs eliminate electronics, and the cheaper they get, the more electronics they eliminate. This has the side effect of driving carriers to use more wavelengths, eliminating the need for a single, faster, more expensive optical link like 40G.
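The trade-off can be sketched as back-of-the-envelope arithmetic. All prices below are hypothetical placeholders, not real equipment quotes; the point is the structure of the comparison, not the numbers:

```python
# Back-of-the-envelope comparison of the two architectures described above.
# Every dollar figure is an assumed placeholder for illustration only.

N_SOURCES = 4
OEO_PORT = 8_000        # assumed cost of an electronic add/drop (OEO) port
TRANSPONDER = 5_000     # assumed cost of a 10G transponder
ROADM_ADD_DROP = 3_000  # assumed per-wavelength ROADM add/drop cost

# Electrical add/drop: one shared wavelength, but every source needs an
# OEO port plus a transponder to get on and off the shared link.
electrical = N_SOURCES * (OEO_PORT + TRANSPONDER)

# All-optical: one wavelength (and transponder) per source, with only a
# ROADM add/drop per wavelength and no intermediate electronics.
optical = N_SOURCES * (TRANSPONDER + ROADM_ADD_DROP)

# As ROADM costs fall, adding wavelengths beats buying faster electronics.
assert optical < electrical
```

Under these assumed prices, four "overkill" 10G wavelengths come in cheaper than one shared wavelength plus the electronics to pack and unpack it, which is the carrier behavior described above.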
Bottom line: ROADM-based architectures reduce the value of a single fast connection carrying multiple slower connections, which is the primary value proposition of 40G.
Carriers certainly need 40G for capacity-constrained areas, but they do not need large quantities of it. Where carriers have no other option due to fiber or wavelength exhaust, 40G is a critical technology and it will be used.
Outside of these areas, the evolving nature of traffic from TDM to packet eliminates the benefits of multiplexing, and the decreasing cost of ROADMs makes 40G less attractive in higher-volume metro environments.
Component and equipment companies are playing a much more rational game this time around and will not collapse prices in an effort to create synthetic demand.
It is extremely unlikely that 40G will ever see anything near the breadth of deployment 10G will see. It is, and always will be, a niche technology. Of much greater interest is 100G, which represents a much larger leap and a speed that is compatible with Ethernet.