
The Fuzzy Future of 40G

There is an optimistic sense of inevitability among investors that the optical network will transition from 10Gb/s to 40Gb/s. This optimism is misplaced: the rules that applied during the transition to 10Gb/s no longer apply today, and there is little technological, architectural, or economic reason for a widespread move to 40Gb/s technology.

The technical barriers to deploying 40G are rapidly falling. DQPSK modulation, OTN multiplexing, and components for conditioning the optical link are available today. According to our sources, the cost per bit for 40G transmission is now lower than that of 10G transmission on ultra-long-haul (>1000km) links.

The last transition, from 2.5G to 10G, is often used as a roadmap for how the transition to 40G will unfold. The key (and undocumented) statistic from this transition is that 10G volume grew rapidly once the cost of a 10Gb/s link fell to twice the cost of a 2.5Gb/s link. In other words, when the cost per bit of the more expensive technology (10G) was half that of the legacy technology (2.5G), customers started switching.
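As a rough illustration of that rule of thumb, here is the arithmetic with hypothetical prices (the figures below are made up, not from the article):

```python
# Hypothetical prices, purely to illustrate the "twice the price, four times
# the capacity" tipping point described above.
legacy_price = 10_000            # cost of a 2.5 Gb/s link (made-up figure)
legacy_rate_gbps = 2.5

new_price = 2 * legacy_price     # a 10 Gb/s link at twice the 2.5G price
new_rate_gbps = 10.0

legacy_cost_per_gbps = legacy_price / legacy_rate_gbps   # 4,000 per Gb/s
new_cost_per_gbps = new_price / new_rate_gbps            # 2,000 per Gb/s

# At that point the newer technology delivers bits at half the cost per bit
# of the legacy technology, and customers start switching.
assert new_cost_per_gbps == legacy_cost_per_gbps / 2
```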

But it isn’t 1999, and the rules that applied then certainly do not apply now. A number of technical and economic changes have taken place since. Let’s look at what is different today from 10 years ago.

 

1999-2006 (2.5Gb/s –> 10Gb/s) vs. 2008-???? (10Gb/s –> 40/100Gb/s)

Economics
  1999-2006:
  • Large component and equipment R&D boom followed by massive deflationary pressures
  • Scarce fiber capacity followed by a large glut
  2008-????:
  • Cautious R&D environment with success-based incremental investment
  • Limited areas of fiber exhaust

Optics
  1999-2006:
  • Lack of production-grade all-optical switching capability; lack of standardized optical form factors
  • WDM technology immature
  2008-????:
  • Optical ADMs (ROADMs) seeing widespread deployment; widespread acceptance and use of the 300-pin MSA and XFP
  • Broad availability of tunable lasers and filters

Protocols and Traffic
  1999-2006:
  • Traffic growth dominated by TDM-based protocols, primarily SONET/SDH
  • Inverse muxing not feasible
  2008-????:
  • Traffic growth dominated by packets, primarily 1Gb and 10Gb Ethernet
  • Multiple link aggregation is straightforward

Economics

The investment environment of 1999 was a perfect catalyst to spark a transition to 10G. There was massive R&D investment in 10G electronics, optics, and equipment during the speculative frenzy of the Telecom bubble. Fifteen companies raced to win the IEEE lottery, each duplicating the others’ efforts to enter a market that would ultimately support a handful of vendors. In the end, there were no winners among component and equipment makers, only different degrees of losers. The subsequent shakeout required a massive write-down of billions of dollars of R&D investment and shareholder value.

The net result was that the cost of 10G telecom technology rapidly hit the floor. Component and equipment prices were in free fall as suppliers struggled to recover whatever they could from sunk R&D costs. It didn’t matter that a component that used to sell for $500 was now going for $100 – so long as it was above manufacturing cost, it was all upside. This led to a highly deflationary period that drove the cost of 10G optics down.

Today we have a cautious investment environment, with component makers looking for more immediate ROI and equipment makers unwilling to engage in speculative R&D. The deflationary environment that drove the cost of 10G down doesn’t exist today, and as a result 40G component costs will not drop at the rate 10G costs did in 2002.

As for carriers, their barrier to rapid adoption in the 2001-2003 timeframe was not high costs; it was surplus capacity. There was little need to purchase any new equipment when the capacity already deployed was not being used. Carriers priced this unused capacity aggressively and eventually created additional demand.

When the demand appeared, 10G pricing wasn’t substantially different from 2.5G, and 10G took off.

Carriers continue to add capacity as needed and show little inclination towards speculative capacity investment; it is hard to envision a near term future where this attitude changes. 40G is attractive to carriers only where 10G cannot get the job done, such as specific fiber routes where no additional wavelength capacity remains. These areas do exist but are not great in number.

Meanwhile, companies like Infinera continue to drive density through PIC technology, while Huawei uses home-built optics to cut costs. 10G is still getting cheaper and is still a moving target that 40G needs to chase.

Bottom line: 40G component costs won’t decline at the rate they did for 10G. There are few areas where 40G is the only solution carriers have.

Protocols and Traffic

In 1999 SONET/SDH was king. Any data protocol that needed to be sent through the network was adapted into a signal suitable for transport through the SONET/SDH network. Even though SONET was invented as a voice technology, its ability to combine separate signals into a single faster signal (Time Division Multiplexing, or TDM) was a critical feature that extended its use to data transmission. A single T1 line could be combined with 27 others into a DS3 and inserted into a 2.5Gb/s payload (OC-48), occupying 1/48th of the total capacity.
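A sketch of the arithmetic behind that hierarchy, using the nominal rates of the T-carrier and SONET standards:

```python
# Nominal line rates in the T-carrier / SONET hierarchy, in Mb/s.
T1 = 1.544
DS3 = 44.736          # 28 T1s are multiplexed into one DS3
STS1 = 51.84          # one DS3 rides in one STS-1 payload
OC48 = 48 * STS1      # an OC-48 carries 48 STS-1s (~2.5 Gb/s)

print(f"28 x T1 = {28 * T1:.3f} Mb/s, carried in one DS3 ({DS3} Mb/s)")
print(f"OC-48   = {OC48 / 1000:.3f} Gb/s")
print(f"One DS3 occupies 1/{OC48 / STS1:.0f} of the OC-48 capacity")
```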

Once a 2.5G OC-48 link reached 75% capacity it became difficult to pack the remaining free capacity efficiently. Imagine a busy restaurant with 36 diners and a capacity of 48, and a party of eight arrives to be seated. It turns out the remaining twelve seats are scattered across the room at tables of two and four, and the party of eight, which must sit together, cannot be accommodated. Moving from 2.5G to 10G was like quadrupling the size of the restaurant, allowing large parties to be seated again.

This is the stranded bandwidth problem, and billions of dollars of switching equipment, like Ciena’s CoreDirector, were used to solve it. But often the cheapest way to solve it was to increase capacity (make the restaurant bigger), something that helped drive the deployment of OC-192. This was ‘throwing bandwidth at the problem’, and given the dropping costs of OC-192 it was a popular solution.
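A toy sketch of the stranded-bandwidth problem the restaurant analogy describes (the slot layout below is invented; the point is only that scattered free capacity cannot accept one large contiguous demand):

```python
# Toy model: an OC-48 has 48 STS-1 "seats"; a large TDM circuit needs a
# contiguous run of them.
def find_contiguous_free(slots, needed):
    """Return the start index of a run of `needed` free slots, or None."""
    run_start, run_len = None, 0
    for i, occupied in enumerate(slots):
        if not occupied:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len == needed:
                return run_start
        else:
            run_len = 0
    return None

# 36 of 48 slots are in use; the 12 free slots are scattered in small groups.
slots = [True] * 48
for free in (3, 4, 10, 11, 20, 21, 22, 23, 30, 31, 40, 41):
    slots[free] = False

print(find_contiguous_free(slots, 8))   # None: the "party of eight" is stranded
print(find_contiguous_free(slots, 4))   # 20: smaller circuits still fit
```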

The nature of traffic has now completely changed. All traffic growth is now data, and increasingly it is carried as native Ethernet, not SONET/SDH. Five years ago, every DSLAM used a SONET port to uplink ATM traffic. Today, all non-legacy deployments use Ethernet.

This has large implications:

  • The stranded bandwidth problem no longer exists, since data traffic can be multiplexed in a fluid fashion. In the Ethernet world, the party of eight doesn’t need to sit together at the restaurant.
  • In many cases the party of eight doesn’t even need to dine at the same restaurant. Link aggregation allows multiple physical links to emulate a single, faster link; four 10Gb/s Ethernet links can emulate a 40Gb/s connection, with some loss of efficiency (see the sketch after this list).
  • Ethernet and 40Gb/s simply don’t match, as there is no such thing as 40G Ethernet. Therefore, the only value 40G brings to the table is the ability to combine 4x10Gb/s Ethernet links onto one wavelength.
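A minimal sketch of the link-aggregation idea referenced above: flows are hashed onto member links, so each individual flow stays on one 10Gb/s link and only the aggregate approaches 40Gb/s. The hash fields and flows below are hypothetical:

```python
import hashlib

MEMBER_LINKS = 4   # four 10 Gb/s links standing in for a 40 Gb/s pipe

def pick_link(src_ip, dst_ip, src_port, dst_port):
    """Hash a flow key onto one member link.

    All packets of a flow land on the same link, which preserves packet
    ordering but means no single flow can exceed 10 Gb/s.
    """
    key = f"{src_ip}-{dst_ip}-{src_port}-{dst_port}".encode()
    return int(hashlib.md5(key).hexdigest(), 16) % MEMBER_LINKS

# Hypothetical flows: each one is pinned to a single member link.
flows = [
    ("10.0.0.1", "10.0.1.1", 40000, 80),
    ("10.0.0.2", "10.0.1.1", 40001, 80),
    ("10.0.0.3", "10.0.1.2", 40002, 443),
]
for flow in flows:
    print(flow, "-> link", pick_link(*flow))
```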

Bottom line: 40G is a SONET/SDH speed that adds no architectural value when used with the dominant source of traffic growth, Ethernet.

Optics

The promise of all-optical transport and switching is upon us (for real this time), with reconfigurable optical add/drop multiplexers (ROADMs) taking the place of their electronic equivalents, particularly in metro environments. The beauty of all-optical is that it eliminates electronics and the need to perform an optical-to-electrical and back-to-optical conversion. The majority of the cost in optical transport equipment is in the optics and electronics.

A traditional electrical add/drop approach uses a single 10G wavelength carrying 4 unique traffic sources. They might be multiplexed using TDM or Ethernet, but the single wavelength is shared. In an all optical approach, four 10G wavelengths share the same fiber, each one connected to a different source using ROADMs.

The individual 10G connections deployed are overkill from a bandwidth perspective, but the ROADM based architecture eliminates the electronics previously needed to pack and unpack traffic from a single wavelength.

Optics and ROADMs eliminate electronics, and the cheaper they get, the more electronics they eliminate. This has the side effect of driving carriers to use more wavelengths and eliminating the need for a single, faster, and more expensive optical link – like 40G.
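A rough way to see that trade-off with placeholder numbers (all prices below are invented, purely to show the shape of the comparison):

```python
# Hypothetical per-node costs, for illustration only (not real prices).
ADM_ELECTRONICS = 25_000    # electronics to pack/unpack a shared 10G wavelength
ROADM_PORT = 2_000          # optical add/drop port per wavelength

def electrical_node(transponder):
    # One shared 10G wavelength plus grooming electronics for four sources.
    return transponder + ADM_ELECTRONICS

def roadm_node(transponder):
    # Four dedicated 10G wavelengths, each with its own transponder and
    # ROADM port, and no grooming electronics.
    return 4 * (transponder + ROADM_PORT)

# As the 10G transponder gets cheaper, the all-optical design overtakes the
# electrical one, which is the dynamic described above.
for transponder in (10_000, 6_000, 3_000):
    print(transponder, electrical_node(transponder), roadm_node(transponder))
```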

Bottom Line: ROADM based architectures reduce the value of a single fast connection carrying multiple slower connections, which is the primary value proposition of 40G.

Conclusion

Carriers certainly need 40G for capacity-constrained areas, but they do not need large quantities of it. In the areas where carriers have no other option due to fiber or wavelength exhaust, 40G is a critical technology and it will be used.

Outside of these areas, the evolving nature of traffic from TDM to packet eliminates the benefits of multiplexing, and the decreasing cost of ROADMs makes 40G less attractive in higher-volume metro environments.

Component and equipment companies are playing a much more rational game this time around and will not collapse prices in an effort to create synthetic demand.

It is extremely unlikely that 40G will ever see anything near the breadth of deployment 10G will see. It is and always will be a niche technology. Of much greater interest is 100G, which represents a much larger quantum leap and a speed that is compatible with Ethernet.

Discussion


  1. In my mind, the title of the article could be changed, or there is a big hole in this analysis.

    The cost per bit on Long Haul links is enormously different for 40G and 10G LAG.

    If you take a 50GHz grid on your WDM fiber in a Long Haul system and you transport 4x10G, you use 4 lambdas; if you transport 1x40G DP-QPSK, you end up using 1 lambda in your grid.

    One important metric for Long Haul systems is bits per Hz, or spectral efficiency.

    This metric is important because all the EDFAs (erbium-doped fiber amplifiers) have to be installed whether you use 1 signal or a fully packed 50GHz DWDM composite signal.

    The cost of EDFAs is split between all the different lambdas in a DWDM system.

    The more bits you can add to the same grid, the lower the cost per bit of installing and operating the EDFAs.
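    A quick sketch of that amortization (the amplifier cost and channel count below are invented, purely illustrative):

    ```python
    # Hypothetical long-haul span: the EDFA chain cost is fixed regardless of
    # how many bits the fiber actually carries.
    EDFA_CHAIN_COST = 500_000      # made-up cost of all amplifiers on the route
    WAVELENGTHS = 80               # e.g. an 80-channel 50GHz-grid system

    def edfa_cost_per_gbps(gbps_per_wavelength):
        total_capacity_gbps = WAVELENGTHS * gbps_per_wavelength
        return EDFA_CHAIN_COST / total_capacity_gbps

    print("10G per lambda:", edfa_cost_per_gbps(10))   # 625.0 per Gb/s
    print("40G per lambda:", edfa_cost_per_gbps(40))   # 156.25 per Gb/s
    ```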

    All the optical research for Long Haul is going in that direction … more bits per hertz.

    40G DP-QPSK packs 2 bits (QPSK) into every symbol, and two of these streams are muxed onto two polarizations (DP, dual polarization). This signal occupies roughly the same spectrum as the RZ/NRZ or PSK signal used in 10G technology, so the DP-QPSK signal will fit in the same 50GHz grid as the 10G signal.
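    The arithmetic behind that claim, using the nominal 50GHz grid and the roughly 10Gbaud symbol rate the comment implies:

    ```python
    GRID_HZ = 50e9                      # standard 50GHz DWDM channel spacing

    # 10G NRZ: one bit per symbol, one polarization, ~10 Gbaud.
    nrz_10g = 10e9 / GRID_HZ            # 0.2 bit/s/Hz

    # 40G DP-QPSK: 2 bits per symbol (QPSK) x 2 polarizations at ~10 Gbaud,
    # i.e. 40 Gb/s in roughly the same optical bandwidth as the 10G signal.
    dp_qpsk_40g = (10e9 * 2 * 2) / GRID_HZ   # 0.8 bit/s/Hz

    print(f"10G NRZ     : {nrz_10g:.1f} bit/s/Hz")
    print(f"40G DP-QPSK : {dp_qpsk_40g:.1f} bit/s/Hz")
    ```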

    The same reasoning goes for ROADMs: 4x10G will use 4 ROADM ports while 40G will only use one.

    The evolution for 100G is to bump the symbol rate to 28Gbaud while using DP-QPSK modulation. This has a broader spectrum and will be more difficult to fit in a 50GHz grid.

    Any technology that does not increase the spectral efficiency (bits/Hz) going forward will be a moot point for Long Haul communication, as well described by this article !!! (Maybe the title should be: “It’s bits per hertz that counts …”.) All the points in the article are correct otherwise.

    Other approaches that solve the same problem in a different way would be lasers that can work on a 20GHz grid, which would effectively increase the WDM grid capacity. These solutions, while possible, would involve higher integration in the optical space and are not yet standard, as they do not fit the standard WDM grid of 50GHz.

    A lot of the points in the article are correct, and they could also build a good case for the enterprise/datacenter not treating 40G as an important technology.

    LAG has served a lot of datacenters well: those not willing to transition from 1G to 10G fall back to LAG until the price allows the transition.

    LAG presents operational challenges, so, at the right price point, a bigger pipe still becomes a better proposition.

    If 40G in the datacenter comes out of the gate at the right price point, it will take the entire LAG market; otherwise LAG will keep its place.

    In general my take is that only 100G is the real competition for 40G in the long haul.

    Spectral efficiency is king in Long Haul communication.

    Posted by Francesco Caggioni | September 28, 2008, 11:13 PM
  2. LAG is a horrible operational technology. Theorists, vendors, and now investors can all continue to think that 4x10G is the same as 40G, but the operational reality is that it is not. This is because in the real world, algorithms are required to “load balance” traffic for transmission across each of the links in the aggregated group. None of those algorithms is perfect, so the usable capacity of 4x10G is always less than 40G. Add to this the complexity of multi-hop routing and path calculations, and you run into the limitations of calculating multi-path load balancing. Finally, there are really no mechanisms to signal changes in available bandwidth to higher layers if a link in a group fails, so operationally a single 40G link failure is better than the loss of a 10G link in a 40G LAG.
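    A small illustration of that load-balancing imperfection (the "hash" below is just flow-id modulo link count, a stand-in for a real LAG hash, and the flow rates are made up): a couple of heavy flows landing on the same member link saturate it long before the group carries 40Gb/s.

    ```python
    # Toy LAG: four 10 Gb/s member links, flows pinned by a stand-in hash.
    LINKS = 4
    CAPACITY_GBPS = 10.0

    # Hypothetical flows: (flow id, rate in Gb/s).
    flows = [(1, 7.0), (2, 3.0), (3, 2.0), (4, 1.0), (5, 6.0)]

    load = [0.0] * LINKS
    for flow_id, rate in flows:
        load[flow_id % LINKS] += rate     # flows 1 and 5 both land on link 1

    print("offered load :", sum(r for _, r in flows), "Gb/s")   # 19.0 Gb/s
    print("per-link load:", load)         # link 1 carries 13.0 of 10.0 Gb/s
    # One member is oversubscribed and drops traffic even though the 4x10G
    # group as a whole is far from full.
    ```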

    Posted by Victor Blake | October 10, 2008, 10:36 AM