There were a number of cool presentations today with a focus on the semiconductors and optical components … and various network processing units and multi-core general purpose processors for high-speed backbone networking. It’s actually a fascinating subject area that few people seem to really be aware of. We all take the bandwidth to our homes as a given … and to our businesses, and to our hosted servers, blogs, flickr, twitter, and YouTube and on and on. But how is all of that backbone bandwidth … running over the fiber that connects us to our favorite sites, services, applications, and videos … actually built out?
Well … there are a number of vendors that provide the bulk of the equipment, and within that equipment there are providers of the subcomponents and silicon behind the massive amounts of bandwidth provided by the Internet. Cisco, Juniper, and Alcatel account for roughly 90% of the market for the really high-end backbone switching and routing gear.
The dominant solution in this space is DWDM – Dense Wavelength Division Multiplexing – which places multiple colors of light on the same fiber, with each color carrying its own data. The current “commodity” speeds being sold run at 40Gbps per link, and these are then installed in 4, 8, and 16+ slot chassis providing up to 1.2+Tbps packet switching speeds. Yes … that is 1.2 Terabits per second … 1.2 trillion bits per second … or about 120 billion characters per second. Kinda fast.
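A quick sanity check on that “1.2 Tbps ≈ 120 billion characters per second” conversion. One assumption here is mine, not from the presentations: a character costs roughly 10 bits on the wire (8 data bits plus line-coding/framing overhead), a common rule of thumb for serial links.

```python
# Sanity-check the 1.2 Tbps -> ~120 billion characters/second conversion.
# Assumption (mine): ~10 line bits per character, i.e. 8 data bits plus
# framing/line-coding overhead -- a rough rule of thumb, not a spec.

capacity_bps = 1.2e12      # 1.2 Tbps switching capacity (from the text)
bits_per_char = 10         # rule-of-thumb, including overhead

chars_per_sec = capacity_bps / bits_per_char
print(f"{chars_per_sec / 1e9:.0f} billion characters/second")  # → 120
```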
Some of the presenters today were Infinera, Luxtera, Photonic, and BroadLight. If you want to learn more about DWDM … Infinera has some cool videos that provide some details about their products on the Infinera Videos page. Their first video demonstrates how DWDM works …
During the presentations, there were a few stats that really stood out to me. One of these was the current average backbone bandwidth, per US carrier. Here in the US, although the presenter admitted that it varies, the average was pegged at about 400Gbps+ on their backbone links. The key point is that this is estimated to grow by an average of 75% in the next year alone!
One of the example calculations projected this forward for 10 years … so if the Internet grows at 70% per year for 10 years, using the current 40Gbps DWDM optical technologies:
- 15 million DWDM transponders will have to be added
- 165 million mechanical fiber couplings will have to be installed
- 4 GigaWatts of additional power will be required
- AND … In 10 years, they would be installing 4000 DWDM transponders PER DAY …
- … requiring 2000+ more technicians!
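The compounding behind numbers like these is easy to sketch. The 400Gbps average backbone link and the 70%/year growth rate come from the talk; everything derived below is my own back-of-the-envelope arithmetic, not the presenters' model (their transponder, coupling, and power totals depend on network-wide data that wasn't shown).

```python
# Back-of-the-envelope: what 70%/year growth does to one average backbone
# link over 10 years, in terms of today's 40Gbps DWDM wavelengths.

base_gbps = 400          # average backbone link today (from the talk)
growth = 0.70            # 70% per year (from the talk)
years = 10
wavelength_gbps = 40     # current commodity DWDM line rate

future_gbps = base_gbps * (1 + growth) ** years
print(f"Growth factor over {years} years: {(1 + growth) ** years:.0f}x")
print(f"Average backbone link: {future_gbps / 1000:.0f} Tbps")
print(f"40G wavelengths needed per link: {future_gbps / wavelength_gbps:.0f}")
```

Roughly a 200x growth factor per link … so it is easy to see how the industry-wide totals above pile up into millions of transponders and gigawatts of power.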
Infinera was presenting on their upcoming 100Gbps optical technologies, and also mentioned their eventual 400Gbps product, followed by 1Tbps, 2Tbps, and 4Tbps chipsets. Obviously as they – and other companies – are able to deliver these higher capacity solutions, there will be smaller numbers of units required to keep pace.
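The inverse relationship is simple: for a fixed aggregate demand, the transponder count scales with one over the per-wavelength line rate. A tiny illustration, using the roadmap rates from the text and a hypothetical 80 Tbps demand figure of my own choosing:

```python
import math

# Illustration: for a fixed demand, higher line rates mean fewer units.
# The 80 Tbps aggregate demand is a hypothetical placeholder (mine),
# not a figure from the presentations.
demand_gbps = 80_000

for rate_gbps in (40, 100, 400, 1000, 2000, 4000):  # roadmap line rates
    units = math.ceil(demand_gbps / rate_gbps)
    print(f"{rate_gbps:>5} Gbps wavelengths -> {units:>5} transponders")
# e.g. 40 Gbps -> 2000 transponders, but 1000 Gbps -> only 80
```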
To me it’s impressive to see that people are working on creating these next-generation solutions to boost the capabilities offered … to ensure that the Internet backbones can keep pace with the demands for bandwidth being created by us users.
There is a lot going on in the industry to cope with Internet bandwidth demands …