Cisco Checks on the Cloud and Confirms That It’s Getting Bigger
Ever heard of a zettabyte? I’m going to use that word a few times in this story, so it will probably help if I define it first.
You know its smaller siblings: the gigabyte, the terabyte, and maybe the petabyte and exabyte. Your average PC hard drive holds a terabyte or two, and external hard drives are now hitting six terabytes. Big companies with data centers routinely deal with data at the petabyte level. Earlier this year, Facebook said it was setting up an exabyte-scale cold-storage facility at its data center in Prineville, Ore., intended to hold photos of its members forever.
But a zettabyte is the second-to-last of the words we have to quantify data storage. If you think of a terabyte as 1,000 gigabytes, then a zettabyte is a trillion gigabytes. Beyond that is one more word, yottabyte, which would be a quadrillion gigabytes. After that, no further words have yet been agreed upon.
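If you want to sanity-check those conversions yourself, here's a minimal sketch. The unit ladder and the factor of 1,000 per rung follow the decimal (SI) convention the article is using; the function name is my own, not anything from Cisco.

```python
# Decimal (SI) byte units, each 1,000x the previous one.
UNITS = ["byte", "kilobyte", "megabyte", "gigabyte", "terabyte",
         "petabyte", "exabyte", "zettabyte", "yottabyte"]

def in_gigabytes(count, unit):
    """Express a quantity of some unit as a number of gigabytes."""
    steps = UNITS.index(unit) - UNITS.index("gigabyte")
    return count * 1000 ** steps

print(in_gigabytes(1, "zettabyte"))  # 1000000000000 -- a trillion gigabytes
print(in_gigabytes(1, "yottabyte"))  # 1000000000000000 -- a quadrillion
```

Running it confirms the article's figures: one zettabyte is 10^12 gigabytes, one yottabyte is 10^15.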
So, here’s why I’m getting into all this: Cisco Systems today put out another one of its big trend surveys meant to blow your mind a bit and start you thinking long-term about the demands being put on your network and data center. It’s called the Global Cloud Index, which it says measures the combination of three types of data in motion: Traffic between real people and data centers, surfing video and websites, and the like; traffic between data centers, using shared resources; and traffic within a data center. (Cisco explains its definition and methodology in agonizing detail here.)
The headline is that Cisco says all this traffic will grow 4.5-fold, to 7.7 zettabytes a year, by 2017. Then it goes on to try to make that amount of data real: It’s enough data to play a continuous stream of music for a year and a half, or to stream 2.5 hours a day of high-definition video for every person on Earth.
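A quick back-of-envelope check on that headline number — my arithmetic, not Cisco's, and assuming the 4.5x multiple is measured from 2012, the base year the report uses elsewhere:

```python
# 7.7 zettabytes/year by 2017, at 4.5x growth, implies a baseline
# of roughly 1.7 zettabytes/year in the base year (assumed 2012).
total_2017_zb = 7.7
growth_factor = 4.5
baseline_zb = total_2017_zb / growth_factor
print(round(baseline_zb, 2))  # 1.71
```

In other words, the world is already pushing nearly two zettabytes of this traffic a year before the projected growth kicks in.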
It’s worth noting that this isn’t Cisco’s only ongoing exercise to quantify how much data is flowing through the Internet’s pipes. Its Visual Networking Index tracks the growth in another kind of data, and isn’t quite as big.
The biggest portion of the Cloud Index’s traffic, about 76 percent, takes place inside data centers, while about 17 percent is generated between people like you and me and the Web and cloud services we use. The remaining 7 percent of traffic flows between data centers.
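Applying that split to the 7.7-zettabyte 2017 total gives a rough sense of each slice's volume. The multiplication is mine — Cisco doesn't publish the figures in this form — and it assumes the percentage split still holds in 2017:

```python
# Rough slice sizes: 76/17/7 percent split applied to the
# projected 7.7 zettabytes/year of total traffic (my arithmetic).
total_zb = 7.7
shares = {
    "within data centers": 0.76,
    "users to data centers": 0.17,
    "between data centers": 0.07,
}
for kind, share in shares.items():
    print(f"{kind}: {total_zb * share:.2f} ZB/year")
```

That puts intra-data-center traffic alone at close to six zettabytes a year — most of the cloud's plumbing is invisible to the people using it.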
To reach this estimate, Cisco says it generated some statistical models based on an analysis of what it describes as “various primary and secondary sources,” including a sample of about 40 terabytes of traffic taken from data centers around the world, gathered in 90 million network tests. So there’s clearly some extrapolation involved.
Next, Cisco breaks it down by regions of the world, and determines that the fastest growth in all this traffic will take place in the Middle East and Africa, probably because it’s growing off the lowest base. The biggest share of the traffic will be in North America, naturally (1.886 zettabytes annually by 2017) followed by Asia (1.876 zettabytes) and Western Europe (770 exabytes).
Also, next year is a key one in the evolution of cloud services: Cisco says that, in 2014, 51 percent of data-center workloads will be processed in the cloud. (Again, refer back to this for details on that definition.) That would be up from 39 percent in 2012, and the figure is projected to reach 63 percent by 2017.
There’s more on a site that Cisco has built to explain all this in more detail than most people would care to get into. But, then again, you may not be most people, so have at it.