The Building Is the New Server
This is rocking the industry. Dell is being taken private — closing the curtain to start the dirty work of restructuring. HP, Microsoft and Intel are all trading well off their peaks even as the Dow has recently hit an all-time high. IBM looks like the sole winner, having jettisoned its PC business years ago to China-based Lenovo. Well, it’s a good thing all of these companies also play a big part in the $55 billion server market[1] — that’s not going away anytime soon, right? The worst days are over, and hopefully their collective market caps will recover? Not so fast …
Modern Web services, such as Google, Apple and Facebook, are pushing the limits of data center scaling to unprecedented levels as they deal with an exponential growth in user traffic. They are playing a massive game of Tetris as they grapple with deploying and operating data centers with tens of thousands of servers versus hundreds. They are all on the bleeding edge of trying to contain costs while cramming as much capacity into a physical building as possible. The result is a complete architectural rethink of data center designs, and the incumbent server vendors are struggling to stay relevant in this new reality.
The new data center designs use only commodity “vanity-free” components procured directly from the original design manufacturers (ODMs) — the incumbents’ own suppliers. For easy serviceability, components are Velcro-ed together, versus mounted in a box. All bells and whistles are stripped off, and the hardware is purpose-built for a specific application and therefore carefully tuned. As compute-utilization rates skyrocket from virtualization and parallel processing, the CPUs are running harder and hotter, and therefore the new expense bottleneck is all about power and cooling.
Take, for instance, Facebook’s Open Compute initiative, which lays out a blueprint for a hyper-scale data center that is 38 percent more energy efficient and 24 percent less expensive than current data centers. Locating in cold climates and next to super-cheap hydro power has become de rigueur. Power distribution, cooling and building layouts have been redesigned from the ground up to maximize the mechanical performance and electrical efficiency of the data center. And unfortunately for Intel, the relentless march of Moore’s Law no longer affords it differentiation, as customer needs have shifted from performance to power efficiency, an area where it lags rival ARM processors.
The evolution of the modern hyper-scale data center reflects the hyper-scale needs of the applications that run on it. Modern Web 2.0 (and increasingly SaaS) applications need to handle thousands of user requests per second, processing terabytes of information in real time across hundreds of customers. They are by necessity massively parallel, with servers working in concert to service a user request. This is the modern equivalent of a giant supercomputer — except cobbled together from commodity server components and interconnect fabrics. It’s a profound software and hardware architectural shift that is taking us from a world where data centers consisted of a small number of independent high-performance branded servers to a brave new world where the giant data center building is the server.
Meanwhile, on the enterprise front, the corporate data center is becoming increasingly sedate as on-prem applications give way to their SaaS counterparts. The new data center architectures, born of necessity from the giant Web service providers, have the potential to massively drive down the cost of providing software as a service, the new winner in enterprise applications. As such, the cloud service providers (CSPs), such as Amazon and Rackspace, are adopting these “scale-out” architectures.
So, fast-forward: SaaS will win the enterprise market. Face it — it’s just so much better, and now infinitely cheaper than any of the alternatives. And modern SaaS applications will be delivered through hyper-scale data centers that do not have branded servers from Dell, HP or IBM, but rather highly optimized, scale-out white-box servers made by Asian ODMs. In addition, the operators of these massive data centers will be experts in servicing their creations — monitoring, fixing and rapidly swapping out their expected-to-fail components. Therefore, there will no longer be a need for the recurring revenue, high-margin service and maintenance contracts that have been a mainstay of the OEM server industry.
I wonder if Lenovo is in the market for a server business, too.
________________________
I would like to thank my partner, Ramu Arunachalam, for his research, analysis and material contributions to this blog.
[1] IDC estimates (2012)
Scott Weiss is a partner at Andreessen Horowitz and the former co-founder and CEO of IronPort Systems, which was acquired by Cisco in 2007. Follow him on his blog or on Twitter.