Is It Time for Just-in-Time Data Centers?
The 1970s were a challenging time for automakers, especially in the U.S. The model that had worked so well for decades — building new, bigger models that often emphasized design over function and practicality — was being challenged as oil shocks and inflation took their toll on the American buyer.
In the midst of globally difficult economic times, the Japanese saw an opening. Taiichi Ohno, working for Toyota after World War II, saw inefficiencies in the American assembly line system, especially in its inventory and quality control practices. He pioneered a new system of production for the company, called “just-in-time.” The objectives were to introduce consistency into the production process and eliminate waste of all kinds: overproduction, excess inventory, time wasted waiting for parts, and defective products. American supermarkets, ironically, held the answer for Ohno; they restocked shelves with only enough product to replace inventory that had been sold to customers. In the just-in-time system, this translated into keeping about 24 hours’ worth of inventory in the factory, with parts ordered and shipped on an as-needed basis.
The success of Toyota’s transformation is now well known; the company grew into a global player that in 2008 surpassed the sales of GM, then the world’s largest automaker. The just-in-time system Toyota pioneered would be widely copied, with U.S. automakers implementing it in the 1980s with varying degrees of success.
Lessons for the Data Center
You may be asking what all of this fascinating automotive history has to do with data centers today, in 2013. The fact is, the data center industry is undergoing a transformation of its own. While the U.S., thankfully, hasn’t seen oil shocks on the scale of those experienced in the 1970s, it’s clear that the era of cheap energy is over, especially with increasing demand for energy supply in high-growth markets like China and India.
Rising energy prices and volatility mean that data centers can’t afford to operate the way U.S. auto manufacturers did in the 1950s and 1960s. Data centers of previous decades were built with overcapacity in mind, designed to accommodate future growth in an era when energy was much cheaper and more readily available. Today, data center owners and managers face the challenge of these legacy builds, which suffer from the same kind of waste that plagued the auto industry before lean manufacturing changed it. As James Glanz outlined last year in the New York Times story “Power, Pollution and the Internet,” the worst offenders not only use huge amounts of energy to power parts of data centers that aren’t used, but also contribute to pollution.
Just as gains in the speed of transportation made just-in-time a reality for the automotive industry in the 1970s, the data center industry’s transformation may be driven today by recent technology advances, supported by a growing recognition across the industry that the old way of powering and cooling data centers will no longer be tolerated by the public nor remain economically advantageous for companies. These recent advances include:
- The rise of readily available modular data center components, including Uninterruptible Power Supplies (UPSs), cooling and power distribution systems. These components are the infrastructure that can efficiently and reliably power and cool expensive computing equipment. Prior to the availability of modular components, data centers were almost always custom-built, and because of this, they were almost always overprovisioned to accommodate future growth or spikes in demand.
- The rise of standardized components in the data center, especially related to data center infrastructure. Part of the success of just-in-time was a recognition that relying on trusted suppliers to manufacture parts, so that the automaker could focus on its core competency of manufacturing the automobile, would actually bring about more reliable and better-performing products. Data center infrastructure is much the same — custom-built systems come with custom problems that take man-hours and money to fix. Standardized components are time-tested to be effective, reliable and predictable, with flaws having been driven out in the manufacturing and testing/validation process.
- The availability of monitoring software that provides visibility into energy usage in the data center. The just-in-time system relies upon signals that would tell managers what the rate of demand for parts was, so that production could be scheduled accordingly. Similarly, advances in data center software and analytics today provide that kind of intelligence for data center managers by monitoring equipment usage and helping to plan for future needs through benchmarking, indicating in advance when capacity may need to be increased.
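The capacity signal described in the last bullet can be illustrated with a minimal sketch. The function names, threshold, and readings below are hypothetical, not drawn from any specific monitoring product, and the linear-growth projection is a deliberate simplification:

```python
# Minimal sketch of threshold-based capacity planning; all names and
# numbers are hypothetical, not any particular DCIM product's API.

def utilization(load_kw, capacity_kw):
    """Fraction of provisioned power capacity currently in use."""
    return load_kw / capacity_kw

def months_until_threshold(current_kw, growth_kw_per_month,
                           capacity_kw, threshold=0.8):
    """Roughly project when load will cross the planning threshold,
    assuming linear growth (a deliberate simplification)."""
    headroom_kw = capacity_kw * threshold - current_kw
    if headroom_kw <= 0:
        return 0  # already past the threshold
    if growth_kw_per_month <= 0:
        return None  # flat load: no projected crossing
    return headroom_kw / growth_kw_per_month

# Example: a 200 kW room carrying 120 kW, growing 5 kW per month.
print(round(utilization(120, 200), 2))      # 0.6
print(months_until_threshold(120, 5, 200))  # 8.0
```

In practice a monitoring tool would fit a trend to historical readings rather than assume a constant growth rate, but the planning logic is the same: flag the crossing far enough in advance to order and install new capacity.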
Standardization is the key to consistency in the data center, while modularity is the key to flexibility. Both characteristics are needed to respond effectively to changes in the IT environment that result from changing business needs. Modular components are pre-engineered and “plug and play,” so that they can evolve with a changing data center’s design over time. Because they are scalable, changeable, portable and swappable, they can be installed with a speed akin to the ease with which most car owners today can replace or repair standard auto parts.
Benefits of a Just-in-Time Approach
When Toyota pioneered the just-in-time system, it also built an entire system outside of production that the company would adhere to and that would contribute to its success. Part of that system was cultivating a culture of continuous improvement in which its people’s talents would be stimulated and performance maximized. To be sure, recent events such as the 2011 tsunami in Japan disrupted global supply chains and devastated production, exposing vulnerabilities in the system. However, its long-term success, evidenced by its spread into all types of manufacturing as well as other fields such as business intelligence in recent years, is a testament to the substantial benefits that applying its principles can confer.
In the data center, this requires a new approach to thinking about designing and building new systems, one which promises substantial gains in operational efficiencies. Some of these include:
- Outsourcing of non-core competencies. In the auto industry, this included logistical functions such as storage and distribution to third party providers. Modular data centers, effectively, have outsourced the headaches of production, configuration and installation to the manufacturer of the components. They are plug-and-play, making initial deployment and subsequent changes achievable without a great deal of custom work, and they are built in factories under strict quality control and engineered to deliver peak performance.
- Reduced response time to business demands, again due to the availability of standardized and modular components. Growth can be planned for in the initial design of the data center without the extra capacity being deployed and operationalized up front, which means data centers are “right-sized”: operating only with the computing power and infrastructure needed today rather than for anticipated future needs.
- Improved customer satisfaction and better return on equity, due to the reliability of components as well as energy savings and reduced manual labor. Custom-built data centers can take years to achieve designed performance specifications with respect to measurements such as Power Usage Effectiveness (PUE, a measure of data center efficiency). Prefabricated modules come with design specifications that are verified in the factory before they are shipped.
- Less downtime, as failures in modular components can be identified quickly and easily and replaced in little time, compared to disparate or custom engineered components.
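For readers unfamiliar with PUE, The Green Grid defines it as total facility power divided by the power that actually reaches the IT equipment, so a value of 1.0 would mean every watt drawn goes to computing. A minimal illustration, with hypothetical kilowatt figures:

```python
# PUE = total facility power / IT equipment power (The Green Grid metric).
# An ideal facility approaches 1.0; the figures below are hypothetical.

def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness of a data center."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# 1000 kW total draw, of which 625 kW reaches servers and storage;
# the remaining 375 kW goes to cooling, power conversion and lighting:
print(round(pue(1000, 625), 2))  # 1.6
```

A factory-verified module ships with this number already measured under load, which is why prefabricated builds reach their design specification faster than custom ones.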
Data centers today, in fact, are already taking advantage of modular builds. Beneficial Financial Group, one of the oldest insurance companies in Utah, needed to add more power and rack space to its data center, with better monitoring and management capabilities and minimal downtime. Buying a new system, or a larger version of the existing one, would have meant completely re-engineering the data center; instead, Beneficial chose a modular system that integrated power, cooling management and servers with a universal rack design. The installation was completed over a weekend with no interruption to business, and forecasts estimate the company will see an annual ROI of 74 percent with a payback period of ten months, thanks to maintenance and service savings and increased user productivity. And because it can now add servers as the company grows, Beneficial has expanded its data center capability by 300 percent while leaving room for future growth.
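The payback figure above is simple arithmetic: the number of months it takes cumulative savings to cover the upfront cost. The dollar amounts in this sketch are hypothetical, chosen only to reproduce the stated ten-month payback; the case study does not disclose Beneficial's actual costs:

```python
# Hypothetical figures; the article does not state Beneficial's real costs.

def payback_months(upfront_cost, monthly_savings):
    """Months until cumulative savings cover the upfront cost."""
    return upfront_cost / monthly_savings

# e.g. a $150,000 modular deployment saving $15,000/month in
# maintenance, service and user-productivity gains:
print(payback_months(150_000, 15_000))  # 10.0
```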
Toyota, which spearheaded the just-in-time model, achieved unparalleled growth and eventual industry dominance in the years following its shift. It is an example of how turning traditional thinking on its head can change an entire industry and spawn whole new industries and innovations. Data centers today face a fork in the road, and they have a unique opportunity to take advantage of gains in technology to respond and adapt, ensuring they are built not only for the needs of today but for the unforeseen challenges of the decades to come.
Kevin Brown is Vice President, Global Data Center Offer for Schneider Electric, responsible for leading Schneider Electric’s portfolio strategy and driving innovation in response to emerging data center industry trends. An industry professional with more than 20 years of experience in the data center and HVAC industries, he has held senior management roles spanning software and hardware development, sales and product management, and holds a BS in mechanical engineering from Cornell University.