Crisis in Computing

It may not be obvious, but if you’ve checked the weather today, ridden in a car or an airplane, made a phone call, or used any number of consumer products, down to the clothing you wear and the detergents that keep it clean, you’ve relied on a supercomputer.

Virtual design, forecasting and simulation are now essential for smarter science, faster innovation and better product development, which means that High Performance Computing (HPC) is critical to U.S. competitiveness and standards of living.
But the traditional CPU-based technology that once put America in the lead is now the anchor holding us back. Our legacy computing no longer scales cost-effectively or power-efficiently enough. Unless we decide to do something about it, the effects of this lost leadership will soon be severely felt in every aspect of American business and economic life.
It’s past time for private industry and the public sector to get our HPC act together, before other nations steal the show.
We want safer oil and gas discovery, workable alternative fuels, lower-emission combustion engines, more efficient electricity production and smart grid management. We know we need to forecast weather, understand climate change and design micro-organisms to absorb environmental wastes. Doctors want better tumor models and surgical decision support, and they’re racing to understand the molecular mechanisms underlying diseases like Alzheimer’s. Across these and any number of medical, manufacturing, services and research applications, our ability to design better, safer and more cost-effective processes and products, and to find the energy to make and move them, depends on our HPC capacity. All of it generates business activity, domestic jobs and economic growth.

So, you’d think we’d stay on top of our game. Instead, we’re getting jumped by moves we invented.

From nearly a standing start in 2005, China is expected by this November to have developed the world’s fastest computer, based, ironically, on our own American hybrid parallel processors that are far more cost-effective and power-efficient than traditional CPU chips. Tokyo Institute of Technology, CSIRO in Australia and CEA in France are similarly focused. They’re not hamstrung by legacy CPU-based computing. They’re jumping straight into next-generation hybrid HPC, adding graphics processing units (GPUs) to drive far better price, efficiency and performance. The result? Our competitors are securing the same capabilities at a fraction of the cost.

Typically, where technology leadership is concerned, most of us think first of education as the bottleneck. But we Americans are completely missing a second critical choke point: computing capacity and infrastructure. Most of our government’s research-oriented supercomputers are already oversubscribed by a factor of two at current levels of demand. And before the decade is out, the computational demands of our science will grow 1,000-fold.

To sustain and extend our lead in High Performance Computing, we don’t have to revive the decades-old debate about industrial policy and the government picking winners through massive bets on industry sectors. We just need to spend smarter to get cost-effective hybrid HPC on the national agenda, and equip our best minds with the computing capacity they need to innovate and create jobs.

The Council on Competitiveness and its HPC initiative are a great place to see how organizations are accelerating innovation, advancing R&D, and reducing new product cycle time to drive revenue and reduce costs.

The Senate should get behind Senator Mark Warner (D-VA) and his amendment to the reauthorization of the America COMPETES Act. Government agencies need to coordinate around the opportunity that GPUs and hybrid architectures offer. And the business community should be clearer with the public about what’s at stake. The first large-scale hybrid GPU cloud service was launched just last month. As more and more companies move data and software to cloud computing services, HPC offers huge operating advantages, at significantly lower cost, for oil and gas, finance, medical devices and services, and any sector with massive quantities of data that can be crunched more efficiently with hybrid parallel processors.

Why would we allow our position as world leader in HPC to slip, the way we have with automobiles, battery technology and memory chips? Why would we surrender the business growth, job creation and competitiveness that supercomputing delivers across a vast range of industries?

As Steven Koonin, Under Secretary for Science at the U.S. Department of Energy, put it last month at a conference of computer scientists, “High Performance Computing feeds itself. Once you fall off the curve, it’s really hard to get back on.”

If we don’t decide to win at this game, we will be pushed out of the way.

