American-Made Titan Tops World Supercomputing List
It wasn’t so long ago now that the president of the United States, speaking before Congress, bemoaned the loss of American leadership in supercomputing. At the time, the world’s most powerful computer, as measured by the authoritative Top 500 list jointly prepared by the universities of Tennessee and Mannheim, resided in China. Time went on, and new lists were published, to no better result. Japan took the crown.
Then American systems made a comeback. An IBM-made system called Sequoia, at the Lawrence Livermore National Lab, took the No. 1 spot. That machine is now No. 2 in the world. Six months later — for that is how often the list is updated — another American system has taken the crown.
This one is called Titan, and it is installed at the Oak Ridge National Laboratory in Tennessee. It was built by Cray, and is bolstered significantly with GPU chips from Nvidia. It is powerful enough to conduct 17.59 quadrillion calculations per second. Let me express that numerically: 17,590,000,000,000,000. Titan has 560,640 processors, of which 261,632 are Nvidia-made accelerators. The rest are Opteron chips made by Advanced Micro Devices.
The headline on this system, and what makes it indicative of wider trends in computing, is that most of Titan's horsepower comes not from its traditional CPUs, in this case the AMD Opterons, but from the Nvidia GPUs. Getting to the top of the supercomputing world was part of a plan that Oak Ridge announced about a year ago, and from the start, Nvidia's GPU chips were a key piece of the strategy.
A Graphics Processing Unit (GPU) is a chip that's really good at doing a certain kind of math known as floating point operations, and it does them much faster than a typical CPU chip from Intel or AMD that you'd find inside a PC or server. GPUs also do this math while drawing less electrical power than CPUs.
GPUs were originally designed for gaming and for professional graphics applications, like editing movies and visualizing complex problems for engineers and scientists; all of those workloads lean heavily on floating point operations. Basically, a GPU chip is designed to render what happens to every pixel on a computer screen 50 times a second or even faster, which means lots of small computational jobs carried out at once. That's called parallel computing, and CPU chips aren't as good at the parallel stuff as GPUs are. CPUs are better at doing one job at a time, getting it done really fast, and then moving on to the next one.
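To make that distinction concrete, here is a minimal sketch in Python using NumPy. The frame size, the `brighten` operation, and both function names are illustrative assumptions, not anything from Titan; NumPy's vectorized math stands in for true GPU hardware parallelism, but the contrast is the same: one version walks pixels one at a time, CPU-style, while the other applies the identical floating point operation to every pixel at once.

```python
import numpy as np

# A toy "frame" of pixels, values in [0, 1]. Real GPUs drive millions
# of pixels per frame; this size is an illustrative assumption.
frame = np.random.rand(256, 256).astype(np.float32)

def brighten_serial(pixels, factor):
    """CPU-style: visit one pixel at a time, in order."""
    out = np.empty_like(pixels)
    for i in range(pixels.shape[0]):
        for j in range(pixels.shape[1]):
            # One floating point multiply (and a clamp) per pixel,
            # executed sequentially.
            out[i, j] = min(pixels[i, j] * factor, 1.0)
    return out

def brighten_parallel(pixels, factor):
    """GPU-style: the same operation expressed over the whole
    array at once (data parallelism)."""
    return np.minimum(pixels * factor, 1.0)

# Both produce identical results; only the execution model differs.
serial = brighten_serial(frame, 1.5)
parallel = brighten_parallel(frame, 1.5)
assert np.allclose(serial, parallel)
```

The per-pixel loop and the vectorized form compute the same answer, which is the point: the GPU's advantage is not doing different math, but doing the same small piece of math on a huge number of elements simultaneously.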
American computers now constitute five of the top 10 on the Top 500 list, as well as slightly more than half — 251, to be exact — of the entire list. Systems in Asia number 122, and systems in Europe number 105.
Here’s a screen grab of the top 10 systems on the list. You can go through the full list here.