Fujitsu Beefs Up Its Best Supercomputer
It’s November, and in the rarefied world of supercomputing, that means a new edition of the twice-a-year Top 500 list of the world’s most powerful publicly known computers is due out any day now. It also means that the people who assemble the world’s most powerful bean counters are bragging about them and jockeying for placement on the list.
Today it was Fujitsu’s turn. The Japanese computing giant teamed up with RIKEN, the quasi-public Japanese research institution, to announce that they had built a machine they call the K Computer, which can perform 10.51 petaflops, or 10.51 quadrillion floating point operations per second.
And while all that may sound very impressive, it’s not quite as muscular as the Titan machine being assembled in the U.S. at Oak Ridge National Laboratory, which can — or will — perform 20 petaflops.
The machine (pictured) is made up of 864 racks containing 88,128 interconnected CPUs, all of them based on the SPARC architecture for which Sun Microsystems — and therefore Oracle — is best known, though Fujitsu has long been a SPARC licensee. The new K Computer is basically an improvement on and extension of the same K Computer that took the top spot on the last Top 500 list in June, supplanting in the process a Chinese machine that had taken the crown last November.
Never mind that it contained all U.S.-made chips: the Chinese feat caused the leader of the free world to kvetch about the apparent sorry state of U.S. supercomputing, perhaps indirectly prompting the Titan machine at Oak Ridge.
It’s not as though China hasn’t been heard from on the supercomputing front recently. Last week its Sunway BlueLight MPP raised eyebrows not for its performance — a relatively pokey 795 teraflops — but rather for the fact that it’s built using all Chinese-made components.
So what will it be used for? Weather simulations, research into drugs and solar cells, and simulating earthquakes and tsunamis.
Here are the more formal descriptions from the announcement:
–Analyzing the behavior of nanomaterials through simulations and contributing to the early development of such next-generation semiconductor materials, particularly nanowires and carbon nanotubes, that are expected to lead to future fast-response, low-power devices.
–Predicting which compounds, from among a massive number of drug candidate molecules, will prevent illnesses by binding with active regions on the proteins that cause illnesses, as a way to reduce drug development times and costs (pharmaceutical applications).
–Simulating the actions of atoms and electrons in dye-sensitized solar cells to contribute to the development of solar cells with higher energy-conversion efficiency.
–Simulating seismic wave propagation, strong motion, and tsunamis to predict the effects they will have on human-made structures; predicting the extent of earthquake-impact zones for disaster prevention purposes; and contributing to the design of quake-resistant structures.
–Conducting high-resolution (400-m) simulations of atmospheric circulation models to provide detailed predictions of weather phenomena that elucidate localized effects, such as cloudbursts.
So what’s a petaflop anyway? A flop is a floating-point operation: a piece of arithmetic on numbers with decimal points. Adding 5.6 and 11.21 is a floating-point operation, and is therefore slightly more complicated, from a computing standpoint, than adding 11 and 5. And in computing — even day-to-day computing — those operations pile up by the millions and billions.
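To make the unit a little more concrete, here’s a rough sketch (in Python, my own illustration, not anything from the announcement) of how flops add up in even a small calculation — a naive multiply of two 100-by-100 matrices already costs about two million of them:

```python
# Rough illustration of counting floating-point operations (flops).
# A naive n x n matrix multiply does one multiply and one add in the
# innermost step, so roughly 2 * n**3 flops in total.

def matmul_flop_count(n):
    """Multiply two n x n matrices of ones, counting flops along the way."""
    a = [[1.0] * n for _ in range(n)]
    b = [[1.0] * n for _ in range(n)]
    c = [[0.0] * n for _ in range(n)]
    flops = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                c[i][j] += a[i][k] * b[k][j]  # one multiply + one add
                flops += 2
    return c, flops

c, flops = matmul_flop_count(100)
print(flops)  # 2 * 100**3 = 2,000,000 flops
```

At 10.51 petaflops, the K Computer could in principle chew through that entire calculation billions of times per second — which is why supercomputer rankings are stated in these units at all.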
A top-of-the-line Nvidia GeForce GTX 590 graphics card, which specializes in floating-point operations, can run about 2,400 gigaflops. Since a gigaflop is a billion flops, I guess that technically puts the GeForce GTX 590 into the teraflop, or trillion-flop, range.
Petaflops are then in quadrillion-flop territory, which, as I noted before, makes them fun: they’re among those rare numbers that are larger than the U.S. national debt. So 10.51 quadrillion flops gets written like so: 10,510,000,000,000,000. Didn’t I say this was fun?
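The unit arithmetic above is easy to check in a few lines of Python (a back-of-the-envelope sketch using the figures quoted in this article; note the graphics card’s number and the K Computer’s benchmark aren’t a strictly apples-to-apples comparison):

```python
# Back-of-the-envelope unit conversions for the figures in the text.
GIGA = 10**9   # billion
TERA = 10**12  # trillion
PETA = 10**15  # quadrillion

k_computer = 10_510_000_000_000_000  # K Computer: 10.51 petaflops, written out
gtx_590 = 2_400 * GIGA               # GeForce GTX 590: ~2,400 gigaflops

print(gtx_590 / TERA)               # 2.4 -- yes, teraflop territory
print(k_computer / PETA)            # 10.51
print(round(k_computer / gtx_590))  # 4379 -- cards needed to match, very roughly
```

So, very roughly, you’d need a stack of more than four thousand of those top-of-the-line graphics cards to keep pace with the K Computer — before you even think about wiring them together.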
All this is leading up to a big supercomputing conference starting in 10 days in Seattle. So expect lots more supercomputing news in the coming days!