What’s Next for GPU Chips? Maybe the Network.
Remember that GPU chips were originally designed to enhance the experience of computer games by bringing some added computing muscle. For years, the primary place you’d find them was in PCs, either built in or sold as add-on cards for gaming enthusiasts.
Since then, GPUs have gone on to become a significant force in some of the more high-end fields of computing: They’re critical in animation and special-effects studios, in oil and gas exploration, and, as the release of the latest Top 500 list earlier this week showed, in supercomputing.
What’s next for the GPU to infiltrate? A presentation at this week’s SC13 conference on supercomputing in Denver by Wenji Wu, a researcher at Fermilab, the U.S. Department of Energy Research Lab near Chicago, suggests it may be networking.
In a two-page abstract of a paper (the full paper is here), Wu argues that GPU chips might be put to good use in the field of network monitoring. Tracking the second-by-second performance of a data-center network is a difficult computing problem: You have to monitor traffic live, which takes a lot of computing power, picking out the individual packets that match a particular set of rules and analyzing them on the fly, as fast as you can find them.
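To make that concrete, here is a toy sketch (in Python, and not Wu’s actual code) of the filtering step described above: scan a batch of captured packets and keep the ones that match a rule. The packet fields and rules are hypothetical examples; the point is that each packet can be tested independently, which is exactly the kind of work a GPU can spread across thousands of threads.

```python
def matches(packet, rules):
    """Return True if the packet satisfies any monitoring rule."""
    return any(packet["proto"] == r["proto"] and
               packet["dst_port"] == r["dst_port"]
               for r in rules)

def filter_packets(packets, rules):
    # On a GPU, each thread would test one packet in parallel;
    # this serial CPU loop does the same test one packet at a time.
    return [p for p in packets if matches(p, rules)]

# Hypothetical captured traffic and a single monitoring rule.
packets = [
    {"proto": "tcp", "dst_port": 80},
    {"proto": "udp", "dst_port": 53},
    {"proto": "tcp", "dst_port": 443},
]
rules = [{"proto": "tcp", "dst_port": 80}]

print(filter_packets(packets, rules))  # keeps only the port-80 TCP packet
```

Because each packet’s test touches no shared state, the loop parallelizes trivially, which is why the workload suits a GPU better than a single fast core.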
Usually the job is done by specialized chips called application-specific integrated circuits, or ASICs, designed to carry out one specific task. The problem is that ASICs are hard to reprogram if the task changes. They’re also expensive to replace.
General-purpose CPU chips, like an x86 processor from Intel or Advanced Micro Devices, could do the job because they’re flexible, but Wu suggests that they’re not a good fit because they’re not fast enough.
Wu and his team have been experimenting with using Nvidia GPUs to do the work and found that when compared to CPU chips, a GPU is faster at network-monitoring tasks. Versus a single-core CPU, the GPU was anywhere from about nine to 17 times faster, Wu found. A six-core CPU did better, but even then the GPU was 1.5 times to more than three times faster. The next step, he writes, will be to add some security analysis features.
Obviously Wu’s demonstration is just a prototype, and it’s a long way from commercial implementation. But it’s an interesting read that suggests a new direction for GPU chips in the years to come. They’re not just for gaming anymore.
Here’s Wu’s abstract.