An international consortium hopes to build exaflop supercomputers from mobile CPUs
A European public-private consortium wants to make supercomputers using smartphone and tablet CPUs. And not just any supercomputers. They’re shooting for the moon—aiming for exaflops (10¹⁸, or a quintillion, floating-point operations per second), some thousandfold faster than the top of today’s high-performance heap.
Supercomputing has always offered a kind of turbocharged reflection of everyday computing. In the 1970s and ’80s, Cray supercomputers and their ilk were like supercharged mainframes, with just handfuls of processors that had each been designed for speed. In the 1990s and 2000s, as PCs and then laptops predominated, supercomputers became agglomerations of hundreds, thousands, and now even millions of PC and server cores. (The world’s fastest supercomputer today is China’s Tianhe-2, powered by 3.1 million Intel Xeon cores but capable of only about 5 percent of an exaflop.)
So in the tablet and smartphone age, it was probably only a matter of time before someone decided to make supercomputers out of the engines of present-day digital life. The thinking goes that because ARM cores are designed to run on small smartphone and tablet batteries, a supercomputer built around them could yield more speed with less power. In an age when high-performance computing, or HPC, is often constrained by heat production and electricity consumption, that could mean a more scalable machine.
Mont-Blanc, as the effort is called, began in late 2011 at the Barcelona Supercomputing Center, known for housing a supercomputer in a 19th-century cathedral. It’s now got 14 partners and €22 million in backing through September 2016, and it recently unveiled a prototype blade server meant as a stepping-stone toward a full system. The prototype contains Samsung dual-core Exynos 5 CPUs—each a system-on-a-chip that combines two ARM Cortex-A15 cores with a GPU. It would consume between one-fifteenth and one-thirtieth as much energy per processor as today’s HPC systems, its proponents project.
Jean-François Lavignon, president of the coalition behind Mont-Blanc (European Technology Platform for High Performance Computing), says that today Intel x86 CPUs still offer the best performance for most supercomputer customers. And he expects that x86 processors combined with accelerators, such as GPUs, will continue to dominate the Top500 list of the world’s fastest high-performance computers. But, he says, ARM-based computing appears to be a wise investment for the future.
ARM cores are an interesting but hardly universally agreed upon path to exascale computer architecture, says Jack Dongarra, professor of computer science at the University of Tennessee, in Knoxville. For instance, in December Japan announced its plans to build an exaflop supercomputer by 2020 using conventional commodity processors.
“The Japanese exascale system, which will use commodity processors with an accelerator, will draw about 30 to 40 megawatts of power,” he says. “One megawatt per year in the United States is about a million dollars. So just to turn it on and power it will cost you between $30 million and $40 million.”
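Dongarra’s back-of-the-envelope arithmetic can be sketched in a few lines of Python, using only the figures he cites—a 30-to-40-megawatt draw and roughly $1 million per megawatt-year in the United States (both approximate, and the rate varies by region):

```python
# Rough annual electricity cost for a hypothetical exascale system,
# using Dongarra's cited figures: 30-40 MW draw and approximately
# $1 million per megawatt-year in the United States.
COST_PER_MW_YEAR = 1_000_000  # US dollars, approximate


def annual_power_cost(megawatts):
    """Yearly electricity bill, in dollars, for a machine drawing `megawatts`."""
    return megawatts * COST_PER_MW_YEAR


low, high = annual_power_cost(30), annual_power_cost(40)
print(f"${low:,} to ${high:,} per year")  # $30,000,000 to $40,000,000 per year
```

The point of the exercise is that at exascale, the power bill alone rivals the hardware budget, which is why designs like Mont-Blanc chase energy efficiency first.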
Whichever path high-performance computing takes, it will have to not only make power consumption manageable but also ease the challenge of writing code for what are sure to be extremely complex machines. Such code will have to run a few billion concurrent threads of instructions instead of the mere 12 million that Tianhe-2 runs today.
Addison Snell, CEO of HPC consulting firm Intersect360, says that HPC customers today, unsure of what next-generation supercomputing will look like, could be wary of investing too much in ARM-based systems out of fear that the software won’t be there to support them down the line. On the other hand, he says, there’s no guarantee that x86-based supercomputing is going to remain the dominant model for the 2020s either. ARM architecture could yet prove itself to be the secret sauce needed to make the best exascale computer. Or not.