
Why the US lead in technology is under threat

November 24, 2011

The path to exascale is uncharted, which opens the door to challengers
Although the US has not produced a plan for exascale development, it has outlined some requirements for such a system. The system must be ready by 2019-2020 and can’t use more than 20 MW of power, a tight budget for a machine that may have millions of processors: a million-processor system under a 20 MW cap has only about 20 watts per processor to cover the chips, memory, interconnect and cooling.

The need for low-power systems is prompting new approaches to development. The Barcelona Supercomputing Centre, as part of Europe’s exascale initiative, is working with ARM, the smartphone chip designer, on technology that combines its processors with Nvidia’s graphics processors. The project may also use forthcoming ARM co-processors.
Alex Ramirez, computer architecture research manager at the Barcelona centre, said the project is demonstrating that a high performance computing cluster can be built on the ARM architecture. The project is also building a complete software stack for the cluster.
“There are a big number of challenges ahead,” said Ramirez, most of them involving getting the software to work in an environment that differs from servers or mobile computing. “The human effort and investment in software development is going to be significant,” he added.
Europe has other exascale developments in progress, including one using Intel technology.
Ramirez said the Barcelona effort is now two years old, and the ultimate goal is to build a system that can reach exascale performance at reasonable power levels. But he also sees Europe-wide goals in this effort.
“There is an opportunity to keep embedded and high performance industry in Europe in the front line,” said Ramirez. “There is a clear convergence between embedded technology and high performance computing technology.”
If the US doesn’t lead in exascale, what happens when planning for zettascale begins?
A computer science freshman today should, within four years, know the pathway to an exascale system. By the time that same student completes his or her graduate work, there will be discussion about a zettascale system, something one thousand times more powerful.
If high performance computing maintains its historic development pattern, a zettascale system can be expected around 2030. But no one knows what a zettascale system will look like, or whether it’s even possible. Zettascale computing may require entirely new approaches, such as quantum computing.
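For readers who want the arithmetic behind that 2030 estimate, the sketch below extrapolates the historic pattern of a roughly thousandfold performance jump about every eleven years. The 2008 petascale milestone is background knowledge rather than a figure from this article, and the 2019 date is simply the low end of the 2019-2020 exascale target mentioned above; both are used for illustration only.

```python
# Back-of-the-envelope extrapolation of the historic HPC performance trend:
# roughly a 1000x jump about every 11 years.
# Assumptions for illustration: petascale arrived in 2008 (Roadrunner), and
# exascale lands in 2019, the low end of the 2019-2020 target cited above.

PETASCALE_YEAR = 2008   # first petaflop (10**15 FLOPS) system
EXASCALE_YEAR = 2019    # targeted exaflop (10**18 FLOPS) system

years_per_1000x = EXASCALE_YEAR - PETASCALE_YEAR          # roughly 11 years
zettascale_estimate = EXASCALE_YEAR + years_per_1000x     # next 1000x step

print(f"About {years_per_1000x} years per 1000x jump in performance")
print(f"A zettascale (10**21 FLOPS) system plausible around {zettascale_estimate}")
```

Run as written, the estimate comes out at 2030, matching the timeline the historic pattern suggests.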
The White House says it doesn’t want to be in an “arms race” in building ever faster computers, and warned in a report a year ago this month that a focus on speed “could divert resources away from basic research aimed at developing the fundamentally new approaches to HPC that could ultimately allow us to ‘leapfrog’ other nations.”
But the US is in a computing arms race whether it wants to be or not. To develop technology that leapfrogs other nations, the country will need to sustain basic research funding as well as build an exascale system.
“A lot of countries have realised that one of the reasons the US became so great was because of things like federally funded research,” said Luis von Ahn, an associate professor of computer science at Carnegie Mellon University and a staff research scientist at Google.
“There are a lot of countries that are trying to really invest in science and technology. I think it’s important to continue funding that in the US. Otherwise it is just going to lose the edge, it’s as simple as that.”
The US hasn’t explained what’s at stake
President Barack Obama was the first US president to mention exascale computing, but he didn’t really explain the potential of such systems.
Supercomputers can help scientists create models, at an atomic level, of human cells and how a virus may attack them. They can be used to model earthquakes and help find ways to predict them, as well as design structures that can withstand them. They are increasingly used by industry to create products and test them in virtual environments.
Supercomputers can be applied to almost any problem imaginable, and the more power and compute capability they offer, the more precise the science.
Today, the US dominates the market. IBM alone accounts for nearly 45% of the systems on the Top 500 list, followed by HP at 28%. Nearly 53% of the most powerful systems on the list are installed in the US.
At the SC11 supercomputing conference held earlier this month in Seattle, there were 11,000 attendees, more than double the number from five years ago. A key reason: The growing importance of visualisation and modelling.
This conference draws people from around the globe because the US today is the centre of high performance computing, a position the rest of the world is beginning to challenge on the path to exascale.