Everyday desktop computers are now faster than the fastest supercomputers in the world from the early 1990s.
Five years from now, says Intel, your phone could double as a supercomputer.
That’s the goal of Intel’s new experimental Single-chip Cloud Computer project, or SCC. The company is now researching potential mobile applications for the chip, as well as developing tools that will make it easier for developers to take advantage of this technology without becoming supercomputing experts.
In other words, as ARM seeks to put cellphone chips into our supercomputers, Intel is doing the reverse. The lines between mobile hardware and data-center hardware are blurring. That may seem odd at first, but if you step back and look at the bigger picture, it only makes sense. Big-time data-center operations want the ultra-low-power profiles of the hardware in our cellphones, and the mobile world is hungry for the computational punch you get from much larger systems.
Intel Labs technology evangelist Sean Koehl says that its 48-core creation, first discussed in 2009, acts as a “network” of processors on a single chip, with two cores per node. The nodes actually communicate with each other much the same way nodes in a data-center cluster would. “We thought there may be some advantages to having an architecture within a chip that resembles the architecture around it,” he explains.
Intel Labs has been working on many-core chips since around 2004, and the more immediate applications will probably be in servers and, yes, supercomputers, which are essentially a bunch of servers working in tandem. This is often called high-performance computing, or HPC.
Whether you’re dealing with a high-end supercomputer, a cluster of commodity servers running Hadoop, or a cluster built out of Legos and ultra-cheap Raspberry Pi computers, HPC depends on parallel processing — breaking down big problems into smaller problems that are solved by different processors running in parallel. What Intel Labs is now researching is whether this approach will make sense for mobile computing.
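The divide-and-conquer pattern behind parallel processing can be sketched in a few lines of Python. This is a hypothetical illustration, not Intel's code: a big problem (here, summing a long list) is split into chunks, each chunk is handed to a separate worker process, and the partial results are combined at the end.

```python
from concurrent.futures import ProcessPoolExecutor

def chunk_sum(chunk):
    """Solve one small sub-problem: sum a slice of the data."""
    return sum(chunk)

def parallel_sum(data, workers=4):
    """Break a big problem into smaller ones and solve them in parallel."""
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        # Each chunk can run on its own core, just as each node in a
        # cluster (or each core on the SCC) works on its own piece.
        partials = pool.map(chunk_sum, chunks)
    return sum(partials)  # combine the partial results
```

The same shape — split, solve in parallel, combine — underlies everything from Hadoop jobs to HPC simulations; only the scale of the "workers" changes.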
Although today’s most serious big data applications run on a server and deliver information to a client, Koehl points out that there are actually many cases where a hybrid model would make the most sense. Machine vision applications are one example.
In an augmented-reality application — such as Google Goggles — you may want to overlay some information on top of video captured by the phone. You might want to identify the faces that the camera is pointing at, or the name of the business housed in a particular building. Some of this processing is best done on a server somewhere, but some is more suited to the client — i.e., the phone. Such tasks might include determining where the faces or buildings are in a particular frame. It may then be best to let a server determine particular information — whose face, or which building — but the client needs to do a fair amount of work.
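The division of labor Koehl describes might be sketched like this. The function names here are illustrative stand-ins, not a real API: the detector and the identity lookup are passed in as callables, with the first meant to run on the phone and the second meant to call out to a server.

```python
def process_frame(frame, detect_faces_locally, look_up_identity_remotely):
    """Hybrid split: cheap per-frame work on the client, heavy lookups remote.

    `detect_faces_locally` stands in for on-device code that finds where
    the faces are in a frame; `look_up_identity_remotely` stands in for a
    server call that decides whose face each one is.
    """
    boxes = detect_faces_locally(frame)   # client-side: locate faces
    labels = [look_up_identity_remotely(frame, box)  # server-side: identify
              for box in boxes]
    return list(zip(boxes, labels))
```

The client does a fair amount of work per frame, but only small, targeted queries cross the network.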
Other applications could include rendering 3-D graphics for games. Koehl says that even on mobile, parallel applications may eventually outnumber traditional “serial” applications.
One challenge for developers is that they will need to start thinking about parallelism when designing applications. As part of his job as an evangelist, Koehl promotes parallelism education through outreach to educational programs at various levels, including high school. But Intel is also working on tools to make it easier for developers to work with parallelism, including some that abstract the entire problem away.
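To see what "abstracting the problem away" can look like in practice, consider Python's standard-library executors (used here as a generic illustration, not one of Intel's tools): the parallel version of a loop reads almost exactly like the serial one, and the framework decides how to spread the work across cores.

```python
from concurrent.futures import ProcessPoolExecutor

def brighten(pixel_row):
    """Per-row work: raise each pixel value, capped at 255."""
    return [min(255, p + 40) for p in pixel_row]

# Serial version a developer might write first:
#     result = list(map(brighten, image_rows))

def brighten_image(image_rows, workers=4):
    # The executor's map has the same shape as the serial map above;
    # the developer never schedules work onto cores by hand.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(brighten, image_rows))
```

Because the call shape is unchanged, a developer can think in serial terms while the runtime exploits however many cores the chip provides.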
Today, ARM dominates the mobile chip world, designing the core chip architectures used in iPhones and most Android devices. But it’s slowly moving into the server world. On Tuesday, the company announced a new chip called Atlas that it hopes will accelerate this movement. Intel is determined to maintain its iron grip on the server world, but at the same time, it wants to make up lost ground on smartphones. Interesting times are ahead.
(To see the full article, visit: Wired Enterprise)