
Going Ballistic

Ballistic deflection transistors are among the ways that researchers hope to speed up processors.

The sandwich that flops

Moore's Law is named after Gordon Moore, co-founder of Intel, which is also working on new designs to increase processor speeds. One focus is on cores, the bundles of transistors that make up the processor. By splitting that bundle into multiple cores, it's possible to increase processing power while spreading the heat generated over a larger area so it's easier to cool.
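To make that idea concrete, here is a minimal Python sketch of a workload split evenly across however many cores a machine has. It is purely illustrative: the chunking scheme and the toy sum-of-squares task are assumptions of the sketch, not anything taken from Intel's designs.

# A minimal sketch of the multi-core idea: one total workload is split into
# chunks and handed to several worker processes, roughly one per core.
from multiprocessing import Pool, cpu_count

def sum_of_squares(chunk):
    """Toy unit of work handled by a single core (assumed example task)."""
    lo, hi = chunk
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    n = 10_000_000
    cores = cpu_count()
    step = n // cores
    # Divide 0..n into one contiguous range per core.
    chunks = [(c * step, n if c == cores - 1 else (c + 1) * step)
              for c in range(cores)]
    with Pool(processes=cores) as pool:
        total = sum(pool.map(sum_of_squares, chunks))
    print(f"{cores} cores computed {total}")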

Multi-core processors are increasingly common, including in the latest PCs. In September, Intel unveiled a prototype TeraFLOP processor — named after the 1 trillion floating point operations it can execute every second — that consists of 80 cores. The 3.1-GHz processor isn't being groomed for commercialization. Instead, it's being used to test designs that could be used in processors with tens or hundreds of cores.
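As a back-of-the-envelope check on the name, the short calculation below shows how 80 cores at roughly 3.1 GHz can add up to about a trillion floating-point operations per second. The figure of four operations per core per clock cycle is an assumption made for illustration, not a number from Intel.

# Rough arithmetic behind the "Tera" in TeraFLOP.
cores = 80
clock_hz = 3.1e9          # roughly 3.1 GHz
flops_per_core_cycle = 4  # assumed figure for illustration

peak_flops = cores * clock_hz * flops_per_core_cycle
print(f"Peak: {peak_flops / 1e12:.2f} TeraFLOPs")  # ~0.99 TeraFLOPs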

“As an experimental chip, it's focused on specific experiments rather than running real software,” says Sean Koehl, a technology strategist in Intel's Tera-scale Computing Research Program. “So one shouldn't compare this research directly to the latest multi-core processors on the market today. It uses very simple cores with a simple instruction set that are used to generate tera-scale data traffic.”

Another key difference between TeraFLOP and the multi-core processors available today is the interconnects, which are the links that transfer data between the cores and memory. As with highways, the more traffic an interconnect can handle, the less likely things are to get jammed up. The TeraFLOP processor features 80 small tiles, each with a core that has a link to a network connecting it to the other 79 cores. Each core also uses this network to access a 20-MB memory chip that's bonded to the processor. This sandwich design enables a mesh of thousands of interconnects, which in turn boosts performance.
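A rough sketch of that mesh idea follows, assuming the 80 tiles sit in an 8-by-10 grid with each tile wired to its immediate neighbors. The grid shape is an assumption of the sketch, and each logical link in it stands for many parallel wires on the actual chip.

# Sketch of a 2D mesh of tiles: count the tile-to-tile links and the number
# of hops data takes between two cores.
ROWS, COLS = 8, 10  # assumed layout for 80 tiles

def neighbors(row, col):
    """Tiles directly wired to (row, col) in the mesh."""
    candidates = [(row - 1, col), (row + 1, col), (row, col - 1), (row, col + 1)]
    return [(r, c) for r, c in candidates if 0 <= r < ROWS and 0 <= c < COLS]

def hops(src, dst):
    """Minimum tile-to-tile hops between two cores (Manhattan distance)."""
    return abs(src[0] - dst[0]) + abs(src[1] - dst[1])

# Each physical link is shared by two tiles, so divide the neighbor count by 2.
links = sum(len(neighbors(r, c)) for r in range(ROWS) for c in range(COLS)) // 2
print(f"{ROWS * COLS} tiles, {links} mesh links")                      # 80 tiles, 142 links
print(f"Corner-to-corner hops: {hops((0, 0), (ROWS - 1, COLS - 1))}")  # 16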

“The on-chip mesh network will enable significantly higher data throughput, as well as much lower latency (time of flight) for data exchanged between cores,” Koehl says. “That means each core will be able to transfer more data in a timelier manner. Collectively, these cores will generate over a TeraFLOP of computational performance, and the mesh network will provide over a 1 Terabit per second (Tb/s) aggregate data bandwidth.”

By comparison, today's processors have only a few dozen interconnects.

“Today these buses have scaled to between 10 and 30 Gigabytes per second (GB/s), but stacking will allow us to jump to bandwidths of more than 100 GB/s,” Koehl says. “Also, because the vertical connections between the chips are so short, the circuits that drive them can be simpler and much more energy-efficient.”
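For a sense of what such a jump means in practice, the illustrative arithmetic below compares how long the same block of data would occupy the bus at the lower and higher figures. The 1-GB payload is an arbitrary example chosen for the sketch, not a number from Intel.

# Time for one data block to cross the bus at the bandwidths discussed above.
payload_bytes = 1e9  # 1 GB of data, an arbitrary example size

for label, gb_per_s in [("conventional bus, 30 GB/s", 30),
                        ("stacked memory, 100 GB/s", 100)]:
    seconds = payload_bytes / (gb_per_s * 1e9)
    print(f"{label}: {seconds * 1000:.1f} ms to move 1 GB")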

But as revolutionary as it is, the TeraFLOP design still uses enough existing devices and manufacturing techniques that producing such chips wouldn't mean reinventing the wheel.

“This research effort is intended to make the best use of the additional transistors that Moore's Law will continue to provide for the near future,” Koehl says. “Tera-scale computing wouldn't require a diversion from existing manufacturing technology, though we'll certainly have to continue to advance certain new manufacturing capabilities, such as 3D stacking.”

Working in parallel

But if Intel's prototype processor is running at 3.1 GHz, why does it get a name that begins with Tera? The answer has to do with what Intel refers to as a new paradigm in computing.


