Veteran EDA industry watcher Peggy Aycinena visited MIT recently and spoke with Professor Srini Devadas about a manycore processor project called “Angstrom.” The purpose of the project is to develop massively parallel hardware—as in 1024 processors—to explore better ways of developing massively parallel software. I found this quote telling:
“…there’s a disconnect between the Grand Vision of highly multicore and the reality of parallel programming.”
Currently, the Angstrom project is developing a 121-core device configured as an 11×11 processor array. Each processor has local memory because without it the cores would starve to death waiting on a shared memory. It’s a 45nm chip taping out at 10×10mm, with IBM as the fab.
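To make the geometry concrete, here is a short sketch of how packets might traverse an 11×11 array. The article doesn’t specify the routing scheme, so dimension-ordered XY routing on a mesh is my assumption here, chosen because it’s a common, deadlock-free choice for mesh networks—not a confirmed detail of the Angstrom chip.

```python
# Sketch of dimension-ordered (XY) routing on an 11x11 mesh.
# ASSUMPTION: a 2D mesh with XY routing; the article does not
# specify the Angstrom interconnect topology or routing policy.

MESH_DIM = 11  # 11x11 array -> 121 cores


def core_coords(core_id):
    """Map a linear core ID (0..120) to (x, y) mesh coordinates."""
    return core_id % MESH_DIM, core_id // MESH_DIM


def xy_route(src_id, dst_id):
    """Hop-by-hop path of XY routing: travel along X first, then Y.

    Deterministic and deadlock-free on a mesh, at the cost of no
    adaptivity around congested links.
    """
    x, y = core_coords(src_id)
    dst_x, dst_y = core_coords(dst_id)
    path = [(x, y)]
    while x != dst_x:
        x += 1 if dst_x > x else -1
        path.append((x, y))
    while y != dst_y:
        y += 1 if dst_y > y else -1
        path.append((x, y))
    return path


# Worst case on an 11x11 mesh: corner to corner is 20 hops.
path = xy_route(0, 120)
print(len(path) - 1)  # 20 hops
```

Even this toy shows why per-core local memory matters: a corner-to-corner round trip crosses 40 links, so any data a core can keep local is data that never contends for the network.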
There’s no question that even a 121-core chip is a complex piece of hardware. I’m sure the interconnect is interesting—likely a network-on-chip (NoC). But with stamp-and-repeat design, the hardware problems aren’t nearly as hard as figuring out how to harness 121 (or 1024) identical cores efficiently. That’s a real jump into the software Twilight Zone.
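One way to quantify that software Twilight Zone—my illustration, not the article’s—is Amdahl’s law: even a program that is 95% parallel tops out far short of a 121× speedup, because the remaining serial fraction dominates as cores are added.

```python
# Illustration (not from the article): Amdahl's law puts a hard
# ceiling on the speedup 121 identical cores can deliver.

def amdahl_speedup(parallel_fraction, n_cores):
    """Maximum speedup on n_cores if parallel_fraction of the work
    parallelizes perfectly and the rest stays strictly serial."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)


for p in (0.50, 0.90, 0.95, 0.99):
    print(f"{p:.0%} parallel on 121 cores -> "
          f"{amdahl_speedup(p, 121):.1f}x speedup")
```

A 95%-parallel workload gets only about a 17× speedup from 121 cores, which is exactly the disconnect between the Grand Vision and the reality of parallel programming that Devadas describes.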
For now, we’re more likely to see a mix: symmetric (homogeneous) processing for multithreaded operating systems like Linux/Android, combined with asymmetric (heterogeneous) accelerators for graphics, networking, security, file management, and the like. Partitioning the hardware that way pre-allocates hardware to tasks and simplifies the overall system design.
Sometimes, I feel that the massively parallel symmetric processing designs are asking the question “Wouldn’t it be great if all processors were identical?” Personally, I’d rather see the hardware more dedicated to the task for better task efficiency and lower power consumption. I don’t think that concept flies in the face of the Angstrom Project’s goal of creating self-aware systems. It’s just that the awareness is somewhat more complex.
What do you think?