For the past couple of weeks, I’ve been writing about several aspects of Semico’s IP Subsystem report. (see “Are IP subsystems the next big IP category?”) The report’s premise is that the rise of IP Subsystems—IP blocks that deliver complete functions such as video or audio through a collection of design IP, software stacks, application software, and verification IP—fundamentally changes the way SoCs are and will be developed in advanced process nodes. (Note: this is a concept that’s firmly embedded in the EDA360 vision.) A significant factor not yet covered in this blog is how the hierarchy of SoC definitions and metrics changes with this step up in IP complexity. This blog entry rectifies that omission.
The author of the study, Semico’s Rich Wawrzyniak, has created an interesting graphic that attempts to classify SoCs by complexity as measured by a few good, easily quantified metrics. Here’s a slightly modified version of that graphic.
Revised Complexity Definitions for SoCs
On the left, you see three classic classification levels for SoCs, as measured by gate count and memory capacity. These two simple metrics are probably sufficient for classifying simple SoCs with one processor, some memory, and some I/O. At the bottom of this existing hierarchy is the “lowly” microcontroller. I’ve put the word “lowly” in quotes because Gartner’s recent estimate is that the worldwide microcontroller market was about $15 billion in 2010. We should all be so lowly.
A microcontroller is a very basic SoC. It contains a processor (sometimes more than one), memory, and I/O. Customization usually occurs only in the software running on the microcontroller. These are very flexible devices, and there’s a lot of change occurring in this market right now as 32-bit devices (overwhelmingly ARM-based) take over from the 8-bit devices that have ruled the roost for several decades. Many 32-bit microcontrollers now cost less than a dollar in high volumes.
One step up from the microcontroller is the “Value SoC,” which is essentially a custom microcontroller with more memory and perhaps some specialized hardware accelerators that pitch in where software running on a processor just can’t do the job. At the top of the heap of the “conventional” SoCs is the Performance SoC with more gates and more memory.
Note that the metrics used to define all of the conventional SoCs simply count gates and memory bits. With the advent of significant IP in the form of processing, memory, and I/O cores, those simple metrics no longer make much sense. So Semico’s report suggests two or three new metrics.
The first of these new metrics is the interconnect used to move instructions and data around on the SoC. As someone heavily involved in I/O design for systems over the past several decades, I am gratified to see this amount of weight placed on on-chip interconnect. You can place as many processors and accelerators as you want onto a chip, but if you can’t get instructions and data to these blocks, and if you can’t adequately move computational results to where they’re needed, then you’ve got a deficient SoC design. So interconnect is getting some well-deserved attention here, in my opinion.
The next new metric is the number of IP subsystems used. This metric is really a superset of the third metric, which is the number of IP blocks used. Together, these three metrics give you a good handle on an SoC’s complexity, so there are now four complexity levels for the revised SoC definitions shown on the right side of the graphic. At the bottom of the new hierarchy is a class called commodity controllers. These are really today’s microcontrollers. They don’t cost much but they can do a lot. If you can achieve a system design in software alone, commodity controllers are probably a good hardware choice.
One step up in SoC complexity is the Basic SoC, which exhibits the classic definition of an SoC developed in 1995: a processor, memory, and some additional blocks all tied together by a bus. This architecture is a dinosaur held over from the board-level system architectures of the 1980s. Dinosaur or not, this type of architecture is still useful for attacking a number of low-complexity system-design problems.
Above the Basic SoC is the Value Multicore SoC. Here we have a number of on-chip processors all competing for memory and other system resources. A simple bus structure will starve the cores in this architecture to death, so complex interconnect in the form of a network on chip (NoC), bus hierarchies, multiple point-to-point connections, or some combination of these three interconnect schemes is the right approach to ensuring that no block starves for lack of instructions or data. As an example, TI’s OMAP 4 and recently announced OMAP 5 SoCs employ complex interconnects. At the top of the heap is the truly complex Advanced Performance Multicore SoC. Here we’ve reached a complexity level that demands even more thought for the interconnect design.
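To make the revised hierarchy concrete, here’s a minimal sketch of it as a classifier over the report’s new metrics. The thresholds, the interconnect categories, and the function name are my own inventions for illustration—Semico’s report doesn’t publish exact cutoffs:

```python
# Hypothetical sketch: map the new metrics (IP subsystems, IP blocks,
# interconnect style) to one of the four revised complexity levels.
# All thresholds are invented for illustration, not taken from the report.

def classify_soc(ip_subsystems: int, ip_blocks: int, interconnect: str) -> str:
    """Return the revised complexity class for a given set of metrics."""
    if ip_subsystems == 0 and ip_blocks <= 5 and interconnect == "bus":
        return "Commodity Controller"   # today's microcontroller
    if ip_subsystems == 0 and interconnect == "bus":
        return "Basic SoC"              # classic processor + memory + bus
    if interconnect in ("NoC", "bus hierarchy", "point-to-point"):
        if ip_subsystems >= 4:
            return "Advanced Performance Multicore SoC"
        return "Value Multicore SoC"    # e.g., an OMAP-4-class device
    return "Unclassified"

print(classify_soc(0, 3, "bus"))    # a simple microcontroller
print(classify_soc(2, 20, "NoC"))   # a multicore device with a NoC
```

The point of the sketch is that the decision tree branches first on interconnect and subsystem count, not on gate or bit counts—which is exactly the shift the new definitions propose.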
I like this hierarchy and its focus on function rather than gate counting. When you’ve got nearly a billion gates to play with at 28nm, counting them seems sort of primitive.
What’s not accommodated in this hierarchy are some new devices that you can expect to see this year. For example, this year both Xilinx and Altera will be announcing SoCs based on hardened IP cores with integral FPGA fabrics attached to the on-chip interconnect. What do we call these platypus SoCs? Where do they fit in the hierarchy? Do we need another definition to accommodate new chips like these?