Want to see the future of low-power SoC design? Have a look into Gary Smith’s crystal ball.

Last week at the Electronic Design Process Symposium held in Monterey, EDA analyst Gary Smith put together a session on low-power design, and he prefaced the other presentations with one of his own showing where he thought the improvements in SoC power consumption would be coming from through the year 2026. Smith charts a lot of data for the ITRS (International Technology Roadmap for Semiconductors), and his charting is based on his own research plus a consensus-gathering process. In his presentation, Smith first looked back into the past, listing 11 advances from 1996 through 2007 that have reduced SoC operating power, including:

1996  Clock Gating (Macro Level)

1997  Low-Power Libraries

1999  Frequency Scaling

1999  Clock Gating (Micro Level)

2004  Body Biasing

2004  Power Gating

2006  Power Islands

2007  Voltage Scaling

2007  Architecture for Low Power

2007  Hardware Accelerators

2007  RTL Power Optimization

Taken together, said Smith, these advances have produced a 2.9x improvement in dynamic power consumption and a 2.2x improvement in static power consumption.
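As a quick first-order reminder of where those gains come from, CMOS power splits into a dynamic term and a static term:

$$P_{\text{dynamic}} \approx \alpha\, C\, V_{DD}^{2}\, f \qquad\qquad P_{\text{static}} \approx V_{DD}\, I_{\text{leak}}$$

Clock gating and RTL power optimization attack the activity factor α, frequency scaling attacks f, voltage scaling attacks the quadratic V_DD term, and body biasing and power gating go after the leakage current that dominates static power.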

Then Smith turned his attention to the future and, as Yogi Berra supposedly said, “It’s tough to make predictions, especially about the future.” Tough or not, Smith made his predictions, and I found this chart version of his prognostications particularly interesting:

Now very likely, you won’t believe everything on this chart or the timing associated with the low-power technologies. However, just consider the technologies and the order in which they’re presented. Symmetric multiprocessing is a fait accompli. It’s rare to read about application processors that have single-core CPUs. Dual-core ARM processors are becoming common and quad-core machines are already starting to appear. In addition, there are a variety of other indicators: the Xilinx Zynq-7000 EPP is an FPGA that contains a dual-core processor complex; multicore microcontrollers are starting to appear; ARM is evangelizing its big.LITTLE multicore architecture; and I’ve recently read of a dual-core processor installed in an SSD controller SoC—the Indilinx Everest SSD Controller from OCZ. So there’s very little surprise in this first listed innovation. Besides, 2009 is already three years ago.

The next predicted advance was kind of a shocker for me: Software Virtual Prototyping. I wasn’t shocked that Smith included the technology; software development on early virtual prototypes can certainly help to shape hardware architecture and influence the RTL spec at an early design stage, when it’s still easy to add or remove compute resources and hardware accelerators based on performance/power-consumption tradeoffs. What surprised me was that Smith thinks this powerful design technique is already in wide use. But perhaps it is.
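To make that concrete, here is a minimal sketch in C of the sort of back-of-the-envelope tradeoff a software virtual prototype lets you explore before any RTL exists: running a workload on the CPU versus offloading it to a possible hardware accelerator. Every number in it is an invented placeholder; the point is the methodology, not the values.

```c
/* Toy early-architecture model: CPU core vs. a hypothetical accelerator.
 * All figures are invented placeholders for illustration only.
 * Build with: cc -o proto proto.c                                        */
#include <stdio.h>

typedef struct {
    const char *name;
    double cycles_per_item;   /* cycles to process one data item         */
    double nj_per_cycle;      /* energy per cycle, in nanojoules         */
    double fixed_cycles;      /* setup/offload overhead per invocation   */
} engine_t;

static void report(const engine_t *e, double items)
{
    double cycles    = e->fixed_cycles + e->cycles_per_item * items;
    double energy_uj = cycles * e->nj_per_cycle / 1000.0;
    printf("  %-12s %12.0f cycles  %10.2f uJ\n", e->name, cycles, energy_uj);
}

int main(void)
{
    /* Hypothetical operating points for an apps-class CPU core and a
     * dedicated accelerator block on the same imaginary SoC.            */
    engine_t cpu   = { "CPU core",    12.0, 0.50,     0.0 };
    engine_t accel = { "Accelerator",  1.5, 0.10, 20000.0 };

    double workloads[] = { 1e3, 1e4, 1e5, 1e6 };
    for (unsigned i = 0; i < sizeof workloads / sizeof workloads[0]; i++) {
        printf("%.0f items per frame:\n", workloads[i]);
        report(&cpu,   workloads[i]);
        report(&accel, workloads[i]);
    }
    return 0;
}
```

Even a model this crude shows the crossover point at which a dedicated accelerator pays for its offload overhead, which is exactly the kind of question you want answered while the compute resources are still easy to add or remove.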

Following the virtual prototyping advance, Smith’s chart shows the use of frequency islands. I think this technique is already in wide use so there’s no surprise there.

Then comes hardware/software codesign, which is a foundation of the EDA360 design concept—so no surprises there either. True system-level optimization requires hardware/software codesign, or many optimization opportunities will be left on the table.

Next up is heterogeneous or asymmetric multiprocessing and, again, this should not be a surprise. Even low-cost microcontrollers are getting into the game here. (See “Asymmetric, dual-core NXP LPC4300 microcontrollers split tasks between ARM Cortex-M4 and -M0 cores, cost $3.75 and up”.) Increasingly, we find that it makes far more sense, given the number of transistors available at current SoC process nodes, to use appropriately sized processor cores for the variety of on-chip tasks rather than to multitask one big, honking processor core loaded with power-hungry instruction-level parallelism and forced to run at multi-GHz clock speeds to keep up with all of those tasks.
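As a crude illustration of the partitioning decision an asymmetric design implies, here is a short C sketch with made-up cycle budgets and core capacities: keep the latency-tolerant housekeeping and control tasks on a small core and spill only the genuinely heavy work onto the big one.

```c
/* Toy illustration of asymmetric multiprocessing: map each periodic task
 * to the smallest core that can carry it, instead of multitasking
 * everything on one multi-GHz core. All figures are hypothetical.       */
#include <stdio.h>

typedef struct { const char *name; double mcycles_per_sec; } task_t;

int main(void)
{
    /* Hypothetical task set for an imaginary dual-core device            */
    task_t tasks[] = {
        { "motor control loop",  40.0 },
        { "USB housekeeping",     8.0 },
        { "sensor filtering",    15.0 },
        { "audio codec",        180.0 },
    };
    const double little_capacity = 100.0;  /* e.g. a Cortex-M0-class core */
    const double big_capacity    = 800.0;  /* e.g. a Cortex-M4-class core */

    double little_load = 0.0, big_load = 0.0;
    for (unsigned i = 0; i < sizeof tasks / sizeof tasks[0]; i++) {
        /* Greedy rule: fill the little core to 80% before touching the
         * big core, so the big core can stay power-gated or idle longer. */
        if (little_load + tasks[i].mcycles_per_sec <= 0.8 * little_capacity) {
            little_load += tasks[i].mcycles_per_sec;
            printf("%-20s -> little core\n", tasks[i].name);
        } else {
            big_load += tasks[i].mcycles_per_sec;
            printf("%-20s -> big core\n", tasks[i].name);
        }
    }
    printf("little core load: %.0f%%   big core load: %.0f%%\n",
           100.0 * little_load / little_capacity,
           100.0 * big_load / big_capacity);
    return 0;
}
```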

The next three steps, spanning 2021 through 2025, are somewhat speculative in my opinion. The first breakthrough shown during this time period is power-aware software. To accomplish this breakthrough, I think we need a real attitude adjustment away from the idea that power should somehow manage itself automatically. Software developers must first want to take more responsibility for power control, and then they’ll need tools that help them take that control.
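Here is a small sketch of what power-aware software can look like even today, assuming a Linux target that exposes the standard cpufreq sysfs interface and the privileges to write to it (an RTOS would offer something analogous): the application itself drops the CPU to a low-power governor around a phase of latency-tolerant work instead of hoping the operating system guesses correctly.

```c
/* Sketch of power-aware software: the application requests a lower CPU
 * operating point around work that doesn't need full performance.
 * Assumes a Linux target with the cpufreq sysfs interface and root
 * privileges; governor names vary by kernel configuration.              */
#include <stdio.h>

static int set_governor(const char *gov)
{
    const char *path = "/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor";
    FILE *f = fopen(path, "w");
    if (!f) {
        perror("fopen scaling_governor");
        return -1;
    }
    fputs(gov, f);
    fclose(f);
    return 0;
}

static void background_housekeeping(void)
{
    /* Placeholder for latency-tolerant work: log flushing, compaction,
     * prefetching -- anything where finishing slowly at a low voltage
     * and frequency costs less energy than racing at the top speed.     */
}

int main(void)
{
    set_governor("powersave");   /* pin the core to its lowest P-state    */
    background_housekeeping();
    set_governor("ondemand");    /* hand control back to the OS policy    */
    return 0;
}
```

The code itself is trivial; the attitude adjustment is the hard part, because it asks application programmers to know which of their phases deserve performance and which do not.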

Asynchronous design is even further out into the ether for an industry steeped in synchronous logic design. It’s not that asynchronous design doesn’t work; I’ve seen enough examples to know that it does. It’s that very few logic designers are versed in asynchronous design, and the EDA tools are all optimized for synchronous logic. So I think big changes are required before asynchronous design enters the toolbox.

Finally, Smith’s chart shows near-threshold computing coming online in 2026. That means transistor operation at 400 to 500 mV. Well, there’s certainly research going on in that area. For example, Intel apparently demonstrated a near-threshold x86 processor named “Claremont” running on a solar cell at last year’s Intel Developer Forum. I guess we’ll see on that one.
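For a rough sense of why near-threshold operation is so tempting, switching energy scales roughly with the square of the supply voltage, so dropping from a nominal supply of about 1 V to 450 mV promises something like:

$$\frac{E_{\text{nominal}}}{E_{\text{NTV}}} \approx \left(\frac{V_{\text{nominal}}}{V_{\text{NTV}}}\right)^{2} \approx \left(\frac{1.0\ \text{V}}{0.45\ \text{V}}\right)^{2} \approx 5\times$$

In practice the net gain is smaller because clock frequency collapses and leakage energy per operation grows at those voltages, which is why reliability and process variation dominate the near-threshold discussion.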

What do you think of Smith’s power predictions?

About sleibson2

Principal Analyst Emeritus, Tirias Research

5 Responses to Want to see the future of low-power SoC design? Have a look into Gary Smith’s crystal ball.

  1. bradpierce says:

    14 years until near-threshold voltages? Timothy Miller et al. are already showing how to deal with the process variation problem http://www.cse.ohio-state.edu/~millerti/booster-camera-ready.pdf .

  2. sleibson2 says:

    Brad, I think Gary Smith is referring to near-threshold use in production designs, not academic research. As I pointed out, Intel already demonstrated a proof-of-concept vehicle at IDF in 2011.

    • Gary Smith says:

      My reluctance to show near-threshold computing earlier on the chart is the reliability problem. Many researchers believe redundant logic and near-threshold computing go hand in hand. How much redundancy is the big question. Thanks for the tip on Miller’s work. I need someone to explain to me how the reliability issue will be solved before I become a believer.

  3. William Ruby says:

    Apart from esoteric process and device driven technologies, I think that the real key to solving the power challenges will be hardware/software co-design and power-aware software. Actually, those two are quite complementary, if not identical. And this is happening now, not in 2015 and 2021.
