SoC Realization: Eight questions from the IP and IP Selection Panel at the EETimes SoC 2.0 Virtual Conference

Today, I participated in a particularly good panel on IP for SoC Design during the EETimes System-on-Chip 2.0 Virtual Conference. Often, these panels deteriorate into sales fests, but this one didn’t, even though it was staffed by a bunch of my fellow marketeers. The panel was led by EETimes’ West Coast online editor Dylan McGrath, and I think it was successful because McGrath was thoughtful enough to create eight meaty questions to guide the discussion. Truthfully, McGrath’s questions were so meaty that we were only able to finish half of the meal; four questions remained unanswered.

The panelists were:

  • Rick Tomihiro, Marketing Director, Plug-and-Play IP, Product Solutions Management at Xilinx
  • Frank Ferro, Director, Marketing at Sonics, Inc.
  • Kalar Rajendiran, Senior Director, Marketing at eSilicon Corp
  • And myself.

As soon as I get a link, I’ll let you know how to plug into the archive of the panel discussion. Until then, I decided to write up my answers to all eight questions:

1. There is a lot of talk out there about IP platform subsystems. What is behind this trend? Will it dramatically change the IP market? How will it affect the IP selection process?

The three key drivers for any engineering make/buy decision are:

  1. Reduce my risk
  2. Improve my time to market
  3. Reduce my development and manufacturing costs

The purchase of a piece of isolated design IP alone will not adequately address these drivers. To address all of these issues, the design IP must be accompanied by appropriate verification IP (to prove that the design IP works in the system), software drivers (sometimes called bare-metal drivers), and any required protocol stacks.

Star IP such as processor cores are perfect examples. These days, nearly anyone can create an RTL design for a 32-bit RISC processor over a long weekend. However, that processor IP is essentially useless without a complete set of software tools (compiler, assembler, debugger, instruction-set simulator, etc.), an integrated design environment (usually based on Eclipse), and an entire ecosystem of suppliers who can add value to the processor selection in terms of tools and vertical software packages.

The desire to use IP is growing quickly because of the immensely positive effect on designer productivity. Some studies suggest that good purchased IP can improve designer productivity by 350%. (!)

If true, and I find this statistic very easy to believe, then IP can easily be shown to save money. Proven, pre-verified IP clearly reduces both risk and time to market (assuming it’s easy to integrate into your system). A no-brainer.
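
To see why, here’s a back-of-the-envelope sketch in Python. The team size and loaded engineering cost are hypothetical placeholders chosen purely for illustration, not figures from any study; only the productivity multiplier restates the claim above.

```python
# Back-of-the-envelope savings from a productivity multiplier.
# All inputs are hypothetical placeholders except the productivity
# figure, which restates the "350%" claim quoted above.

team_size = 10                  # engineers on the block's design/verification
loaded_cost_per_year = 250_000  # fully loaded annual cost per engineer, $
productivity_gain = 4.5         # "improve by 350%" read strictly = 4.5x output
                                # (use 3.5 if you read it as a 3.5x multiplier)

# A block of work that would have taken the team a full year in-house
# now takes a fraction of that effort when built around purchased IP.
in_house_cost = team_size * loaded_cost_per_year
with_ip_cost = in_house_cost / productivity_gain

print(f"Engineering cost, all in-house: ${in_house_cost:,.0f}")
print(f"Engineering cost, IP-based:     ${with_ip_cost:,.0f}")
print(f"Savings before IP license fees: ${in_house_cost - with_ip_cost:,.0f}")
```

Of course, you’d subtract the IP license fees from that savings figure, but the headroom is obvious.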

However, IP selection can still be quite a grab bag. It’s just like IC quality back in the 1970s and 1980s: there were good vendors who sold rock-solid parts that met their specs, and then there were other vendors who sold parts that didn’t meet specs or that conformed to “hidden” specs that weren’t on the data sheet. Actually, that’s not all that different from today. It still applies to ICs, and it also applies to IP.

2. New design starts at 45nm and below allow for an unprecedented number of IP cores to be integrated on a single SoC. What are some of the key challenges facing customers bringing these SoCs to market?

Systems implemented on a chip are becoming increasingly complicated. In the early days of ASICs, we were just gathering up TTL chips with a few hundred gates (at most!) each. In the early days of SoC design, we had one processor connected to one bus, controlling one or two memories and some peripherals. Such simple designs did not require system-level design and modeling, so IC design teams didn’t use system-level design tools or system-level modeling and simulation. Such chip-level designs were simple enough that their high-level architectures could be expected to work by inspection.

Today, the typical SoC has half a dozen to a dozen processors (the record, I think, is 192 processors on one chip), many memories, many complex peripherals such as video and Ethernet controllers, and complex interconnects such as bus hierarchies or NoCs (networks on chip). There is simply no way to know whether such intricate architectures will work without extensive modeling and simulation at the architectural level. Such high-level modeling is now a “must.” Cadence calls this approach “System Realization.”

At the other end of the scale, the lithographic requirements at 32/28nm call for some pretty heroic mathematical algorithms to produce the desired patterns on silicon. At this point, we’re literally playing billiards with individual photons to get them to fall in the right patterns on the photoresist. Masks look nothing like the end layout. IC design teams must employ tools that will smoothly take them from system-level simulation all the way down to mask making if they want to get a design out the door efficiently and within budget.

3. With so many mobile and “green” consumer electronics, how is the IP industry addressing the need for efficient energy management?

ASIC and SoC designers relied on Dennard scaling to get them out of power problems for 30 years. However, Dennard scaling broke down at 90nm, and circuit tricks alone will no longer suffice to control power and energy consumption. The heat monster isn’t at the door; it’s through the door. For years, academics and leading lights in industry have said that real energy efficiency is best achieved not at the circuit level but at the architectural level, yet most IC design teams have ignored this avenue. There really is very little architectural exploration taking place.

Architectural exploration will come as more IP is used and as SoCs are built through IP integration rather than creation from scratch. Because IC design teams will spend less time creating IP blocks and the verification IP needed to test that design IP, they will have more time to experiment with alternative system architectures and more fully realize the power and energy savings we desire.

4. What efforts or strides are being made to allow independent IP blocks from multiple vendors to work together?

Right now, what we have for interoperability is bus-level compatibility (such as AMBA or OCP). This situation is reminiscent of the early days of Intel and Motorola when system designers had to deal with competing, incompatible 8-bit buses for peripheral chips. Back then, a handful of gates could create a “universal translator” called a MOTEL (a contraction of Motorola/Intel) circuit that allowed one vendor’s processor to use another’s peripherals. Today, we’d call the MOTEL circuit a “gasket.” However, today’s on-chip buses are far more complex than the simple 8-bit buses of the 1970s and gaskets aren’t the answer. Conformance to standards is the answer.

You can see why we only made it through the first four questions. The answers are long and these are just my answers. The other panel members had equally interesting things to say.

Now here are the remaining questions that we did not have time to discuss, with my answers.

5. Are FPGAs now a viable SoC implementation for high-volume applications? Why or why not?

It all depends on your definition of “high volume” and the size of the FPGA you need. Ivo Bolsens, CTO of Xilinx, said at the recent 8th International SoC Conference in Irvine that it takes 20 transistors in an FPGA to do the work of one transistor in an SoC. That means that the FPGA incurs more silicon cost (area), consumes more energy (20 transistors versus one), and generally delivers less performance (more area means more capacitance) than SoCs for the same IC process technology.

A rule of thumb: FPGAs are roughly 10x off in price, power, and performance (the three “P”s) compared to SoCs built in the same IC process node.

However, that may not matter to you if NRE costs and time-to-market considerations override the three “P”s. There’s a simple break-even calculation, based on all of these costs plus expected sales volume, that any project leader can construct in less than an hour to determine whether FPGAs are the way to go.
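
Here’s a minimal sketch of that break-even calculation in Python. Every number in it is a made-up placeholder, not data from the panel or from any vendor; the point is only the shape of the comparison.

```python
# FPGA-vs-SoC break-even sketch. All cost figures are hypothetical
# placeholders; substitute your own project's estimates. Time-to-market
# revenue effects and power/performance penalties are ignored here.

def total_cost(nre, unit_cost, volume):
    """Total program cost = one-time NRE plus per-unit silicon cost."""
    return nre + unit_cost * volume

soc_nre   = 5_000_000  # masks, tools, and design effort for a custom SoC, $
soc_unit  = 5.00       # per-unit SoC cost, $
fpga_nre  = 250_000    # much smaller up-front investment, $
fpga_unit = 50.00      # per-unit FPGA cost, $ (reflecting the ~10x rule of thumb)

# Break-even volume: where the two total-cost lines cross
break_even = (soc_nre - fpga_nre) / (fpga_unit - soc_unit)
print(f"Break-even volume: {break_even:,.0f} units")

for volume in (10_000, 100_000, 1_000_000):
    winner = ("FPGA" if total_cost(fpga_nre, fpga_unit, volume)
              < total_cost(soc_nre, soc_unit, volume) else "SoC")
    print(f"At {volume:>9,} units, the {winner} wins on cost")
```

In practice you’d also fold in the revenue impact of time to market and the power/performance penalties, but even this crude version shows how quickly volume tips the decision.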

That said, FPGA companies are about to deliver new ways to exploit their FPGA fabrics in faster, more cost-competitive ways. For example, Xilinx has announced a new line it calls EPP (Extensible Processing Platform) that marries hard-core processors, memory, and I/O blocks to a reduced-size FPGA fabric, resulting in an affordable FPGA with a significant amount of high-performance, on-chip processing. Altera has announced a similar plan.

6. Who is in the best position to select the best hard IP for a given application? What about soft IP?

The RTL team will have a lot to say about soft IP. The physical implementation team will have a lot to say about hard IP. In reality, both teams need to share their design intent with each other to get the best overall design. The sharing of design intent across tools is a core concept in the recently announced Cadence “Silicon Realization” approach.

7. What is the status of the third-party IP industry in terms of maturity? What needs to happen to make it so? Will consolidation/shakeout be part of this process?

ARM and MIPS, two of the most successful IP companies, have passed their 20th birthdays. Others such as Tensilica and Denali (which Cadence purchased earlier this year) are more than a decade old. These companies and others prove that commercial IP is viable, when done well. As far as consolidation/disaggregation is concerned, it’s a dynamic process. Both occur simultaneously.

8. What is the single most important thing that an IP buyer wants from an IP provider?

Prove to me that you will reduce my risk, improve my time to market, and cut my costs.

When I get that link to the panel archive, I’ll share it. I think you’ll want to hear the others’ answers to these questions.
