Deconstructing EDA360: Paul McLellan writes about the evolution of SoC design methodology in the era of re-aggregation

I’ve written before about Part 1 of the EE Times article by EDA veterans Jim Hogan and Paul McLellan (The SoC is dead! Long live the SoC!). EE Times published Part 2, attributed to Paul McLellan, on December 20, and it’s an excellent read, just like Part 1. Essentially, McLellan deconstructs many of the elements that constitute the EDA360 vision. Here are several absolutely prescient statements from the article:

“Unlike previous changes to the abstraction level of design, the block level not only goes down into the implementation flow, but also goes up into the software development flow. Software and chip-design must be verified against each other. Since the purpose of the chip is to run the software load, it can’t really be optimized any other way.”

Put another way, SoCs have become, first and foremost, software-execution machines. They are processor-heavy, and the bulk of the effort in developing systems based on such SoCs goes into software development (just check the cost curves in the latest International Technology Roadmap for Semiconductors). Therefore, any SoC design methodology must include a way to model an SoC in sufficient detail for meaningful software development on a virtual platform, while allowing deep dives into lower modeling abstraction levels when more simulation detail is needed.
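To make that concrete, the software-visible pieces of a virtual platform are usually written as transaction-level models, so the software sees a device’s registers without any RTL in the simulation. Below is a minimal sketch of that idea, assuming the Accellera SystemC/TLM-2.0 library; the timer peripheral, its single register, and the 10 ns timing annotation are all hypothetical and are only meant to show the level of detail involved:

```cpp
// Minimal sketch (hypothetical device, not from the article): a loosely-timed
// TLM-2.0 model of a one-register timer peripheral, plus a trivial initiator
// standing in for a fast CPU model. Requires the Accellera SystemC library.
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_initiator_socket.h>
#include <tlm_utils/simple_target_socket.h>
#include <cstdint>
#include <cstring>
#include <iostream>

struct TimerModel : sc_core::sc_module {
    tlm_utils::simple_target_socket<TimerModel> socket;
    uint32_t count_reg = 0;  // the only software-visible register (illustrative)

    SC_CTOR(TimerModel) : socket("socket") {
        socket.register_b_transport(this, &TimerModel::b_transport);
    }

    // Every bus access from the CPU model arrives here; no RTL is involved.
    void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
        if (trans.get_command() == tlm::TLM_READ_COMMAND)
            std::memcpy(trans.get_data_ptr(), &count_reg, sizeof(count_reg));
        else if (trans.get_command() == tlm::TLM_WRITE_COMMAND)
            std::memcpy(&count_reg, trans.get_data_ptr(), sizeof(count_reg));
        delay += sc_core::sc_time(10, sc_core::SC_NS);  // coarse timing annotation
        trans.set_response_status(tlm::TLM_OK_RESPONSE);
    }
};

struct TestCpu : sc_core::sc_module {
    tlm_utils::simple_initiator_socket<TestCpu> socket;

    SC_CTOR(TestCpu) : socket("socket") { SC_THREAD(run); }

    void run() {
        tlm::tlm_generic_payload trans;
        sc_core::sc_time delay = sc_core::SC_ZERO_TIME;
        uint32_t data = 42;
        trans.set_data_ptr(reinterpret_cast<unsigned char*>(&data));
        trans.set_data_length(sizeof(data));
        trans.set_address(0x0);

        trans.set_command(tlm::TLM_WRITE_COMMAND);   // software writes the register
        socket->b_transport(trans, delay);

        data = 0;
        trans.set_command(tlm::TLM_READ_COMMAND);    // and reads it back
        socket->b_transport(trans, delay);
        std::cout << "read back " << data << " after " << delay << std::endl;
    }
};

int sc_main(int, char*[]) {
    TestCpu cpu("cpu");
    TimerModel timer("timer");
    cpu.socket.bind(timer.socket);
    sc_core::sc_start();
    return 0;
}
```

A real platform would put a fast instruction-set simulator behind the initiator socket, but the register-level view the software gets is the same.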

McLellan continues:

“There is, today, no fully-automated flow from the block level all the way into implementation. A typical chip will involve blocks of synthesizable IP typically in Verilog, VHDL or SystemVerilog along with appropriate scripts to create efficient implementations. Other blocks are designed at a higher level, or, perhaps pulled from the software for more efficient implementation. These blocks are in C, C++ or SystemC.”

Note that it’s some of the blocks that are written in C, C++, or SystemC, not all of them. Some blocks are existing, proven IP, and there’s no reason to rewrite them in a higher-level language when they’re already working. You can develop C/C++/SystemC behavioral models for such blocks if such models don’t already exist, along the lines of the sketch below.
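As a hedged illustration (the block, its coefficients, and its interface are invented for this example), here is what an untimed C++ behavioral model of an existing, silicon-proven filter block might look like: it reproduces the block’s input-to-output behavior for virtual-platform use and carries no clock, reset, or handshake detail.

```cpp
// Sketch of an untimed behavioral model for a hypothetical, already-proven
// 4-tap FIR filter block. The model matches the RTL's results sample for
// sample, but computes each one in a single function call.
#include <array>
#include <cstdint>
#include <iostream>

constexpr std::array<int32_t, 4> kCoeff = {1, 2, 2, 1};  // illustrative taps

// One output sample per call, matching what the RTL produces per accepted input.
int32_t fir_behavioral(int32_t sample, std::array<int32_t, 4>& delay_line) {
    // Shift the delay line (the RTL does this on a clock edge).
    for (std::size_t i = delay_line.size() - 1; i > 0; --i)
        delay_line[i] = delay_line[i - 1];
    delay_line[0] = sample;

    int64_t acc = 0;
    for (std::size_t i = 0; i < delay_line.size(); ++i)
        acc += static_cast<int64_t>(kCoeff[i]) * delay_line[i];
    return static_cast<int32_t>(acc);
}

int main() {
    std::array<int32_t, 4> delay_line = {0, 0, 0, 0};
    const int32_t samples[] = {10, 20, 30, 40};
    for (int32_t s : samples)
        std::cout << fir_behavioral(s, delay_line) << '\n';  // prints 10, 40, 90, 150
}
```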

McLellan writes: “Designs like this are really very difficult to verify efficiently due to the inevitable mixture of languages and accuracy.”

How do you run these software-centric models at the upper abstraction level? McLellan writes:

“Large FPGAs are the medium of choice: they can accept this mixture and they are fast enough to run a large verification load. FPGAs have another advantage in that they introduce no silicon variance. They are by definition already silicon proven.”

Note: There’s an alternative to FPGAs for running these software-centric models. It’s called the Cadence Virtual Computing Platform, which evolved from the old Quickturn FPGA emulation technology but is now processor-based.

McLellan continues: “Going up from the block level allows a virtual platform to be created. The big challenge here is transitioning enough blocks so that fast hardware models exist with fidelity, for otherwise the delay and effort to do the modeling makes the software development schedule unacceptable.

Virtual platforms, and some other hardware-based approaches such as emulation, straddle a performance chasm. Software developers require performance millions of times faster than is appropriate for chip design. Of course at some level, if the technology were available, everyone would like high accuracy and high performance. We would all use Spice all the time if it ran faster than RTL but it is impossible to do that. Instead, performance is purchased by throwing away accuracy.

However, it is still necessary to be able to move up and down this stack dynamically: boot Linux at high performance (seconds not hours), and then drop to a higher level of accuracy to run a couple of frames to a display processor to check the hardware functions correctly. Run fast until just before a bug seems to occur, then drop down and investigate what is really going on. High performance or high accuracy is not good enough, both are required: the software performance model doesn’t have enough accuracy to debug the system hardware and the slower models can only boot Linux on a geological timescale.

This approach—the block level IP integration with virtual platforms—considerably shortens the number of steps between simply expressing design intent and actually having working hardware and software. This change enables design creation once again to move to the electronic system company, where the most important knowledge—the system knowledge—is found.”
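The run-fast-then-drop-down flow McLellan describes can be sketched, in a deliberately simplified and entirely hypothetical form, as two interchangeable models behind one interface, with the harness switching to the detailed model just before the region of interest. Real tools additionally transfer the full architectural and memory state across the switch; here the “state” is a single counter.

```cpp
// Simplified, hypothetical sketch of abstraction switching: the workload is
// advanced by a fast functional model until a trigger point, then handed to a
// slower, more detailed model for inspection of the suspected bug region.
#include <cstdint>
#include <iostream>
#include <memory>

struct SocModel {
    virtual ~SocModel() = default;
    virtual void step(uint64_t& state) = 0;   // advance one unit of work
    virtual const char* name() const = 0;
};

struct FastFunctionalModel : SocModel {
    void step(uint64_t& state) override { state += 1000; }  // coarse, cheap
    const char* name() const override { return "fast functional"; }
};

struct CycleApproximateModel : SocModel {
    void step(uint64_t& state) override { state += 1; }     // fine, expensive
    const char* name() const override { return "cycle approximate"; }
};

int main() {
    uint64_t state = 0;                 // stand-in for architectural state
    const uint64_t trigger = 5000;      // point where the bug is suspected
    std::unique_ptr<SocModel> model = std::make_unique<FastFunctionalModel>();
    bool detailed = false;

    while (state < trigger + 10) {
        if (!detailed && state >= trigger) {
            model = std::make_unique<CycleApproximateModel>();  // drop down
            detailed = true;
            std::cout << "switched to " << model->name()
                      << " at state " << state << '\n';
        }
        model->step(state);
    }
    std::cout << "finished at state " << state << '\n';
}
```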

However, system-level models are not pervasive. The piecemeal evolution of the industry’s design methodologies means that some IP blocks exist only at the physical-implementation level. Others exist only in RTL. McLellan’s article discusses this situation as well:

“But the biggest problem with virtual platforms is creating models quickly and in a way that guarantees or delivers fidelity between the ultra-high-performance models needed for some blocks—such as processors (to boot Linux for example) —and some other known good representation of the block. This necessitates a range of tools that allow virtual platform models to be created efficiently with a strong tie back to the implementation RTL. One of the key challenges is how to achieve a reasonable level of model fidelity between the levels of abstraction. This is a classic problem of higher-level design abstraction and corresponding validation and verification.”

And, McLellan continues later in the article:

“If the RTL already exists, then a model can be harvested as required with a guarantee that it is faithful to the implementation. This is the task of automated modeling technologies. If the software already exists, then RTL is required with a guarantee that the implementation will match. This is the task of high-level synthesis. There will likely be emphasis placed on IP quality with a corresponding opportunity to aggregate value.”
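The high-level synthesis path typically starts from C or C++ written in a restricted, hardware-friendly style: fixed loop bounds, static memory, explicit data widths. The fragment below is a hypothetical example of that style (the function and window size are invented for illustration); an HLS tool would generate RTL from it, while the very same source can run natively as part of the virtual platform, which is where the guarantee that the implementation will match comes from.

```cpp
// Hypothetical HLS-friendly C++ block: an 8-sample moving average.
// Fixed-size arrays, bounded loops, and no dynamic allocation keep the code
// synthesizable; the same function also serves as the fast software model.
#include <cstdint>

constexpr int kWindow = 8;

// Average of the last kWindow samples; the caller maintains the window.
int16_t moving_average(const int16_t window[kWindow]) {
    int32_t acc = 0;
    for (int i = 0; i < kWindow; ++i)      // fixed trip count: easy to unroll or pipeline
        acc += window[i];
    return static_cast<int16_t>(acc / kWindow);  // power-of-two divide maps to a shift
}

int main() {
    const int16_t window[kWindow] = {8, 8, 8, 8, 16, 16, 16, 16};
    return moving_average(window) == 12 ? 0 : 1;   // quick self-check
}
```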

This article and its preceding companion, then, outline a direction for EDA and for SoC design that is very much aligned with the EDA360 vision. You can read the full article here.

