Brian Bailey is a well-known consultant in the EDA industry and he’s just published a short but pithy blog on EETimes examining the state of the art in hardware/software co-design. Bailey does something I really admire in this blog. He waves away the long academic efforts searching for automatic partitioning by writing “they all assumed a single thread, a single function, a single processor, a single bus…” SoCs have not looked like that for years, so creating tools that strive to force a design into that mold is clearly going in the wrong direction. A bit later in the blog, Bailey writes “What we do know is that the systems of today are a lot more complex and defy complete static analysis, such that the notion of an optimal partition or indeed anything automatic is not on the table.”
In addition to describing what we don’t need or want, Bailey describes what we do need: “…performance is related to how the hardware resources are going to be used by software, and that means we have to analyze how the software interacts with the hardware, requiring dynamic analysis. To enable this we have to ensure that the software folks have the necessary tools to give them a fighting chance.”
Further, Bailey writes: “One such area that may lead to co-design is based on code profiling. We have seen several companies who can extract performance information from running code on a virtual prototype, use that to make decisions about what to partition into hardware and then various degrees of help in completing that task.”
Unsurprisingly (since I’m writing about Bailey’s blog in the EDA360 Insider), these ideas closely align with the EDA360 System Realization concepts, which are embodied in the Cadence System Development Suite including the Virtual System Platform, which enables pre-RTL software design, verification, and system analysis before committing to hardware design. SoCs today are inherently multicore machines with very sophisticated memory, interconnect, and I/O arrangements that essentially defy automated partitioning. These systems are about as far from single-tasked, single-threaded systems as you can possibly get at any given technology node, and that gap widens with each click of Moore’s Law. What these sorts of designs do require is exactly the kind of virtual prototyping and code-profiling tools Bailey describes. I commend his blog to you.