EETimes just published a blog titled “FPGAs advance, but verification challenges increase,” written by GateRocket’s president and CEO Dave Orecchio. The article makes the point that FPGAs are rapidly losing their “easy-to-use” market positioning because, quite simply, they are reaching complexity levels that make verification difficult, just as it became when ASICs evolved into SoCs (and then MPSoCs) more than a decade ago and created similar problems. Increasing FPGA complexity is actually a very good thing, because it means you can use FPGAs to do more things. Meanwhile, ASIC-based SoCs continue to evolve with process-technology advances, and they too can do more things: many, many more things. However, design complexity brings problems regardless of the implementation technology, and verification at all design levels is certainly one of the thorniest of those problems.
One approach to solving this complexity problem is to turn nearly everyone into a verification engineer. Unfortunately, that is exactly what has been happening, a trend well documented in presentations made over the past few years by Wally Rhines, chairman and CEO of Mentor Graphics.
However, you have to ask: Is that really what we want to do? Do we really want all of the planet’s available engineering talent harnessed to the verification wheel? Probably not.
The alternative, of course, is to move to a higher abstraction level, which does two terrific things: it simplifies verification, which becomes easier and faster as abstraction rises, and it places more reliance on pre-verified blocks, which need be verified only once and can then be reused at will without substantial additional verification effort.
Another name for those pre-verified blocks is “IP.” IP blocks are now the foundation of complex systems. Everyone uses IP these days. Nearly any processor core used in an SoC or FPGA will be a standard or configured processor core from one of the reputable processor core vendors. (ARM, of course, is the leader in the processor IP arena.)
Why? Because processor core IP is more than the RTL describing that IP. Along with the RTL, you get pre-written, pre-validated verification IP to prove the core in your system design. You get pre-written, debugged software-development tools and IDEs (integrated development environments) for that processor core. Even more important, you get a partner who maintains those tools for you as part of the deal. (Maintenance is actually the biggest stumbling block in the creation of IP blocks. Nearly anyone can now code up a working processor block over a long weekend, but that doesn’t mean they can support and maintain it.) In addition, you can get application-specific software drivers and protocol stacks for that processor core and these software IP elements are also pre-written and pre-debugged.
Memory IP is the same. Most teams designing SoCs and certainly those using FPGAs employ memory IP in a big way. There’s no percentage in developing your own memory IP when you can get reliable, proven memory compilers to do that work for you.
IP blocks that implement standardized memory and I/O protocols are also quickly becoming the smart way to go. How much value can a team add by designing its own memory or USB controller and writing the necessary verification suite and drivers? Not much. Increasingly, commercial IP makes sense here too.
Intelligent use of IP in all complex designs, whether SoCs or FPGA-based, is a core concept of the EDA360 vision.
Note: These are some of the reasons that Cadence and Xilinx have just jointly announced the creation of a microsite under the ChipEstimate.com banner, to support and to advance IP-centric design using Xilinx FPGAs.