The man who’s probably done as much primary research on EDA and ASIC design best practices as anyone in the business, Ron Collett, just published an article in EE Times titled “Optimal team sizes for chip projects.” In his article, Collett writes “What’s the optimal team size for a given IC design project? It’s a question I hear often from engineering managers and senior executives. What they’re actually asking is whether they’re over-staffing projects and therefore wasting resources. Implicitly, they’re also asking ‘what’s the fewest number of engineers I can put on a given project and still finish on time?’” Collett concludes by writing “In sum, adding staffing increases throughput but average productivity declines.”
I certainly can’t argue with any of that, but I do have a big caveat: I think there’s a big piece missing from the answer. There’s a big unasked question lurking behind the asked question “What’s the optimal team size for a given IC design project?”
What’s that unasked question?
It’s “How many people on the team do you have reinventing the wheel and how many more people do you have verifying that new wheel?”
That’s a critical question in the world of GigaGate SoCs because the reasons for reinventing the wheel—usually performance and area—are no longer as operative in the land below 65nm. Just as we’ve long ago stopped worrying about individual transistors in digital designs, it’s time to start thinking about 3rd-party IP in a way that reflects today’s realities.
No one questions (well, almost no one questions) the sense of using standard processor cores. ARM’s built a huge business out of its RISC processor IP cores. The same is true for DSPs. And very few design teams try to develop their own standard interface cores (USB, SATA, etc.) because they’re increasingly complicated and they don’t add much value to an SoC design. In other words, you don’t get special credit from the customer if your USB port is “better” than the same port implemented with an off-the-shelf core. Why? Because standard interfaces are largely check-box items. USB 2.0? Check. SATA II? Check. No extra points for any special features because these are standard interfaces.
The same is true for memory controllers—with an added twist. Memory controller cores such as the Cadence Databahn DDR memory controller and the Cadence NAND Flash memory controller already know more about controlling their respective target memories than your design team is likely to discover while creating a memory controller from scratch. It’s not all that hard to get one DDR memory cycle to work, but put a burst together while interleaving and optimizing commands from several on-chip masters and you’ll have an interesting time optimizing throughput to your DDR memory.
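The scheduling problem described above can be illustrated with a toy model. This is purely a sketch to show why command reordering matters, not a description of any real controller: the cycle counts and the two-master traffic pattern are invented for illustration, and real DDR timing involves many more parameters than a single hit/miss cost.

```python
# Toy model of DDR command scheduling (illustrative only; the
# timing numbers are assumptions, not taken from any DDR spec).
# Each request targets a (bank, row) pair. A request to a row that
# is already open in its bank is a "row hit"; otherwise the
# controller must precharge and activate, costing extra cycles.

T_HIT = 4    # cycles for a burst to an already-open row (assumed)
T_MISS = 16  # cycles including precharge + activate (assumed)

def run_schedule(requests):
    """Return total cycles to service requests in the given order."""
    open_rows = {}  # bank -> currently open row
    cycles = 0
    for bank, row in requests:
        if open_rows.get(bank) == row:
            cycles += T_HIT
        else:
            cycles += T_MISS
            open_rows[bank] = row
    return cycles

# Two on-chip masters interleave their streams: naive FIFO order
# ping-pongs between two rows in the same bank, so every single
# access is a row miss.
fifo = [(0, 1), (0, 2)] * 4

# A reordering scheduler groups same-row requests together.
reordered = sorted(fifo, key=lambda r: (r[0], r[1]))

print(run_schedule(fifo))       # 128 cycles: every access misses
print(run_schedule(reordered))  # 56 cycles: mostly row hits
```

Even this crude model shows the reordered schedule finishing in less than half the cycles of the naive one, and a production controller has to make these decisions while also honoring per-master ordering rules and dozens of real timing constraints, which is exactly why it is hard to beat a mature third-party core.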
Then there’s the question of how many verification engineers you have on the SoC design project. There’s plenty of research and documented evidence that more than half of the effort needed to design an SoC now goes into verification. A lot of that effort goes into developing verification IP (VIP). Again, there’s little point in creating your own VIP to verify standard interfaces such as USB 2.0 or SATA. These are complex interface specifications with many twists and turns. Purchased VIP saves development time, development effort, and avoids the likely outcome of developing incomplete VIP in the name of schedule. That’s one of the big reasons Cadence just launched a comprehensive VIP catalog.
In short, SoC Realization depends far more on the integration of purchased IP than it ever has, and this trend will no doubt continue. As we place more processors on a chip, software rather than hardware will increasingly differentiate designs. So it clearly makes sense to assemble a design from proven IP blocks as quickly as possible.
Think that will affect an SoC project’s team size? I sure do.
(Note: Ron Collett is now CEO of Numetrics.)