Last week on the EDA Café Web site, EDA Editor and Industry Observer Gabe Moretti discussed my DAC blog post on Wally Rhines’ discussion of software’s role in the rising cost of SoC development. (See “Some chip-design reality from Mentor’s Wally Rhines at last week’s annual DAC ESL panel.”) To refresh your memory: the rising cost of SoC development has far more to do with developing the software that will run on the SoC than with developing the silicon itself. Rhines presented data to back up his assertion, and Rhines is nothing if not thorough with data when he makes a presentation, so you might want to look at the earlier EDA360 Insider blog post before reading on.
Moretti writes “This should not come as a surprise to careful observers of the industry” and I absolutely agree with him. You cannot place two, four, eight, or a dozen (or more) programmable processing elements on an SoC and not expect the associated software-development costs to skyrocket.
Towards the end of his blog post, Moretti takes a left turn that’s worthy of discussion. He writes:
“…should we let foundry based companies develop and market standard operating platforms and just develop applications that differentiate products from one another? Maybe we should take advantage of 2.5D technology and build a die containing memory and a “simpler” hardware module that is connected through a standard interposer to a commercial computer.”
Readers of EDA360 Insider know that I am a strong advocate for 2.5D IC assembly (I write about the topic every Thursday), so I find Moretti’s conjecture interesting and believe it has merit. If I am reading Moretti correctly, he is suggesting that combining a standardized SoC die with specialized silicon and memory using 2.5D assembly techniques would yield lower overall software- and hardware-development costs.
From a hardware perspective, the standardized silicon processing elements would be used in multiple SoCs and would therefore benefit from much larger manufacturing volumes, driving down the cost of that die. A standardized processing die used across multiple SoCs would also aggregate support software and applications the way standardized motherboard hardware does in the PC world, which would further reduce software-development costs.
Think this can’t happen? I think I see something along these lines happening already at Xilinx and Altera. Just this month, Xilinx announced the Virtex-7 H580T that combines two FPGA silicon slices with a high-speed serial transceiver die using the FPGA slice and 2.5D interposer technologies that the company developed for its Virtex-7 2000T FPGA. (See “Friday Video + 3D Thursday: Xilinx Virtex-7 H580T uses 3D assembly to merge 28Gbps xceivers, FPGA fabric”.) Back in March, Altera showed an experimental FPGA that combined an FPGA die with optical transceivers using 2.5D package-on-package assembly technology. (See “3D Thursday: Altera adds Avago MicroPOD optical interconnects to FPGA package to handle bidirectional 100Gbps Ethernet”.) With these new devices, both Altera and Xilinx acknowledge that monolithic IC design has limitations (which should not be a surprise) and that 2.5D assembly techniques can play an interesting role in going beyond the limits of monolithic silicon design.
So I think Moretti has a viable idea here, and I am curious to see whether it gets picked up and turned into reality.