I’ve written several times about Wide I/O DRAM and how its speed and power advantages make it a slam-dunk killer app for 3D IC assembly. I saw another such 3D IC killer app this week at the Ethernet Technology Summit. The speaker was Chris Bergey, VP of Marketing at Luxtera, and the topic was 100Gbps Ethernet (100GbE). This realm of 40GbE and 100GbE is presently the domain of optical interconnect, used primarily in data centers. Optical interconnect burns a lot of power, on the order of 20W per channel, so there’s a lot of interest in reducing the power consumption of these Ethernet connections: data centers use a lot of them, and the electricity needed to cool these systems is a substantial fraction of a data center’s operating cost.
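For a rough sense of scale, here’s a back-of-envelope sketch. The 20W-per-channel figure comes from the talk; the channel count, cooling overhead (PUE), and electricity price are my own illustrative assumptions, not measured data center numbers:

```python
# Back-of-envelope estimate of optical-interconnect power and cost.
# Only the 20 W/channel figure comes from the talk; everything else
# below is an illustrative assumption.

watts_per_channel = 20.0     # optical interconnect power, per the talk
channels = 10_000            # assumed number of 40/100GbE links in a large data center
pue = 1.8                    # assumed power usage effectiveness (cooling/overhead multiplier)
dollars_per_kwh = 0.10       # assumed electricity price
hours_per_year = 24 * 365

interconnect_kw = watts_per_channel * channels / 1000.0       # 200 kW of interconnect load
facility_kw = interconnect_kw * pue                           # 360 kW including cooling overhead
annual_cost = facility_kw * hours_per_year * dollars_per_kwh  # roughly $315,000 per year

print(f"Interconnect load: {interconnect_kw:.0f} kW")
print(f"With cooling overhead: {facility_kw:.0f} kW")
print(f"Annual electricity cost: ${annual_cost:,.0f}")
```

Even with these made-up inputs, the interconnect alone lands in the hundreds of kilowatts, which is why shaving watts per channel matters so much here.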
Currently, optical interconnect is dominated by Active Optical Cables (AOCs) and plug-in, front-panel optical modules called QSFPs (Quad Small Form-factor Pluggable). These modules contain high-speed Ethernet PHYs, an optical transmitter and receiver pair (or pairs), and optical connectors for plugging in the transmission fibers. According to Wikipedia, the optical modules for 40 and 100Gbps Ethernet “are not standardized by any official standards body but are in multi-source agreements (MSAs),” so there’s considerable leeway in developing new alternatives.
Luxtera is a silicon photonics company and its goal is to develop small optical components (receivers and transmitters) that can be located very near the SoC that receives and generates the Ethernet data streams. When you’re running multiple streams at 10 to 100 Gbps, routing those signals any distance across a board or backplane becomes a serious signal-integrity challenge, so the closer you can get these optical components to the SoC, the better. What better way to do that than 2.5D assembly and silicon interposers?
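To see why distance matters at these rates, here’s a quick sketch of the per-lane bit times involved. The 4×10G and 4×25G lane splits shown are common configurations for 40GbE and 100GbE; the arithmetic is just for illustration:

```python
# How little time one bit occupies at typical 40/100GbE lane rates.
# The lane configurations below are common ones; this is illustrative
# arithmetic, not a description of any specific product.

lane_rates_gbps = {
    "40GbE (4 x 10.3125 Gbps lanes)": 10.3125,
    "100GbE (4 x 25.78125 Gbps lanes)": 25.78125,
}

for name, rate_gbps in lane_rates_gbps.items():
    unit_interval_ps = 1e12 / (rate_gbps * 1e9)  # picoseconds per bit
    print(f"{name}: unit interval is about {unit_interval_ps:.1f} ps")

# At roughly 39 ps per bit on a 25G lane, every extra centimeter of PCB
# trace adds loss, jitter, and crosstalk -- which is why putting the
# optics on an interposer right next to the SoC is so attractive.
```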
And so you end up with an evolution of optical interconnect as shown in this diagram:
The current situation appears as a photo at the upper left. It shows a number of optical modules plugged into a 1U front panel in a server rack. There are both real-estate and power limitations with this approach. The next evolutionary step appears at the lower left of the image, which shows the continued use of MSA modules for front-panel interconnect but with packaged optics mounted on the blade PCB for backplane and intra-rack interconnect. The final evolutionary stage is to incorporate these optics into the ASIC package. How? 2.5D IC assembly using interposers, said Bergey as he showed this image taken from a Xilinx presentation.
In this image, imagine that the “Bridge Chip” is the Ethernet switch SoC. The driver is a high-speed PHY built with somewhat different process technology (more on that in another blog post) than the logic-optimized process used to make the SoC. The PD (photodetector) and LD (laser diode) could be the integrated optics, built using yet another IC process technology.
This is another killer 2.5D assembly app in my opinion.
By the way, in case you missed it, at the same Ethernet Technology Summit Cadence introduced IP blocks for implementing 40G and 100G Ethernet controllers and the digital portion of the PHYs.
Has anyone actually built and demonstrated a Wide I/O memory part?
Yes.
OK – but who? And where can I read about it?
Samsung and Elpida both have shown Wide I/O SDRAMs. I’ve covered both in both the EDA360 Insider and Denali Memory Report blogs.
Thanks! I’ll find those articles & read ’em.