As a docent for the Computer History Museum in Mountain View, California, I’d become accustomed to telling people stories of the computer revolution while leading them through a room euphemistically called “Visual Storage,” which was essentially a big, well-lighted room where hundreds of artifacts were on display but lacked museum-quality signage explaining why they were important. There were no interactive exhibits for the kids either. Now the museum has reopened, and thanks to the generosity of many donors including Bill Gates, there’s a real exhibit and timeline titled “Revolution: The First 2000 Years of Computing”—an apt title if there ever was one.
David Thon and I spent an afternoon playing hooky from Cadence viewing the exhibit yesterday. It’s an impressive review of the last 2000+ years of computing technology, going as far back as the Greek Antikythera mechanism, an astronomical calculator used around 100-200 BC. The exhibit also includes more modern calculating mechanisms such as slide rules and Napier’s Bones. There’s a good assortment of Hollerith tabulating card equipment, which was so important to the development of mainframe computers.
There’s a replica of the important but little-known Atanasoff-Berry Computer, first conceived in the late 1930s and then built and tested in 1942. There are pieces of the original ENIAC computer, the first programmable general-purpose electronic computer, on loan from the Smithsonian Institution. There are numerous examples of computers from the main computing eras: mainframe, minicomputer, microcomputer, and PC.
What’s the connection to EDA360? If you know where to look, there are few places where you can spend a few hours and get more insight into system-design issues than the new Revolution exhibit at the Computer History Museum. Take memory, for example. You’ll find many strange approaches to creating memory for computing hardware, including Williams tubes, mercury delay lines, magnetostrictive wires, core memories, and semiconductor memory. All of these technologies had their day and then died off (except for semiconductor memory). But the “how” and “why” of these memories, the reasons they came into being, are still with us. It’s only the implementation technology that changes.
For example, take a look at this Fabri-Tek display of the evolution of magnetic core diameters from 1954, shortly after magnetic-core memory was invented, to 1972 when the magnetic-core era essentially died.
Although a core-size reduction from 80 to 14 mils might not seem like much in the era of 28nm IC process technology, keep in mind that these cores were all hand-strung. Magnetic-core memory was hand-woven like a Renaissance tapestry, as you can see in this 1-kbit core plane from the 1955 Whirlwind project.
When Intel introduced the first commercially successful semiconductor DRAM in 1971, it quickly killed off further magnetic-core development. The manually assembled memory technology could no longer compete. It died.
There are many, many lessons in system design just like this one all over the Computer History Museum’s Revolution exhibit. If you think you’re the first to face whatever design issue you’re currently wrestling with, chances are someone else stumbled into the same problem and solved it, albeit on a different scale. Perhaps your answer is in the wiring of a Cray-1A supercomputer, which allowed Seymour Cray to avoid registered pipelining and so speed up his processor designs. Perhaps it’s in the rigorous quality and power requirements of a processor-based pacemaker (also on display). You never know where you’ll find your answer, but it’s probably somewhere in this exhibit.
The Computer History Museum’s Revolution exhibit officially opens on January 13. Highly recommended.