Super Nintendo video game emulator highlights important emulation facets for EDA

An article just published by Ars Technica’s Gaming Editor Ben Kuchera on video game emulation seems an unlikely source for EDA emulation advice, but the long, detailed piece is a textbook study in why accurate emulation is so important to System Realization. The article begins by stating the classic tradeoff between emulation accuracy and processing horsepower:

“It doesn’t take much raw power to play Nintendo or SNES [Super Nintendo Entertainment System] games on a modern PC; emulators could do it in the 1990s with a mere 25MHz of processing power. But emulating those old consoles accurately—well, that’s another challenge entirely…”

The article then captures some additional important truths about emulation:

“Put simply, accuracy is the measure of how well emulation software mimics the original hardware. Apparent compatibility is the most obvious measure of accuracy—will an old game run on my new emulator?—but such a narrow view can paper over many small problems. In truth, most software runs with great tolerance to timing issues and appears to be functioning normally even if timing is off by as much as 20 percent.”

What the article is saying here is that emulation accuracy comes in degrees. Several levels may provide adequate emulation, but the more critical the software’s timing, the more emulation accuracy you’ll need. The article also has some interesting, game-centric things to say about emulation accuracy versus speed:

“But this accuracy comes at a serious cost. Making an emulator twice as accurate will make it roughly twice as slow; double that accuracy again and you’re now four times slower. At the same time, the rewards for this accuracy diminish quickly, as most games look and feel ‘playable’ at modest levels of emulator accuracy. (Most emulators target a “sweet spot” of around 95 percent compatibility with optimal performance.)”
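To see why doubling accuracy roughly doubles the cost, consider an emulator main loop that synchronizes its emulated components every N cycles: halving N (finer timing, higher accuracy) doubles the number of expensive synchronization events needed to emulate the same amount of game time. Here’s a minimal sketch of that arithmetic in C++; the cycle count is an approximate NTSC SNES figure and the loop structure is illustrative, not taken from the article:

```cpp
// Illustrative only: halving the synchronization interval doubles the
// number of (costly) sync events needed per emulated frame.
#include <cstdio>

int main() {
    // ~21.477 MHz master clock / ~60.1 frames per second (approximate)
    const long cycles_per_frame = 357366;
    for (long interval : {512L, 256L, 128L, 64L}) {
        long syncs = cycles_per_frame / interval;
        std::printf("sync every %4ld cycles -> %6ld sync events per frame\n",
                    interval, syncs);
    }
}
```

The emulated work per frame stays constant; only the synchronization overhead grows, which is why the speed penalty scales with accuracy while the visible payoff (compatibility) quickly plateaus.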

How can you tell how much emulation accuracy you need? Again, here’s the game-centric explanation:

“So if an emulator appears to run all games correctly, why should we then improve upon it? The simple answer is because it improves the things we don’t yet know about. This is particularly prominent in less popular software.”

There’s a really important idea here. Your assumptions about what you do and don’t know about your design will lead you to a conclusion about your emulation needs, but that conclusion cannot account for the things you don’t yet know are lurking in your design. As an experienced systems engineer, I can tell you that there are a lot of things you don’t know about your design. I can say this without ever seeing your system design. There are always surprises.

Now here’s a really surprising aspect of this article: it has something very relevant to say about emulating multiple-processor SoCs (MPSoCs). That’s surprising because the article is about the Super Nintendo Entertainment System, a console introduced between 1990 and 1992, depending on region. In other words, it’s a product designed before the advent of single-processor SoCs, which first appeared in 1995. However, the SNES incorporates several processors in separate chips, including a CPU, a Picture Processing Unit (PPU), and an audio processor, plus plug-in coprocessors such as the Super FX for polygon rendering and sprite scaling and rotation (used in Star Fox and Super Mario World 2: Yoshi’s Island) and the DSP-1, used for 3D math. These coprocessors were built into the SNES game cartridges. Here’s what the article has to say about emulating multiple-processor systems:

“The primary demands of an emulator are the amount of times per second one processor must synchronize with another. An emulator is an inherently serial process. Attempting to rely on today’s multi-core processors leads to all kinds of timing problems. Take the analogy of an assembly line: one person unloads the boxes, another person scans them, another opens them, another starts putting the item together, etc. Synchronization is the equivalent of stalling out and clearing the entire assembly line, then starting over on a new product. It’s an incredible hit to throughput. It completely negates the benefits of pipelining and out-of-order execution. The more you have to synchronize, the faster your assembly line has to move to keep up.”
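The quoted passage describes the scheme cycle-accurate emulators actually use: each emulated chip keeps its own clock counter in a shared time base, and the emulator runs one chip ahead, then serially runs the others until they catch up. Below is a minimal, hypothetical C++ sketch of that idea; the Chip/step names and the numbers are my illustrative assumptions, not code from the article or any real emulator:

```cpp
// Minimal sketch of inter-processor synchronization in a serial emulator.
// Two emulated chips advance independent clock counters in a shared time
// base; the emulator runs the CPU ahead, then runs the PPU until it
// catches up. All names and numbers here are illustrative.
#include <cstdint>
#include <cstdio>

struct Chip {
    const char* name;
    int64_t clock;                   // cycles executed, shared time base

    void step(int64_t cycles) {      // emulate 'cycles' of work (stubbed)
        clock += cycles;
    }
};

int main() {
    Chip cpu{"CPU", 0}, ppu{"PPU", 0};
    const int64_t kSyncInterval = 8; // smaller interval = tighter timing
                                     // = more accuracy, but more overhead

    for (int64_t t = 0; t < 1000; t += kSyncInterval) {
        cpu.step(kSyncInterval);
        // Synchronization point: serially run the PPU until its clock
        // catches up with the CPU's. This is the "assembly line stall"
        // from the quote.
        while (ppu.clock < cpu.clock) ppu.step(1);
    }
    std::printf("%s=%lld cycles, %s=%lld cycles\n",
                cpu.name, (long long)cpu.clock,
                ppu.name, (long long)ppu.clock);
}
```

Note that the loop is inherently serial: the PPU’s catch-up work depends on exactly where the CPU stopped, so the two chips can’t simply run on separate host cores without reintroducing the timing problems the quote describes.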

Those words are pretty surprising, considering that they apply to a console designed long before MPSoC design became common. Coincidentally, EDA virtual prototyping systems like the Cadence Virtual System Platform must deal with all of these same issues even when they’re not being used to develop video games.

Tip of the hat to Anton Klotz, who is a Lead Application Engineer at Cadence. He alerted me to this diamond in the rough. Many thanks, Anton!
