Pushing Interactive Exhibit Technology

It’s not that often that I get to relive a childhood fantasy. In this case, it’s creating video games, or rather, using that technology to tell some kind of story.

Let me back up first: I worked as a video game developer about ten years ago, in a variety of artist, designer, and developer roles. Working in games was a natural progression, so it seemed, from my childhood fascination with computers. When I was deciding what to study in college, it was a toss-up between English and Computer Engineering; I figured I could always write stories, but I didn’t know anything about how computers really worked at the hardware level. That ate at me. Computer circuitry fascinated me, because I couldn’t understand how it worked. So I got sidetracked for 10 years, gradually switching from computers to computer graphics, then to interactive media, until I realized that I liked people and rediscovered writing.

Despite my recent focus on design, I’ve still maintained a healthy interest in the hardware and software, because I love the power that technology brings to storytelling. In particular, I like how video games can make you forget you’re just looking at a bunch of glowing dots moving around on a flat screen. When a game is done right, you are fully drawn into its world as an active participant. The very best games use tons of presentational tricks to augment the experience so every pixel and every frame pulses with life.

I have a chance to revisit this world now: I’m in the process of picking up a new project for a large-scale interactive for a museum in Illinois. The goal of this particular exhibit, which is targeted at younger children, is to convey the concept of how our choices and actions—and lack thereof—effect changes in the world. We think this can be done by creating a participatory interactive environment where groups of children spontaneously cooperate to create the world together. We’ll know we’ve done our job if children leave with a memory of their actions having made a difference that really mattered to them. This is a project that I can totally get behind, and the company developing the concept understands the educational and philosophical issues that are important for an exhibit of this nature.

So what’s the problem? We want to push the envelope, just like we used to do in the game industry. However, the perception out there is that this is “too hard” or “too expensive”. People are comfortable with their existing tools. What we want to do, however, is turn that thinking on its ear, and bring some ass-kicking tools into the educational / museum interactive space to raise the bar. Of course, that’s easier said than done; I’m trying to figure out the best way of finding the right people, intrigued by the possibilities, and the right technologies to match. I’ll be picking up my rusty game development toolbelt again, using what I know to define new workflows so we can wow the crowd, on time and on budget.

A tall order. I’m wrapping my head around it still, and I figured I might as well write a blog post about it and state my case. Read onward!


I’ve done a few interactives in my time, using programs like Director. Director is what we New Media Designers call an authoring tool. It’s sort of like a fancy version of PowerPoint; you don’t need to be a programmer and build everything from absolute scratch, because the tool provides ways to animate things on the screen, play sounds, and respond to mouse clicks and key presses. Director and tools like it allow you to create much more sophisticated behaviors than you can with PowerPoint; you can think of it as the difference between Microsoft Word and a professional page-layout program like InDesign or QuarkXPress.

Director is showing its age, so Flash has started to replace it in several areas. As it turns out, most interactives share a lot of common DNA:

  • Some set of related ideas is presented through words and moving images
  • Keyboard and mouse actions cause something to react differently on the screen
  • The behavior of the interactive depends a lot on what the user does

More “advanced” interactives typically incorporate novel presentational and input elements as well.

Director can also incorporate elements created with the other major authoring tool, Flash. Flash has the advantage of animation quality and a “better” programming language (ActionScript), but it lacks the extensibility of Director’s Xtras to interface with barcode scanners, burn CDs…heck, just about anything you need to do. You can also write your own Xtras with the Xtra Software Development Kit (SDK). Flash can do some of these things by using 3rd-party extenders, but it’s not built-in functionality.

The great advantage of using an authoring tool like Director or Flash is that they create programs that will run on practically any computer, either as a desktop program or in a web browser that has the Shockwave or Flash Player plugin installed. This simplifies deployment and installation quite a bit. However, this universality comes at the expense of using the really cool cutting-edge capabilities already built into your computer.

Don’t get me wrong: the general-purpose graphics and audio engines in Flash are very fast and capable, and clever interactive designers are doing astonishingly great stuff with them. However, sometimes you want something that’s more specialized. Metaphorically speaking, you can think of the Flash graphics engine as the “computer software equivalent” of a nice 2007 Toyota Camry. It’s fast, powerful, comfortable, and refined: a great daily driver. However, you would never think of driving your Camry through a slurry of rock and mud in a rally race, or doing 500 laps against highly-tuned purpose-built race cars. It would be futile. Your Camry is a perfectly fine general-purpose automobile, but there are times when it simply can’t deliver the level of performance you need.


The kind of performance I am thinking of, with regards to museum interactives, is to achieve a digital semblance of life. This is a combination of aesthetics and speed: meticulous pixel-level craftsmanship, highly optimized graphics and special effects engines, and super smooth animation.

I know, I know…I said general-purpose graphics engines are pretty darn good already, so can’t we just use our existing tools? This is where I have to introduce some personal design philosophy: I can feel my pulse beat faster when engaged by nuanced silky-smooth animation. To me, that’s achieving, at minimum, a rock-solid 30 frames per second (fps) refresh rate; in other words, the screen is redrawn (“refreshed”) 30 times every second. Even better is 60 frames per second; the old Atari 2600 console refreshed its crude graphics display at 60 frames per second because the game designers didn’t have a choice; the 1970s-era hardware dictated it as a hard constraint. If you’ve ever been struck by the eerie smoothness of early video games, the high frame rate is why. 60 frames per second is awesome. 60 frames per second is ALIVE. There is something much more visceral about graphics projected at this speed.
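To put those refresh rates in concrete terms, here’s the simple arithmetic behind a per-frame “budget”. This is plain math, not tied to any particular engine:

```python
# At N frames per second, the computer has 1/N seconds (1000/N milliseconds)
# to finish drawing each complete frame before the next one is due.

def frame_budget_ms(fps):
    """Milliseconds available to draw one frame at the given refresh rate."""
    return 1000.0 / fps

print(round(frame_budget_ms(30), 2))  # ~33.33 ms per frame
print(round(frame_budget_ms(60), 2))  # ~16.67 ms per frame
```

Doubling the frame rate halves the time you have to draw each frame, which is why 60fps is so demanding.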

If we were able to have 60fps games in the 1970s, we should be doing pretty well today, right? Well, not quite. Modern computers are tens of thousands of times more powerful than those early video games, but the technical necessity for 60fps animation has long passed. Back then, the screen needed to be updated that quickly because if you didn’t draw fast enough, the screen would literally flicker and look terrible. Since computer memory was very expensive, the graphics were created “just in time” for the television electron beam to draw them. Some other early game systems used what are called vector displays, which control the screen’s electron gun directly. Today, just about all computers use a framebuffer, which is sort of like a “drawing slate” inside the computer, composed of a whole bunch of pixels. There are often two framebuffers: one is shown on the computer monitor, redrawn 60 to 120 times per second. The other is used for drawing new graphics, say the next frame of your animation. When the computer is done drawing the new frame, it tells the monitor to “switch framebuffers”, thus showing the newly-completed slate. Because switching framebuffers is practically instantaneous, the screen appears to change immediately. The framerate of the animation is determined by how fast the computer can keep drawing and flipping those framebuffers; if the computer takes a long time to draw each new frame, the result is a slower stream of images, which is to say a lower framerate.
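That framebuffer-flipping scheme can be sketched in a few lines. This is a toy simulation in Python, where the “screen” is just a list of pixel values; all the names here are illustrative, not from any real graphics API:

```python
WIDTH = 8  # our toy screen is a one-dimensional row of 8 pixels

def make_buffer(fill=0):
    """A framebuffer is just a block of pixels; here, a list of ints."""
    return [fill] * WIDTH

front = make_buffer()  # what the "monitor" is currently showing
back = make_buffer()   # where the next frame is drawn, out of sight

def draw_frame(buf, frame_number):
    # Draw the new frame into the back buffer; the front buffer stays
    # untouched, so the viewer never sees a half-finished image.
    for x in range(WIDTH):
        buf[x] = (x + frame_number) % 2

def flip():
    # "Switch framebuffers": swapping two references is practically
    # instantaneous, so the screen appears to change all at once.
    global front, back
    front, back = back, front

for frame in range(3):
    draw_frame(back, frame)
    flip()

print(front)  # the completed third frame, now on "screen"
```

Real double buffering works the same way, except the swap is done by the video hardware, typically synchronized to the monitor’s refresh.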

In practical terms, you can still maintain the illusion of smooth motion with as little as 12 frames per second. In hand-drawn animation terms, this would be like watching an old Bugs Bunny cartoon; at 12 frames per second on a 24 frame-per-second film reel (the standard for film projection), you are drawing an entirely new image on every other frame (this is called animating on twos). Japanese animation tends to be drawn with fewer frames, perhaps on threes or fours (corresponding to 8 and 6 frames per second). If you’ve noticed that Japanese TV animation looks a lot choppier than “American” animation from Disney or some other studio, this is probably what you’re seeing.
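The “on twos” arithmetic above is worth making explicit, since it comes up whenever you budget artwork for animation:

```python
# Hold each drawing for N projected frames of a 24 fps film reel, and you
# get 24/N new drawings per second. "On twos" means N = 2, and so on.

FILM_FPS = 24  # standard film projection rate

def effective_fps(hold):
    """New drawings per second when each drawing is held for `hold` frames."""
    return FILM_FPS / hold

print(effective_fps(2))  # on twos   -> 12.0
print(effective_fps(3))  # on threes -> 8.0
print(effective_fps(4))  # on fours  -> 6.0
```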

Besides the frame rate itself, the other main contributor to animation smoothness is its consistency. It’s far better to have a consistent 10fps than 30fps half the time and 15fps the other half. Many new interactive designers don’t understand how the graphics engine works, and sometimes create situations where frame rate variations cause “jerkiness” and “hitching”. They just heap graphic after graphic on the overloaded graphics engine, like Dad loading 1000 pounds of cinderblocks in the back of his Camry, then wondering why the car’s making that weird noise and driving so slow. Every graphic element has its own cost, and the savvy interactive designer knows how to account for every overdrawn pixel to maximize available computing power.
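One common way game programmers keep things consistent is a fixed-timestep loop: the simulation always advances in equal time slices, and a slow frame is paid back with catch-up updates instead of letting the world lurch. This is a generic sketch (working in whole milliseconds for clarity), not any particular engine’s loop:

```python
TIMESTEP_MS = 33  # roughly a rock-solid 30 simulation updates per second

def run(frame_times_ms, update):
    """Feed in measured frame durations; call update() once per fixed slice."""
    accumulator = 0
    updates = 0
    for elapsed in frame_times_ms:
        accumulator += elapsed
        # If drawing ran long, do extra updates so the simulation clock
        # never drifts behind real time.
        while accumulator >= TIMESTEP_MS:
            update()
            accumulator -= TIMESTEP_MS
            updates += 1
    return updates

# Two normal 33 ms frames plus one slow 100 ms frame: the slow frame
# triggers three catch-up updates, for five updates in total.
print(run([33, 100, 33], lambda: None))
```

The motion on screen still hitches on the slow frame, but the simulation stays on schedule, so the hitch doesn’t compound.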


Humor me and presume that 60fps is the holy grail of framerates. There are only three ways to get there:

  • Draw fewer objects — the fewer graphics you draw, the faster each frame is drawn. However, you lose visual richness.

  • Be clever about HOW you draw — You might only update a few things at a time, or update different parts of the screen at different framerates; you just need a few things moving at 60fps to start to benefit. You can avoid “overdrawing”. You can pre-render some graphics sequences to avoid real-time compositing. The disadvantage: the optimization techniques start to get in the way of implementing new ideas.

  • Get a faster graphics engine — Oh, yeah :-)
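To make the “be clever about HOW you draw” idea concrete, here’s a toy illustration of redrawing only what changed between frames, in the spirit of the classic “dirty rectangle” optimization. The screen is simplified to a one-dimensional list of cells, and all names are illustrative:

```python
def dirty_cells(previous, current):
    """Return the indices of screen cells that must be repainted this frame."""
    return [i for i, (old, new) in enumerate(zip(previous, current))
            if old != new]

last_frame = ["sky", "sky", "bird", "sky", "grass"]
this_frame = ["sky", "sky", "sky", "bird", "grass"]

# Only the cells the bird moved out of and into need repainting;
# the rest of the screen is left exactly as it was last frame.
print(dirty_cells(last_frame, this_frame))
```

When most of the screen is static background, skipping the unchanged cells can recover a large share of the frame budget.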


So let’s look at the graphics engine choices we have!


Compared to 10 years ago, there are a lot more game development tools available, both commercial and open source. They fall into three general categories:

  • Tools that expose the raw underlying power of the hardware to programmers. For example, DirectX.
  • Tools that provide a mature implementation of a useful game element. Simulating physics and gravity, for example.
  • Tools that provide the framework for a typical type of game. For example, there are game toolkits that are targeted toward First Person Shooters, so developers can focus more on creating the customized game experience, as opposed to first building one from scratch.

There are a lot of tools out there. Unfortunately, they require that you are a game programmer! This, of course, is the domain of intense individuals who have learned to deconstruct their very existence into constructs of optimized code to recreate entire worlds in their own image. It’s pretty heady stuff. I can’t do it myself, but I’ve had occasion to peer into the abyss.

That said, there’s not really any reason we can’t apply some of that technology to our interactive work. We just need to know that the tools are not quite as forgiving as Photoshop, and that there is a new workflow to learn.


Other than the technical complexity, there’s the perception that game development is very expensive. And yes, it is. Modern commercial game development budgets regularly exceed a million dollars, and require dozens of specialists—these are the experienced programmers, artists, animators, and producers that you’re unlikely to find on the street. The high cost of game development is sometimes cited as a reason why this would never work for something like a museum interactive, which can have much more modest budgets. However, a great deal of the development cost is from creating content for hours of lavish gameplay. That’s expensive to produce, and it’s why a lot of artwork creation is going overseas. A second expense is all the play balancing, refinement, and testing to make sure the game is a commercial success. A museum interactive, by comparison, needs enough content to fulfill maybe 5-10 minutes of interaction, is “successful” if it integrates with a larger supporting educational experience, and doesn’t cause a maintenance problem. With the lower content requirements and less brutal fitness metrics, cost can be contained within a reasonable (that is, doable) scope. Theoretically, anyway :-)

There’s a second advantage: we’re designing for a specific computer. Developing for the mass market requires a defensive programming mentality to ensure that the code runs on every supported machine. There are thousands of small-but-deadly variations between video cards, motherboard bugs, installed software, and operating system variations. This is where using Flash Player shines: the hard engineering work to make sure the player works on all computers has already been done by Macromedia’s engineers, so it will run pretty much the same on all supported computing platforms. If we were developing a product for use by individuals, this would be very important. However, for a museum interactive, we get to specify the hardware ourselves and control what’s installed on it. Our interactive will run on a machine of our designated speed, with a particular video card, and our own code libraries. This is absolutely controllable, if you know what you’re doing.


What is a challenge is acquiring the know-how to work with game development technologies. There is no click-and-play application that you can use to create an awesome cutting-edge game experience, though I’ve been very intrigued by Unity as a potential solution.

The great advantages that visual authoring tools like Flash and Director have over “traditional” game development are:

  • The common workflow — Flash is in broad use, and there are a lot of books and tutorials available. That’s less the case for game development tools, though it’s actually getting a lot better because this is a popular subject.
  • The refined user interface for integrating artwork and code — You can do just about everything you need with Flash, Photoshop, and Illustrator and a decent sound editor like Sound Forge. Not so with game engines…you’ll be learning how to model and rig elaborately-textured 3D models and export them as esoteric file formats that you’ve never heard of, using utilities that work maybe 80% of the time. You’ll have to learn all the messy details of how the graphics engine really works and bend it to your will. Otherwise nothing will work at all, and you’ll be wondering why.

Game development is kind of like building a race car that you’re learning to drive at the same time. When it works, it’s fast and exhilarating. When it doesn’t work, nothing works at all. However, you’re building a specialized machine that will do things that just aren’t possible with general-purpose authoring tools.


This post seems to have become an introduction to interactive technologies, so I should bring it home and explain why all this is important to me.

  • 60 Frames Per Second. Beautiful, compelling, living motion. If you want to attract kids, you need to make something pretty compelling. Some of this you can just do with bright colors (kids are suckers for that), but holding their interest requires animation talent and enough graphics horsepower to move a lot of objects on the screen.
  • Advanced Graphics. I’m a big fan of Flash, and I also know how to code up a rock-solid Director app for a kiosk application. Video game technologies, however, are a quantum leap in visualization capability…if I can get them to work right. I’m tired of seeing ratty 2D graphics, amateurishly hacked out of a low-quality JPEG file, lurching across the screen at 7 frames per second in a freakish parody of “fun”. I believe that things can be much better. It’s time to push harder on this.
  • Parity of Experience with Other Media. Kids can tell the difference between a gimpy 12 fps Flash animation and a professionally-produced game running on their PlayStation 3. They can rent 20 million dollars worth of production for 3 bucks at the local Blockbuster. As small interactive producers, we can’t possibly compete with those massive budgets and teams, but we can apply some of the same techniques with less money. We have an advantage in having a controlled space in the museum, and thus we can employ environmental graphics and attraction design techniques to our advantage.
  • Magic. It really doesn’t matter what kind of tools you’re using; what counts is your ability to wield your presentation techniques with graceful surety. This is showmanship. However, there is something to be said for having that awesome explosion effect, that bone-crushing sound system, and novel visual ideas that have never been seen before. This kind of spectacle often requires some investment in serious hardware and software.

For the next few months, I’ll be looking at ways of trying to achieve this higher level of interaction. I’ve started to look at open source game engines and some commercial products, re-immersing myself into technologies that I haven’t looked at in a long time. If there are people out there with the interest and the skills, I’d like to hear from you to see how we might combine forces and do some cross-education. Leave a comment!

NOTE: The views expressed in this post are mine, not necessarily that of the company I’m working with! :-)