It’s not often that I get to relive a childhood fantasy. In this case, it’s creating video games, or rather, using that technology to tell some kind of story.
Let me back up first: I worked as a video game developer about ten years ago, in a variety of artist, designer, and developer roles. Working in games was a natural progression, so it seemed, from my childhood fascination with computers. When I was deciding what to study in college, it was a toss-up between English and Computer Engineering; I figured I could always write stories, but I didn’t know anything about how computers really worked at the hardware level. That ate at me. Computer circuitry fascinated me because I couldn’t understand how it worked. So I got sidetracked for ten years, gradually switching from computers to computer graphics and then to interactive media, until I realized that I liked people and rediscovered writing.
Despite my recent focus on design, I’ve still maintained a healthy interest in the hardware and software, because I love the power that technology brings to storytelling. In particular, I like how video games can make you forget you’re just looking at a bunch of glowing dots moving around on a flat screen. When a game is done right, you are fully drawn into its world as an active participant. The very best games use tons of presentational tricks to augment the experience so that every pixel and every frame pulses with life.
I have a chance to revisit this world now: I’m in the process of picking up a new project, a large-scale interactive for a museum in Illinois. The goal of this particular exhibit, which is targeted at younger children, is to convey how our choices and actions—and lack thereof—effect changes in the world. We think this can be done by creating a participatory interactive environment where groups of children spontaneously cooperate to create the world together. We’ll know we’ve done our job if children leave with a memory of their actions having made a difference that really mattered to them. This is a project that I can totally get behind, and the company developing the concept understands the educational and philosophical issues that are important for an exhibit of this nature.
So what’s the problem? We want to push the envelope, just like we used to do in the game industry. However, the perception out there is that this is “too hard” or “too expensive”. People are comfortable with their existing tools. What we want to do, however, is turn that thinking on its ear and bring some ass-kicking tools into the educational / museum interactive space to raise the bar. Of course, that’s easier said than done; I’m trying to figure out the best way of finding the right technologies and the right people, the ones intrigued by the possibilities. I’ll be picking up my rusty game development toolbelt again, using what I know to define new workflows so we can wow the crowd, on time and on budget.
A tall order. I’m wrapping my head around it still, and I figured I might as well write a blog post about it and state my case. Read onward!
TYPICAL INTERACTION TECHNOLOGY
I’ve done a few interactives in my time, using programs like Director. Director is what we New Media Designers call an authoring tool. It’s sort of like a fancy version of PowerPoint; you don’t need to be a programmer and build everything from absolute scratch, because the tool provides ways to animate things on the screen, play sounds, and respond to mouse clicks and key presses. Director and tools like it allow you to create much more sophisticated behaviors than you can with PowerPoint; you can think of it as the difference between Microsoft Word and a professional page-layout program like InDesign or Quark XPress.
Director is showing its age, so Flash has started to replace it in several areas. As it turns out, most interactives share a lot of common DNA:
- Some set of related ideas is presented through words and moving images
- Keyboard and mouse actions cause something to react differently on the screen
- The behavior of the interactive depends a lot on what the user does
More “advanced” interactives typically incorporate novel presentational and input elements:
- 360 degree panoramic movies
- Accelerated realtime 3D graphics
- High-quality animation
- Using computer vision techniques for user input
- Controlling external hardware through various extensions
Director can also incorporate elements created with the other major authoring tool, Flash. Flash has the advantage of animation quality and a “better” programming language (ActionScript), but it lacks the extensibility of Director’s Xtras, which can interface with barcode scanners, burn CDs…heck, do just about anything you need. You can also write your own Xtras with the Xtra Software Development Kit (SDK). Flash can do some of these things by using 3rd-party extenders, but it’s not built-in functionality.
The great advantage of using an authoring tool like Director or Flash is that they create programs that will run on practically any computer, either as a desktop program or in a web browser that has the Shockwave or Flash Player plugin installed. This simplifies deployment and installation quite a bit. However, this universality comes at the expense of the really cool cutting-edge capabilities already built into your computer:
- Instead of hardware-accelerated graphics, a slower general-purpose graphics engine is used.
- Instead of multi-channel, low-latency 3D positional audio, simple stereo playback is supported.
- Instead of interesting input devices, you’re limited to basic keyboard and mouse input.
Don’t get me wrong: the general-purpose graphics and audio engines in Flash are very fast and capable, and clever interactive designers are doing astonishingly great stuff with them. However, sometimes you want something that’s more specialized. Metaphorically speaking, you can think of the Flash graphics engine as the “computer software equivalent” of a nice 2007 Toyota Camry. It’s fast, powerful, comfortable, and refined: a great daily driver. However, you would never think of driving your Camry through a slurry of rock and mud in a rally race, or doing 500 laps against highly-tuned, purpose-built race cars. It would be futile. Your Camry is a perfectly fine general-purpose automobile, but there are times when it simply can’t deliver the level of performance you need.
THE HOLY GRAIL: 60 FRAMES PER SECOND
The kind of performance I am thinking of, with regards to museum interactives, is to achieve a digital semblance of life. This is a combination of aesthetics and speed: meticulous pixel-level craftsmanship, highly optimized graphics and special effects engines, and super smooth animation.
I know, I know…I said general-purpose graphics engines are pretty darn good already, so can’t we just use our existing tools? This is where I have to introduce some personal design philosophy: I can feel my pulse beat faster when engaged by nuanced, silky-smooth animation. To me, that means achieving, at minimum, a rock-solid 30 frames per second (fps) refresh rate; in other words, the screen is redrawn (“refreshed”) 30 times every second. Even better is 60 frames per second; the old Atari 2600 console refreshed its crude graphics display at 60 frames per second because the game designers didn’t have a choice; the 1970s-era hardware dictated it as a fundamental constraint. If you’ve ever been struck by the eerie smoothness of early video games, the high frame rate is why. 60 frames per second is awesome. 60 frames per second is ALIVE. There is something much more visceral about graphics projected at this speed.
If we were able to have 60fps games in the 1970s, we should be doing pretty well today, right? Well, not quite. Modern computers are tens of thousands of times more powerful than those early video games, but the technical necessity for 60fps animation has long passed. Back then, the screen needed to be updated that quickly because if you didn’t draw fast enough, the screen would literally flicker and look terrible. Since computer memory was very expensive, the graphics were created “just in time” for the television electron beam to draw them. Other early game systems used what are called vector displays, which control the screen’s electron gun directly.

Today, just about all computers use a framebuffer, which is sort of like a “drawing slate” inside the computer, composed of a whole bunch of pixels. There are often two framebuffers: one is shown by the computer monitor, which redraws it somewhere between 60 and 120 times per second. The other is used for new graphics, say the next frame of your animation. When the computer is done drawing the new frame, it tells the monitor to “switch framebuffers”, thus showing the newly-completed slate. Because switching framebuffers is practically instantaneous, the screen appears to change immediately. The framerate of the animation is determined by how fast the computer can keep drawing and flipping those framebuffers; if the computer takes a long time to draw each new frame, you get a slower stream of images. The result: a lower framerate.
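To make the draw-and-flip idea concrete, here’s a minimal sketch of a double-buffered drawing loop. I’m using Python with the pygame library purely as an illustration; the moving circle, the 640x480 window, and the 60fps cap are arbitrary choices of mine, not anything from the project or the tools discussed in this post.

```python
import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))  # the back buffer we draw into
clock = pygame.time.Clock()
x = 0

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    # Draw the next frame into the off-screen buffer.
    screen.fill((0, 0, 0))
    pygame.draw.circle(screen, (255, 255, 0), (x % 640, 240), 20)
    x += 4

    # Swap buffers: the finished frame appears on screen all at once.
    pygame.display.flip()

    # Ask for 60fps; if drawing takes longer than 1/60th of a second,
    # the actual framerate drops below the cap.
    clock.tick(60)

pygame.quit()
```

The whole game of hitting 60fps is keeping everything between the fill and the flip fast enough to finish within that 16.7-millisecond budget.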
In practical terms, you can still maintain the illusion of smooth motion with as little as 12 frames per second. In hand-drawn animation terms, this would be like watching an old Bugs Bunny cartoon: at 12 frames per second on a 24-frame-per-second film reel (the standard for film projection), an entirely new image is drawn on every other frame (this is called animating “on twos”). Japanese animation tends to be animated with even fewer drawings, perhaps on threes or fours (corresponding to 8 and 6 frames per second). If you’ve noticed that Japanese TV animation looks a lot choppier than “American” animation from Disney or some other studio, this is probably what you’re seeing.
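If it helps to see the arithmetic, here’s a tiny sketch of what animating “on twos” means in code. The frame names are hypothetical and the 24fps film rate is just the standard projection rate mentioned above:

```python
# The display refreshes at 24 fps (film projection rate), but a new drawing
# appears only every HOLD refreshes: 2 = "on twos" (12 drawings/sec),
# 3 = "on threes" (8 drawings/sec), 4 = "on fours" (6 drawings/sec).
HOLD = 2

def drawing_for_refresh(refresh_index, drawings):
    """Return which hand-drawn frame to show on a given screen refresh."""
    return drawings[(refresh_index // HOLD) % len(drawings)]

drawings = ["walk_01", "walk_02", "walk_03", "walk_04"]  # hypothetical artwork
for refresh in range(8):
    print(refresh, drawing_for_refresh(refresh, drawings))
# Prints walk_01 twice, walk_02 twice, and so on: each drawing is held
# on screen for two refreshes.
```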
The other main contributor to animation smoothness is consistency of the frame rate. It’s far better to have a consistent 10fps than 30fps half the time and 15fps the other half. Many new interactive designers don’t understand how the graphics engine works, and sometimes create situations where frame rate variations cause “jerkiness” and “hitching”. They just heap graphic after graphic on the overloaded graphics engine, like Dad loading 1000 pounds of cinderblocks into the back of his Camry, then wondering why the car’s making that weird noise and driving so slowly. Every graphic element has its own cost, and the savvy interactive designer knows how to account for every overdrawn pixel to maximize available computing power.
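Consistency is easy to measure, even if it’s hard to achieve. Here’s a rough sketch of the kind of frame-time instrumentation I mean; render_frame is a hypothetical stand-in for whatever draws one frame of your interactive, and the 16.7ms budget corresponds to 60fps:

```python
import time

def measure_frame_times(render_frame, num_frames=300, budget_ms=16.7):
    """Call a per-frame drawing function repeatedly and report the frames
    that blow the time budget (the ones the audience sees as hitches)."""
    times_ms = []
    hitches = 0
    for _ in range(num_frames):
        start = time.perf_counter()
        render_frame()
        elapsed_ms = (time.perf_counter() - start) * 1000.0
        times_ms.append(elapsed_ms)
        if elapsed_ms > budget_ms:
            hitches += 1
    average = sum(times_ms) / len(times_ms)
    worst = max(times_ms)
    print(f"average {average:.1f} ms, worst {worst:.1f} ms, "
          f"{hitches}/{num_frames} frames over budget")
```

A good-looking average with a handful of terrible worst-case frames is exactly the “30fps half the time, 15fps the other half” problem: the numbers look fine on paper, but the motion feels wrong.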
HITTING 60
Humor me and presume that 60fps is the holy grail of framerates. There are only three ways you can do this:
- Draw fewer objects — the fewer graphics you draw, the faster each frame is drawn. However, you lose visual richness.
- Be clever about HOW you draw — You might only update a few things at a time, or update different parts of the screen at different framerates; you just need a few things moving at 60fps to start to see the benefit. You can avoid “overdrawing”. You can pre-render some graphics sequences to avoid real-time compositing (see the sketch after this list). The disadvantage: the optimization techniques start to get in the way of implementing new ideas.
- Get a faster graphics engine — Oh, yeah :-)
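To illustrate the “be clever” item above, here’s a sketch of two of those tricks combined: pre-rendering a static backdrop once, then updating only the small “dirty” rectangles that actually change each frame. Again this is Python with pygame as a stand-in, with made-up sizes and a placeholder backdrop; the point is the pattern, not the particular library.

```python
import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))

# Pre-render the static backdrop once (here it's just a flat fill; in a real
# interactive this would be your expensive scenery pass).
background = pygame.Surface((640, 480))
background.fill((20, 40, 80))
screen.blit(background, (0, 0))
pygame.display.flip()

sprite = pygame.Surface((32, 32))
sprite.fill((255, 200, 0))
pos = pygame.Rect(0, 224, 32, 32)
clock = pygame.time.Clock()

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    dirty = [pos.copy()]                # the area the sprite is about to vacate
    screen.blit(background, pos, pos)   # repair just that patch of backdrop
    pos.x = (pos.x + 4) % 608           # move the sprite
    screen.blit(sprite, pos)            # draw it at the new position
    dirty.append(pos.copy())

    # Push only the changed rectangles to the display instead of the whole frame.
    pygame.display.update(dirty)
    clock.tick(60)

pygame.quit()
```

The less of the screen you touch each frame, the more of that 16.7-millisecond budget is left over for the things that actually need to move.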
So let’s look at the graphics engine choices we have!
PEERING INTO THE TECHNICAL ABYSS
Compared to 10 years ago, there are a lot more game development tools available, both commercial and open source. They fall into three general categories:
- Tools that expose the raw underlying power of the hardware to programmers. For example, DirectX.
- Tools that provide a mature implementation of a useful game element. Simulating physics and gravity, for example.
- Tools that provide the framework for a typical type of game. For example, there are game toolkits that are targeted toward First Person Shooters, so developers can focus more on creating the customized game experience, as opposed to first building one from scratch.
There are a lot of tools out there. Unfortunately, they require you to be a game programmer! This, of course, is the domain of intense individuals who have learned to deconstruct their very existence into constructs of optimized code to recreate entire worlds in their own image. It’s pretty heady stuff. I can’t do it myself, but I’ve had occasion to peer into the abyss.
That said, there’s not really any reason we can’t apply some of that technology to our interactive work. We just need to know that the tools are not quite as forgiving as Photoshop, and that there’s a new workflow to learn.
BUT ISN’T THIS WAY TOO EXPENSIVE?
Other than the technical complexity, there’s the perception that game development is very expensive. And yes, it is. Modern commercial game development budgets regularly exceed a million dollars, and require dozens of specialists—the experienced programmers, artists, animators, and producers that you’re unlikely to find on the street. The high cost of game development is sometimes cited as a reason why this would never work for something like a museum interactive, which can have much more modest budgets. However, a great deal of the development cost comes from creating content for hours of lavish gameplay. That’s expensive to produce, and it’s why a lot of artwork creation is going overseas. A second expense is all the play balancing, refinement, and testing required to make sure the game is a commercial success. A museum interactive, by comparison, needs enough content to fill maybe 5-10 minutes of interaction, is “successful” if it integrates with a larger supporting educational experience, and doesn’t cause a maintenance problem. With the lower content requirements and less brutal fitness metrics, cost can be contained within a reasonable (that is, doable) scope. Theoretically, anyway :-)
There’s a second advantage: we’re designing for a specific computer. Developing for the mass market requires a defensive programming mentality to ensure that the code runs on every supported machine. There are thousands of small-but-deadly variations between video cards, motherboard bugs, installed software, and operating system versions. This is where the Flash Player shines: the hard engineering work of making sure the player runs pretty much the same on all supported computing platforms has already been done by Macromedia’s engineers. If we were developing a product for use by individuals, this would be very important. However, for a museum interactive, we get to specify the hardware ourselves and control what’s installed on it. Our interactive will run on a machine of our designated speed, with a particular video card and our own code libraries. This is absolutely controllable, if you know what you’re doing.
THE WORKFLOW CHALLENGE
What is a challenge is acquiring the know-how to work with game development technologies. There is no click-and-play application that you can use to create an awesome cutting-edge game experience, though I’ve been very intrigued by Unity as a potential solution.
The great advantages that visual authoring tools like Flash and Director have over “traditional” game development are:
- The common workflow — Flash is in broad use, and there are a lot of books and tutorials available. That’s less the case for game development tools, though it’s getting a lot better because this is a popular subject.
- The refined user interface for integrating artwork and code — You can do just about everything you need with Flash, Photoshop, Illustrator, and a decent sound editor like Sound Forge. Not so with game engines…you’ll be learning how to model and rig elaborately-textured 3D models and export them to esoteric file formats you’ve never heard of, using utilities that work maybe 80% of the time. You’ll have to learn all the messy details of how the graphics engine really works and bend it to your will. Otherwise nothing will work at all, and you’ll be left wondering why.
Game development is kind of like building a race car that you’re learning to drive at the same time. When it works, it’s fast and exhilarating. When it doesn’t, nothing works at all. However, you’re building a specialized machine that will do things that just aren’t possible with general-purpose authoring tools.
THE HOLY GRAIL
This post seems to have become an introduction to interactive technologies, so I should bring it home and explain why all this is important to me.
- 60 Frames Per Second. Beautiful, compelling, living motion. If you want to attract kids, you need to make something pretty compelling. Some of this you can just do with bright colors (kids are suckers for that), but holding their interest requires animation talent and enough graphics horsepower to move a lot of objects on the screen.
- Advanced Graphics. I’m a big fan of Flash, and I also know how to code up a rock-solid Director kiosk application. Video game technologies, however, are a quantum leap in visualization capability…if I can get them to work right. I’m tired of seeing ratty 2D graphics, amateurishly hacked out of a low-quality JPEG file, lurching across the screen at 7 frames per second in a freakish parody of “fun”. I believe that things can be much better. It’s time to push harder on this.
- Parity of Experience with Other Media. Kids can tell the difference between a gimpy 12 fps Flash animation and a professionally-produced game running on their PlayStation 3. They can rent 20 million dollars’ worth of production for 3 bucks at the local Blockbuster. As small interactive producers, we can’t possibly compete with those massive budgets and teams, but we can apply some of the same techniques with less money. We have an advantage in having a controlled space in the museum, and thus we can employ environmental graphics and attraction design techniques to our advantage.
- Magic. It really doesn’t matter what kind of tools you’re using; what counts is your ability to wield your presentation techniques with graceful surety. This is showmanship. However, there is something to be said for having that awesome explosion effect, that bone-crushing sound system, and novel visual ideas that have never been seen before. This kind of spectacle often requires some investment in serious hardware and software.
For the next few months, I’ll be looking at ways of trying to achieve this higher level of interaction. I’ve started to look at open source game engines and some commercial products, re-immersing myself into technologies that I haven’t looked at in a long time. If there are people out there with the interest and the skills, I’d like to hear from you to see how we might combine forces and do some cross-education. Leave a comment!
NOTE: The views expressed in this post are mine, not necessarily that of the company I’m working with! :-)
4 Comments
Another possible way of looking at this issue is optimizing the quality of the actual gameplay.
What is it about Lemonade Stand, Circus Atari, and Tetris that made them so engaging? 1 solid idea and a great algorithm.
I’d take a look at Civilization – which seems to mirror what you are trying to accomplish. Then take a look at what the museum is trying to communicate.
From mucking around with games recently in my own attempts to educate, I’m finding realism in the actual gameplay is more important than realism in the drawings and animations. Just make the animation “good enough” so that the kids aren’t distracted by the quality.
Can’t wait to hear what you do with this project….
Great post, Dave. I have one question and two comments.
Question:
It seems like you want your work to be the best it can be. How do you measure the success of this particular piece? (I am assuming these goals are: it is both fun and at the same time conveys important meaning.)
I think that we often try to design an interactive experience or game to be ‘fun’, however the ‘fun’ is a measure of successful engagement and interaction of the participant with the virtual. The hardest part about this is that it is completely subjective. They deem what part of their experience is fun.
Comments:
There are a few ways you can circumvent the constraint of ye old keyboard while still using Flash as an authoring tool. For advanced custom alternative inputs try adding a TELEO block to your project (from making things http://www.makingthings.com/teleo/) and for another simpler solution try the IPAC (http://www.ultimarc.com/ipac1.html). I have tried both and they work wonders for developing interesting ways of interacting with the computer.
I would like to add that I believe an installation can succeed in engaging its participants to the point where they are completely engrossed by mixing the physicality of the alternative input with the virtual; and thus create a whole and complete interactive metaphor. However it needs to be seamless i.e. the input method needs to be transparent; it should not remain the focus of the participant and distract from the content of the exhibit. If we want to use an alternative input device, then the interactive installation or exhibit should be considered a communication vessel, as a whole in its entirety, for the rhetoric or idea being conveyed.
Secondly, I agree with Wendy that gameplay is important, but it is such a broad term, encompassing so many aspects of what makes a successful interactive experience. This becomes even broader and more complex when you consider adding an alternative means of input. But get all these aspects right and damn it works well! Take a look at the Nintendo Wii as an example.
Best of luck and looking forward to seeing how things progress
Wendy: Yes, that’s an excellent point to raise! Gameplay is a given, and optimization arises through playtesting and refinement. One needs to plan for this kind of iteration throughout the development cycle, because if you wait until the end, you’re screwed because you’re out of time.
In my opinion, there is a kind of “trueness” between the gameplay and the presentation technology that arises as an interactive comes together. I guess one could call this “engagement”, which is the mental state where one forgets that they’re looking at pixels on a screen and the game is actually occurring IN THE MIND. The presentation technology is one of those important design constraints that affects the representation and interaction of a game, though the core game concepts can be relatively unchanged…they boil down to concepts like counting, balancing, surrounding, building, etc…then the backstory gives motivation and shapes expectation of the rules.
I like how you phrased it as “realism in the gameplay”…there’s a kind of logic that needs to be present and reinforced for the gameplay to hold true, and there have to be enough mental variables to juggle within that logic to be interesting. But that’s a whole other post! :-)
Stu asks: How do you measure the success of this particular piece? It’s very subjective!
Ultimately it comes down to watching how people react. You develop the interactive the best you can until you can test with the target audience. Then you watch their faces. Smiles? Looks of absorption? A reluctance to leave? You’ve got a winner. Or it’s back to the drawing board. The first 10 seconds are the most critical in hooking people in a museum interactive; you can extend this time with some prodding by an adult or teacher, but that’s not an ideal situation. The next metric is how long it takes to successfully complete an entire “interaction cycle”, which I think of as the “observe, interact, assess results” steps. There are usually many such cycles in a given game or interactive, and a good one will overlap them. If you can get at least 2 or 3 in, then you have the basis for something more engaging than a single one. As you build more of these successful interaction cycles, you’re able to continually draw the player’s attention into an extended period of play time.
On a side note, I’m not such a big fan of using the word “fun” to describe what it is we’re trying to do. It’s such a loaded and ambiguous word. I would break this down into at least three stages: “hold interest”, followed by “enabling active exploration and inquiry”, followed by “engagement in the life of the world.” The first is about visual design and sequencing of the presentation. The second is about providing the means to affect the game world based on the understanding gained from the first stage. Finally, engagement comes from the combination of free action, natural inquisitiveness, and getting answers to your questions in a way that builds your knowledge about the game world.
Thanks for your other comments, Stu…great! I was looking for one of those keyboard adapter blocks (I didn’t know what they were called, but had worked with one on another museum project). I also agree in general with the comment about transparent interfaces, though I might also say that transparency here means “not getting in the way”…that may actually mean that the interface IS a challenge: the computer interactive portion is really just part of a physical puzzle then.