Combining Opportunities for Game-Making Goals (GHD030)

Wednesday was a day of combining work goals with personal goals. The personal goal in this case is Make a Video Game, and the work goal is an NSF grant programming gig that extends prior work that happens to have a lot of game-like technology. As I reviewed the new grant information, it occurred to me how much system work would be involved, and how that system work was directly applicable to certain types of multiplayer games. And thus, I have decided to classify this new work as GHDR goal-relevant work.

The Work

I’ve mentioned this project before, but to recap: I’m starting a new chunk of work for an educational research project called “Promoting Learning through Annotation of Embodiment (PLAE)”. This is an extension of the work we’ve done on “Science through Technology Enhanced Play (STEP)”, a kind of augmented reality learning environment that pulls motion-tracking camera data into our web application, allowing really young elementary school students to engage with science concepts under the guidance of their teachers. The new project pushes into new research ground, adding features to help students further reflect on their activities in the augmented reality environment; even with the simplified models being used, it’s quite a challenge for the kids (or anyone, really) to notice everything that is going on in the group and on the screen. The enhancements, described as “annotations” in the grant application, are intended to make it easier to notice what happened through the combination of (1) event monitors and (2) playback. The research itself will, among other things, compare two different approaches to using this annotation system in classrooms over the next three years.

I’m the main system programmer on this project, having developed a web browser-based app that interfaces with the Open PTrack motion tracking system. It’s a bit of an unusual gig for me since I’m not the principal designer, which is good because development in the academic world is less about delivering targeted solutions than it is about furthering the research process so certain QUESTIONS can be posed via the software and DATA can be analyzed. It took me a while to really grasp how to manage my own expectations with this model, but I think we’ve found a rhythm. I’m largely responsible for developing the best system I can for expressing the ideas the research team wants to pursue, and then someone else builds the actual tool to fit the evolving curriculum and researcher needs. I’ve discovered that I really like systems and tool programming (in hindsight, it seems like a no-brainer given my other design work), especially now that I have allowed myself to pursue excellence in the system design rather than trying to hurry up and do “coverage” of requested features.

Relation to Game Making

The research system is essentially a multi-client Javascript/HTML5 web application served on a LAN by a NodeJS server. The experience itself takes place in a classroom with 6-12 students participating at a time, anchored by a giant screen at the front that overlays our graphics on top of a webcam feed. It uses the WebGL graphics features of modern browsers to do the drawing, receiving player inputs remotely from the motion tracking system and updating the screen in response. Hence, it is “augmenting reality” by doing graphics overlay, and the curriculum design allows students to directly “embody” objects on the screen to act out what is happening. The focus of the research is on science education, so the kids get to be “inside the science simulation” and see what happens under the guidance of a teacher who helps them understand what is going on.
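
To give a feel for the plumbing, here’s a miniature sketch of that architecture. This is hand-wavy and not the actual STEP/PLAE code; I’m assuming an Express static server plus a WebSocket relay (the express and ws packages), and the real system’s connection to Open PTrack differs in the details.

```javascript
// A hand-wavy sketch only, NOT the actual STEP/PLAE code. It assumes the
// 'express' and 'ws' npm packages; the real transport from Open PTrack
// differs in the details.
const express = require('express');
const http = require('http');
const WebSocket = require('ws');

const app = express();
app.use(express.static('public')); // serve the HTML5/WebGL client over the LAN

const server = http.createServer(app);
const wss = new WebSocket.Server({ server });

// Relay each incoming message (e.g. a tracking frame) to every other
// connected client: the big classroom display and the student iPads alike.
wss.on('connection', (socket) => {
  socket.on('message', (data) => {
    for (const client of wss.clients) {
      if (client !== socket && client.readyState === WebSocket.OPEN) {
        client.send(data);
      }
    }
  });
});

server.listen(3000, () => console.log('classroom server on http://localhost:3000'));
```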

From a game development perspective, the system is a simple game engine that provides a “live” environment. The PLAE expansion adds the notion of “annotations” that will help students notice events they may have overlooked in all the excitement of running around the motion capture stage with their friends. One way I’ve been thinking about it is as a way of “instrumenting” objects in the system to (1) monitor the value of a property, (2) perform an action when the value meets a certain condition, and (3) emit an “event” message when it does. For example, a kid might decide she wants to know when a particular object among many exceeds a certain speed, turning that object a different color so she can see when it happens. Each kid with an iPad can create a variety of such “annotations” representing their own inquiry, and the teacher can then choose to show a particular student’s set of annotations on the screen where everyone can see them. That’s the basic idea.
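
In code terms, the pattern I keep sketching looks something like this. It’s purely illustrative, not the PLAE implementation, and every name in it (Annotation, ball, speed, color) is made up to show the monitor/condition/action/event idea:

```javascript
// Purely illustrative, not the PLAE implementation; every name here is made up.
const EventEmitter = require('events');

class Annotation extends EventEmitter {
  constructor({ target, property, test, action }) {
    super();
    this.target = target;     // object being instrumented
    this.property = property; // (1) the property to monitor, e.g. 'speed'
    this.test = test;         // (2) the condition on the value
    this.action = action;     // the action to run when the condition is met
  }

  update() {
    const value = this.target[this.property];
    if (this.test(value)) {
      this.action(this.target, value);
      // (3) emit an "event" message so a log or playback view can pick it up
      this.emit('triggered', { property: this.property, value, time: Date.now() });
    }
  }
}

// A student's annotation: turn a particular ball red when it exceeds a speed.
const ball = { speed: 0, color: 'white' };
const tooFast = new Annotation({
  target: ball,
  property: 'speed',
  test: (v) => v > 5,
  action: (obj) => { obj.color = 'red'; },
});
tooFast.on('triggered', (e) => console.log('speed event:', e));

// Called once per frame from the main loop:
ball.speed = 7;
tooFast.update(); // ball.color is now 'red' and a 'triggered' event is logged
```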

As I was translating the annotation concept into game system concepts, it occurred to me that the annotation system is not unlike a starship bridge simulation, with the teacher in the role of the captain and the kids serving in various capacities, monitoring the ship’s systems. It’s the same underlying architecture, so developing it will give me an awesome two-for-one bonus. My personal design goal is to make implementing this system as clear and easy as possible, so a variety of different activity types can be supported. It should be fun, and it will totally count toward game making. This is really important because I anticipate this project draining a lot of my time over the next 12 months, and I am concerned about having the energy to attend to daily work on my major goals. We’ll see, though.
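
Continuing the illustrative sketch from above, the bridge-simulation reading is just a matter of pointing the same hypothetical Annotation at “ship systems” instead of classroom objects:

```javascript
// Reusing the hypothetical Annotation class from the sketch above; all of
// these names are made up for illustration.
const reactor = { temperature: 300, alarm: false };

const engineeringStation = new Annotation({
  target: reactor,
  property: 'temperature',
  test: (t) => t > 900,
  action: (sys) => { sys.alarm = true; },
});

// The "captain" (teacher) display could subscribe to every station's events
// and decide which ones get surfaced to the whole crew (class).
engineeringStation.on('triggered', (e) => console.log('ENGINEERING alert:', e));
```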

Other Game-related Activities

I also spent some time today distilling a set of notes I’ve been maintaining with an online friend regarding the design of a “social first” MMORPG. I have not been too pleased with the lack of progress in how social elements are modeled and represented in online games, probably because it’s a hard problem that requires a strong multidisciplinary approach to the design, and it’s not as appealing to the mainstream as advanced graphics effects, environments, and animation. However, the social modeling challenge lends itself very well to the kind of programming I’m finding I like, and it does not require the same huge investment in time. All you need is the ability to model relationships between social elements, and a really fantastic user interface that excels at telling a story. Yeah, I know…easier said than done, but it’s something that I think is small enough in scale to handle with a small team.
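
To make “model relationships between social elements” a little more concrete, here’s a toy sketch of the kind of structure I have in mind. None of it comes from our actual notes; it just shows directed relationships carrying a few numeric qualities, with a describe() helper standing in for the storytelling interface:

```javascript
// A toy illustration, not anything from the actual design notes; the qualities
// (trust, familiarity) and the describe() helper are placeholders.
class SocialGraph {
  constructor() {
    this.relations = new Map(); // "Aria->Bix" -> { trust, familiarity }
  }

  key(a, b) {
    return `${a}->${b}`;
  }

  // Merge new qualities into the directed relationship from a to b.
  relate(a, b, changes) {
    const current = this.relations.get(this.key(a, b)) || { trust: 0, familiarity: 0 };
    this.relations.set(this.key(a, b), { ...current, ...changes });
  }

  // A stand-in for the "interface that tells a story" part.
  describe(a, b) {
    const r = this.relations.get(this.key(a, b));
    if (!r) return `${a} doesn't know ${b}.`;
    if (r.trust > 5) return `${a} trusts ${b}.`;
    if (r.familiarity > 0) return `${a} has met ${b}.`;
    return `${a} is wary of ${b}.`;
  }
}

const world = new SocialGraph();
world.relate('Aria', 'Bix', { familiarity: 1 });
world.relate('Aria', 'Bix', { trust: 7 });
console.log(world.describe('Aria', 'Bix')); // "Aria trusts Bix."
```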

Stuff Learned and GHDR Points Earned

I considered today’s activity related to my Make a Game goal. Again, no direct results were logged today, but the supporting actions were:

PTS  DESCRIPTION
  5  I helped myself by using a past result/discovery.
  2  I made a model of the annotation system, which clarified decision making.
  2  Posted here on the blog.

That’s 9 points. It’s not a lot, and the low number doesn’t seem to reward the strategic insight of the day! But perhaps that’s OK; even the best ideas are worthless if they do not deliver tangible results, in my opinion. The payoff for today’s strategic decision is increased tangible results in the days to come. Let that be my position on this!


About this Article Series

For my 2016 Groundhog Day Resolutions, I’m challenging myself to make something goal-related every day from February 2nd through December 12th. All the related posts (and more!) are gathered on the Challenge Page.
