An E3 Of User Interfaces



Normally, in any technology-driven business, widespread industrial shifts tend to be brought about by two things. The first is balance — when an organization conceives an idea that gracefully walks the line between the requirements of both producer and consumer.


All it takes is a single breakout hit for people to sit up and take notice, and start scribbling notes based on their observations of why it was successful. Following this, these notes take the shape of a product or service that hopes to improve upon the originator of that particular idea.


The second is timing. Technology and entertainment are expensive businesses to run, and while their success rides entirely on consumer satisfaction, it is also heavily dependent on culture and mindset. If the audience isn’t ready for something, they won’t bite no matter how profound the idea.


This is why every games industry event inevitably has its own “theme” that people associate with it. Something that reflects what the industry as a whole — consumer, middleman and producer — is thinking about at that point in time, and whether or not we’re ready for it. For instance, GDC 2010’s theme was that of affordable web-based games, which is a sector people have been actively looking into for the past year, as part of an effort to reach out to a broader audience.


In line with this movement toward audience expansion, E3 2010’s theme appears to be that of user interfaces and streamlining the way you “talk” to your machines.


A Window of Opportunity



The idea for this article first came to me two weeks ago, when I upgraded to a new laptop that came with Windows 7. While I certainly don’t dislike 7, ironically, the first thing I did upon opening my new laptop was turn off the “Aero” interface — Windows Vista and 7’s defining visual trait — so I could prevent the operating system from eating into my RAM. It made me wonder why the most striking feature of Microsoft’s new OS came at the cost of performance.


Beyond the performance loss, there was the matter of Windows 7’s UI. Within about 24 hours of using 7, I was ready to dislike the UI immensely. Where was my “recent documents” menu under Start? Why couldn’t I hit F4 and see a drop-down list of all the important places on my hard disk? Why did my Windows Live Messenger refuse to minimize to the system tray unless I ran it in compatibility mode?


A friend informed me I could turn most of those options on if I wanted to. Once I’d done that, it made navigation that much easier. But then, I was left wondering why, if Microsoft finally had a chance to make me learn my way around a new OS, they had made the process so unintuitive. Perhaps all of this would have been easier if I’d had a way to get my laptop to understand what I wanted without using a keyboard and mouse.


Wave to the Future



The most prominent presence in this regard at E3 is, of course, Microsoft’s upcoming Project Natal — or “Kinect” if you prefer. Contrary to what some believe, it remains to be seen if Kinect is the next big thing in the marathon for “greater immersion.” While there’s been no end to the discussion surrounding whether or not Kinect sports the accuracy and functionality required to be compatible with both traditional games and the wave of new experimental software Microsoft hope to inspire, we’ve also got to ask ourselves if that’s really the important question here.


Kinect’s motion and voice-recognition applications go well beyond gaming. The device is meant to be able to “understand” what you want from your body language and the tone of your voice. When it comes down to it, its ultimate goal is to give you a greater degree of control over your Xbox with less effort, not make you leap off your couch and kick imaginary footballs at the screen. If anything, it’s the first step toward exerting even less effort than pushing a sequence of buttons. And this means its uses for, say, the disabled, are vast. Or for companies giving presentations. Or for medical purposes. And eventually, maybe even for a more affordable form of motion-capture. In this particular case, the possibilities really are endless.


Finger on the Pulse


As for gaming itself, while Kinect can certainly make communication and browsing through menus easier — and maybe even come to the timely aid of eroge publishers — there’s another device that’s better suited to facilitate this “level of immersion” everyone’s so keen to achieve. That device is the Wii Vitality Sensor. The Vitality Sensor, which Nintendo have said can read and interpret signals from the human body, is interesting because it works both ways when placed between user and machine. Games that use it smartly — not that one should expect to see too many of these — could either tune themselves to suit their user, or manipulate the user instead.


The thought of a game that can read your reactions and respond to them is a little scary at first, but when you start thinking about the long-term effect it could potentially have on narrative and characterization in games, things start to get interesting. What if Persona could tell if you were unwell? What if a character in a game like Heavy Rain could tell if you were lying?



Moving into 3D Space


And then, there’s the concept of “space.” Ever since games went 3D, there have been very few products that have actually managed to make good use of 3D space along all axes. Yes, you can aim and shoot in any direction you want, but we’ve been able to do that since the days of the original Heretic. Games that really use 3D space — the distance between you and your opponent, the entire volume of an area that you can occupy and manoeuvre through — are few and far between. An old title that comes to mind in this regard is Outwars, developed by SingleTrac and published by Microsoft, with its relatively early jetpack mechanic. Another, of course, is Boom Blox.


At E3, hopefully, this space will be dominated largely by Sony’s Move. Sure, the Move is going to make certain PlayStation 3 games more accessible just like the Wii remote did, and that’s great. But the long-term potential of the device lies in being able to navigate through stereoscopic 3D space, which is something Sony plan to push aggressively starting this year. While 3D patches for wipEout HD and Super Stardust HD are great, the concept of “space” is something that could very well be unique to the PS3.



And the lean toward improving interfaces doesn’t stop there. Even some of the more high-profile software at E3 this year is very interesting in how it approaches UI. Take Metroid: Other M, for instance, where you switch between first and third-person views depending on how you hold the Wii remote. Even Zelda, which everyone’s awaiting with bated breath, is designed entirely around Wii MotionPlus, which could allow for a complete UI overhaul that improves upon Twilight Princess’s disaster of an interface.


New input devices; different ways for games to read your body; stereoscopic 3D; even portable 3D — while they might be competing products from different companies at present, in the long term, they’re all part of a larger push to streamline communication between users and their machines, eventually allowing the two to understand each other better. This E3 is more about researching that space than anything else. After all, isn’t that what “immersion” eventually comes down to?


Siliconera would like to apologize for all the terrible puns used throughout the course of this article.

Ishaan Sahdev
Ishaan specializes in game design/sales analysis. He's the former managing editor of Siliconera and wrote the book "The Legend of Zelda - A Complete Development History". He also used to moonlight as a professional manga editor. These days, his day job has nothing to do with games, but the two inform each other nonetheless.