It’s alive! Playing within and with your living room…

Room Alive!


Microsoft presents a new proof of concept & prototype, entitled “RoomAlive”. It shows how to turn your home into an augmented reality video game, using Kinects for real-time 3D scanning and projectors to cast augmented interactive content onto the walls and furniture! Microsoft describes it:

[…] Our system enables new interactive projection mapping experiences that dynamically adapts content to any room. Users can touch, shoot, stomp, dodge and steer projected content that seamlessly co-exists with their existing physical environment. […]

Their system consists of projectors with connected depth cameras (six projector-camera pairs in the demo). It can automatically calibrate itself and scan the room for a 3D reconstruction. As long as the cameras have some overlap (~10%), the system works with multiple devices without user interaction, i.e. it can auto-localize each camera and create a 3D representation directly. It automatically detects walls (mostly vertical) and floors (mostly horizontal) and uses them for the application logic (e.g. to only spawn creatures on the floor, not on the walls).
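As a rough illustration of that idea, here is a minimal Unity-style C# sketch of classifying scanned patches by their surface normal and spawning creatures only on the roughly horizontal ones. This is not RoomAlive’s actual code; the SurfacePatch type, the threshold and the creature prefab are assumptions for the example.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical sketch: classify scanned surface patches by their normals and
// spawn creatures only on (roughly horizontal) floor patches. SurfacePatch and
// creaturePrefab are illustrative stand-ins, not part of the RoomAlive API.
public class FloorSpawner : MonoBehaviour
{
    public GameObject creaturePrefab;          // assumed prefab for a creature
    public float floorNormalThreshold = 0.9f;  // dot(normal, up) above this => floor

    // Minimal stand-in for a reconstructed surface patch from the room scan.
    public struct SurfacePatch
    {
        public Vector3 center;
        public Vector3 normal;
    }

    public void SpawnOnFloors(IEnumerable<SurfacePatch> patches)
    {
        foreach (var patch in patches)
        {
            // "Mostly horizontal" patches count as floor, "mostly vertical" as walls.
            float upAlignment = Vector3.Dot(patch.normal.normalized, Vector3.up);
            if (upAlignment > floorNormalThreshold)
            {
                Instantiate(creaturePrefab, patch.center, Quaternion.identity);
            }
        }
    }
}
```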
The user can interact with his/her body (using the Kinect’s real-time skeletal tracking) or with a connected infrared gun to play different demo scenarios:

Whack-A-Mole (a demo showing a combination of input methods), Robot Attack (a demo using the scanned surface as a playground), Traps (a demo showing view-dependent rendering per user)… But it’s best to check it out directly:

For rendering, Unity3D was used, so any Unity3D model can easily appear within the AR space. With a C# scripting interface, the game/content developer can write the needed logic and access a RoomAlive API to react to certain events (floor hit, user touched…).
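To give an idea of what such scripting could look like, here is a small hypothetical C# sketch. The event names (FloorHit, SurfaceTouched) are illustrative stand-ins for the kind of hooks described above, not the actual RoomAlive API.

```csharp
using UnityEngine;

// Illustrative sketch only: the events below are hypothetical stand-ins for the
// kind of RoomAlive scripting hooks mentioned in the post (floor hit, user touch).
public class RoomGameLogic : MonoBehaviour
{
    // Hypothetical event sources that a RoomAlive-style runtime would raise.
    public static event System.Action<Vector3> FloorHit;        // shot/impact point on the floor
    public static event System.Action<Vector3> SurfaceTouched;  // user touched a surface

    void OnEnable()
    {
        FloorHit += HandleFloorHit;
        SurfaceTouched += HandleSurfaceTouched;
    }

    void OnDisable()
    {
        FloorHit -= HandleFloorHit;
        SurfaceTouched -= HandleSurfaceTouched;
    }

    void HandleFloorHit(Vector3 worldPosition)
    {
        // Game logic would go here, e.g. spawn an impact effect at worldPosition.
        Debug.Log($"Floor hit at {worldPosition}");
    }

    void HandleSurfaceTouched(Vector3 worldPosition)
    {
        // Game logic would go here, e.g. make a creature react to the touch.
        Debug.Log($"User touched surface at {worldPosition}");
    }
}
```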

The key challenge for content creators is surely the unknown space: a game must work in any setup and react to it without prior knowledge. This is discussed in their paper (see below), along with thoughts on an appropriate input device for certain tasks. An interesting and challenging new “non-constraint” thought arises: in traditional VR games, the user’s actions and movements can be constrained by the game’s logic. With the real person moving directly within the game’s space, this is not possible. The illusion would break, since no force feedback/limitations are possible! The application needs to deal with this, and Microsoft discusses it as well.

To improve the immersion and the positioning of objects in free space, they write, the user’s head position can also be tracked. This data can then be used to render a view-dependent version of the content. Obviously this breaks multi-user scenarios, but it yields a better augmentation (though RoomAlive offers some averaging of multiple user positions that is said to work convincingly well).
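As a sketch of that averaging idea (purely illustrative, with an assumed camera and list of tracked head positions, not taken from RoomAlive’s code), a compromise viewpoint could be computed like this:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch of the averaging idea described above: when several users are tracked,
// use the mean of their head positions as the single viewpoint for
// view-dependent rendering. Purely illustrative, not the RoomAlive implementation.
public class ViewpointAverager : MonoBehaviour
{
    public Camera viewDependentCamera;  // assumed camera driving the projection mapping

    public void UpdateViewpoint(IReadOnlyList<Vector3> trackedHeadPositions)
    {
        if (trackedHeadPositions == null || trackedHeadPositions.Count == 0)
            return;

        // Average all tracked head positions into one compromise viewpoint.
        Vector3 sum = Vector3.zero;
        foreach (var head in trackedHeadPositions)
            sum += head;
        Vector3 averageHead = sum / trackedHeadPositions.Count;

        // A single user gets an exact view; multiple users get a compromise that,
        // per the post, still looks convincingly good in practice.
        viewDependentCamera.transform.position = averageHead;
    }
}
```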

If you want to learn more about it, check out the project’s page from Microsoft Research or read the ACM paper (8 pages) on it. I like the demo very much, and it triggers some interesting thoughts for game designers: on spatial awareness, on the impossibility of blocking the user’s physical movements (without huge and annoying hardware setups), and on how to consider this during development & design. Great show!

Next conferences

To complete today’s posting, I wanted to mention the next upcoming AR shows! Be sure to check those out, too! First, it’s metaio’s insideAR (Munich Chapter), which will take place on October 29th and 30th! Be sure to register and check the program on their page.


Afterwards, be sure to jump over to London on November 17th! You can book the Augmented Planet event now on this page from eventbrite!

Have a great day!
