Lead Developer Alex Silkin Gives His Thoughts on Fully Embodied VR Game Development

From the very beginning we set out to make games that let a person actually look with their head, walk with their feet, and interact with their hands, just like in real life. In a modern first-person game, you use a mouse to look around and shoot and a keyboard to move – so with Project Holodeck we had to throw conventional wisdom out the window. That created some big design challenges, because we could not simply do what every other game out there does.

[Photo: Alex and Janice in the Holodeck at GLIMPSE Showcase 2013]

For instance, we could not just have a menu that you click on with a mouse. Instead, we built sets of spatially placed buttons that players physically reach out and press, spread across multiple in-world game menus (or “Ready Rooms”). HUD elements had to be kept to a minimum, with the rare exception of contextually activated reticles. In Wild Skies, for example, the ship’s health is displayed on a small floating orb that changes color – but the ship itself also has rings and other color indicators built onto it, so that health information is presented as diegetically as possible.
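To make that concrete, here is a minimal Unity-style sketch of a color-driven health indicator. It simply maps the ship’s health fraction to a color and pushes it onto the orb and the hull rings; the component and field names are illustrative, not taken from the actual Wild Skies code.

```csharp
using UnityEngine;

// Illustrative sketch only: drives a floating orb's color (and the hull rings)
// from the ship's current health, so the information lives in the world
// instead of on a HUD overlay. Names and values are hypothetical.
public class DiegeticHealthIndicator : MonoBehaviour
{
    public Renderer orbRenderer;   // the small floating orb near the helm
    public Renderer[] hullRings;   // rings built onto the ship itself
    public float maxHealth = 100f;
    public float currentHealth = 100f;

    void Update()
    {
        // 1 = full health (green), 0 = destroyed (red)
        float t = Mathf.Clamp01(currentHealth / maxHealth);
        Color c = Color.Lerp(Color.red, Color.green, t);

        orbRenderer.material.color = c;
        foreach (Renderer ring in hullRings)
            ring.material.color = c;
    }
}
```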

For story, we could not take control of the player’s camera away, even momentarily, to craft cinematic moments. Instead we chose to use in-game screens and radios to direct the player’s attention. It’s a lot of the same environmental storytelling we learned from Half-Life 2, but crafted specifically for fully embodied VR.

Most people do not understand how difficult it is to make networked games, especially when you are already dealing with things like flying ships, VR hardware, and all that. It was a massive challenge to run three virtual worlds simultaneously across the server and clients, having two people observe and interact with each other’s in-game avatars in real-time, and allow them to move about in a shared space and pick up shotguns and sniper rifles with motion tracking, all while flying at 100 miles per hour, in three dimensions, being attacked by enemy AI that dispatches its own groups of boarding pirates and shoots homing cannons. I kid you not. Biggest challenge in engineering history.

[Screenshot: Wild Skies]

We run the client games on two backpack laptops worn by the players, each independently and concurrently simulating the game. This is the only way to deliver such an immersive experience at a low cost with existing technology. Because of this, the three instances of the virtual reality “universe” (the server’s and the two clients’) must be synchronized in real time to deliver a convincing shared experience.
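To give a sense of what that synchronization involves, here is a stripped-down sketch in the style of Unity’s legacy networking API (the sort of tooling a Unity project had available in 2013). It is illustrative only, not the actual Project Holodeck netcode: the instance that owns an object writes its pose into the network stream, and the other instances read it and interpolate toward it.

```csharp
using UnityEngine;

// Illustrative sketch of pose synchronization between the server and the two
// player laptops, in the style of Unity's legacy NetworkView serialization.
// Not production netcode; the smoothing rate is made up for the example.
[RequireComponent(typeof(NetworkView))]
public class SyncedPose : MonoBehaviour
{
    Vector3 targetPosition;
    Quaternion targetRotation;

    void Awake()
    {
        targetPosition = transform.position;
        targetRotation = transform.rotation;
    }

    // Called by the legacy networking layer several times per second for
    // each observed NetworkView.
    void OnSerializeNetworkView(BitStream stream, NetworkMessageInfo info)
    {
        if (stream.isWriting)
        {
            // This instance owns the object (e.g. the local player's avatar):
            // write its current pose into the stream.
            Vector3 pos = transform.position;
            Quaternion rot = transform.rotation;
            stream.Serialize(ref pos);
            stream.Serialize(ref rot);
        }
        else
        {
            // A remote instance owns it: read the latest pose and store it
            // as the interpolation target.
            Vector3 pos = Vector3.zero;
            Quaternion rot = Quaternion.identity;
            stream.Serialize(ref pos);
            stream.Serialize(ref rot);
            targetPosition = pos;
            targetRotation = rot;
        }
    }

    void Update()
    {
        // Smoothly move toward the last received pose so the other player's
        // avatar (or the ship) doesn't visibly snap between network updates.
        transform.position = Vector3.Lerp(transform.position, targetPosition, 10f * Time.deltaTime);
        transform.rotation = Quaternion.Slerp(transform.rotation, targetRotation, 10f * Time.deltaTime);
    }
}
```

The same pattern extends from player avatars and hand-held weapons to the ship itself: exactly one instance is authoritative for each object’s state, and everyone else smooths toward the latest update so motion stays continuous between packets.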

The server camera is central to our system. To the audience, it is a window into the virtual reality world that we have crafted. It’s a lot like watching an animated movie, but the actors are acting it out in real life, in real time, in the playspace. You could call it the theater of the 21st century.

Check out our server camera technique and virtual playspace in action at USC Games Demo Day 2013 below:

Editor’s Note: After 300 all-nighters working towards Demo Day 2013, Alex is now in hibernation. While he’s not that fond of blogging, journalistic writing, or even idle conversation, you are welcome to contact him at alex@projectholodeck.com. He might tolerate your email if you pose a question worth his time :)


4 Comments

  • >>having two people observe and interact with each other’s in-game avatars in real-time, and allow them to move about in a shared space and pick up shotguns and sniper rifles with motion tracking, all while flying at 100 miles per hour, in three dimensions, being attacked by enemy AI that dispatches its own groups of boarding pirates and shoots homing cannons. I kid you not. Biggest challenge in engineering history…

    Interesting challenge for sure. “Biggest challenge in engineering history”… I’d have a differing opinion on that, err, imho.

    A single Kinect today can do full-body tracking of multiple users in a room via libraries and software such as NUICapture. Add more Kinects and you have better mocap rez.

    I don’t know what game engine was used, but some of the best out there (CryEngine 3, Unreal) can handle what’s described in the article above. Again just an opinion and maybe food for thought.

    Read a few scenarios if you want to indulge in idea-seeding for immersive AR / VR gameplay with Dirrogates (Digital Surrogates) from the hard science novel: Memories with Maya – The Dirrogate (http://www.dirrogate.com)

  • Haha, the biggest challenge in engineering history is an exaggeration – but it was a crazy challenge. You’re on the right track regarding the Kinect, and we had that exact same thought process when we started Project Holodeck twelve months ago. One Kinect could track multiple users in a playspace, and although it was jittery, we thought that if we combined four of them we could get higher-resolution data.

    Unfortunately it doesn’t quite shake out that way. The Kinect is a great gestural interface, but it falls short when you need high-precision data points, such as the exact positions of your hands. If you look at the skeleton the Kinect generates, there are data points jumping all over the place. It can detect that you are swinging your arm, but it’s not going to tell you exactly where your hand is in three-dimensional space.

    Combining four Kinects in a playspace did not necessarily make the data more accurate – it simply combined four sets of inaccurate data. We did get it to work somewhat with smoothing and averaging algorithms, as you can see here:
    http://www.youtube.com/watch?v=0lpLtqETXm0&list=UUFSMNMaO4suTRZiMuq7FwsA
    http://www.youtube.com/watch?v=pih4Q3EePTk&list=UUFSMNMaO4suTRZiMuq7FwsA&index=6
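
    A toy version of that “average, then smooth” idea might look something like the sketch below. It assumes each Kinect’s joint reports have already been transformed into the shared playspace frame; the names and numbers are made up for illustration and are nothing like the production code.

    ```csharp
    using System.Collections.Generic;
    using UnityEngine;

    // Toy sketch: average the same joint as reported by several sensors,
    // then exponentially smooth the result to damp the jitter (at the cost
    // of some latency). Purely illustrative.
    public class FusedJoint
    {
        Vector3 smoothed;
        bool initialized;
        readonly float smoothing; // 0 = no smoothing, values near 1 = heavy smoothing

        public FusedJoint(float smoothing = 0.7f)
        {
            this.smoothing = smoothing;
        }

        // 'reports' holds this joint's position as seen by each sensor this frame.
        public Vector3 Fuse(List<Vector3> reports)
        {
            if (reports.Count == 0)
                return smoothed; // no sensor saw the joint; hold the last estimate

            // Naive fusion: average all reports. Averaging four noisy
            // estimates only helps a little, which is what we ran into.
            Vector3 average = Vector3.zero;
            foreach (Vector3 p in reports)
                average += p;
            average /= reports.Count;

            if (!initialized)
            {
                smoothed = average;
                initialized = true;
            }
            else
            {
                smoothed = Vector3.Lerp(average, smoothed, smoothing);
            }
            return smoothed;
        }
    }
    ```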

    But we finally realized that no matter how well you did on the software side, the hardware just couldn’t cut it.

    You’re absolutely right that CryEngine 3 and Unreal Engine handle networking great! But “best” is a relative term – these engines may be the best at graphics and multiplayer gaming, but they are not the best at being open and abstract enough to allow for easy hardware integration and fast prototyping. Unity turned out to be the ideal compromise here. We were able to quickly develop a framework to support multiple hardware peripherals and iterate on our gameplay at a rapid pace, without being bogged down by the bigger engines. However, Unreal is getting better, and we are already working towards integrating Project Holodeck with UDK – but it will take time!
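
    To illustrate the kind of abstraction that made this possible, here is a hypothetical sketch: gameplay code talks to a small tracker interface rather than to any specific device SDK, so peripherals can be swapped without touching game logic. The interface and class names are invented for the example.

    ```csharp
    using UnityEngine;

    // Hypothetical hardware-abstraction sketch: gameplay asks a tracker for a
    // pose and never calls a device SDK directly, so trackers can be swapped
    // out without changes to game code. Names are illustrative only.
    public interface IPoseTracker
    {
        bool IsConnected { get; }
        Vector3 Position { get; }        // in shared playspace coordinates
        Quaternion Orientation { get; }
    }

    // A desk-testing stand-in that derives a yaw from the mouse, so gameplay
    // can be iterated on without the full rig attached. Real backends would
    // implement the same interface for the actual HMD and motion trackers.
    public class MouseStandInTracker : IPoseTracker
    {
        public bool IsConnected { get { return true; } }
        public Vector3 Position { get { return Vector3.zero; } }

        public Quaternion Orientation
        {
            get
            {
                float yaw = (Input.mousePosition.x / Screen.width) * 360f;
                return Quaternion.Euler(0f, yaw, 0f);
            }
        }
    }
    ```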

    Thanks for the link, we’ll check out this book.

  • Very thoughtful and informative answer, Holodeck! Thank you. I’ve learned a bit from your answer.
    Unity is a good all-round engine, and I especially like it because it’s got a very decent renderer when it comes to Augmented Reality content.

    Also the fact that it’s so accessible for 3rd-party devs to create libraries, and yes… cost!
    The other mainstream engines, Crytek / Unreal, are killer to license.

    Though if money’s no object, keep a lookout for CryEngine’s “Cinebox”. Aimed mainly at film-making, it’s got built-in tracking for mo-cap as well as “Simulcam”.

    All the best on your projects and keep innovating!

  • P.S. You might want to have a look at this for perfecting finger tracking: http://www.youtube.com/watch?v=NqjopQmqWAE

    Kind Regards.
