VRLA Mixed Reality Easter Egg Hunt (2017)

Late last year, John Root of Virtual Reality Los Angeles approached me with a crazy idea: what if we built a giant fake forest, placed it in the middle of the Los Angeles Convention Center, and used HoloLens to allow people to hunt for virtual Easter eggs inside a mixed reality experience? I wasn’t sure exactly how I’d go about creating it, but based on my time building and demoing Ether Wars at VRLA’s previous event, I knew it was possible and people would love it. I was all in.


The project got a late start–a mere matter of weeks before the show. Regardless, everything came together at the right time. Microsoft came on as a partner, donating all the HoloLens devices for the event and bringing in their own developers to help with technical issues, project management, and the logistics of running a large public HoloLens experience.

 

LESSONS LEARNED

There haven’t been many projects of this kind, and very few people have experience designing real-world mixed reality installations. I figured I’d give a brain dump of what I learned, which might be useful for developers attempting similar feats.

DESIGNING FOR REALITY

Before anything could be built, we had to work out the math behind the user flow of the whole experience. People who design amusement park rides, such as Disney Imagineers, are very familiar with this process. We had to figure out how many people we could fit in the exhibit’s space and how long it would take to get users in and out of the experience. From there we determined how many people could move through the Easter Egg hunt per day, how much staff would be needed to run it, and how many HoloLens devices we’d need in total. We also had to design the set to accommodate potentially large lines that wouldn’t descend into chaos and mayhem by the middle of the day.
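To make that concrete with purely illustrative numbers: if four guests enter at a time, spend five minutes inside, and the handoff between groups takes two more, a pool of four headsets cycles about 34 people per hour, or roughly 270 over an eight-hour show day. Numbers like these drive everything else–device count, staffing, and how much queueing space the set needs to absorb.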

PHYSICAL SET DESIGN IS HARD

The VRLA Mixed Reality Easter Egg Hunt involved all the typical software development issues, plus the added challenge of building a unique physical set that worked properly with mixed reality technology.

The first step was designing the set itself. Mike Murdock of TriHelix designed the set, using a Vive to make sure it fit inside the dimensions set by the booth. This allowed him not only to judge the size, but to preview what it would be like to walk through the set before construction began.


Choosing paint colors to match Mike’s virtual set design.

It’s important to get the art director of the set and the art director of the app on the same page–primarily to make sure the colors of the real world work well against the HoloLens’ additive display while still being highly trackable. Unifying the look of the virtual and physical objects is also critical to creating a seamless experience, which means the art process needs to be organized far in advance. Nathan Fulton, my art director on the app, made sure the color swatches for the set matched his vision of the virtual objects in the experience.

Lighting is also very important. We spent a lot of time designing the placement and type of lights so the space would be as trackable as possible without having the display overpowered by the environment.

 

To build the physical set, we turned to Fonco Studios, an experienced Hollywood set and prop design company, which constructed an amazing N64-stylized forest out of literally a ton of styrofoam. The set was then sliced into chunks and transported to the LA Convention Center, where it was reassembled.

ANCHORING TO THE REAL WORLD

One of the first challenges in designing the software was determining how the virtual objects would be placed on the set. In the beginning I thought we’d simply drop spatial anchors on the physical location and then share those anchors with each HoloLens. This proved impractical for a number of reasons.

Firstly, I’ve had lots of issues sharing spatial anchors across devices: they often appear in the wrong places. Not only that, but anyone who has been to a conference will tell you Wi-Fi access is spotty, if available at all, so relying on the network to transfer spatial anchors around might not even be possible.

Plus, without the physical set to scan, we would have no way to test the final layout of eggs and other objects until we arrived at the location a day or so before the event.

For most of the app’s development, we placed objects against Mike’s 3D model of the set design in Unity3D. Then we used AfterNow’s technique of placing three spatial anchors at the corners of the set, saving them, and spawning the game level at the center of those points. This real-world alignment process is a little error-prone, but it works.
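A minimal sketch of that spawn-at-center step, assuming the three corner transforms have already been anchored to the physical set (all names here are illustrative, not the actual project code):

```csharp
using UnityEngine;

// Sketch of AfterNow-style three-point alignment. Given three anchored
// corner markers placed on the physical set, spawn the level prefab at
// their centroid, yawed to match the set's front edge.
public class SetAligner : MonoBehaviour
{
    public Transform frontLeft;   // anchored corner markers
    public Transform frontRight;
    public Transform back;
    public GameObject levelPrefab;

    public void SpawnLevel()
    {
        // The center of the set is the average of the three corner positions.
        Vector3 center = (frontLeft.position + frontRight.position + back.position) / 3f;

        // Orient the level along the physical front edge, flattened to the
        // horizontal plane so the content stays level with the floor.
        Vector3 edge = frontRight.position - frontLeft.position;
        edge.y = 0f;
        Vector3 forward = Vector3.Cross(edge, Vector3.up).normalized;
        Quaternion yaw = Quaternion.LookRotation(forward, Vector3.up);

        Instantiate(levelPrefab, center, yaw);
    }
}
```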

These anchors are saved locally on the HoloLens, so the setup process only has to be done once. Whenever players put on the headset after the app is restarted, they can jump right into the experience–no realignment necessary.
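That local persistence goes through the HoloLens anchor store. Roughly, using Unity’s 2017-era WorldAnchorStore API (the anchor IDs are made up for illustration):

```csharp
using UnityEngine;
using UnityEngine.VR.WSA;             // WorldAnchor (Unity 5.x-era namespace)
using UnityEngine.VR.WSA.Persistence; // WorldAnchorStore

// Sketch of saving/loading the corner anchors locally, so alignment only
// ever has to be done once per device.
public class AnchorPersistence : MonoBehaviour
{
    private WorldAnchorStore store;

    void Start()
    {
        // The anchor store is handed back asynchronously.
        WorldAnchorStore.GetAsync(s => store = s);
    }

    // Lock a placed corner object to the real world and persist it,
    // e.g. SaveCorner(marker, "corner-front-left").
    public void SaveCorner(GameObject corner, string id)
    {
        WorldAnchor anchor = corner.GetComponent<WorldAnchor>();
        if (anchor == null)
            anchor = corner.AddComponent<WorldAnchor>();

        store.Delete(id);       // clear any stale anchor from a prior setup
        store.Save(id, anchor);
    }

    // On later launches, re-attach the saved anchor; returns false if missing.
    public bool LoadCorner(GameObject corner, string id)
    {
        return store.Load(id, corner) != null;
    }
}
```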


The final set, assembled at the Los Angeles Convention Center

We also matched lighting as closely as possible by taking spherical photos with a RICOH THETA camera at different points inside the set and using them as cubemaps in Unity3D. These cubemaps provided convincing reflections, and they can also be used with IBL shaders to help virtual objects match the real-world environment.
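Wiring one of those captured cubemaps in as the scene-wide reflection source is a couple of lines in Unity; something like this (the asset name is illustrative):

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Use a cubemap assembled from the THETA captures as the scene's custom
// reflection source, so standard-shader materials reflect the real set.
public class SetReflections : MonoBehaviour
{
    public Cubemap setCubemap; // built from the spherical photos

    void Start()
    {
        RenderSettings.defaultReflectionMode = DefaultReflectionMode.Custom;
        RenderSettings.customReflection = setCubemap;
    }
}
```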

TO OCCLUDE OR NOT TO OCCLUDE

An added bonus of having a fixed set is that we knew the exact shape of the world. This meant we had the option of using a LIDAR scan of the environment as an occlusion mesh instead of the one HoloLens builds internally.

There are lots of advantages to this. Most importantly, the 3D scan can be optimized by reducing the polygon count and adding detail wherever the scan missed a few things. This makes occlusion much more accurate while also improving performance.
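A sketch of how a pre-scanned mesh can serve as an occluder, assuming a depth-only material (HoloToolkit shipped an occlusion shader for this; any shader that writes depth with ColorMask 0 works):

```csharp
using UnityEngine;

// Render the pre-scanned set mesh into the depth buffer only, so holograms
// behind real geometry are hidden without drawing anything visible.
public class ScanOccluder : MonoBehaviour
{
    public Material occlusionMaterial; // depth-only (ColorMask 0) material

    void Start()
    {
        // The scanned set mesh lives under this object, already aligned to
        // the physical set by the anchor step described above.
        foreach (Renderer r in GetComponentsInChildren<Renderer>())
            r.sharedMaterial = occlusionMaterial;
    }
}
```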

Mimic3D made an amazing scan of the set once it was finally assembled in the convention center. Our retopologized mesh had a dramatically lower poly count than the one HoloLens generates dynamically. This highly optimized mesh also let us do a neat “The Matrix”-style grid effect that appears to pour out over the world, driven by a simple shader.
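The pour-out motion can be driven from script by animating a radius property on that grid shader outward over time–the property name here is hypothetical:

```csharp
using UnityEngine;

// Expand a hypothetical "_GridRadius" shader property each frame so the
// wireframe grid appears to flow outward across the scanned set mesh.
public class GridReveal : MonoBehaviour
{
    public Material gridMaterial; // shader exposing _GridRadius (illustrative)
    public float speed = 2f;      // expansion speed in meters per second

    private float radius;

    void Update()
    {
        radius += speed * Time.deltaTime;
        gridMaterial.SetFloat("_GridRadius", radius);
    }
}
```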


Mimic3D’s amazing LIDAR scan

Perhaps most importantly, with over a dozen HoloLens devices to set up before the show, not having to generate a good occlusion mesh on each headset saved a lot of preparation time.

WHERE DO I LOOK?

The limited FOV of the HoloLens made it very important to guide the user’s gaze to where the action is. In some cases, our assumptions about where users would look and how they would move through the environment didn’t exactly line up with reality. One prime example is the rabbits.

In the back right corner of the forest there is a group of three rabbits that scurry away when you get close. We spent a lot of time tweaking the trigger size, position, and animation of the rabbits to make sure people had plenty of time to notice them. We even had Somatone create a positional audio effect to call attention to them. However, it was far too easy for someone to back into the trigger volume and have the bunnies escape unseen. A lot of participants reported not seeing the rabbits at all–perhaps the most important creatures in the entire experience!
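In hindsight, one fix would have been to gate the trigger on gaze as well as position, so the rabbits only bolt once the player is actually facing them. A rough sketch, assuming a tagged collider follows the HoloLens camera (thresholds are illustrative):

```csharp
using UnityEngine;

// Gaze-gated trigger: fire only when the player is inside the volume AND
// looking toward the rabbits, so nobody backs in and misses the moment.
public class RabbitTrigger : MonoBehaviour
{
    public Transform rabbits;
    public float gazeThreshold = 0.7f; // cosine of the acceptable view angle

    private bool fired;

    void OnTriggerStay(Collider other)
    {
        if (fired || !other.CompareTag("Player")) return;

        // On HoloLens, the main camera tracks the user's head.
        Transform head = Camera.main.transform;
        Vector3 toRabbits = (rabbits.position - head.position).normalized;

        // Only trigger once the rabbits are comfortably inside the view cone.
        if (Vector3.Dot(head.forward, toRabbits) > gazeThreshold)
        {
            fired = true;
            // ...play the scurry animation and positional audio cue here
        }
    }
}
```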

IN CONCLUSION

The lines were huge but everyone seemed to have a smile on their face when leaving the booth. The press loved it, too. I consider this a huge success for all parties involved and a leading example of how Mixed Reality can be used for things far more interesting than enterprise apps.

Building on what we learned at VRLA, I’m totally ready to tackle much more elaborate Mixed Reality entertainment experiences. Since the event, I’ve received a lot of interest in doing this type of project on a larger scale. Who knows what we might build next!


