November 27th, 2017

AR @ Leap Forward Part 1: Unveiling the secrets of Microsoft HoloLens

At Leap Forward, we consider staying on top of the latest technological advances to be of crucial importance if we want to continue to offer our clients the best solutions for their digital product design needs. In that light, this year we've been doing practical, in-depth research into designing and prototyping XR (Cross Reality) applications.

During one of our Fix-it-Fridays we came up with the idea to create an Augmented Reality promotional piece for each of the business units within the group (i.e. Little Miss Robot, Knight Moves, Once and District01), representing each as a single piece of "holographic" content. We had just received our very own Microsoft HoloLens devkit, and in the absence of client demand for AR, self-promotion seemed as good an avenue as any.

By that point we had just finished the first iteration of Journey, so I was no longer a complete and utter newbie (or n00b, as we like to say in the community) when it came to VR and Unity. With the help of our 3D-generalist intern Xavier Allen, we had everything we needed to get cracking.

And here's the result:


Getting started

Going from VR to AR in Unity was quite easy (and it's gotten even easier in later versions), and a lot of the concepts and techniques we'd already learned carried over. However, one major hurdle was the limited performance offered by the HoloLens compared to the oodles of power afforded by our kitted-out über-VR PC.

The HoloLens is essentially a mobile device (first announced in 2015), so the first thing you'll have to do is scale back your graphical ambitions a bit.

Field of view

The second harsh reality to deal with is the extremely limited field of view afforded by the holographic display. At only around 30°, it can leave you feeling like you're looking at the world through a pair of toilet rolls. And the limited stereoscopic convergence means objects can't get closer than about 80cm without causing eyestrain and headaches. The practical solution is to simply cull every 3D object that gets closer than 80cm (in Unity, a quick way to do this is to raise the camera's near clip plane; see the sketch below), but it does mean that, for now, you'll have to give up on your Magic Leap dreams of holding a 3D elephant in your hands.

This particular concept isn't possible on the HoloLens yet, because the stereoscopic display can't render objects closer to your face than 80cm. Source: magicleap.com
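
A minimal sketch of that culling trick, assuming a standard Unity camera setup: raising the main camera's near clip plane means anything inside the 80cm comfort zone simply isn't rendered.

```csharp
// A minimal sketch: enforce the ~80cm comfort limit by raising the
// near clip plane, so close-up geometry is culled instead of causing eyestrain.
using UnityEngine;

public class ComfortClipping : MonoBehaviour
{
    void Start()
    {
        // Nothing closer than 80cm to the user's head will be drawn.
        Camera.main.nearClipPlane = 0.8f;
    }
}
```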

But despite these flaws, there is still a lot to admire in Microsoft's first iteration: the display tech itself is quite marvellous, and during extended play sessions (I tried the Fragments demo) you do adjust to the limitations and stop noticing them. 

The spatial audio in particular works really well. In fact, the effect can be a bit magical, as you find yourself wondering how you're getting directional audio without anything covering your ears.
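
If you're curious what that takes in Unity, the answer is: very little. A minimal sketch, assuming the MS HRTF Spatializer plugin is selected in the project's audio settings:

```csharp
// A minimal sketch: configure an AudioSource for spatial sound on HoloLens.
// Assumes the MS HRTF Spatializer is enabled under Edit > Project Settings > Audio.
using UnityEngine;

[RequireComponent(typeof(AudioSource))]
public class SpatialSoundSetup : MonoBehaviour
{
    void Awake()
    {
        AudioSource source = GetComponent<AudioSource>();
        source.spatialize = true;     // route the sound through the HRTF spatializer
        source.spatialBlend = 1.0f;   // fully 3D: the object's position drives the panning
    }
}
```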

And while the air-tapping gestures work okay, the voice recognition works amazingly well and is really easy to implement. 
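
To give you an idea of just how easy: here's a minimal sketch using Unity's built-in KeywordRecognizer. The phrases themselves ("place hologram", "confirm") are just illustrative examples, not the commands from our app.

```csharp
// A minimal sketch of HoloLens voice commands via UnityEngine.Windows.Speech.
// Requires the Microphone capability in the UWP player settings.
using UnityEngine;
using UnityEngine.Windows.Speech;

public class VoiceCommands : MonoBehaviour
{
    private KeywordRecognizer recognizer;

    void Start()
    {
        recognizer = new KeywordRecognizer(new[] { "place hologram", "confirm" });
        recognizer.OnPhraseRecognized += args => Debug.Log("Heard: " + args.text);
        recognizer.Start();
    }

    void OnDestroy()
    {
        recognizer.Dispose();
    }
}
```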

The next big feature of AR is spatial awareness, i.e. the ability of the device to know exactly where it is in 3D space. The HoloLens accomplishes this with some really powerful infrared 3D scanners (essentially the same tech that powers the Kinect) mounted to the front of the device. These continuously scan whatever's in front of them, and the HoloLens OS takes the raw input from these scanners and stitches and optimises it into a more or less continuous 3D mesh (not quite in real time; it updates every couple of seconds). Finally, it hands you, the developer, a reference to this 3D scan to use in your application.
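
For the curious, here's roughly what consuming that mesh looks like. This is a minimal sketch against the 2017-era UnityEngine.XR.WSA API (older Unity versions had it under UnityEngine.VR.WSA); the bookkeeping for removed surfaces is omitted.

```csharp
// A minimal sketch: observe the spatial mapping mesh and bake each surface
// into a GameObject with a MeshFilter (for rendering) and MeshCollider (for physics).
using System;
using UnityEngine;
using UnityEngine.XR.WSA;

public class SpatialMeshLoader : MonoBehaviour
{
    private SurfaceObserver observer;

    void Start()
    {
        observer = new SurfaceObserver();
        // Observe a 10m box centred on the user.
        observer.SetVolumeAsAxisAlignedBox(Vector3.zero, Vector3.one * 10f);
        // Poll for changes every few seconds, matching how often the OS updates the scan.
        InvokeRepeating(nameof(PollSurfaces), 0f, 3f);
    }

    void PollSurfaces()
    {
        observer.Update(OnSurfaceChanged);
    }

    void OnSurfaceChanged(SurfaceId id, SurfaceChange change, Bounds bounds, DateTime updateTime)
    {
        if (change != SurfaceChange.Added && change != SurfaceChange.Updated)
            return;

        // Ask the OS to asynchronously bake this surface's mesh into a holder object.
        var holder = new GameObject("Surface-" + id.handle);
        var request = new SurfaceData(
            id,
            holder.AddComponent<MeshFilter>(),
            holder.AddComponent<WorldAnchor>(),
            holder.AddComponent<MeshCollider>(),
            300f,    // triangles per cubic metre, i.e. mesh detail
            true);   // also bake a collider so holograms can physically interact
        observer.RequestMeshAsync(request, OnMeshReady);
    }

    void OnMeshReady(SurfaceData baked, bool outputWritten, float elapsedBakeTimeSeconds)
    {
        // baked.outputMesh now holds the scanned geometry for this surface.
    }
}
```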

Using this mesh you can achieve a variety of visual effects, like occlusion (i.e. holograms being hidden behind walls or underneath tables and chairs). You can also use the provided APIs to query these meshes, which lets you find walls, flat surfaces, or even specific objects like chairs, bathtubs or couches. You do this by describing those objects as a collection of horizontal and vertical surfaces (a chair is a small vertical surface connected to a small horizontal surface, a bathtub is a larger horizontal surface surrounded by smaller vertical surfaces, and so on).
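
To make that idea concrete, here's an illustrative helper (entirely hypothetical, not the actual HoloLens query API) that does the most basic version of this: sorting a mesh's triangles into horizontal and vertical surfaces by comparing their normals to the world's up vector.

```csharp
// A hypothetical sketch of the core idea behind surface queries:
// classify mesh triangles as horizontal (floors, table tops, seats)
// or vertical (walls, chair backs) based on their facing direction.
using UnityEngine;

public static class SurfaceClassifier
{
    const float HorizontalDot = 0.94f;  // normal within ~20° of straight up/down
    const float VerticalDot = 0.15f;    // normal nearly perpendicular to up

    public static void Classify(Mesh mesh, out int horizontal, out int vertical)
    {
        horizontal = 0;
        vertical = 0;
        Vector3[] verts = mesh.vertices;
        int[] tris = mesh.triangles;

        for (int i = 0; i < tris.Length; i += 3)
        {
            // Face normal from the cross product of two triangle edges.
            Vector3 a = verts[tris[i]], b = verts[tris[i + 1]], c = verts[tris[i + 2]];
            Vector3 normal = Vector3.Cross(b - a, c - a).normalized;

            float up = Mathf.Abs(Vector3.Dot(normal, Vector3.up));
            if (up > HorizontalDot) horizontal++;
            else if (up < VerticalDot) vertical++;
        }
    }
}
```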

This understanding allows you (in theory) to have holograms interact with the real world, from a ball rolling off a table or bouncing off the walls, to characters sitting on your chair. In reality it doesn't always work out, though, mostly because of the limitations of the 3D scanning tech.
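
When it does work, though, it takes almost no code. Since the spatial mesh already has colliders baked (as in the earlier sketch), ordinary Unity physics handles the rest; this tiny example just drops a ball in front of the user and lets it bounce around the real room.

```csharp
// A minimal sketch: a plain Rigidbody sphere will roll off real tables and
// bounce off real walls, courtesy of the baked spatial mapping colliders.
using UnityEngine;

public class SpawnBall : MonoBehaviour
{
    void Start()
    {
        GameObject ball = GameObject.CreatePrimitive(PrimitiveType.Sphere);
        ball.transform.localScale = Vector3.one * 0.1f;  // a 10cm ball
        ball.transform.position = Camera.main.transform.position
                                + Camera.main.transform.forward;  // 1m ahead of the user
        ball.AddComponent<Rigidbody>();  // gravity + collisions against the room mesh
    }
}
```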

Things to be aware of

It turns out our newly renovated offices are a bit of a nightmare scenario for 3D scanners: lots of glass, black surfaces, featureless white walls, an unusual ceiling and people moving about can all play havoc with your scan's accuracy. This in turn makes the "automagical" surface recognition a bit hit & miss, and causes weirdness, like detecting walls on the ceiling. 

This meant that I had to build a user interface that lets users manually confirm and correct the system's educated guesses about the environment. And it turns out we're not alone in this: Microsoft's own demos and games (like Fragments and RoboRaid) all follow a similar design pattern, where the user is first guided to create as good a 3D scan as possible before they're allowed to get on with the game proper.

Thankfully, though, the system does allow you to store these spatial scans and provides persistence APIs to cache placed World Anchors, so (unless something goes wrong or the environment changes radically) users only need to perform this scan once.
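
A minimal sketch of that persistence flow, using Unity's WorldAnchorStore; the anchor id "lmr-hologram" is just an illustrative name, not from our actual code.

```csharp
// A minimal sketch: restore a hologram's saved position on startup, and save
// a new WorldAnchor once the user confirms placement.
using UnityEngine;
using UnityEngine.XR.WSA;
using UnityEngine.XR.WSA.Persistence;

public class AnchorPersistence : MonoBehaviour
{
    private WorldAnchorStore store;

    void Start()
    {
        // The store loads asynchronously; do all anchor work in the callback.
        WorldAnchorStore.GetAsync(loadedStore =>
        {
            store = loadedStore;
            // Try to re-attach a previously saved anchor to this hologram.
            if (store.Load("lmr-hologram", gameObject) == null)
                Debug.Log("No saved anchor yet; run the placement flow first.");
        });
    }

    // Call this once the user has confirmed the hologram's final position.
    public void SavePlacement()
    {
        WorldAnchor anchor = gameObject.AddComponent<WorldAnchor>();
        store.Delete("lmr-hologram");        // clear any stale anchor first
        store.Save("lmr-hologram", anchor);
    }
}
```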

For our holographic portfolio, I created an initial setup flow where I (or whoever) could walk around the office in ideal conditions (i.e. no harsh lights, no people running around, etc.), scanning the area. Then the app would run the spatial queries and try to guess the best spots for the holograms, and put placeholders there (glowing cubes the same size as the actual hologram). 

Afterwards, you could tap each of these placeholders and manually move them into the right places (i.e. the LMR hologram on the LMR wall, the KM hologram on the table, etc.) and then confirm the final placement. This would then be cached, so the next time you launch the app all the holograms would be right where you put them. This way, whenever we gave clients a demo, they wouldn't have to go through the initial setup.
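
For illustration, here's a hedged sketch of what such a tap-to-place interaction can look like with the 2017-era GestureRecognizer: gaze at a placeholder, air-tap to pick it up, tap again to drop it on whatever surface you're looking at. The "Placeholder" tag and the 2m carry distance are assumptions for the example, not our production values.

```csharp
// A hedged sketch of tap-to-place using UnityEngine.XR.WSA.Input.
// First tap on a tagged placeholder picks it up; second tap drops it
// at the point where the user's gaze hits the spatial mesh.
using UnityEngine;
using UnityEngine.XR.WSA.Input;

public class TapToPlace : MonoBehaviour
{
    private GestureRecognizer recognizer;
    private Transform held;  // the placeholder currently being carried, if any

    void Start()
    {
        recognizer = new GestureRecognizer();
        recognizer.SetRecognizableGestures(GestureSettings.Tap);
        recognizer.Tapped += OnTapped;
        recognizer.StartCapturingGestures();
    }

    void OnTapped(TappedEventArgs args)
    {
        Transform gaze = Camera.main.transform;
        RaycastHit hit;
        if (!Physics.Raycast(gaze.position, gaze.forward, out hit))
            return;

        if (held == null && hit.transform.CompareTag("Placeholder"))
        {
            held = hit.transform;
            held.GetComponent<Collider>().enabled = false;  // keep the gaze ray from hitting it
        }
        else if (held != null)
        {
            held.position = hit.point;  // drop it on the gazed-at surface
            held.GetComponent<Collider>().enabled = true;
            held = null;
        }
    }

    void Update()
    {
        // While carried, float the placeholder 2m along the user's gaze.
        if (held != null)
            held.position = Camera.main.transform.position
                          + Camera.main.transform.forward * 2f;
    }
}
```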

Wrapping up

Making a promotional holographic app certainly was an interesting experience, and the reactions from the various people who've tried it (perhaps you did, at our stand at Bump festival or in our VR/AR Arena) have been very encouraging. If you'd like to try it yourself, why not drop by our offices and give it a go!

Next time, we'll be taking a closer look at Apple's ARKit.


Gilles Vandenoostende

Interaction Designer

Digital designer & maker of things. Passionate about new technologies and loves to explore new realities: Virtual, Augmented or beyond.