Augmenting your world without glasses

(C) Wikimedia Commons

Everybody is obsessed with the Magic Leap One finally shipping, the next edition of HoloLens, and questions like: when will we see a Meta headset without wires and with working hand tracking? In short, we all hope for sci-fi-style slim AR glasses to wear every day. But wait a minute. Is the glasses hype justified? Would people really want to wear glasses in “normal life” if they don’t have to? Or might there even be a better way to augment our environment? I’d like to step back once again, show many examples today and give an idea of an alternative to AR glasses and why it might come in handy – not having any device in your hand, or visible at all, to augment your life … or your Go game.

Defining Augmented Reality

Augmented Reality, Mixed Reality, Extended Reality… the terms are getting mixed up these days or are misunderstood (compared to the classic definitions we find in early papers). But that doesn’t matter here. I want to stress the point that AR can be a general digital extension of our senses (plural) in different ways. It’s not only about visual information that we cast directly into our eyes; it could be spatial or location-bound sound (see my article on augmented audio), or even tactile sensations (a tap-tap from your smartwatch telling you to make a right turn), etc.

Sticking with the visual part – the definition, or a useful scenario, is not limited to the general public’s perception of AR glasses as the “final goal of AR”. We don’t have to reach out into the future and dream of holograms for that. Other, simpler solutions can provide great value and work today. For example, take a head-up display in a car (fun video) or the lazy way of trying on clothing by overlaying the dress without getting changed. So there are scenarios where such alternatives might be a good fit. But everybody is obsessed with AR glasses – what is their advantage?

When to use AR glasses?

For years, AR glasses have seemed to be the holy grail of AR devices. They also have the most problems to overcome, so there is still some time to go. The advantages are numerous if we reach the tech level we all dream of: a pocket PC that you can put on your nose at any time – wireless, mobile, small and always available. It gives you contextual information right in your field of view, without the need to hold something in your hand. The information is private, which can be good or necessary – or annoying.

Social problems might arise, as with the first iteration of Google Glass (and its “glassholes”). People might (still) be uncomfortable with someone wearing smartglasses. The person with the glasses might be distracted by AR during a conversation, or he or she might even be recording it, breaking social bonds and trust. Another issue comes from human interaction (as discussed multiple times here before): shaded glasses can be freaky or impolite. Technology still has to find the balance between not shaded (but with less image clarity) and fully shaded (but with less eye contact).

In short, putting on AR glasses must work in the current context and deliver the needed added value. Ronald Azuma from Intel gave a great talk that nicely summarizes the history and status quo of AR and also touches on this topic: it must be worth it. People wear glasses when they can see better with them. This extends from sunglasses or prescription glasses to AR x-ray vision, digital spatial data integration, etc. that might be necessary in a certain context. The roadmap for AR glasses will probably lead through specific use cases first (where it’s okay to wear a battery belt or look dorky) and reach general public use later on (if ever).

CastAR (C) Wikimedia Commons

But there is still some way to go, and humans are rather sensitive to “things in their faces”. I’d rather not wear glasses if I don’t have to. Again, one needs to decide: what can I do better with glasses? Is it worth it? But if there were another way… you could pick the second option. A kind of blend between the two came from the great team at CastAR: they built glasses with projectors that cast the information into the real world, but with the correct distortion for the user’s perspective. Unfortunately, they shut down in 2017, but the idea of casting information into your world remains valid!

Maybe we are better off without glasses!

So-called projection mapping, spatial AR or projection-based AR (PBAR) became quite popular during the last decade, and augmented.org reported several times on cool demos. It works best on flat, bright surfaces in a dark and static environment. You have probably seen a fun show like this in your city. Lightform runs a projection mapping website that shows many examples. Here is one:
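If you want to tinker with the flat-surface case yourself, the core trick is a simple perspective pre-warp: find the four projector pixels that hit the corners of your surface, then warp your content into that quad. Here is a minimal sketch in Python with OpenCV – resolution, file name and corner pixels are placeholders you would measure for your own setup, not values from the examples above:

```python
# Minimal sketch: "classic" projection mapping onto one flat surface.
# Assumption: the projector shows a full-screen OpenCV window, and you have
# determined (e.g. by projecting a test grid and clicking) which projector
# pixels hit the four corners of the physical surface.

import cv2
import numpy as np

PROJ_W, PROJ_H = 1920, 1080                      # projector resolution (assumed)

content = cv2.imread("texture.png")              # any image or animation frame
h, w = content.shape[:2]

# Corners of the content image (source) ...
src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
# ... and the projector pixels where those corners should land (destination).
surface_px = np.float32([[400, 200], [1500, 260], [1450, 900], [380, 850]])

# Homography that pre-warps the content so it lands exactly on the surface.
H = cv2.getPerspectiveTransform(src, surface_px)
warped = cv2.warpPerspective(content, H, (PROJ_W, PROJ_H))

# Show full-screen on the projector output.
cv2.namedWindow("projector", cv2.WINDOW_NORMAL)
cv2.setWindowProperty("projector", cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)
cv2.imshow("projector", warped)
cv2.waitKey(0)
```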

The great thing about this concept is that you as a consumer/user don’t need to wear anything. You even have your hands free and can easily enjoy the digital extension together with your colleagues or friends. This makes it great for cooperation in a mixed environment where you might want to work in a physical space with physical materials – in conjunction with digital extensions. One of many projection demos I did in the past was for design reviews and customization of shoes. The user could select different designs and cast the variant onto the real, physical object:

It’s an old demo, but the idea remains cool. You can touch the materials, feel and interact with the real object – while augmenting some elements (here colors or patterns). It also already shows a dynamic reaction to changes. Tracking is far more advanced today and allows the overlay to match a moving projector and/or target in real time (a little code sketch of that re-projection idea follows below). This makes it ideal for fast, interactive scenarios like gaming or other experiences that become interesting through unplanned user interaction, e.g. going for PONG in AR:

… or by using a hand-held device that lets you discover “hidden” layers of reality in real time by casting a “flashlight” onto real objects. Bored at a museum or during an escape room game? Devices like the Lumen could spice things up, again for multiple users at a time:
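Under the hood, these real-time demos share one conceptually simple loop: the projector is calibrated like an inverse camera, a tracker delivers the target’s pose every frame, and the overlay geometry is re-projected into projector pixels before drawing. A minimal sketch of that loop – intrinsics, overlay points and the dummy pose are illustrative assumptions, not data from any of the demos shown here:

```python
# Minimal sketch of a dynamic projected overlay. Assumptions: (a) the
# projector was calibrated like a camera (intrinsics K_proj, distortion
# dist_proj), and (b) an external tracker reports the target's pose in the
# projector's coordinate frame (rvec, tvec) every frame.

import cv2
import numpy as np

PROJ_W, PROJ_H = 1920, 1080
K_proj = np.array([[2200.0, 0.0, PROJ_W / 2],
                   [0.0, 2200.0, PROJ_H / 2],
                   [0.0, 0.0, 1.0]])
dist_proj = np.zeros(5)                     # assume negligible lens distortion

# 3D points in the object's own frame (metres) that we want to highlight,
# e.g. the outline of a pattern on a shoe or a paddle in projected PONG.
overlay_pts = np.float32([[0.00, 0.00, 0.0],
                          [0.05, 0.00, 0.0],
                          [0.05, 0.03, 0.0],
                          [0.00, 0.03, 0.0]])

def render_frame(rvec, tvec):
    """Re-project the overlay for the current tracked pose and draw it."""
    px, _ = cv2.projectPoints(overlay_pts, rvec, tvec, K_proj, dist_proj)
    frame = np.zeros((PROJ_H, PROJ_W, 3), np.uint8)
    cv2.polylines(frame, [np.int32(px).reshape(-1, 2)], True, (0, 255, 255), 3)
    return frame

# In a real setup this runs in a loop fed by the tracker; here, one dummy pose:
frame = render_frame(rvec=np.float32([0, 0, 0]), tvec=np.float32([0, 0, 1.5]))
cv2.imshow("projector", frame)
cv2.waitKey(0)
```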

But of course, projections are limited if we want to augment or project onto more complex objects. If the content has a certain depth in space, tracking of the user’s point of view is needed. It still works well for a rather fixed position of the user’s eyes, like in this restaurant example where you can preview your food on your plate (rendered with an anamorphic distortion so it works from your point of view):
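The anamorphic part sounds fancier than it is: for every virtual 3D point, cast a ray from the diner’s (known or tracked) eye position through that point, find where the ray hits the table, and draw it there – seen from that eye position, the flat drawing appears to pop up. A minimal sketch, assuming the table is the plane z = 0 and a one-off calibration gave us a homography from table coordinates to projector pixels (all names and numbers are illustrative):

```python
# Minimal sketch of the anamorphic trick: flatten virtual 3D points onto the
# table so they look upright from the viewer's eye position.
# Assumptions: table = plane z = 0 (world coords in metres), eye position is
# roughly known or tracked, H_table2proj comes from a one-off calibration.

import cv2
import numpy as np

eye = np.array([0.0, -0.5, 0.4])      # viewer's eye in world coordinates
H_table2proj = np.eye(3)              # placeholder for the calibrated homography

def anamorph_to_table(points_3d):
    """Intersect eye->point rays with the table plane z = 0."""
    flattened = []
    for p in np.asarray(points_3d, dtype=float):
        direction = p - eye
        t = -eye[2] / direction[2]    # ray parameter where it hits z = 0
        hit = eye + t * direction     # (assumes the point is not at eye height)
        flattened.append(hit[:2])     # keep (x, y) on the table
    return np.float32(flattened)

def table_to_projector(points_xy):
    """Map table-plane coordinates to projector pixels via the homography."""
    pts = points_xy.reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, H_table2proj).reshape(-1, 2)

# Toy example: one vertical edge of a virtual dish, 3 cm tall, on the plate.
virtual_edge = [[0.0, 0.1, 0.0], [0.0, 0.1, 0.03]]
print(table_to_projector(anamorph_to_table(virtual_edge)))
```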

This works well to integrate digital data into your space, on a table or a map like in this military sandbox example. Cooperation between humans can be supported without the need to wear special devices. A better get-along between human and machine could also be achieved by in-location communication from the machine to the vulnerable, fleshy carbon units. Volkswagen and the Fraunhofer institute, for example, show how real-time projection could increase worker safety:

If we are in a dynamic scenario, we surely need working real-time tracking. Rendering doesn’t always have to be photo-realistic either; it could also be a laser casting the next process step or a stencil for your next task onto the physical object. The welding example is a good one:

One more example shows the idea of overlaying a kind of x-ray vision onto the human body. This could help bring hidden three-dimensional data to life, in context, without the need for a screen, where everything would again be limited to a 2D frame and require cumbersome mouse or keyboard interaction. Inter-human interaction could be put in focus, removing visible and distracting technology from the field of view (or from people’s noses):

Let’s focus on the problem, not the medium.

The given examples show what another approach to an augmented space could look like. I’m a big fan of AR glasses and can’t wait for the day when it all interconnects and we have a working AR cloud in the background, allowing AR overlays that stick and stay. It’s gonna be fun! But meanwhile, let’s think about how to integrate digital data into our space without glasses! Spatial computing doesn’t have to happen through goggles. It would actually be more natural for humans to avoid them. If we can integrate projections smartly into our context at home or at work – maybe even by using wearable projectors – we could have a less cluttered, still hands-free interaction. People would not have distracting pieces of tech in their faces that create a social barrier or burden between two interacting persons. While we wait for holograms, we should consider projections. They are shareable, fun and socially easier to integrate into existing physical “hardware” setups.

For example, I love to play Go remotely against my friends (although I’m still bad at it). I could play in VR (like AltspaceVR) or on a screen, but I’d prefer playing on a physical board. Projecting the augmented (remote) reality could do the trick and seamlessly combine 21st-century technology with the more than 2,000-year-old board game. Tech steps back and lets us humans interact more easily. Let’s go!

Igoki: playing on online-go with a physical Go board (via r/baduk)
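To make the PBAR-Go idea a bit more concrete: the projection side could be as simple as calibrating the 19x19 grid to projector pixels once, then drawing the remote opponent’s moves onto the physical board. The sketch below is purely illustrative and not taken from Igoki – corner pixels, resolution and the move format are assumptions:

```python
# Minimal sketch: project the remote opponent's stones onto a physical board.
# Assumptions: the projector pixels of the four corner intersections of the
# 19x19 grid were clicked once; remote moves arrive as (column, row) pairs.

import cv2
import numpy as np

PROJ_W, PROJ_H = 1920, 1080

# Board corners in grid coordinates (0..18) and their projector pixels.
grid_corners = np.float32([[0, 0], [18, 0], [18, 18], [0, 18]])
proj_corners = np.float32([[500, 150], [1420, 170], [1400, 1000], [480, 980]])
H_board2proj = cv2.getPerspectiveTransform(grid_corners, proj_corners)

def draw_remote_stones(moves):
    """moves: list of ((col, row), colour) tuples played by the remote player."""
    frame = np.zeros((PROJ_H, PROJ_W, 3), np.uint8)
    for (col, row), colour in moves:
        pt = np.float32([[[col, row]]])
        x, y = cv2.perspectiveTransform(pt, H_board2proj)[0, 0]
        bgr = (255, 255, 255) if colour == "white" else (80, 80, 80)
        cv2.circle(frame, (int(x), int(y)), 20, bgr, -1)
    return frame

# Example: the opponent opened on two 4-4 points.
frame = draw_remote_stones([((3, 3), "black"), ((15, 3), "white")])
cv2.imshow("projector", frame)
cv2.waitKey(0)
```

(In practice you would project a bright marker or outline rather than a dark disc – a projector can’t add “black” to the board.)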

Alright, fellows. Over and out – enjoy an augmented week!

PS: If you are interested in creating a solution for that PBAR-Go idea together – let me know in the reddit thread! :-)