25 September 2018
AR – Humanizing our interactions
Augmented Reality will humanize our digital lives. That we can look forward to.
I found the images to be striking — black and white portraits of couples, friends, mothers and daughters. All depicting familiar scenes — a couple lying in bed; a family sitting down to dinner; friends out on a boat. But something is surreal and off-kilter about them. All eyes are averted, no one is interacting with anyone else, at least IRL. They just stare at their empty, cupped hands.
The photo series is an art project by the American photographer Eric Pickersgill called “Removed.” What is removed are the smartphones and digital devices we now take for granted as part of our day to day lives.
Pickersgill was inspired by a chance encounter in a café one morning. In his notes from the encounter he writes:
“Family sitting next to me at Illium café in Troy, NY is so disconnected from one another. Not much talking. Father and two daughters have their own phones out. Mom doesn’t have one or chooses to leave it put away. She stares out the window, sad and alone in the company of her closest family. Dad looks up every so often to announce some obscure piece of info he found online. Twice he goes on about a large fish that was caught. No one replies. I am saddened by the use of technology for interaction in exchange for not interacting. This has never happened before and I doubt we have scratched the surface of the social impact of this new experience. Mom has her phone out now.”
Reacting to these images, I was surprised that it took the removal of the device to really highlight how strange this way of interacting with the world has become. The simple fact of everyone staring down at little rectangles isn't, in and of itself, alien and alarming.
But it’s also understandable and predictable. Our digital lives and physical lives have been on a collision course for some time, each next advancement feeling like a natural and inevitable evolution. The hardware that bridged these worlds was once room-sized, then boxy and beige under our desks, then sleeker and on top of our desks, then foldable on our laps, and ultimately on our person at all times in the form of smartphones or smart watches.
We are increasingly dependent on this digital component of our lives. How we get to the next destination, how we frame and curate our personal lives to friends and family, how we find a new job, how long to cook the scallops — our minds have been both collected and offloaded to this digital world in profound ways. Thumb taps on a piece of glass give us all immediate access to the sum of human knowledge and experience. And who remembers phone numbers anymore?
Despite the increasingly inseparable nature of our digital and physical lives, the form-factors through which we access information and communicate remain distinctly inhuman.
Pickersgill’s photos simply shine a spotlight on this, especially in light of emerging alternatives.
Human Elements
This is what virtual reality, augmented reality, mixed reality, and other related terms are all about — putting essential human elements back into the equation.
A new class of platforms is emerging that are perhaps best described as Immersive platforms. They are different because of their user-interface — the device itself starts to disappear, and more fundamentally human gestures — look, move, touch, explore — become the main mode of interaction. These are the first examples of the next phase of computing, often referred to as spatial computing.
It is this humanizing of our interaction with our digital lives that excites me, and drives me to create work in this new and evolving area of media.
I’ve watched the keynote Steve Jobs introducing the first iPhone many times. It was one of those moments, a perfect history-book-ready piece of theater to introduce us to the next way of life that the device, and its subsequent imitators, would usher in. Instant information, and an inseparable digital life.
This was 2007, more than 10 years ago. What those of us working in media should imagine and recognize is that mobile, this particular entry point to our digital lives which is both incredible, and incredibly flawed, is not the last form-factor.
Something is going to come next, and that something is coming sooner rather than later. It is likely to be an immersive device, and it could prove to be as game-changing as the Internet itself.
Certainly there are many cautionary things that can be said about the barrier between our digital and physical lives disappearing. But I would argue that the distinction now lies only in user interface, in form — we are already entirely dependent, and there is no reversing that. What we can look forward to is the humanizing of that relationship.
Virtual Reality
As the director of immersive platforms storytelling at The New York Times, I wrestle with these ideas on a public stage, and with a large amount of audience feedback and data to evaluate and consider. To me this is an essential thing to explore. It is no less important than understanding the means by which we will deliver news and information to the public, perhaps primarily so within the next decade.
Change happens fast, and the media does not have the luxury to get it wrong. We are in the midst of a transitional and disruptive shift in our relationship with digital information, and we need to do the work to understand it.
I first started exploring immersive media through Virtual Reality in 2014, after a trip to the SIGGRAPH conference in Vancouver. I’d been to the conference in other years, but this time one could really sense a shift. Papers, art projects, and talks were increasingly devoted to virtual reality, augmented reality, and their attendant technologies.
I wanted to know what it could mean for news, this potentially powerful and novel way of creating a sense of place and presence for a reader that would break down barriers. By the end of 2015 a small team in the newsroom had in some cases literally duct-taped the capability together to create 360-degree video virtual reality projects. And with the launch of NYT VR by The New York Times Magazine in August of that year, we had a platform.
Simultaneously, through a partnership we shipped over a million Google Cardboards to our readers, and in that bold stroke created an awareness around VR that made it a household term literally overnight. What had been an experimental and costly medium, mostly found at conferences and festivals and consumed by the usual crowd, was now being experienced by grandmothers and their grandkids together.
Sliding phones into folded pieces of cardboard is definitely not the future. That is not why this was important. It demonstrates, even if crudely, that there is another way to interact with digital information. One that relies on a more human and intuitive mode of interaction. To explore the scene you simply do what you would do in life — look around.
The first VR piece published by the newsroom in the NYT VR app was “Vigils in Paris,” published in November 2015. It is a snapshot of a city in mourning after the terrorist attack on the Bataclan nightclub. It was a 5-day turnaround, a real feat for that time. As a viewer, you find yourself amid the rituals of mourning and attempts at healing. At one point you stand in a circle, watching a blindfolded man accept hugs from anyone in need of one. You’re there with the people of the city; you can simply watch as, one by one, people come forward for that hug, or you can look around at the individual faces, their expressions registering everything you need to know about how people were feeling. We’ve published many more like this over the last few years, covering topics from every corner of the newsroom.
We’ve used the medium to cover everything from sports, to conflicts, to culture. We’ve taken readers to the top of the World Trade Center, to Mecca, to Antarctica, to the past, and, with a little help from NASA, 3 billion miles away to Pluto.
Virtual Reality as a medium is growing, but it still has a fundamental audience problem — it sits to the side of how we generally consume media, it isn’t at the core. At the Times we considered this carefully as we expanded our immersive storytelling initiative to include Augmented Reality.
Augmented Reality
As it turns out, the mobile phone, while less immersive than a VR headset, is a fantastic immersive storytelling device thanks to recent innovations that greatly expand sophisticated AR capability across millions of older phones.
What it lacks in full immersion, it gains in a massive audience, and in what’s called 6-DoF, or six degrees of freedom. Only the most expensive and sophisticated VR headsets typically allow for this. Now, thanks to ARKit from Apple and ARCore from Google, phones even several years old can accomplish it, allowing for the creation of experiences where viewers can walk around, and up to, digital information in their space.
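To make the term concrete: six degrees of freedom means the device tracks three axes of translation (its position in space) plus three axes of rotation (which way it is pointing). A minimal sketch of such a pose, with illustrative names of my own invention rather than the actual ARKit or ARCore API:

```python
from dataclasses import dataclass
import math

@dataclass
class Pose6DoF:
    # Three translational degrees of freedom, in metres
    x: float
    y: float
    z: float
    # Three rotational degrees of freedom, in radians
    yaw: float
    pitch: float
    roll: float

    def distance_to(self, other: "Pose6DoF") -> float:
        """Straight-line distance between two poses, ignoring rotation."""
        return math.sqrt((self.x - other.x) ** 2 +
                         (self.y - other.y) ** 2 +
                         (self.z - other.z) ** 2)

# A viewer steps one metre forward and turns 90 degrees to the left:
start = Pose6DoF(0, 0, 0, 0, 0, 0)
moved = Pose6DoF(0, 0, -1, math.pi / 2, 0, 0)
print(moved.distance_to(start))  # 1.0
```

The translation half of the pose is what matters for this kind of storytelling: it is what lets a viewer physically walk around a virtual object, where a rotation-only (3-DoF) viewer like Google Cardboard can only look around from a fixed point.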
Starting at the end of 2017, we began the work of adding AR capability to the core NYTimes app, so that the millions of readers on supported phones who already have the app would have access to the work we would publish.
Producing AR projects with real value required honing an editorial vision and perspective for this technology. The capabilities of AR as a technology had clearly become sophisticated. Surely it could be used for more than putting fun masks on people’s selfies or dancing hotdogs on tables.
This vision is a work in progress, and will surely change as technologies develop, and platforms come into existence. But here is where we are now:
The first part of this perspective we could call MGBA — in the parlance of our time: Make Graphics Big Again. For anyone involved in visual storytelling and graphics, there has certainly been a sense of loss in the move from big 2-page print spreads and large desktop displays, once the primary canvas, to the merely 2-inch-wide screen of the mobile device. And that is how the largest portion of our audience experiences this work.
AR breaks free of that tiny frame by treating the phone not as a surface, but as a window. Now a visual can be a 4-foot-wide diagram on your table, for example, viewed by walking around with your phone. The screen may be the same size, but the experience is one in which the visuals feel big and impactful again.
Which naturally brings me to the next element of our editorial perspective, which is all about the mode of interaction.
AR is one way to reduce abstractions, and lean into this more human mode. No longer need we pinch-to-zoom, or swipe, or tap. We can simply move ourselves as we would when navigating the physical world. Want to see something from the other side? Walk around it. Want to see something up close? Lean in.
The next piece of our perspective is all about scale. Real-world scale is nearly impossible to convey on a small phone screen. This is no longer true with AR — using the context of your environment, visuals and objects can be projected into your space at their true size. For a piece we published on the riveting rescue of a boys’ soccer team in Thailand from deep inside a cave, we enabled readers to project slices of the cave into their environment at real scale. This greatly enhances the understanding of exactly what kind of challenge faced the rescue divers, and provides a kind of engagement with the story and a visual impact not possible before.
The last part of our approach is really the most important: to lean into the future of how we will consume media. This is what spatial computing is all about. The mobile phone was not originally designed with AR in mind. It is in many ways a bridge device. But there is still much that can be done with the mobile phone, which importantly is the one device in everyone’s hands now (at least when not digitally removed by Pickersgill).
We can start to understand what it means to place digital information directly into our world, and imagine the limitless possibilities of full integration between the digital and the physical, free of the limits of the rectangles that serve as barriers in more ways than one.
About the author:
Graham Roberts is the director of immersive platforms storytelling at The New York Times, and leads a team that explores virtual and augmented reality projects, as well as innovation in video and motion-graphics.
What is “Tinius Talks”? Tinius Talks are articles, videos, podcasts or debates by specialists on the future of journalism, fair tax policies and ethics in algorithms – shared by the Tinius Trust once a month throughout 2018.