What’s On The Horizon In UI, VR, AR & AI

Rebecca Bedrossian, Global Content Director of POSSIBLE, discussed the acronyms that represent disruption and innovations—UI, VR, AR, AI—with futurist Amy Webb.

Foresight, futures, and fringe, oh my! Wander over to the Future Today Institute and you’ll discover content broken down into these three categories: Futures Research, Foresight Tools, and Fringe Sketch. Intriguing, but what does it mean? In an age of uncertainty (read: today), the Future Today Institute acts as a crystal ball. It examines the constant barrage of information we receive on a daily basis, along with the unending advances in technology and science—and maps it to make sense of it all.

At the Institute’s center is founder and CEO Amy Webb, author of The Signals Are Talking: Why Today’s Fringe Is Tomorrow’s Mainstream. Her work has focused not only on how technology will transform the way we work and live, but also on how we govern. Amy took the bull by the horns in the LA Times when she responded to US Treasury Secretary Steven Mnuchin’s nonchalant comment, “I’m not worried at all [about robots displacing humans in the near future]…In fact, I’m optimistic.” Her op-ed, her research, and her work are not to be taken lightly.

I recently had a chance to sit down with the futurist and ask her about conversational UI, VR, AR, AI—basically acronyms that represent disruption and innovation. The following is an excerpt from our conversation:

Rebecca: Most people agree that conversational UI is the next big step in human-computer interaction. Given that we have such a hard time communicating with one another, are computers going to fare much better? Do conversational UIs get a pass on empathy because we know they’re machines?

Amy: Certainly not. But this illustrates why teaching machines to learn and emulate empathy is so difficult. As humans, we don’t have a codified set of basic human values, because we all have different worldviews, experiences, and belief systems. Machines need a clear directive for learning. They don’t understand gray areas. This doesn’t mean developers should give up on studying how to encode machines with human values and empathy––they ought to rise to meet this challenge.

What new skills do designers need when working on conversational UIs? Is it more like directing an actor? 

Having a solid background in comparative literature, anthropology, comparative religion, and other languages is critically important when designing conversational UIs. Ultimately, designers must determine whether the interface accurately responds to many different people, not just a single profile of an “average American.” 

What are the signals we should be paying attention to in the VR/AR space, and in the realm of AI?

AR, VR, 360-degree video, and holograms aren’t new. But in the year ahead, we’ll see more devices being made available to consumers at affordable prices—and we’ll see a number of new content providers building out stories and experiences for each platform. I’m particularly interested in the work that Magic Leap has been doing. There’s a lot of intrigue surrounding Magic Leap, and some argue that we may never see a product launch. I’ve read all of their patents and I’ve studied their work—and I’m not the only one. Regardless of what happens to Magic Leap, they’ve laid the groundwork for second-, third-, fourth- and fifth-order iterations, and they’ve inspired countless other companies, big and small. 

Think about AI as the next layer of technology that will be integrated into everything we do and much of the technology we use, and that includes mixed reality interfaces and devices. Simply put, AI is a branch of computer science in which computers are programmed to do things that normally require human intelligence. This includes learning, reasoning, problem-solving, understanding language, and perceiving a situation or environment. AI is an extremely large, broad field that uses its own computer languages and even special kinds of computer networks, which are modeled on our human brains. The idea that we might someday create artificially intelligent, sentient robots was first suggested by prominent philosophers in the mid-1600s, but we finally have enough compute power, developers, and use cases for AI to push forward from the fringe to the mainstream.
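The brain-inspired networks Amy mentions can be illustrated at their smallest scale with a single artificial neuron, the perceptron, which learns by nudging its weights whenever it answers wrong. This is a minimal illustrative sketch, not any specific product or system discussed in the interview; the learning rate, epoch count, and the choice of teaching it the logical AND function are all assumptions made for the example.

```python
def step(x):
    """Threshold activation: the neuron fires (1) if its weighted input is positive."""
    return 1 if x > 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights for one artificial neuron from (inputs, label) examples."""
    w = [0.0, 0.0]   # one weight per input
    b = 0.0          # bias term
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = step(w[0] * x1 + w[1] * x2 + b)
            err = label - pred        # how wrong was the neuron?
            w[0] += lr * err * x1     # nudge each weight toward the right answer
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Teach the neuron logical AND: output 1 only when both inputs are 1.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
for (x1, x2), label in data:
    assert step(w[0] * x1 + w[1] * x2 + b) == label
```

Modern deep learning stacks millions of such units into layers, but the core idea — adjust connection strengths in response to error — is the same loop shown here.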

At f8, Mark Zuckerberg asserted that AR is “about making our physical reality better”—but given that the most successful AR today is all about changing and masking reality (e.g., masks and enhancements), will it be a force for highlighting truths and deepening our understanding of reality or hiding and masking it? 

My sense from f8 was that Facebook’s AR play had more to do with competing against Snapchat in the near-term. That being said, the promise of AR is that it will transition the ubiquitous information layer we currently access using computers and the internet to frictionless devices like mobile phones, eyeglasses, and smart contacts, allowing us to glean information everywhere.

I don’t think that’s masking reality as much as it is deepening our understanding of the world around us.

Our digital and physical lives are increasingly blurred. We use phones at concerts and on vacation, and the creation of content has become as important as the “actual” experience. Will AR make the digital less of a disruption to the physical experience, or more of one?

In the near-term, we’ll use our phones more for AR experiences, but that will inevitably shift to eyewear. But I don’t believe that everyone will opt in to an AR experience, or that active users will want to engage AR all of the time.

It’s quite clear, with global population growth and diminishing resources, that technology is the main hope for human survival. It’s only through technological innovations and efficiencies that we’ll be able to feed humanity. But so many of the innovations are about information management and sharing. What is the next big innovation that would have the biggest impact on the greatest number of people?

I’d argue that biology is one of the most important technology platforms of the 21st century. Innovations in CRISPR-Cas9 and genomic editing will dramatically change human health and agriculture. The implications are tremendous. Mosquitoes carrying malaria could be edited so that they no longer carry the disease through future generations, and so that millions of humans in high-risk regions no longer suffer from the disease.

There are therapeutic possibilities in human medicine as well. Editing our genetic code could mean eradicating certain genetic diseases—like cystic fibrosis—so they can’t be passed along to babies. Liver cells could be edited so that they lower the bad cholesterol levels in families that have inherited mutations. Seeds could be modified to withstand the effects of climate change. Urban food deserts could be eased through smart vertical farms that make use of modified crops that require less water and space. There’s a lot of possibility.

Rebecca Bedrossian, Global Content Director at POSSIBLE, leads the company’s editorial initiatives, creating a culture of content development. As the former managing editor of Communication Arts, Rebecca has been immersed in the world of design and advertising for over 15 years. She has served on the board of AIGA San Francisco, and her articles on visual culture and creatives have appeared in publications throughout the industry.

Amy Webb is an American futurist, author and founder of the Future Today Institute. She is an Adjunct Professor (future of technology) at New York University’s Stern School of Business and was a 2014-15 Visiting Nieman Fellow at Harvard University. Amy was named on the Thinkers50 Radar list of the 30 management thinkers most likely to shape the future of how organizations are led.
