10 VR Takeaways from Oculus Connect 2

Last week’s Oculus Connect 2 conference was perhaps a pivotal event in the story of virtual reality. It was the last developer conference before the floodgates of consumer VR open next year, when three platforms–Oculus, Steam VR, and PlayStation VR–make their way into our homes and offices. In some ways, it felt like Apple’s WWDC before the App Store and iOS SDK launched in 2008. Developers and users are on the cusp of a new frontier; there’s so much we don’t know, but the eagerness and excitement for this new platform are palpable. The lessons of early VR experiences are just starting to compound and fuel a feedback loop that will eventually lay the foundation for our understanding of what works in virtual reality. There’s a whole lot of figuring out to do, which is really exciting.

The emphasis of this year’s Oculus Connect wasn’t on unveiling new hardware. This holiday’s Samsung Gear VR isn’t all that different from past models, and we didn’t see new Rift headsets or controller prototypes–the first consumer release is pretty much set. More interesting were the software demos, from both first- and third-party devs. These demos show not only the current state of VR gaming and social experiences, but where developers’ heads are at in fleshing out new ideas and focusing their efforts for experimentation. Oculus Story Studio, Medium, and the Twitch social experience are the best examples of that, and there are insights to be gleaned from each, even from short demo sessions. As with last year’s Connect and our GDC hands-on with the HTC Vive, I’m going to share the takeaways that stood out to me most. If you followed the announcements at Oculus Connect 2 or attended the conference, I’d love to hear your own takeaways in the comments.

The VR Content Wars Begin

While gaming is what most people will associate with virtual reality at first, John Carmack predicts that gaming content will represent less than half of VR experiences as the platform matures. The other half will be made up of various forms of media, some of which are adaptations of existing media, and some new forms we can’t yet anticipate. But the partnerships announced at Oculus Connect make it clear that video–your proven moving images in a rectangle–will be a big part of what VR users consume.

Netflix, Hulu, Vimeo, and TiVo partnerships add depth to the catalogue of “traditional” video content you’ll be able to watch on the Oculus platform, some of which will be viewed in the rebranded Oculus Video (formerly Oculus Cinema) interface. Twitch live streams may be the most important partnership in the eyes of Oculus, as Twitch is where social VR is being experimented with first. The big omission, of course, is YouTube. Google has its own Cardboard VR initiative, and it doesn’t look like it’ll be contributing content to the Oculus platform anytime soon after launch. That’s a very glaring gap, and it forecasts a walled garden of VR video experiences that won’t benefit users.

For content to really come into its own in VR, there needs to be a mix of curated experiences and user-generated content that live on equal footing. John Carmack calling Netflix and Vimeo bottomless wells of content is a little misleading. Yes, there’s a lot more than was previously available in Oculus Cinema, but the Vimeo experience we tried only featured video chosen by Vimeo and Oculus editors. There’s no simple pathway to share video that you’ve uploaded to one of these streaming services with another VR user. Facebook may be saving that kind of democratized VR video for itself as it builds more video functionality into its News Feed as a competitor to YouTube.

Two other notes related to VR content and Oculus partnerships. While some services will live in the Oculus Video app, others like Netflix and Hulu will be played in third-party interfaces with their own custom virtual environments. The Netflix one is a simulation of a large living room in a cabin, with you seated on a big red couch. It’s neat that third parties will be able to tune and develop their own viewing experiences, but I think there’s real user value in being able to choose the virtual environment you watch your shows in. There is a psychological difference between watching a video in a virtual cinema setting and in a smaller living room setting, even if the screen sizes are technically the same. Plus, third-party created environments are blank canvases for future advertising (note the posters on the wall of the Netflix room).

Another related note: John Carmack pointed out that 720p is the highest-quality video you should be playing back in VR today, given the limitations of the panel resolution on Gear VR. Fortunately, that 720p video looks pretty great when “projected” in a virtual cinema, as long as it’s not showing compression artifacts. Bitrate matters a lot when you’re watching a video in VR.

Social VR Needs to be Purposeful

Oculus emphasized the undeniable potential of social VR at Oculus Connect 2. Humans seek and respond to connection–AOL chatrooms taught us that. But making a compelling social VR experience poses multiple inherent challenges. First, there’s the question of what types of content are most suitable to layer social experiences on top of. Netflix experimented with shared viewing of its TV shows and movies with the now-discontinued Xbox 360 Party Mode, and one of the lessons it took away was that people didn’t necessarily want a shared viewing experience of dramatic content.

So in looking for the kind of content to use with social VR, Oculus is turning to analogues of real-world situations where people get together for shared viewing experiences. Namely, sports. And the analogue for sports that may appeal to Oculus’ first adopters (i.e., gamers) is e-sports. Hence, using Twitch as the test case for social VR. In my demo of the shared Twitch experience, the second big challenge to social VR became quickly apparent: the design and functionality of VR avatars.

VR avatar design is a huge challenge, limited not only by aesthetic preferences but also by the functional limitations of current VR tracking. Oculus Rift and its Touch controllers will be able to track your head, your hands, and some fingers, along with capturing your voice. That’s a lot to work with in social VR, but software will need to fill in the gaps for the many cues and physical nuances of human interaction–for example, facial movement and eye contact. Headsets won’t yet be able to track your eyes, but avatars will need to have animated eyes to avoid a dead-eyed, uncanny valley look. One place VR developers can look is puppetry and animatronics. Puppeteers and animators have to find ways to convey intent and emotion with a limited number of controllable points in their puppets. Idle animations and “keep alive” states can go a long way in VR. (Update: the recently announced Oculus Social SDK should address this, partly.)
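To make the “keep alive” idea concrete, here is a minimal sketch, in plain Python, of the kind of logic an avatar might run every frame. None of this comes from an Oculus SDK; the avatar methods (play_blink, look_toward, add_idle_sway) and the timing values are hypothetical, just to show how little it takes to keep an avatar from looking frozen:

```python
import random
import time

class AvatarKeepAlive:
    """Fills in cues headsets can't yet track: randomized blinks,
    approximated eye contact, and subtle idle motion, so a social VR
    avatar never looks frozen or dead-eyed."""

    def __init__(self, avatar):
        self.avatar = avatar  # hypothetical rig exposing eye/body controls
        self.next_blink = time.time() + random.uniform(2.0, 6.0)

    def update(self, speaker_position=None):
        now = time.time()
        # Humans blink every few seconds; randomize so it never looks metronomic.
        if now >= self.next_blink:
            self.avatar.play_blink()
            self.next_blink = now + random.uniform(2.0, 6.0)
        # Fake eye contact: glance toward whoever is currently speaking.
        if speaker_position is not None:
            self.avatar.look_toward(speaker_position, weight=0.8)
        # Low-amplitude breathing sway keeps the body from looking rigid.
        self.avatar.add_idle_sway(amplitude=0.005, frequency=0.25)
```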

Finally, I think that social VR needs to have purpose. The Toybox demo is a fantastic example of a social VR experience that is really fun and compelling, but it works better as a technology demo than as something you would use regularly. This is an easy problem to solve–social VR devs just need to find ways to leverage social VR so that there is some utility to the experience that you otherwise couldn’t get from a chatroom. Oculus Medium offered a glimpse of that in demos where new users were taught to sculpt by a virtually “present” guide.

The Abrash Talk was One of Optimism

Michael Abrash, Oculus’ Chief Scientist, gave perhaps the best macro-level look yet at the state of virtual reality research. It was similar to his keynote from last year’s Oculus Connect, laying out our current understanding of, and ability to drive, the human perceptual system and its senses. In other words, why VR is so hard, and why we’re not nearly as close to the perfect VR imagined in science fiction. A simple example is our ability to manipulate what we see and drive our vision system. We understand almost everything about the physics of light, but we don’t know how to control light and turn it into what our brain accepts as real-world vision. The limitations of display and optics technologies force us to make tradeoffs in all the areas of VR displays that need improvement: FOV, image quality, dynamic range, ergonomics, etc. And that’s how far off we are from driving just one of our senses.

But it’s not all depressing; I actually found Michael Abrash’s talk to be optimistic. By laying out the difficulties in driving the human perceptual system, he also provided paths toward the challenges that he knows are solvable. Today’s researchers don’t have to get hung up on the fact that there’s still so much we don’t understand about our brains and senses–they can work on the plenty of solvable problems first, experimenting and iterating to get us closer to breakthroughs. There are no Eureka moments in research, Abrash reminded us. “Insight is the result of patience, hard work, and willingness to experiment with ideas in the solution space.” And that’s the point of Oculus Research–they’re exploring every area that Abrash laid out in his presentation, with the goal of building incremental improvements in technology that consumers will get to test in the next five to ten years. As users, we get to be part of that ongoing beta test to get us to better VR.

Minecraft is the First Step to the Metaverse

This was a little surprising to me. Not that Oculus found a way to partner with Microsoft to port Minecraft over to a virtual reality platform, but that it was such a big priority for John Carmack. In his own words, Minecraft is one of the first steps to making the Metaverse–a fully realized virtual space that can exist parallel to our own reality. And like VR research, the Metaverse won’t just appear one day and work the way science fiction writers imagined it in the simulations of Star Trek and Ready Player One–it’ll get there incrementally, starting with mass-market experiences like Minecraft. As Carmack explained in a follow-up hallway conversation at Connect, it’ll be “crass commercialization” that gets us to the Metaverse, not the pursuit of something as fully fleshed-out as a Second Life follow-up. It’s a very pragmatic approach to the idea.

I also loved how enthusiastic Carmack was in describing his experiences of VR Minecraft. He described the sense of presence in that simple world-scale game as enough to form memories of the experience. As far as his brain was concerned, his play time in Minecraft was something that happened to him. First-person world-scale VR is a huge presence multiplier and will “deliver powerful emotional experiences”. In my head, it’s akin to the experience of being so obsessed with a game that I start having dreams about it–something other gamers may be able to relate to!

Still, there are a lot of questions about Minecraft VR that were left unanswered. Chief among them: how you’ll be able to walk around that world when you’re playing on the couch or at your desk. So it’s time to talk about locomotion.

Locomotion Solutions Really Aren’t Solutions

Locomotion is the problem of moving your avatar around a virtual space when your actual body doesn’t necessarily have the physical space to do so (in 1:1 scale). As we see more first-person perspective VR games like Bullet Train and Land’s End, developers have to find cheats, compromises, and shortcuts to get us moving in their virtual environments. When you assign your avatar’s movements to a button or analog thumbstick, you risk giving the user nausea. Shifts in acceleration are what make people dizzy in VR. (It’s why swivel-chair VR with the Gear VR is more compelling than movement with an analog stick–swiveling makes use of 1:1 tracking.)

Three possible solutions to locomotion stand out right now: scale shifting, redirected motion, and teleportation. At Oculus Connect 2, teleportation was the one most employed by the new demos I tried. It was used in the new Oculus Arcade to jump from virtual arcade cabinet to virtual arcade cabinet, and in Bullet Train to let you maneuver around the map and shoot your enemies in the back.

But even in the realm of teleportation, there are multiple ways to implement the mechanic. Developers can instantly teleport the user or use transitions like fades or other animations. Levels can lock teleportation points to nodes, or allow the user to point and teleport to any place on the map. Even with a node-based system, devs have the option of pre-determining the direction the user will face after they teleport, or have them maintain the same orientation. Some of these are game design decisions, some are VR ones for user comfort.
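As a sketch of how those options reduce to a handful of switches, here is some illustrative Python. None of this is engine code; the player and node objects, the fade durations, and the config names are all assumptions for the sake of example:

```python
from dataclasses import dataclass

@dataclass
class TeleportConfig:
    fade: bool = True             # fade to black instead of an instant cut
    node_locked: bool = True      # restrict destinations to designer-placed nodes
    preserve_facing: bool = True  # keep the user's orientation after the jump

def teleport(player, aim_point, config, nodes=()):
    target_pos, target_yaw = aim_point, player.yaw
    if config.node_locked and nodes:
        # Snap the aim point to the nearest designer-placed teleport node.
        node = min(nodes, key=lambda n: dist(n.position, aim_point))
        target_pos = node.position
        if not config.preserve_facing:
            target_yaw = node.facing_yaw  # designer decides where you face
    if config.fade:
        player.fade_out(0.2)  # a brief blackout hides the discontinuity
    player.position = target_pos
    player.yaw = target_yaw
    if config.fade:
        player.fade_in(0.2)

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
```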

Regardless of how well teleportation is implemented, it’s still going to feel like a compromise for locomotion in VR. I think developers are better off making games that are appropriate to the physical environments and VR setups of their users, rather than forcing a design concept into a space that doesn’t accommodate it. Job Simulator is a good example–the developers at Owlchemy Labs have designed their game to dynamically scale and compensate for different user setups. This kind of scalability can even be facilitated in a VR SDK, since some hardware setup processes (like that of the HTC Vive) require users to designate their physical space boundaries.
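A rough sketch of what that SDK-assisted scaling might look like, assuming the runtime reports the user-designated bounds the way the Vive’s room setup does. The level fields and the 0.6 scale floor are illustrative choices, not anything Owlchemy has described:

```python
def fit_level_to_playspace(level, bounds):
    """Scale a room-scale layout so its interactive area fits inside
    the user's designated physical play space.

    bounds: (width_m, depth_m) as reported by the VR SDK's room setup.
    """
    width, depth = bounds
    scale = min(width / level.design_width, depth / level.design_depth)
    # Never scale up past the designed size, and clamp the minimum so
    # props stay reachable rather than shrinking out of arm's length.
    scale = max(0.6, min(scale, 1.0))
    for prop in level.props:
        prop.position = tuple(c * scale for c in prop.position)
    level.scale = scale

# Usage with a small play area, e.g. a 2.0m x 1.5m designated space:
# fit_level_to_playspace(level, bounds=(2.0, 1.5))
```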

FPS in VR is a Presence Multiplier

I mentioned this earlier, but wanted to reiterate it. First-person VR games deliver more emotional and visceral experiences. It doesn’t even need to be a first-person shooter–we’ve felt it with experimental VR experiences like Birdly and Virtuality. First-person VR demos also made me more aware of the physicality of those kinds of simulations. Whether it was a virtual sports game where I was playing goalie in a hockey game or a puzzle game where I was trying to escape a booby-trapped car, my entire body reacted to the simulation, not just the points that were tracked by the headset and controllers.

The London Heist demo I played on PlayStation VR at E3 had me sweating as I ducked behind the table to avoid incoming bullets. VR Sports Challenge, which put me in the helmet of a hockey goalie, tensed my body up as I darted my eyes back and forth to keep track of the puck. And that demo was just a sit-down VR experience using an Xbox One gamepad with the Rift! Bullet Train made me aware of the silliness of pointing and shooting two pistols without aiming down the sights. But that demo, even with its Touch controller interactivity, actually felt the least visceral, because the game gave you infinite health and ammo. My body subconsciously reacted less when I knew the stakes were lower or there was no risk.

Oculus Touch SDK Adds Some Consistency to VR Interactions

I got a lot more time with Oculus Touch at Connect 2, including demos of the updated Toybox experience and several Touch-enabled games and demos. An important thing to note about how the Touch controllers compare with the SteamVR/HTC Vive controllers: Touch is an inherently different kind of ergonomic experience. With Oculus Touch, your hands are in a neutral, relaxed pose, more akin to how they would grip each end of a gamepad than to grabbing a cylindrical stick like the Vive or PlayStation VR wands. That not only facilitates Touch’s natural gesture recognition system, but makes the act of grabbing virtual objects feel more intuitive. On the flip side, it made holding virtual objects that are sticks–fireworks, screwdrivers, and even the grips of virtual pistols–feel a little off.

The shape of the Touch controllers means that it’s not easy to just put one down and pick it back up when you’re in a VR experience. You have to have them in your hands at all times, with the safety cord wrapped around your wrist. One of the things I love about the SteamVR controller is the ability to reach out and grab it from anywhere in the room while already wearing the Vive headset. That allows it to be a virtual tool that can be physically picked up and put down throughout VR experiences (e.g. a fishing rod in a virtual fishing game that you can lay down in your virtual boat). With an always-worn Touch controller, you’re never letting go of the physical object in your hand. One isn’t better or worse than the other–they’re just two different paradigms for hand-tracking in VR. The good thing is that there’s parity in their interaction models–two trigger buttons, a grip button, etc.–that will allow for cross-platform experiences.
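That parity suggests cross-platform hand input only needs a thin abstraction layer. A hedged sketch of the idea; the per-device raw fields below are invented stand-ins, not real OVR or OpenVR structures:

```python
from dataclasses import dataclass

@dataclass
class HandInput:
    # The shared interaction model both controller designs can express.
    trigger: float      # index trigger, 0.0 to 1.0
    grip: float         # grip squeeze/button, 0.0 to 1.0
    position: tuple     # tracked position in meters
    orientation: tuple  # tracked orientation as a quaternion

def from_touch(raw):
    # raw is an invented stand-in for an Oculus Touch controller state.
    return HandInput(raw.index_trigger, raw.hand_trigger, raw.pos, raw.rot)

def from_vive_wand(raw):
    # raw is an invented stand-in for a SteamVR wand state.
    return HandInput(raw.trigger, 1.0 if raw.grip_pressed else 0.0, raw.pos, raw.rot)

def wants_to_grab(hand: HandInput) -> bool:
    # Game logic is written once against the shared model.
    return hand.grip > 0.5 or hand.trigger > 0.9
```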

Where Touch shines is in the representation of your hands in VR. Because of the natural holding pose, the Oculus Touch SDK can display an accurate representation of your hands in the virtual space, even though all of your fingers aren’t actually tracked. I found that the best way of showing this was by peeking through the small open gap between the Oculus headset and my nose, and seeing my real hand perfectly transition into the virtual one in the headset. So surreal, and so cool. The Oculus Touch SDK allows for the same virtual hands to be used in all games that use Touch, though some developers use their own hand models for both game design and aesthetic purposes. A fatter virtual hand may give users more margin for error when interacting with objects in the game, for example. But when that virtual hand is missing, it’s very noticeable.

Oculus Medium was the Star of the Show

If there was any big surprise at Oculus Connect 2, it was the reveal of Oculus Medium, the virtual reality art tool making use of the Oculus Touch controllers. The promo video that introduced it was a little reminiscent of the now-popular Glen Keane HTC Vive video, and comparisons between Medium and the Tiltbrush program are inevitable.

Having used both Tiltbrush and Medium (if only for short sessions), I felt that Medium was the better tool to show off the potential of VR as an interactive art tool. But there are fundamental differences between the two that make them really difficult to compare. Tiltbrush is a painting tool in three-dimensional space–a novel concept that has no real analogue in traditional art. Medium, by contrast, is a sculpting tool, akin to sculpting with clay or foam in the real world. In that sense, Tiltbrush is the more innovative implementation, because the art you can create in it isn’t easily duplicated in the physical world (the best way to describe it is painting with ribbons of color). But Medium, by working with a volumetric–well, medium–was easier to grasp quickly and, in my opinion, a better demo of VR.

The two programs also use different paradigms for their controls, taking into account that Tiltbrush’s tools are oriented around the Steam VR controller wands while Medium is optimized for Oculus Touch. Both use virtual/holographic menus that pop up from the virtual controllers, but the way you’re holding the controller determines how you’re able to make use of their “brushes”. Since you’re essentially gripping a stick in Tiltbrush, I found it a little difficult to contort my wrist to hold the controller the way I would a paintbrush or pencil. Oculus Touch’s neutral hand pose, combined with where the sculpting tool was projected out from the virtual controllers, felt more akin to holding a sculpting tool the way I would in the real world. Ergonomics will matter a lot in the use of these VR art programs.

There’s a lot of potential for Medium to improve, too. Right now, you’re only given two primitives to work with to build and remove shapes–a sphere and a cube. In that way, it was like being given the limited drawing palette of MS Paint, when you can see the potential for a more robust set of tools. We’ll get to the Photoshop of VR, eventually. And none of these tools will matter much if there’s no way to export or share your creations and integrate them into creative workflows. Like Social VR, art creation in VR should have some purpose beyond being a cool interactive demo.

Third-Person Camera in VR is Really Tricky

The last few takeaways I’ll run through pretty quickly. From playing some more third-person VR games, I found that the third-person camera is still something developers need to figure out. An example: in the VR platformer Lucky’s Tale, you use head-tracking to explore a virtual level, but your perspective is also controlled by the movement and progress of your tiny character. Move further into the level, and the camera moves along a track to guide you there.

That alone wasn’t a problem, but I found it very difficult to backtrack in the game–the camera won’t swing around when you’re going against level progress. Because your head dictates rotational camera movements, you’re supposed to spin around in your chair and look back, but that’s not going to be easy when playing this kind of game on a couch or at your desk (especially with the headset cord tethered to your PC). In that situation, I really wanted the right analog stick to be activated to let me swing the camera around, as I would in a traditional non-VR platformer. I don’t know how manual camera controls with a gamepad or tracked controller can complement head-tracking in these kinds of third-person VR games, but the current implementation of automatic camera movement won’t always work.
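One possible hybrid, sketched here as illustrative Python: keep head-tracking 1:1 (smooth stick rotation risks the acceleration nausea mentioned earlier) and let the right stick apply discrete snap turns to the camera rig. The angle and deadzone values are guesses, not anything the Lucky’s Tale developers have shipped:

```python
SNAP_ANGLE = 30.0      # degrees per snap turn; an illustrative comfort value
STICK_DEADZONE = 0.7   # require a deliberate flick before turning

class ThirdPersonCameraRig:
    """Blends stick-driven snap turns with untouched 1:1 head-tracking."""

    def __init__(self):
        self.rig_yaw = 0.0          # rotation contributed by the stick
        self.stick_released = True  # allow one snap per flick of the stick

    def update(self, stick_x, hmd_yaw):
        if abs(stick_x) < STICK_DEADZONE:
            self.stick_released = True
        elif self.stick_released:
            # Instant, discrete turns avoid the smooth acceleration
            # that makes people dizzy in VR.
            self.rig_yaw += SNAP_ANGLE if stick_x > 0 else -SNAP_ANGLE
            self.stick_released = False
        # Final camera yaw: stick-rotated rig plus 1:1 head movement.
        return (self.rig_yaw + hmd_yaw) % 360.0
```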

Animations Add Fidelity

We know from previous technical talks that virtual reality will require devs to pay more attention to art asset fidelity. In VR, users can scrutinize character and world models and textures more closely than on a monitor, and graphical quirks that may be acceptable on a 2D screen, like aliasing, can break VR immersion. One thing we learned is that a simple way to help compensate for the demands of VR graphics is to add more animation to games. Animations go a long way toward breathing life into the objects that inhabit a VR space, and they don’t require resource-intensive, high-quality art assets. I noticed this in games like AirMech VR and Lucky’s Tale–passive animations all around my peripheral vision helped keep the virtual world feeling “alive”.

Storytelling in VR is Theater

Finally, the new Oculus Story Studio short “Henry” was a beautiful example of the potential of storytelling in virtual reality. There are a ton of lessons to learn from Henry, some of which are shared in the Future of Storytelling video. The idea that there is no fourth wall in VR short films is especially resonant; the fact that you inhabit the same space as the characters you’re watching changes not only how you connect with the story, but how the characters have to treat you as a viewer. You’re not quite a participant (yet), but you don’t have the same distance and disconnect you have when watching a film on a television or in a cinema.

It’s actually quite a lot like theater, both in the performance sense and in the physical stage setting. Watching Henry felt like watching a short play while also being on stage. There are none of the standard cinematic tricks you would do with a camera, like zooms or focal-length shifts. Your eyes and natural field of view are the camera, and that kind of storytelling is the domain of theater. I think the stage and theater analogy will do a lot to inform the first efforts of VR filmmaking.

Via Tested
