Silicon Valley Virtual Reality Conference 2014

The SVVR conference began as a meetup for VR enthusiasts. Its first full two-day conference retained that intimate feeling while being professional, well organized, and a truly worthwhile experience. My motivation to attend was to immerse myself in the VR space and learn everything I could about the industry as quickly as possible. Special thanks to Karl Krantz, Cymatic Bruce, and everyone involved with making this a successful event.

These are my highlights from the sessions and expo. Unfortunately I did not have time to try every demo and see every minute of every session, so undoubtedly I missed some compelling work. Apologies to those with misspelled names or inaccurate details; please send corrections as needed. Most sessions were recorded, and video can be viewed on Ben Lang's great blog RoadToVR.

The Industry

There is a collective sense that VR is a broken promise from the 1990s. An endless stream of failed products such as the Virtual i/o i-Glasses, Sega VR, and Nintendo's Virtual Boy delivered hype and not much more. Most people (including me) gave up on VR, and for startups seeking investment, having the word VR in your pitch was the kiss of death. The technology necessary to trick the user's brain into perceiving that it is in another place just wasn't practical yet.

While there was very little activity in VR over the last decade, developments in the last year have spurred incredible innovation in a number of areas. This progress was driven largely by the Oculus Rift, whose success was really a string of successes. The founder's initial insight was that low-cost mobile screens and sensors could be mashed together to create an immersive 3D experience. Venture capital funding and a successful Kickstarter campaign brought the cash needed to continue refining the technology, and Oculus built momentum with a marketing campaign that steadily grew public awareness and enthusiasm. Facebook's subsequent $2 billion investment broke through the tech geek circles and grabbed the attention of the entire world. At the same time, Sony released information on its own VR headset, codenamed Project Morpheus. Competition from a second large company brought a new level of legitimacy that further solidified Oculus in the hearts and minds of enthusiasts, investors, and the public. While Oculus was clearly the star of the show, the level of innovation in related peripherals and software experiences was impressive.

Attendees

There was a lot more enthusiasm for VR than I expected. Over 500 people attended. The presenters and attendees were all very open, welcoming, friendly, and smart. Knowledge and creativity were shared unfettered. Even though most folks were looking for business angles, there was a collective sense that it will take a real community to bootstrap this industry and get it off the ground. While games were a significant focus, there is a ton of interest in non-gaming applications. Medical applications to treat lazy eye, PTSD, depression, and other conditions were presented. The opportunities for education and training seem limitless. The military and companies such as Boeing have been using these techniques for years, and the advent of good consumer-grade hardware will broadly expand the market. A wide variety of new social experiences will also be enabled, as was eloquently expressed in Philip Rosedale's keynote address.

Immersion and Presence

The elusive goal for VR is presence, where the user has to work hard to remember that they are not in a virtual world. This is a level up from immersion, which is when the user sometimes forgets that what they are looking at isn't real. Immersion is what is offered by consumer-level developer hardware today, like the Oculus DK1 and DK2. These are the best definitions of immersion and presence that I heard, but the panel experts and attendees frequently debated them.

There are no exact formulas for immersion and presence.

Yes, Michael Abrash has defined the technical specifications for visual experiences and head tracking that are needed to achieve presence, but there is a lot more to it. Achieving presence is about fooling the brain's input circuitry, and there are numerous contributing factors, from spatialized audio, tactile feedback, and sensorimotor input to production quality, narrative technique, and social interaction. These factors are additive toward the threshold of presence. For instance, a few expo demos used controller sticks, one held in each hand, that drive your character's hands and arms in VR. The experience of looking down at your virtual hands and reaching out to grab objects was enough of a distraction that I forgot where I was for a while. Similarly, the production quality of Unello Design's Opera Nova was so compelling that I didn't want to leave the VR world when the demo was over, even though it was just a rail ride (think Disney ride) with no user interaction besides head tracking.

The experts all seem to agree that the bar for presence will continue to rise. What is completely satisfactory today will be ho-hum tomorrow. We have seen this with the video gaming industry and we will see it in VR as well.

Head Mounted Displays

Oculus DK2

The current publicly available version of Oculus is Dev Kit 1. Oculus was showing off their Dev Kit 2 in a comfortable living room arrangement with sofa chairs and side tables. Inside the demo, two players sat in a virtual version of the same living room, controlling two animated medieval characters that could swing swords, throw fireballs, and jump. The avatars could jump and climb on the furniture to seek advantage while attacking and dodging their opponent. The two significant advancements from DK1 to DK2 are improved displays (better resolution, less screen door effect, higher refresh rate, and faster pixel response) and positional head tracking that allows you to peer over tables and around corners. Head tracking may sound like a mundane detail, but it significantly added to the immersion factor of this demo. Indeed, I lost the first round because I was gaping over the side of my chair, but ultimately my competitiveness kicked in and I pulled through for a 3:2 victory.

Sony's Morpheus

These demos had long lines most of the time, and when it was my turn I understood why. In the demo the user plays a knight fighting an armored scarecrow with fists, a sword, and a crossbow. The resolution and head tracking were great, on par with DK2. A Move controller in each hand created a natural experience for grabbing and punching, and a mostly natural experience for sword fighting and shooting a crossbow. Accuracy of the crossbow was a little off; I had to aim up and to the left to hit the targets, and I am not sure why. A nice feature of the Morpheus is that you can output one eye of the display to a TV or monitor in full resolution. In contrast, Oculus's monitor output looks like you are peering through binoculars; I think this has something to do with the stage at which the distortion is applied to prepare the image for the VR optics. Overall Sony presented a tight, impressive demo that ended with a dragon flying in and eating you. Thank you for playing.

Durovis's Dive

A head mounted display for your mobile phone: you strap your own iOS or Galaxy phone into the headset and view it through embedded optics, at a price point of 70 euros. They provide Unity and Unreal plugins so that your smartphone can render a split-screen stereoscopic display. Head tracking works, but not with the same fidelity as Oculus and Morpheus. Their demo was a rollercoaster that was somewhat compelling, though hard to get excited about compared with Oculus and Sony's offerings. I could imagine using the Dive for prototyping or development, and perhaps as an additional platform to expand your customer base, but ultimately it felt like a toy.
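The core of a split-screen stereoscopic renderer like the one those plugins provide is simple: render the scene twice, once per eye, with the camera offset by half the interpupillary distance (IPD), and draw each image to one half of the phone screen. A minimal sketch of the idea; the 0.064 m IPD value and the function names are my illustrative assumptions, not Durovis specifics:

```python
# Split-screen stereo: one head pose becomes two eye poses, and each
# eye renders into one half of the landscape phone screen.

IPD = 0.064  # meters; a commonly cited average, not a Durovis spec

def eye_positions(head_pos, right_axis):
    """head_pos: (x, y, z) camera position; right_axis: unit vector along
    the camera's local right. Returns (left_eye, right_eye) positions,
    each shifted half the IPD away from the head position."""
    half = IPD / 2.0
    left = tuple(p - half * r for p, r in zip(head_pos, right_axis))
    right = tuple(p + half * r for p, r in zip(head_pos, right_axis))
    return left, right

def eye_viewports(screen_w, screen_h):
    """Viewport rectangles (x, y, w, h) for the left and right screen halves."""
    half_w = screen_w // 2
    return (0, 0, half_w, screen_h), (half_w, 0, half_w, screen_h)

# Example: head at standing eye height, looking down -z, +x to the right.
left_eye, right_eye = eye_positions((0.0, 1.7, 0.0), (1.0, 0.0, 0.0))
left_vp, right_vp = eye_viewports(1920, 1080)
```

A real plugin would also apply per-eye projection matrices and the lens distortion correction, but the two-camera, two-viewport split above is the essential structure.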

Input Devices

While there is much buzz around VR headsets, there remain a number of unsolved problems around user input. Our three most prevalent input devices today (keyboards, mice, and touch screens) all have problems in VR. The user cannot see the keyboard with a headset on. Gamers may be fine with WASD controls, but those are not accessible to a mass audience. While the mouse can be used by most people in VR, it is a 2D input device that is unwieldy for things like entering login info or a URL. Touchscreens are similar, though this appears to be mostly unexplored territory.

Sixense’s STEM controller system

Arguably the most promising new input device, even though it has not actually shipped. It consists of two ski pole-like grips with an array of trigger and thumb buttons. Their demo allows you to see your hands, pick up guns from a table, and shoot targets. The demo was impressive and I lost myself in the experience. More interesting than the VR demo was their simple SDK, which provides Unity and Unreal Engine support via drag and drop. STEM also includes 3 additional sensors providing location and orientation tracking that can be placed anywhere on the body. Sixense has a one-SDK-to-rule-them-all strategy, stating they will expand their SDK to support all other VR input devices as they become available.

Matterport

An end-to-end solution for creating 3D geometry of physical spaces. It consists of a single camera unit composed of three PrimeSense-like sensors capturing RGB and depth information. The camera sits on a tripod and has a built-in motor that rotates it to capture 360 degrees. The camera costs $4,500 plus roughly $100 per month. After processing is complete (30 minutes), you are presented with an immersive 3D scene to walk through. They are working on a new interface that exports to WebGL. The crossover to VR is limited compared to real estate, architecture, and other existing industries, but this could be an example of an open virtual world that works in a browser while the experience scales up with VR hardware. Either way the results are very impressive; nothing else is as fast and easy right now.

VR Sandbox

Graeme Yeaman showed an engaging technology demo with two aspects. First was stereo video captured using a pair of wide-angle video cameras mounted to the front of an Oculus. Graeme's idea is that there will be a market for stereoscopic video soon, but 360-degree video supporting head tracking is too divergent from how movies are produced today for it to take off quickly. This first demo was just video playback, yet even without head tracking the results were compelling. The second demo was live video captured from the cameras plus a depth camera. Superimposed on the live video feed were animated and video-captured characters. The demo was thought provoking, though limited by the quality of the cameras. Graeme is working on a new version using higher quality video and depth cameras for better occlusion culling and object placement.

Software

High Fidelity

High Fidelity is a new company and product started by Philip Rosedale, one of the creators of Second Life. The vision for High Fidelity is an open source, open protocol platform for connecting a worldwide web-like array of servers together to form a cohesive universe of 3D spaces. Users can author behavior scripts in JavaScript, and the company is building an ecosystem of independent developers who create scripts for the world via worklist.net. There are a lot of open questions around security, trust, and authenticity of the user's experience, but they are just starting out and have a lot of good ideas so far.

Unello Design

Opera Nova (think VR Fantasia), Eden River, the Cave, and the City: these were some of the more stunning demos at SVVR. What I played were all Disney-style rail rides, though Eden River has interactivity and is available on Steam. The immersion factor of these demos was stunning due to the production value of the art. Lighting and special effects were top notch, and captured video of creepy characters meshed well with the environment. According to the person running the demo, "98%" of the work for these projects was done by one person using the Unreal Engine. Apparently these demos will be released publicly soon. I noticed that they were using a high resolution version of DK1, so they must have some relationship with Oculus.

WorldViz's Vizard Virtual Reality Software Toolkit

This company provides a high-end toolkit for creating VR spaces and objects. They have been around for several years and have an impressive list of military, mechanical, and industrial design clients. The recent developments in VR are both a blessing and a curse, in that high quality VR equipment will soon be available to consumers but software platforms such as Unreal Engine and Unity could undercut their business. Still, it will be a long time before Unity can easily model a 747 engine or a human organ in the intricate detail that Vizard can, and they have plenty of opportunities to differentiate themselves in the meantime.

VR Chat

I didn't get a chance to try this, but several people said it was a compelling demo. VR Chat is a chat room for people with Oculus headsets. Our brains have a significant amount of circuitry devoted to social interaction, so it makes sense that interacting with other human-looking avatars could make for an immersive experience. Apparently they have a meetup in VR on Sundays; maybe I'll see you there?

Sessions

Philip Rosedale, High Fidelity, formerly of Second Life

Philip presented his vision for a new take on virtual worlds called High Fidelity. His enthusiasm and emotional commitment to VR were apparent in his presentation. Philip suggested that a lot of what our brains do is guess what is about to happen next and then compare that to what actually happened. The sensory experience of things we do over and over creates expectations for the future, and VR must live up to these expectations in some way. Philip said the face-to-face experience of talking to someone in VR is amazing and he wants to perfect it. He claimed that cell phone transmission latency is terrible, on the order of 433 ms, but people put up with it. In his testing, latency of 100 ms or better for verbal communication is necessary for good social interaction in VR. Using an edited demo clip from Star Wars he showed the difference between 100 ms and 433 ms latency. I believe him that 100 ms latency is better (having a conversation on a cell phone drives me nuts), but the demo clip didn't demonstrate it very well for me.
Philip observed that video game technology is roughly 10 years behind Hollywood in terms of realism. Since Avatar was released in 2009 he suggested that we are only 6 years away from having that level of realism available in consumer VR. This will make VR extremely compelling and immersive. I love that he is thinking that far out and I think he is right.
Philip demoed High Fidelity with a bunch of different input devices, including a drum set, the PrioVR motion capture suit, a PrimeSense 3D camera, and others. His demo also integrated software from Faceshift that moves your avatar's lips in sync with your real lips while speaking. Pretty amazing stuff. I cover the specifics of High Fidelity in more detail above. If you want to pitch in and help make High Fidelity real, check out WorkList.net.

Peter Giokaris, Developing for the next generation of Oculus Hardware and Software

This was a good high-level technical talk on using the recently available Oculus 0.3 SDK. He covered the basics, Timewarp, etc., which I will not go into here. Half the time was spent on audio, which is notable because the Oculus DK2 has no audio support right now. Peter did not make any product announcements, instead focusing his talk on spatial audio theory.

Presence is VR magic. It is the instinctive feeling of being somewhere real. Presence is distinct from immersion.

Immersion is where you can forget you are not inside the virtual world.

Presence is where you have to work hard to remember you are not.

Presence engages instincts, not just intellect. The user ducks to avoid objects and is wary of high ledges.

"In a study carried out by Lucasfilm whilst testing the THX standards, it became apparent that decent sound can fool the brain into thinking the picture is better. In the study, one group of people was shown a movie with average sound, then a second group was shown the same movie with better sound. With better sound, people thought the picture was sharper."

Peter's passion for great audio was obvious, and he was frustrated by the lack of progress in audio technologies over the past several years, which he attributed to consoles focusing on 5.1 and a legal dispute between Creative Labs (EAX) and Aureal Semiconductor (A3D). The KEMAR dummy head debuted in 1972 and binaural audio dates back to 1981. These were used (by MIT, I believe) to create a model, the HRTF or Head Related Transfer Function. He referenced a resource called Ultimate Spatial Audio Index 12/17/96.

The three main components of spatialized audio in this model are:

  • Direct (dry) sound. These are the first sound waves to hit the ears. May be occluded/obstructed. Likely the most important audio cue for localization.

  • Early reflections. The second set of waves to hit the ears, arriving later than the direct sound (longer path to the ears). Reflections off materials "color" the sound and further help localize it in the environment.

  • Reverberation. The final set of waves to hit the ears, a diffuse element of the sound. Overall decay and density help the listener associate environmental properties (canyons, concrete walls) with audio components.

Each ear hears sound a little differently, and each person has different HRTF coefficients based on the size of their head, hair, etc. The KEMAR measurements provide a model to approximate them. From my rough notes:

h(t): delay amount
x(t): frequency content
For any given location around the head, a set of h and x values exists.

There are a few commercial engines for spatialized audio, such as FMOD, Wwise, Miles Sound System, RealSpace 3D Audio, 3DCeption, and AstoundSound.

Peter Giokaris seemed like a smart guy. He has a blog on Gamasutra.

Indie Game Devs Panel

This was a panel of several game developers who have released projects for Oculus. From my rough notes, here are some interesting thoughts.

Aaron Lemke, Unello Design

Game: Eden River, Opera Nova (forthcoming)
You can make full games on a laptop nowadays: while traveling, at a coffee shop, wherever. Skrillex made his last 2 albums on tour. In a VR game you don't necessarily need goals; when goals are apparent, people want to complete them quickly, while having no goals allows more discovery and exploration. Maybe making mobile apps is a good way to get started in VR now (see the Dive). Audio is very important. Some companies are doing basic binaural 3D audio, but no one is thinking about audio reflections. UE4 is awesome because it bounces light around automatically, giving global illumination; we need global illumination for audio. Try the VR Haircut!

Denny Unger, Cloudhead Games

Game: The Gallery Six Elements
Rotational velocities are one of the most difficult things to get right in VR. In making games, the steepest learning curve is relationships with distribution companies. Steam is probably the best distribution platform for PC games; reach out and establish a relationship with them, as they are hungry to build their VR catalog.

James Andrew, Pixel Router

Game: Rift Wars
In VR, scale matters. The Rift sets the scale to that of a human being, so you need to ensure the world is scaled appropriately.

Blair Renaud, Iris Productions

Game: Technolust
His game employs mouse input only, because everyone knows how to use a mouse without looking at it; the keyboard is not available in VR because most people need to be able to see it. You need to playtest your game with non-gamers to refine the interaction model and make sure it is accessible. Cyberpunk adventure games are great, but something like Candy Crush VR will be the addictive game with huge success. Technolust was greenlit on Steam in 31 hours; Steam is itching for VR content.

Justin Moravetz

Game: Proton Pulse
He recently quit his job to work on VR games full time, and his philosophy is to make as much cool stuff as he can. Justin was adamant about not having a controller in his game. Instead, the user interacts with the world by looking: holding your gaze in place activates a menu. He also designed a feature where tapping the side of the Oculus re-centers the view. Proton Pulse is a love letter to the Amiga. Interesting question: how do you convey your VR game in 2D for marketing purposes?
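The look-to-activate scheme Justin described is essentially a dwell timer: track what the gaze ray is hitting, and fire when it rests on the same target long enough. A minimal sketch of that logic; the class shape and the 1.5 second threshold are my illustrative guesses, not details from Proton Pulse:

```python
class GazeDwellSelector:
    """Gaze-dwell selection: an item activates after the user's gaze
    rests on it continuously for a dwell period, no controller needed."""

    def __init__(self, dwell_seconds=1.5):
        self.dwell_seconds = dwell_seconds
        self.current_target = None
        self.elapsed = 0.0

    def update(self, gazed_target, dt):
        """Call once per frame with whatever the gaze ray hit (or None)
        and the frame's delta time. Returns the activated target, or None."""
        if gazed_target != self.current_target:
            self.current_target = gazed_target  # gaze moved: restart the timer
            self.elapsed = 0.0
            return None
        if gazed_target is None:
            return None
        self.elapsed += dt
        if self.elapsed >= self.dwell_seconds:
            self.elapsed = 0.0                  # reset so it fires only once
            return gazed_target
        return None
```

In an engine, `gazed_target` would come from a raycast out of the head-tracked camera each frame; the nice property of the dwell timer is that a passing glance never triggers anything, only a deliberate stare.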

Sean Edwards, Lucid VR/Shovsoft

Game: Lunar Flight
Getting the Unity editor to render on a secondary display is tricky, but Sean has a script to do this. After the conference I caught up with him and he told me where to find it (thanks Sean!). Pop-up menus are tricky, and the D-pad is bad for non-gamers. Colliders may need to be much larger in VR so that people can't pass through objects. Sean came all the way from Australia for the conference!

60 Second Pitches

The first hour of the conference was a set of fast presentations. Anyone who wanted to was given 60 seconds to stand in front of the crowd of 500 people and pitch their product, booth, idea, inspiration, etc. Given my inexperience in the space, it was hard to understand what many of the presenters were pitching, but at least this gave me an idea of who I wanted to follow up with. A few folks did really well. At the end, the moderator Cymatic Bruce gave the remaining presenters 10 seconds each, which was humorous at worst. This was a great way to start the conference, and the audience was overwhelmingly accepting of everyone. I won't include my raw notes here, but if you are interested, shoot me an email.

There was a lot more, and I could easily write another 7 pages. I hope this summary of SVVR helps you catch up on the VR space as much as it helped me.
