Creating an immersive 360-degree video with WebVR

VR is becoming increasingly mainstream as the technology evolves, with an estimated 100M+ VR headsets sold to date.

Alongside this popularity boom, a new specification has been proposed to allow VR applications to run within web browsers, making VR experiences easily accessible to most users. WebGL, a JavaScript API for rendering 3D graphics in the browser, combined with the WebVR API, now allows developers to create engaging experiences with almost zero barrier to entry.

When it comes to working with WebVR, 360-degree videos make a good starting point: they are built on existing web browser technology and are far simpler to implement than full-scale VR apps. With this in mind, we challenged ourselves to develop our own immersive 360-degree video.

Using Google Street View's interior imagery as inspiration, we created an interactive walkthrough of our London office, complete with audio and interactive hotspots.

Our ambition is to build on the learnings from this experiment, so that when our clients come to us with a VR brief we can support them in the best way possible.

In this article we will take you through the steps we took to create our own 360-degree video experience and share our key learnings from the process.

Shooting for VR

From start to finish it took us six weeks to develop the prototype. This included two weeks of pre-shoot planning, a one-day shoot and a month of post production, tech discovery, bespoke interactivity development and implementation.

The shoot was done in 4K using a five-camera rig: effectively a set of GoPros on a tripod, pointing in different directions to cover the full 360-degree projection. Using more cameras means more stitching in post production, but it also produces a higher-quality output, which is why we decided on this approach.

We shot each room of the office, but decided against filming transitions between spaces for technical and usability reasons. Movement that is not initiated by the user can make viewers feel dizzy in VR, and syncing transitions with static shots adds complexity and time in post production.

Camera work for VR

When shooting for VR, camera position and movement deserve special attention. Always consider the final output and how your audience will be viewing it.

For example, it's vital to think about where the camera is positioned. Make sure the camera is at head height so the user does not feel like a giant or an ant (unless you're going for that effect, of course!).

Camera movement should always be user initiated. If you want the user to see something at a given time, do not force the video round to their current orientation; instead, give them hints to turn around if they're facing the wrong way. As mentioned, this is what makes filming transitions tricky. If you do decide to film them, there are several options for positioning the camera. You could have someone hold the camera as they walk through, but as this is a 360-degree video the operator would be visible; this can be countered by limiting the transitions to 270 degrees so the operator is cut out of frame. Alternatively, a dolly can be set up so the camera can move between rooms without an operator.

Managing people in shot

You will need to control everything in shot to capture the footage you need, particularly if the video will have a lot of interactivity. Plan each shot in advance, especially the movement of people. Two things to watch out for:

  • Make sure nobody walks too close to the camera, as this will often cause visible seams in the video which can be costly to fix.
  • During the shoot, no one should walk past areas that will become interactive elements. Any 3D objects added to the scene are drawn over the top of the video, so you need to make sure they are not obscured.

Building the prototype

The first version of our prototype was developed using rough cuts from the shoot. Rough cuts are exactly what they sound like: video footage stitched together roughly, with objects and people slightly ghosted at the seams. Fine stitching is a longer process and takes a few weeks to complete, but rough stitches are good enough to develop against and can be swapped out at the very end of the process.

Since we didn't film transitions during the shoot, we went with a basic fade when moving from one room to another. Interactive arrow pointers (added during post production) let the user click to navigate between spaces.
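The fade itself can be as simple as driving a full-screen overlay's opacity from elapsed time. A minimal sketch of that easing (a helper of our own devising, not from any library):

```javascript
// Map elapsed time to an overlay opacity for a room-to-room fade.
// fadingOut ramps 0 -> 1 (fade to black); fading back in is the mirror.
function fadeOpacity(elapsedMs, durationMs, fadingOut = true) {
  const t = Math.min(Math.max(elapsedMs / durationMs, 0), 1); // clamp to [0, 1]
  return fadingOut ? t : 1 - t;
}
```

On each animation frame you would set the overlay's opacity from this, swap the video source at full black, then play the mirror ramp.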

To build the 360-degree perspective, our first route was to look at pre-existing libraries specialising in 360-degree video. We had some specific requirements:

  • Load a 360-degree video and play it performantly.
  • Take care of the WebVR polyfills and of switching between VR and normal mode.
  • Allow access to the WebGL scene so we can control what is added and removed.
  • Ideally, be based on THREE.js, which is a great tool for general web-based 3D work thanks to its flexible API.
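For context, the core of a THREE.js-style 360-degree player is surprisingly small: map a video texture onto the inside of a large sphere and place the camera at its centre. A minimal sketch, assuming a THREE.js-like API (injected as a parameter here, so the wiring is our own rather than a documented library entry point):

```javascript
// Build the classic 360-degree video sphere: a VideoTexture mapped onto a
// sphere whose geometry is mirrored on X so its faces point inward, giving
// a camera at the origin a full equirectangular view of the video.
function makeVideoSphere(THREE, videoElement) {
  const geometry = new THREE.SphereGeometry(500, 60, 40);
  geometry.scale(-1, 1, 1); // flip so we look at the inside of the sphere
  const texture = new THREE.VideoTexture(videoElement);
  const material = new THREE.MeshBasicMaterial({ map: texture });
  return new THREE.Mesh(geometry, material);
}
```

You would add the returned mesh to your scene and leave the camera at the origin; only its rotation changes as the user looks around.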

A few options initially came up. A-Frame is one of the most popular WebVR frameworks, but after experimenting with its 360-degree video module it quickly became apparent that performance was poor for higher-resolution videos.

Another library that caught our interest was Google's VR View.

The code is open source, built on THREE.js and extremely performant, managing to play back 4K video without a problem on high-end Android devices. This setup was not ideal for our case, though: the library is designed to insert an iframe into your website, which prevented us from adding our own objects to the scene and made it very limited for our purposes.

To improve our asset library and turnaround time, we engineered our own library for 360-degree video in VR, basing it on the existing Google VR View code.

Mobile Treatment

There are specific considerations when it comes to developing WebVR for mobile. For example, mobile devices do not support playing videos automatically, so a splash screen is required at the beginning of the experience. A user-initiated click or touch event can then be used to start the video.
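Wiring that up takes only a few lines. A sketch, with the splash and video passed in as plain objects (the structure is our own convention; the only browser rule at play is that playback must start inside a user gesture):

```javascript
// Start playback from a user gesture, which mobile browsers require.
// `splash` is any element-like object; `video` is anything with play().
function wireSplashScreen(splash, video) {
  splash.addEventListener('click', () => {
    video.play();         // permitted here because we are inside a gesture
    splash.hidden = true; // reveal the experience underneath
  });
}
```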

Other watchouts include:

Size: some mobile devices have a maximum video size that they can render. From our experiments, the maximum for most devices is 3840 × 1920; anything higher and the video will refuse to load.
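In practice this means encoding more than one size and picking the largest that fits. A sketch of the selection logic (the shape of the source list is our own convention):

```javascript
// Largest safe video size we observed on most mobile devices.
const MAX_WIDTH = 3840;
const MAX_HEIGHT = 1920;

// Given sources ordered largest-first, pick the first that fits the ceiling.
function pickSource(sources) {
  return sources.find(s => s.width <= MAX_WIDTH && s.height <= MAX_HEIGHT) || null;
}
```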

Loading: iOS will refuse to load and render video to a texture across domains, even if CORS headers are set. Make sure your website and videos are served from the same domain.

Playback: iOS 10 introduced inline playback of videos, which makes 360-degree video possible. If the user is on a version of iOS lower than 10, or on Internet Explorer 11 and below, a fallback will be needed.
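Rather than sniffing user-agent strings, you can feature-detect inline playback on the video element itself. A sketch (`playsInline` is the standard reflected property on modern video elements; the helper name is ours):

```javascript
// True when the element supports inline (non-fullscreen) playback, which
// 360-degree video needs. Checks the standard property, plus the older
// WebKit presentation-mode API as a fallback signal.
function supportsInlinePlayback(videoEl) {
  return 'playsInline' in videoEl || 'webkitSupportsPresentationMode' in videoEl;
}
```

When this returns false, show the fallback experience instead of the WebGL player.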

Adding interactivity

Designing for VR is very different to building a UI for a website. While sites often rely on menus and buttons for navigation, these aren't always the best option in VR and can break immersion. Instead, interactivity is introduced through interactions with different areas of the room.

In our prototype, clicking on one of our campaign posters loads up information about that campaign. We're also exploring the use of spatial audio, using audio recorded at multiple points in each room during the shoot.

Hotspots: if you want to add a hotspot in an area, make sure nobody passes between the camera and that area during filming. Otherwise, when you add the hotspot it will be drawn in front of the person and won't look like part of the environment.
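Hotspots sit on the inside of the same sphere as the video, so the easiest way to author one is as a yaw/pitch pair that you convert into 3D coordinates. A sketch of that conversion (the angle convention and radius are our own choices):

```javascript
// Convert a hotspot's yaw/pitch (degrees) into a point just inside the
// video sphere, so the hotspot geometry draws on top of the footage.
function hotspotPosition(yawDeg, pitchDeg, radius = 450) {
  const yaw = (yawDeg * Math.PI) / 180;
  const pitch = (pitchDeg * Math.PI) / 180;
  return {
    x: radius * Math.cos(pitch) * Math.sin(yaw),
    y: radius * Math.sin(pitch),
    z: -radius * Math.cos(pitch) * Math.cos(yaw), // yaw 0 faces down -z
  };
}
```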

Hover states: for a website build, it's important to add hover states that give the user instant feedback that an area is interactive. This is no different in VR. When the user hovers over an interactive area, make sure to give some feedback: for example, change the size of the reticle or apply a subtle colour change.
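One cheap way to do this is to ease the reticle's scale toward a larger target while an interactive area is under it. A sketch (the sizes and easing factor are our own choices):

```javascript
// Ease the reticle scale toward its hover target a little each frame,
// so the feedback feels smooth rather than snapping.
function easeReticleScale(current, hovered, factor = 0.15) {
  const target = hovered ? 1.5 : 1.0; // grow over interactive areas
  return current + (target - current) * factor;
}
```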

Frame rates: for VR, it's important to maintain a consistent 60 frames per second. Any slower and the user will notice the latency, and the immersion will suffer. With this in mind, make sure that any 3D elements you draw on top of the scene do not slow the application down, and be clever about how geometry is loaded and removed.
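A simple rolling frame-time monitor makes regressions visible while you work. A sketch, fed with `requestAnimationFrame` timestamps (the helper is ours, not part of any API):

```javascript
// Returns a sampler: call it with each animation-frame timestamp (ms) and
// it reports the average FPS over the last `windowSize` frames.
function makeFpsMeter(windowSize = 60) {
  const deltas = [];
  let last = null;
  return function sample(nowMs) {
    if (last !== null) {
      deltas.push(nowMs - last);
      if (deltas.length > windowSize) deltas.shift(); // keep a rolling window
    }
    last = nowMs;
    const avg = deltas.reduce((a, b) => a + b, 0) / (deltas.length || 1);
    return avg ? 1000 / avg : 0;
  };
}
```

If the reported figure dips below 60 after adding a new 3D element, that element is your first suspect.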

Final results

While this is an early prototype that can benefit from further iteration, we have been able to achieve our core goal of using WebVR technology to develop a functional VR experience. Our 360 video experience allows the user to move through our entire office, hear the sounds of the studio and interact with specific objects.

Explore our London office here

Future of WebVR

In the past, VR has been seen by the public as something expensive and tied only to gaming or installations. However, with the release of lower-budget headsets such as Google Cardboard, VR is becoming increasingly accessible.

WebVR is a spec that is evolving at a fast pace, and I won't be surprised if we see WebVR supported natively across the main browsers within the coming years.

Tying the web to VR is likely to mean a very low barrier to entry for creating highly immersive experiences on the web, which is very exciting.