Getting started with the WebVR API

Developers at Mozilla and Google have begun adding a WebVR API to the Firefox and Chrome browsers. Learn how to leverage this nascent technology to build applications and games suitable for viewing with VR head-mounted displays such as the Oculus Rift.

Notes

  • Chrome can render to the HMD without mirroring or dragging the browser window to the HMD's display before going full screen. Firefox can't yet, but Vlad is working on it.
  • Because this technology is under active development, the API will certainly change. I'll try to keep this tutorial and three-vr-renderer updated to match, but keep that in mind.

Prerequisites

To follow along, you'll need:

  • A WebVR-enabled build of Firefox or Chrome (the API hasn't reached the stable releases yet).
  • An Oculus Rift headset (I'm working with a DK1).
  • three.js and VRRenderer.js, both of which the boilerplate below pulls in.

Boilerplate

Here’s the HTML and CSS boilerplate we’ll use to get started.

index.html

<!DOCTYPE html>
<html>
    <head>
        <link rel="stylesheet" type="text/css" href="style.css">
        <script src="three.js"></script>
        <script src="VRRenderer.js"></script>
        <script src="main.js"></script>
    </head>
    <body>
        <canvas id="render-canvas"></canvas>
        <div style="position: fixed; top: 8px; left: 8px; color: white">Hit the F key to engage VR rendering</div>
    </body>
</html>

Here we've imported the stylesheet we'll write in a moment, three.js, the VRRenderer, and main.js, the script we'll write to render our scene. We've also created the canvas element that we'll render to.

style.css

html, body {
    margin: 0;
    height: 100%;
}

#render-canvas {
    width: 100%;
    height: 100%;
}

Here we've zeroed out the body margin and made the rendering canvas fill the page. Note the height: 100% on html and body; without it, the canvas's percentage height has nothing to resolve against.

Onwards to the Metaverse!

Creating the HMD and sensor objects

Great, now that we’re done with the boring stuff, let’s dive into creating our HMD and Sensor objects. Let’s wait for the document to load, and then call navigator.mozGetVRDevices (Firefox) or navigator.getVRDevices (Chrome) with a callback that will handle the discovered devices. Note that the Chrome version of this function returns a promise instead of taking a callback.

var camera, scene, mesh;
var renderCanvas, renderer, vrrenderer;
var vrHMD, vrHMDSensor;

window.addEventListener("load", function() {
    if (navigator.getVRDevices) {
        navigator.getVRDevices().then(vrDeviceCallback);
    } else if (navigator.mozGetVRDevices) {
        navigator.mozGetVRDevices(vrDeviceCallback);
    }
}, false);
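If you'd rather handle both browsers through a single code path, you can wrap the callback flavor in a promise. Here's a sketch; getVRDevicesPromise is a helper name I've made up for illustration, not part of the API:

```javascript
// Sketch: normalize both API flavors to a promise.
// getVRDevicesPromise is a made-up helper name, not part of the WebVR API.
function getVRDevicesPromise(nav) {
    if (nav.getVRDevices) {
        // Chrome: already returns a promise
        return nav.getVRDevices();
    } else if (nav.mozGetVRDevices) {
        // Firefox: callback-based, so wrap it
        return new Promise(function(resolve) {
            nav.mozGetVRDevices(resolve);
        });
    }
    return Promise.reject(new Error("WebVR not supported"));
}
```

With that in place, both browsers reduce to getVRDevicesPromise(navigator).then(vrDeviceCallback).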

Now let’s define vrDeviceCallback. Inside, we’ll loop through the provided VR devices and locate the first HMD. Then, we’ll identify the sensor associated with it. Once we’ve got our hands on the devices, we’ll initialize the scene and renderer, and then kick off the rendering loop. Take a look:

function vrDeviceCallback(vrdevs) {
    for (var i = 0; i < vrdevs.length; ++i) {
        if (vrdevs[i] instanceof HMDVRDevice) {
            vrHMD = vrdevs[i];
            break;
        }
    }
    for (var i = 0; i < vrdevs.length; ++i) {
        if (vrdevs[i] instanceof PositionSensorVRDevice &&
            vrdevs[i].hardwareUnitId == vrHMD.hardwareUnitId) {
            vrHMDSensor = vrdevs[i];
            break;
        }
    }
    initScene();
    initRenderer();
    render();
}
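The pairing logic above can also be factored into a small pure function, which makes it easy to exercise outside the browser. In this sketch, pairVRDevices is a name of my own, and the isHMD/isSensor predicates stand in for the instanceof HMDVRDevice / PositionSensorVRDevice checks:

```javascript
// Sketch: find the first HMD and the sensor sharing its hardwareUnitId.
// isHMD/isSensor are injected predicates standing in for the
// instanceof checks, so the pairing logic itself stays testable.
function pairVRDevices(vrdevs, isHMD, isSensor) {
    var hmd = null;
    var sensor = null;
    for (var i = 0; i < vrdevs.length; ++i) {
        if (isHMD(vrdevs[i])) {
            hmd = vrdevs[i];
            break;
        }
    }
    for (var j = 0; hmd && j < vrdevs.length; ++j) {
        if (isSensor(vrdevs[j]) &&
            vrdevs[j].hardwareUnitId === hmd.hardwareUnitId) {
            sensor = vrdevs[j];
            break;
        }
    }
    return { hmd: hmd, sensor: sensor };
}
```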

Scene and renderer initialization

Alright, let's look at our scene initialization. First, a quick note about units: because we're rendering to a stereoscopic HMD, units carry real meaning beyond the relative sizes of objects in our scene. The viewer will be able to sense the sizes of objects in VR, so our units need to reflect that. I'm doing my work on an Oculus Rift headset, so I'm defining the following scene components in meters, the units the Oculus SDK uses.

First we'll create a camera with an FOV of 60 degrees, an aspect ratio of 1280/800, a near plane of 0.001 meters (1 millimeter), and a far plane of 10 meters. The FOV and aspect ratio we define here are just placeholders; the VRRenderer will set them appropriately for our HMD.

Then we’ll create a THREE.Scene object and place an icosahedron of radius 1 meter at the origin. This will place the surface of the icosahedron 1 meter from our face. Here’s what all that looks like:

function initScene() {
    camera = new THREE.PerspectiveCamera(60, 1280 / 800, 0.001, 10);
    camera.position.z = 2;
    scene = new THREE.Scene();
    var geometry = new THREE.IcosahedronGeometry(1, 1);
    var material = new THREE.MeshNormalMaterial();
    mesh = new THREE.Mesh(geometry, material);
    scene.add(mesh);
}

Now let’s initialize our WebGLRenderer and VRRenderer.

First we get a handle to the canvas we created in index.html and pass it to the WebGLRenderer constructor. We set the renderer's clear color to a neutral gray and its size to 1280x800 (the resolution of the DK1). Finally, we create the VRRenderer, passing in our WebGLRenderer and HMD objects.

function initRenderer() {
    renderCanvas = document.getElementById("render-canvas");
    renderer = new THREE.WebGLRenderer({
        canvas: renderCanvas,
    });
    renderer.setClearColor(0x555555);
    renderer.setSize(1280, 800, false);
    vrrenderer = new THREE.VRRenderer(renderer, vrHMD);
}

Rendering

Now that we have everything set up, let’s render our scene.

First we request an animation frame for our render function; this drives our rendering loop. Then we rotate our icosahedron by a small amount about the y-axis (the vertical axis, so it spins left to right). Next we get the state of our sensor and use the quaternion it provides to set the orientation of our camera. Finally, we use the VRRenderer we created to render the scene.

function render() {
    requestAnimationFrame(render);
    mesh.rotation.y += 0.01;
    var state = vrHMDSensor.getState();
    camera.quaternion.set(state.orientation.x, 
                          state.orientation.y, 
                          state.orientation.z, 
                          state.orientation.w);
    vrrenderer.render(scene, camera);
}
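The DK1 reports orientation only, but the sensor state object also exposes a position field on hardware that supports positional tracking. A small defensive sketch that guards both fields (applySensorState is a helper name I've made up; it isn't part of the API):

```javascript
// Sketch: apply the sensor state to the camera, guarding for hardware
// (like the DK1) that reports orientation but not position.
// applySensorState is a made-up helper name for this example.
function applySensorState(camera, state) {
    if (state.orientation) {
        camera.quaternion.set(state.orientation.x,
                              state.orientation.y,
                              state.orientation.z,
                              state.orientation.w);
    }
    if (state.position) {
        camera.position.set(state.position.x,
                            state.position.y,
                            state.position.z);
    }
}
```

Inside render, the quaternion lines then collapse to applySensorState(camera, vrHMDSensor.getState()).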

Going full screen

In order to use the distortion rendering provided by the WebVR API, we need to request full screen with our HMD object passed as the vrDisplay parameter. A full screen request must be triggered by a user action, so here we fire the request when the user hits the F key, calling the appropriate full screen function for the browser in use. Note that we're using webkitRequestFullscreen, not webkitRequestFullScreen (lowercase s vs. capital S); the capitalized version is deprecated and not compatible with VR in Chrome. In Firefox, conversely, mozRequestFullScreen with the capital S is the correct form.

window.addEventListener("keypress", function(e) {
    if (e.charCode == 'f'.charCodeAt(0)) {
        if (renderCanvas.mozRequestFullScreen) {
            renderCanvas.mozRequestFullScreen({
                vrDisplay: vrHMD
            });
        } else if (renderCanvas.webkitRequestFullscreen) {
            renderCanvas.webkitRequestFullscreen({
                vrDisplay: vrHMD,
            });
        }
    }
}, false);

Check out the demo

You should have something that looks like this: the icosahedron spinning against a gray background, rendered side by side, one view per eye.

Check out the live demo (and code) here. Note that for the sake of simplicity and brevity, I have excluded error handling from this demo, so if you don’t have the correct Firefox installed or a DK1 hooked up, it’s going to crash and burn.
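If you do want a minimal safety net, a guard at the end of the device discovery step is enough to fail gracefully instead of crashing. A sketch (checkVRDevices is a helper name of my own, and the warn callback is injected so you can route the message to alert, console.error, or anything else):

```javascript
// Sketch: bail out with a message instead of crashing when no
// HMD/sensor pair was found. Returns true if VR setup can proceed.
function checkVRDevices(hmd, sensor, warn) {
    if (!hmd || !sensor) {
        warn("No WebVR-capable HMD found. Check that you're running a " +
             "VR-enabled browser build and that your headset is connected.");
        return false;
    }
    return true;
}
```

Calling if (!checkVRDevices(vrHMD, vrHMDSensor, alert)) return; after the loops in vrDeviceCallback, before initScene, would stop the demo cleanly when no headset is present.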