Get started with the new web standard for developing mixed-reality applications in Microsoft Edge on Windows.

Call it mixed reality, call it Mesh, call it a metaverse. Augmented reality (AR) and virtual reality (VR) technologies are coming back as the foundation of a new generation of interactive experiences. There's a lot of work to be done to add these experiences to our applications, and even more work needed to move mixed-reality development away from gaming-derived technologies like Unity. How can we make it simple to build mixed reality and deliver it to as many devices as possible?

One answer is to go back to the browser, where many of the early VR experiments began, building mixed reality into browser APIs and working with existing browser 3D technologies like WebGL. That's where work from web standards bodies is essential, designing APIs that work in all modern browsers and on as many devices as possible.

Introducing WebXR

Bringing mixed reality to the web can build on earlier standards, as the web is always evolving, with new technologies replacing older ones and APIs migrating in and out of browsers. One relevant newer standard that's moving into the latest Edge builds is WebXR, the successor to WebVR, adding support for mixed-reality devices. Technically WebXR is still a draft specification, but the way web standards now evolve requires working implementations: it's publicly available in Chromium, under development behind browser flags in Firefox, and in development for Apple's WebKit.

It's important to think of WebXR as an evolution of WebVR rather than an outright replacement, adding support for augmented reality as well as virtual reality. A WebXR application running on a PC with a VR headset should also work on HoloLens 2 as mixed reality or on a Surface Duo as an augmented-reality app. It's a way to bring the tools and techniques used for VR to cross-platform, augmented-reality applications, porting models from isolated environments so they can be displayed in blended views.

Viewing WebXR in Edge

Microsoft has had experimental support for WebXR in its Chromium-based Edge for some time now, and most of that support has moved out from behind browser flags. Most of the WebXR APIs are supported, with very few remaining to be implemented, so code you write and deploy now will work in current releases as well as in future versions of Edge.
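Before you reach for a headset or an emulator, you can check what the browser itself reports. Here's a minimal feature-detection sketch using the standard navigator.xr API; run it from a page script or the DevTools console:

```javascript
// Query WebXR session support in the current browser.
// isSessionSupported() is part of the standard WebXR Device API.
if (navigator.xr) {
  for (const mode of ["inline", "immersive-vr", "immersive-ar"]) {
    navigator.xr
      .isSessionSupported(mode)
      .then((supported) =>
        console.log(`${mode}: ${supported ? "supported" : "not supported"}`)
      );
  }
} else {
  console.log("WebXR is not available in this browser");
}
```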
While it's worth having a headset or HoloLens to test your applications, it's not necessary, as most WebXR experiences will render in Edge. For more complex interactions, for example with controllers, you can download and install Mozilla's WebXR API emulator extension from Google's Chrome extension store. This adds a new developer tool to the F12 tools, with emulation for most common VR headsets, including the HTC Vive and the Oculus Quest. With the emulator installed, WebXR content will render on your PC using the profile for your chosen device, with an option to view stereo renderings of the content to test 3D. Open the extension's WebXR tab in Edge DevTools to control your view, using your mouse to adjust the position of the simulated headset in a 3D space. You can then move controllers in the same space, clicking and squeezing controls from your dev tools. If you don't have a headset, the Mozilla WebXR extension is an essential tool for testing and exploring WebXR content and applications.

The standard DevTools features can debug the WebXR APIs and your application JavaScript, but having access to device emulators and controls can help you understand the user experience. The extension is even useful if you do have a headset, as you don't have to switch away from your screen and keyboard to test code.

Authoring WebXR

Most of the main 3D web frameworks have begun adding support for WebXR, including the Microsoft-sponsored Babylon.js; alternatives come from three.js and the VR- and AR-focused A-Frame library. You have the option of developing for WebXR in WebGL directly, though in practice it's a lot easier to use a higher-level framework.

Using a tool like Babylon.js makes a lot of sense, especially if you've already been using it to build 3D applications, as you can reuse existing scenes and assets in your WebXR applications. The same is true for moving from WebVR to WebXR, with only a single change needed to migrate a scene. The biggest change will be implementing the new controller APIs, as controllers are no longer treated as game devices. Instead, you now have a pointer device, much like a mouse, which exposes device-specific features that you can query from your code.

Getting started with WebXR in your Babylon.js application is as easy as instantiating a WebXR experience helper. The helper is created asynchronously for your scene and throws an exception if WebXR isn't supported, which lets you load an appropriate polyfill or back out of the WebXR experience. You can then create a WebXR session, choosing a mode: inline, immersive-vr, or immersive-ar. Which you choose will depend on the device: for example, inline on a PC, immersive-vr in a headset, and immersive-ar on a phone with augmented-reality support. Finally, you choose the reference space type for your model; in most mixed-reality cases that's going to be local-floor.

Making a scene with WebXR

The result is a familiar pattern. You start by creating a scene in Babylon.js, adding a camera, controls, lighting, and objects, before starting a helper and adding a set of floor meshes to create a ground. If you're not using the Babylon.js FreeCamera object, you can create a WebXR camera object that uses real-world position information to set its location. You can set the position based on the reference space for your device, either manually updating the position between frames or using WebXR's teleportation function to support movement.

WebXR is designed to work closely with your hardware, getting much of its environmental information from your device. This allows it to be flexible: if you're using HoloLens, it can take advantage of the device's mapping tools, and if you have a smartphone with a 3D camera or lidar, it can use that to deliver location information. As you move, the camera will track your movements using sensor data. Exiting a session is simply a matter of calling an exit function and returning to your browser.
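Putting those pieces together, here's a minimal sketch of that pattern, assuming a recent Babylon.js (4.x or later) is loaded on a page with a canvas element with the id renderCanvas. The createDefaultXRExperienceAsync call wraps the WebXR experience helper described above, rejecting if WebXR isn't available, and its floorMeshes option marks the ground for teleportation:

```javascript
// A minimal WebXR-enabled Babylon.js scene.
// Assumes Babylon.js is loaded (script tag or bundler) and the page
// contains <canvas id="renderCanvas">.
const canvas = document.getElementById("renderCanvas");
const engine = new BABYLON.Engine(canvas, true);

async function createXRScene() {
  const scene = new BABYLON.Scene(engine);

  // A conventional camera and light for the inline (non-immersive) view.
  const camera = new BABYLON.FreeCamera(
    "camera", new BABYLON.Vector3(0, 1.6, -3), scene);
  camera.attachControl(canvas, true);
  new BABYLON.HemisphericLight(
    "light", new BABYLON.Vector3(0, 1, 0), scene);

  // Something to look at, plus a ground mesh to act as the floor.
  const sphere = BABYLON.MeshBuilder.CreateSphere(
    "sphere", { diameter: 1 }, scene);
  sphere.position.y = 1;
  const ground = BABYLON.MeshBuilder.CreateGround(
    "ground", { width: 10, height: 10 }, scene);

  // Sets up the WebXR experience helper: it rejects if WebXR isn't
  // supported, adds a WebXRCamera, and (given floorMeshes) enables
  // teleportation across the ground.
  const xr = await scene.createDefaultXRExperienceAsync({
    floorMeshes: [ground],
  });

  return scene;
}

createXRScene().then((scene) => {
  engine.runRenderLoop(() => scene.render());
});
```

By default the helper targets an immersive-vr session with a local-floor reference space; to pick a different mode you can call the underlying helper directly, for example xr.baseExperience.enterXRAsync("immersive-ar", "local-floor").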
The aim of the WebXR development team is to make adding mixed reality to a web app a matter of working with familiar tools and methods. Using the same libraries and frameworks that you've used to build 3D web applications in the past makes it simple to migrate to the new platform, bringing existing models and interaction techniques with you. Bringing mixed reality to the web gives us the common platform that's needed to deliver it to as wide an audience as possible.

What we need next is for browsers like Edge to go a step further and start to use the sensor capabilities of Windows to let us use devices like Surface as WebXR viewers, viewing an augmented-reality world through our PCs' cameras. Until then, we'll have to use our PCs to develop those experiences, using our browsers to test the experiences we then deliver to headsets and phones.