    // Called every time an XRSession requests that a new frame be drawn.
    function onXRFrame(t, frame) {
      const session = frame.session;
      session.requestAnimationFrame(onXRFrame);
      const baseLayer = session.renderState.baseLayer;
      const pose = frame.getViewerPose(xrRefSpace);

      if (pose) {
        gl.bindFramebuffer(gl.FRAMEBUFFER, session.renderState.baseLayer.framebuffer);

        // Clears the framebuffer.
        gl.clearColor(0, 0, 0, 0);
        gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);

        // Note: Two views will be returned from pose.views.
        for (const view of pose.views) {
          const viewport = baseLayer.getViewport(view);
          gl.viewport(viewport.x, viewport.y, viewport.width, viewport.height);

          const depthData = frame.getDepthInformation(view);
          if (depthData) {
            renderDepthInformationGPU(depthData, view, viewport);
          }
        }
      } else {
        console.error('Pose unavailable in the current frame!');
      }
    }
Key points about the code
- A valid pose must be returned to access depth data.
- `pose.views` returns a view for each eye, and the corresponding for loop runs twice.
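The sample delegates drawing to `renderDepthInformationGPU()`, whose implementation is omitted here. For comparison, when the session is created with the `'cpu-optimized'` usage preference, depth values can be read directly in JavaScript. A minimal sketch, assuming a CPU-optimized session; `sampleCenterDepth` is a hypothetical helper, not part of the WebXR API:

    // Reads the depth at the center of a view, in meters. Assumes the
    // session was created with the 'cpu-optimized' usage preference, so
    // frame.getDepthInformation(view) returns XRCPUDepthInformation.
    function sampleCenterDepth(frame, view) {
      const depthInfo = frame.getDepthInformation(view);
      if (!depthInfo) return null; // Depth may be unavailable for a frame.
      // Coordinates are normalized to [0, 1] within the view.
      return depthInfo.getDepthInMeters(0.5, 0.5);
    }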
Real-time depth visualization using the WebXR Depth Sensing API. Red indicates closer pixels; blue indicates farther pixels.
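Depth sensing has to be requested when the session is created. A minimal sketch of the feature request, assuming your content prefers the GPU path; the preferences your experience needs may differ:

    // Request an immersive AR session with depth sensing enabled. Call this
    // from an async function triggered by a user gesture. The preference
    // lists are advisory; check session.depthUsage and
    // session.depthDataFormat for what was actually granted.
    const session = await navigator.xr.requestSession('immersive-ar', {
      requiredFeatures: ['depth-sensing'],
      depthSensing: {
        usagePreference: ['gpu-optimized', 'cpu-optimized'],
        dataFormatPreference: ['luminance-alpha', 'float32'],
      },
    });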
Add hand interaction in WebXR
Adding hand interaction to your WebXR app enhances user engagement by enabling more intuitive and immersive experiences.
Hand input is the primary interaction mechanism on Android XR. Chrome for Android XR supports the Hand Input API as the default input. This API lets users interact with virtual objects naturally, using gestures and hand movements to grab, push, or manipulate elements in the scene.

Devices such as mobile phones or controller-centric XR devices may not yet support hand input, so you may need to update your WebXR code to support hand input properly. The Immersive VR Session with Hands sample demonstrates how to integrate hand tracking into your WebXR project.
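For example, a pinch can be detected by comparing the poses of two hand joints. A minimal sketch using Hand Input API joint poses, assuming the session was created with the `'hand-tracking'` feature; the 2 cm threshold is an illustrative tuning value, not a spec constant:

    // Returns true when the thumb tip and index fingertip are close enough
    // to count as a pinch. The threshold below is an assumed tuning value.
    const PINCH_THRESHOLD_METERS = 0.02;

    function isPinching(frame, inputSource, refSpace) {
      const hand = inputSource.hand;
      if (!hand) return false; // Not a hand input source.
      const thumb = frame.getJointPose(hand.get('thumb-tip'), refSpace);
      const index = frame.getJointPose(hand.get('index-finger-tip'), refSpace);
      if (!thumb || !index) return false; // Joints may be untracked.
      const dx = thumb.transform.position.x - index.transform.position.x;
      const dy = thumb.transform.position.y - index.transform.position.y;
      const dz = thumb.transform.position.z - index.transform.position.z;
      return Math.hypot(dx, dy, dz) < PINCH_THRESHOLD_METERS;
    }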
The following animation shows an example of combining pinching with the WebXR API: the user "wipes out" the surface of a virtual space to reveal a pass-through window into the real world.
WebXR demo of using a hand pinch to wipe away virtual screens and see through to physical reality.
Permissions in WebXR
When you use WebXR APIs that require permission, Chrome prompts the user to grant that permission to the site. All WebXR APIs require the 3D mapping and camera tracking permission. Access to tracked face, eye, and hand data is also protected by permissions. If all needed permissions are granted, calling `navigator.xr.requestSession('immersive-ar', options)` returns a valid WebXR session.

The permission preference chosen by the user persists per domain. Accessing a WebXR experience on a different domain causes Chrome to prompt for permissions again. If a WebXR experience requires multiple permissions, Chrome prompts for each permission one at a time.

As with all permissions on the web, you should properly handle situations where a permission is denied.
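A minimal sketch of guarding against a denied permission; `options` stands in for whatever feature configuration your experience requests:

    // requestSession rejects if the user denies a required permission or a
    // required feature is unsupported, so always be ready to fall back.
    async function startImmersiveSession(options) {
      try {
        return await navigator.xr.requestSession('immersive-ar', options);
      } catch (e) {
        console.warn('Immersive session unavailable:', e);
        return null; // Fall back to a non-immersive experience.
      }
    }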
[[["易于理解","easyToUnderstand","thumb-up"],["解决了我的问题","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["没有我需要的信息","missingTheInformationINeed","thumb-down"],["太复杂/步骤太多","tooComplicatedTooManySteps","thumb-down"],["内容需要更新","outOfDate","thumb-down"],["翻译问题","translationIssue","thumb-down"],["示例/代码问题","samplesCodeIssue","thumb-down"],["其他","otherDown","thumb-down"]],["最后更新时间 (UTC):2025-07-27。"],[],[],null,["# Develop with WebXR\n\nChrome on Android XR supports [WebXR](https://www.w3.org/TR/webxr/). WebXR is an open standard\nby [W3C](https://www.w3.org/) that brings high-performance XR APIs to\n[supported browsers](https://immersiveweb.dev/#supporttable). If you're building for the web, you can\nenhance existing sites with 3D models or build new immersive experiences.\n\nThe following WebXR features are supported by Chrome for Android XR:\n\n- [Device API](https://www.w3.org/TR/webxr/)\n- [AR Module](https://www.w3.org/TR/webxr-ar-module-1/)\n- [Gamepads Module](https://www.w3.org/TR/webxr-gamepads-module-1/)\n- [Hit Test Module](https://immersive-web.github.io/hit-test/)\n- [Hand Input](https://www.w3.org/TR/webxr-hand-input-1/)\n- [Anchors](https://immersive-web.github.io/anchors/)\n- [Depth Sensing](https://immersive-web.github.io/depth-sensing/)\n- [Light Estimation](https://immersive-web.github.io/lighting-estimation/)\n\nTo see WebXR in action, launch Chrome on an Android XR device or the [Android XR\nEmulator](/develop/xr/jetpack-xr-sdk/studio-tools#android-xr) and browse the official [WebXR samples](https://immersive-web.github.io/webxr-samples/).\n\nPrerequisite: Choose a WebXR framework\n--------------------------------------\n\nBefore you begin developing, it's important to choose the right WebXR framework.\nThis significantly enhances your own productivity and improves the quality of\nthe experiences you create.\n\n- For full control over 3D scenes and creation of custom or complex interactions, we recommend [three.js](https://threejs.org/) and [babylon.js](https://www.babylonjs.com/).\n- For rapid prototyping or using HTML-like syntax to define 3D scenes, we recommend [A-Frame](https://aframe.io/docs/1.6.0/components/webxr.html) and [model-viewer](https://modelviewer.dev/examples/augmentedreality/).\n- You can also review more [frameworks and sample code](https://immersiveweb.dev/#a-frame).\n\nIf you prefer using native JavaScript and WebGL, refer to [WebXR on\nGitHub](https://github.com/immersive-web/webxr-samples/tree/main) to create your first WebXR experiment.\n\nAdapt for Android XR\n--------------------\n\nIf you have existing WebXR experiences running on other devices, you may need to\nupdate your code to properly support WebXR on Android XR. For example, WebXR\nexperiences focused on mobile devices will have to adapt from one screen to two\nstereo screens in Android XR. WebXR experiences targeting mobile devices or\nexisting headsets may need to adapt input code to be hand-first.\n\nWhen working with WebXR on Android XR, you may need to update your code to\ncompensate for the fact that there are two screens---one for each eye.\n\nAbout ensuring a sense of depth in WebXR\n----------------------------------------\n\nWhen a user places a virtual object in their physical space, its scale should be\naccurate so that it appears as if it truly belongs there. 
For example, in a\nfurniture shopping app, users need to trust that a virtual armchair will fit\nperfectly in their living room.\n\nChrome for Android XR supports the [Depth Sensing Module in\nWebXR](https://www.w3.org/TR/webxr-depth-sensing-1/), which enhances a device's ability to perceive the\ndimensions and contours of their real-world environment. This depth information\nallows you to create more immersive and realistic interactions, helping users\nmake informed decisions.\n\nUnlike depth sensing on mobile phones, depth sensing in Android XR is\nstereoscopic, streaming two depth maps in real-time for binocular vision. You\nmay need to update your WebXR code to properly support stereo depth frames.\n\nThe following example renders depth information stereoscopically: \n\n // Called every time a XRSession requests that a new frame be drawn.\n function onXRFrame(t, frame) {\n const session = frame.session;\n session.requestAnimationFrame(onXRFrame);\n const baseLayer = session.renderState.baseLayer;\n const pose = frame.getViewerPose(xrRefSpace);\n\n if (pose) {\n gl.bindFramebuffer(gl.FRAMEBUFFER, session.renderState.baseLayer.framebuffer);\n\n // Clears the framebuffer.\n gl.clearColor(0, 0, 0, 0);\n gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);\n\n // Note: Two views will be returned from pose.views.\n for (const view of pose.views) {\n const viewport = baseLayer.getViewport(view);\n gl.viewport(viewport.x, viewport.y, viewport.width, viewport.height);\n\n const depthData = frame.getDepthInformation(view);\n if (depthData) {\n renderDepthInformationGPU(depthData, view, viewport);\n }\n }\n } else {\n console.error('Pose unavailable in the current frame!'); }\n }\n\n**Key points about the code**\n\n- A valid pose must be returned to access depth data.\n- `pose.views` returns a view for each eye and its corresponding for loop runs twice.\n\nAlas, your browser doesn't support HTML5 video. That's OK! You can still [download the video](/static/videos/design/ui/xr/depth_01.mp4) and watch it with a video player.\nReal-time depth visualization using WebXR depth sensing API. Red indicates closer pixels while blue indicates farther pixels.\n\nAdd hand interaction in WebXR\n-----------------------------\n\nAdding hand interaction to your WebXR app enhances user engagement by enabling\nmore intuitive and immersive experiences.\n\nHand input is the primary interaction mechanism in Android XR. Chrome for\nAndroid XR supports the [Hand Input API](https://github.com/immersive-web/webxr-hand-input/blob/master/explainer.md) as the default input.\nThis API lets users interact with virtual objects naturally, using gestures and\nhand movements to grab, push, or manipulate elements in the scene.\n\nDevices such as mobile phones or controller-centric XR devices may not yet\nsupport hand input. You may need to update your WebXR code to properly support\nhand input. The [Immersive VR Session with Hands](https://immersive-web.github.io/webxr-samples/immersive-hands.html) demonstrates\nhow to integrate hand tracking into your WebXR project.\n\nThe following animation shows an example of combining pinching with the WebXR\nAPI to show a user \"wiping out\" the surface of a virtual space to reveal a\npass-through window into the real world. \nAlas, your browser doesn't support HTML5 video. That's OK! 
You can still [download the video](/static/videos/design/ui/xr/xrlabs_screenwiper_20241209_v2.mp4) and watch it with a video player.\nWebXR demo of using hand pinch to wipe out screens to see-through the physical reality.\n\nPermissions in WebXR\n--------------------\n\nWhen you use WebXR APIs that require permission, Chrome prompts the user to\ngrant the permission to the site. All WebXR APIs require the 3d mapping and\ncamera tracking permission. Access tracked face, eye, and hand data are also\nprotected by permissions. If all needed permissions are\ngranted, calling `navigator.xr.requestSession('immersive-ar', options)` returns\na valid WebXR session.\n\nThe permissions preference chosen by the user persists for each domain.\nAccessing a WebXR experience in a different domain causes Chrome to prompt for\npermissions again. If the WebXR experience requires multiple permissions, Chrome\nprompts for each permission one at a time.\n\nAs with all permissions on the web, you should properly handle situations where\nthe permission is denied."]]