Incorporate the head position in your app with ARCore for Jetpack XR

After the user grants permission for head tracking, your app can retrieve head pose information through ARCore for Jetpack XR. Head pose information can help your app create more intuitive experiences, such as a window following the user's field of view.

Create an ARCore for Jetpack XR session

Obtain head pose information through an ARCore for Jetpack XR Session. See Understand a Session's lifecycle to learn how to obtain a Session.
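If you don't have a Session yet, creating one typically follows the pattern from that guide. The following is a minimal sketch, assuming the Session.create entry point and the SessionCreateSuccess and SessionCreatePermissionsNotGranted results covered there; adapt it to how your app manages its Session:

val session = when (val result = Session.create(this)) {
    is SessionCreateSuccess -> result.session
    is SessionCreatePermissionsNotGranted -> {
        // Request the missing permissions, then try again.
        return
    }

    else -> return // See SessionCreateResult for other failure cases.
}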

Configure the session

Head tracking is not enabled by default in XR sessions. To enable head tracking, configure the session by setting headTracking to Config.HeadTrackingMode.LAST_KNOWN:

val newConfig = session.config.copy(
    headTracking = Config.HeadTrackingMode.LAST_KNOWN,
)
when (val result = session.configure(newConfig)) {
    is SessionConfigureSuccess -> TODO(/* Success! */)
    is SessionConfigureConfigurationNotSupported ->
        TODO(/* Some combinations of configurations are not valid. Handle this failure case. */)

    else ->
        TODO(/* The session could not be configured. See SessionConfigureResult for possible causes. */)
}

Retrieve head pose data

Head pose data is exposed through a RenderViewpoint. A RenderViewpoint describes the pose and field of view for a given point of view of a device. A device can expose a left, right, or mono viewpoint, depending on its capabilities.

To obtain data for the mono viewpoint:

// Returns null if the device doesn't expose a mono viewpoint.
val mono = RenderViewpoint.mono(session) ?: return
// state is a StateFlow; collect it from a coroutine.
mono.state.collect { state ->
    // The latest field of view and pose for the mono viewpoint.
    val fov = state.fieldOfView
    val viewpointPose = state.pose
}
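For example, here is a sketch that observes the left viewpoint from an activity's lifecycleScope (the lifecycleScope and the surrounding lifecycle-aware component are assumptions, not part of the ARCore API):

lifecycleScope.launch {
    RenderViewpoint.left(session)?.state?.collect { state ->
        // React to the latest left-eye pose and field of view.
        val leftFov = state.fieldOfView
        val leftPose = state.pose
    }
}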

Applications of head tracking

One way your app can use head tracking is to keep entities in the user's field of view, which is useful for apps that require users to look or move around.

Avoid rigidly head-locking entities to the user's field of view, because this can cause motion sickness. Instead, move entities so that they follow the user's head after a short delay:

val leftViewpointState = RenderViewpoint.left(session)?.state ?: return
lifecycleScope.launch {
    while (true) {
        // Wait before starting the next follow animation so the panel
        // isn't locked to every head movement.
        delay(2000)
        val start = panel.getPose()
        val startTime = session.state.value.timeMark

        // Convert the head pose from perception space into activity space.
        val pose = session.scene.perceptionSpace.transformPoseTo(
            leftViewpointState.value.pose,
            session.scene.activitySpace
        )
        // Place the target 1 meter in front of the user's head.
        val target = Pose(pose.translation + pose.forward * 1f, pose.rotation)
        // Animate the panel toward the target over roughly 500 ms.
        while (true) {
            val ratio =
                (session.state.value.timeMark - startTime).inWholeMilliseconds / 500f
            panel.setPose(Pose.lerp(start, target, ratio.coerceAtMost(1f)))
            if (ratio >= 1f) break
            delay(16) // Yield until roughly the next frame.
        }
    }
}
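With this approach, the panel starts a new follow animation only every couple of seconds and then eases toward a point one meter in front of the user's head over about half a second, so content catches up with the user instead of being pinned to every head movement.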