Research direction
Fusing gaze, body, and task state for real-time context modeling in XR.
The goal is not to make XR feel magically aware. It is to capture enough context to reduce friction, adapt support, and keep users in control.
Most XR apps still react to clicks, not human context.
Spatial systems can observe richer cues than 2D software: gaze direction, head pose, hands, posture, boundary behavior, task phase, object state, and collaboration state. These signals are valuable only when they translate into explainable interface decisions.
Signals worth modeling
- Gaze and head pose for attention direction, not mind reading.
- Hands and controller state for interaction intent.
- Posture and movement for fatigue, transition, or hesitation cues.
- Boundary and spatial anchors for safety and scene continuity.
- Task state for whether guidance should expand, collapse, or pause.
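To make the fusion concrete, here is a minimal TypeScript sketch of how these channels might be combined into a single context snapshot that drives an explainable guidance decision. All names, thresholds, and rules are illustrative assumptions, not an existing API; a real system would smooth and weight signals rather than apply hard cutoffs.

```ts
// Hypothetical fused context snapshot; each field maps to one of the
// signal channels listed above. Names and units are assumptions.
interface ContextSnapshot {
  gazeTargetId: string | null;  // object the gaze ray currently hits, if any
  headYawDegPerSec: number;     // head rotation speed (searching vs. settled)
  handsNearTarget: boolean;     // a hand or controller is within reach of the gazed object
  gazeDwellMs: number;          // how long gaze has stayed on the same target
  nearBoundary: boolean;        // user is close to the play-space boundary
  taskPhase: "setup" | "execute" | "verify";
  stepCompleted: boolean;       // task state reported by the application
}

// Every decision carries a user-facing reason, so the interface can
// always explain what changed and why.
interface GuidanceDecision {
  action: "expand" | "collapse" | "pause";
  reason: string;
}

// Minimal rule set for illustration; thresholds are placeholders.
function decideGuidance(s: ContextSnapshot): GuidanceDecision {
  if (s.nearBoundary) {
    return { action: "pause", reason: "Guidance paused: you are close to the boundary." };
  }
  if (s.taskPhase === "verify" && s.stepCompleted) {
    return { action: "collapse", reason: "Guidance collapsed: this step is marked complete." };
  }
  const hesitating = s.gazeTargetId !== null && s.gazeDwellMs > 2000 && !s.handsNearTarget;
  const searching = s.gazeTargetId === null && s.headYawDegPerSec > 60;
  if (hesitating) {
    return { action: "expand", reason: "Guidance expanded: you have been looking at the target without acting." };
  }
  if (searching) {
    return { action: "expand", reason: "Guidance expanded: head movement suggests you are looking for the next step." };
  }
  return { action: "collapse", reason: "Guidance collapsed: interaction is proceeding normally." };
}
```

The point of the `reason` field is that every automatic change can be shown to the user in plain language, which is what keeps the adaptation inspectable rather than magical.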
Better adaptation without hiding the system
For training, education, healthcare simulation, field support, and industrial XR, context modeling can help tune guidance to the moment. The non-negotiable part is transparency: users should know what changed, why it changed, and how to override it.
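One way to make that transparency requirement concrete is to log every automatic change together with its reason, and to let an explicit user override win until it is cleared. The sketch below is a hedged illustration under those assumptions; `AdaptationLog`, `pin`, and the field names are hypothetical, not part of any XR framework.

```ts
// Hypothetical adaptation log: records what changed, why, and whether the
// user had taken control at the time.
type GuidanceAction = "expand" | "collapse" | "pause";

interface AdaptationEvent {
  timestamp: number;       // when the change was applied
  action: GuidanceAction;  // what changed
  reason: string;          // why, in user-facing language
  overridden: boolean;     // true if a user override was in effect
}

class AdaptationLog {
  private events: AdaptationEvent[] = [];
  private pinned: GuidanceAction | null = null; // explicit user override

  // Apply an automatic decision unless the user has pinned a mode.
  apply(action: GuidanceAction, reason: string): GuidanceAction {
    const effective = this.pinned ?? action;
    this.events.push({
      timestamp: Date.now(),
      action: effective,
      reason: this.pinned ? "Kept your pinned setting." : reason,
      overridden: this.pinned !== null,
    });
    return effective;
  }

  // User control: pin a mode to stop automatic changes, pass null to resume adaptation.
  pin(action: GuidanceAction | null): void {
    this.pinned = action;
  }

  // History the interface can surface to answer "what changed and why?".
  history(): readonly AdaptationEvent[] {
    return this.events;
  }
}
```

Pinning a mode stops the system from adapting until the user releases it, and the history is what the interface surfaces when someone asks what changed and why.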