Presentation description
This study assessed how performance on a spatial navigation task differs between normal vision and impaired vision simulated in virtual reality (VR). Participants completed a homing task, known as a triangle completion task, under normal vision and impaired vision conditions. In the triangle completion task, participants walk an outbound path of two legs and then attempt to return directly to the remembered starting location using the cues available. Previous research has established that individuals tend to rely more on visual cues, such as landmarks in the environment, than on body-based cues (derived from self-motion through the environment) during navigation, and that they perform best when both cue types are available. When vision of landmarks in the environment is impaired, participants' ability to navigate may decline, or they may rely more heavily on body-based cues. The current study simulated impaired vision with a central field-of-view scotoma (a central blind spot represented as a blackened region) that followed participants' gaze. Responses were measured in terms of accuracy and variability in returning to a designated target location. Results showed that participants were most accurate and least variable when they had access to both visual and body-based (self-motion) cues. Response variability was lower (i.e., responses were more consistent) under normal vision than under impaired vision; however, accuracy did not differ significantly between the two conditions. Overall, these results indicate that navigating with a simulated scotoma increases response variability but does not appear to affect accuracy. These findings may help clarify how people with low vision perform spatial navigation compared to people with normal vision.
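For readers interested in how the two response measures could be quantified, the sketch below shows one plausible way to compute accuracy (mean distance of response endpoints from the target) and variability (dispersion of endpoints around their own centroid). This is a minimal illustration under assumed data, not the authors' analysis code; the array names, coordinates, and units are hypothetical.

```python
import numpy as np

# Hypothetical 2D response endpoints (x, z) in meters for one participant and
# condition, plus the true target (starting) location of the triangle.
endpoints = np.array([[0.3, -0.2], [0.1, 0.4], [-0.2, 0.1], [0.5, 0.0]])
target = np.array([0.0, 0.0])

# Accuracy: mean distance between each response endpoint and the target
# (smaller values indicate more accurate homing).
errors = np.linalg.norm(endpoints - target, axis=1)
accuracy = errors.mean()

# Variability: mean distance of endpoints from their own centroid
# (smaller values indicate more consistent responses).
centroid = endpoints.mean(axis=0)
variability = np.linalg.norm(endpoints - centroid, axis=1).mean()

print(f"accuracy (mean error): {accuracy:.2f} m, variability: {variability:.2f} m")
```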
Presenter Name: CC Willemsen Henriksen