Overcoming Spatial Constraints in Virtual Reality: A Survey of Redirected Walking Techniques
Abstract: Background
In virtual reality, walking-based locomotion has attracted wide attention because it offers users a more immersive and natural experience. However, enabling users to roam a vast virtual space from within a limited physical space poses a major challenge to walking techniques. Redirected walking, an important intermediary technique in VR systems, remaps the user's motion in the virtual environment so that physical and virtual movements diverge, thereby potentially overcoming spatial constraints and permitting wider-ranging roaming. By applying redirection gains, employing redirection control methods, and exploiting perceptual and environmental factors, redirected walking can enhance walking interaction in VR and holds great potential for advancing the future development of VR technology.
Objective: This survey reviews the major achievements and latest advances in redirected walking research. Through a comprehensive exploration and discussion of multiple topics, it illustrates the practical significance of redirected walking for VR interaction. The survey aims to deepen the understanding of redirected walking among researchers, developers, and industry practitioners in the VR field, and to serve as a reference for research and development on related topics.
Methods: This survey organizes the redirected walking literature into four themes. (1) Redirection gains for guiding the user. Centered on redirection gains, a core concept in redirected walking, this part comprises three subsections covering the concept and types of redirection gains, their detection thresholds, and the factors influencing those thresholds. (2) RDW control methods. This part categorizes the methods for redirected walking control into four classes: reactive methods, predictive methods, scripted methods, and overt redirection methods. (3) Redirected walking techniques exploiting perceptual and environmental factors. This part explores perception-based and environment-manipulation techniques as redirected walking in the broad sense, including saccadic and blink suppression, virtual environment manipulation, distractors, and haptic feedback. (4) Evaluation and measurement of redirected walking. This part summarizes work on evaluating and measuring redirected walking, including the evaluation of RDW methods, evaluation metrics for RDW methods, and discussions of the effectiveness of RDW techniques.
Results: This survey cites 237 works related to redirected walking. Through the exposition and discussion of this existing work, it identifies problems and challenges currently facing redirected walking and points out possible future developments in the following directions: (1) deepening the study of perceptual techniques such as redirection gains to achieve implicit yet effective manipulation; (2) making full use of the user's multiple senses and realizing redirection with appropriate perceptual signals/stimuli; (3) studying adaptive and personalized RDW methods to improve redirection performance; (4) developing RDW methods for multi-user collaborative VR environments, suited to future online VR applications under the evolving metaverse concept; (5) combining RDW with other locomotion techniques to improve the interaction experience; (6) deploying RDW in large-scale real-world industrial applications.
Conclusion: This survey comprehensively summarizes and discusses the major achievements and latest advances of redirected walking over the course of its development, highlighting its important role in overcoming spatial constraints in VR and in enhancing user immersion and safety. By analyzing the existing research, it points out directions and opportunities for the future development of redirected walking, offering insights for researchers and developers. If the remaining challenges can be overcome, redirected walking will bring users an even more flexible and efficient immersive roaming experience and play a greater role in future VR applications across many scenarios.
Abstract: As virtual reality (VR) technology strives to provide immersive and natural user experiences, the challenge of aligning vast virtual environments with limited physical spaces remains significant. This survey comprehensively explores the advancements in redirected walking (RDW) techniques aimed at overcoming spatial constraints in VR. RDW addresses this challenge by subtly manipulating users' physical movements to allow seamless navigation within constrained areas. The survey delves into gain perception mechanisms, detailing how slight discrepancies between virtual and real-world movements can be exploited without user awareness, thus extending the effective navigable space. Various control algorithms for gain-based RDW are analyzed, highlighting their implementation and effectiveness in maintaining immersion and minimizing perceptual disturbances. Furthermore, novel methods extending beyond traditional gain-based techniques are discussed, showcasing innovative approaches that further refine VR interactions. The practical implications of RDW in enhancing safety and reducing physical collisions in VR environments are underscored, alongside its potential to improve user experience by aligning virtual exploration more closely with natural human behavior patterns. Through a thorough review of the existing literature and recent advancements, this survey provides a systematic understanding for researchers, developers, and industry professionals. It underscores the importance of RDW in the future of VR, emphasizing its role in making VR more accessible and practical across applications ranging from education and training to therapy and entertainment. The paper concludes with a forward-looking perspective on the continued evolution and potential of RDW in revolutionizing virtual reality experiences.
Keywords:
- virtual reality (VR)
- redirected walking (RDW)
- redirection gain
- perception
1. Introduction
In our three-dimensional (3D) world, we are constantly confronted with an explosion of information and a diversification of modalities. Traditional two-dimensional (2D) interaction methods are increasingly limited in their ability to present complex data and simulate the real world. Virtual reality (VR) technology[1], by creating virtual 3D environments, allows users to interact with the digital world in an immersive manner, providing a crucial interface and interactive environment for future computing and the Internet. VR interaction transcends the constraints of traditional 2D interaction, enabling users to engage not with pixels on a flat screen but with 3D content situated in stereoscopic space. This capability better simulates the real world and presents information more intuitively and comprehensively, meeting the complex demands of various industries for efficient information processing and realistic interaction.
Despite the significant advancements in VR driven by the development of 3D content generation, wearable interaction devices, and other hardware and software innovations, achieving natural and efficient locomotion in virtual environments (VEs) remains a formidable challenge[2]. The inherent conflict between the vastness of virtual worlds and the physical constraints of real-world spaces limits the depth and breadth of the user experience. Many VR systems resort to techniques such as "teleportation" and instantaneous movement[3, 4] to work around this mismatch, but these methods severely disrupt the user's sense of spatial presence and immersion and therefore fail to provide an optimal solution[5]. Redirected walking (RDW)[6, 7] has emerged as a critical intermediary solution for enhancing VR interaction quality. By leveraging the intricacies of the human perceptual system, RDW employs algorithms that subtly remap the relationship between physical and virtual movements. This enables users to change direction or path seamlessly and comfortably within the confines of the physical space, overcoming the limitations imposed by the real-world environment. Consequently, RDW significantly enhances the sense of immersion and the practicality of VR applications, allowing for a more profound and natural user experience[8].
The importance of RDW is multifaceted. Firstly, RDW can significantly enhance user immersion and experience quality in the practical adoption of VR[8]. By intelligently adjusting the physical movement paths required for free navigation within VEs, RDW enables users to engage in unrestricted virtual exploration within limited physical spaces. This not only broadens the scope of explorable VEs and increases the freedom and flexibility of VR experiences but also ensures that the exploration of these environments aligns more closely with natural human behavior patterns. Consequently, it profoundly elevates the immersive quality and overall user experience in VR.
Secondly, the RDW technology significantly enhances the safety of VR environments. By utilizing intelligent algorithms to modify the movement paths of users exploring virtual spaces, VR systems can prevent physical collisions and injuries that might occur due to movements within the VE[9-15]. By analyzing user movement patterns and intentions, the RDW technology can preemptively adjust the VE, guiding users away from potential collision risks[16-19]. This proactive approach not only ensures a safer VR experience but also reinforces the overall reliability and practicality of VR applications.
Thirdly, the study of RDW offers significant insights into the mechanisms of motion perception and spatial cognition within VR environments[20-23]. Human perceptual systems are inherently imperfect, possessing a degree of tolerance for discrepancies. The RDW technology leverages this imperfection, allowing for subtle adjustments in virtual movement without compromising the user's experience. This enables users to safely explore more extensive virtual spaces within the confines of limited physical environments. Further research into RDW not only enhances our understanding of these perceptual and cognitive mechanisms but also provides a novel experimental platform and research tool for cognitive science, fostering deeper exploration into human perception and cognition.
Finally, with the advent of new technologies such as 5G and cloud computing, VR is poised to integrate even more seamlessly into our daily lives and work environments. As a critical technology for enhancing the quality of VR experiences, RDW ensures smooth interaction and collaboration among users within VEs[18]. This provides a broader and more effective means of interaction and experience across various fields, including education and training, psychological therapy, and sports.
This paper comprehensively reviews the advancements in RDW within the realm of VR. From the exploration of gain perception mechanisms and the development of gain-based RDW controller algorithms to the integration of broader redirection techniques, it aims to offer a wide perspective on the multifaceted technical aspects of RDW. Through a thorough review of the literature and analysis of the latest research, this paper aspires to provide researchers, technology developers, and VR industry professionals with a clear and systematic understanding framework.
2. Redirection Gains for Redirecting the User
2.1 Principles and Types of Redirection Gains
Visual feedback of self-motion is perceived through optic flow, which forms an expanding motion pattern when users walk forward. This pattern helps estimate travel distance and heading direction[24]. In a non-static environment, moving elements disrupt this pattern, affecting the perception of distance[25] and heading[26]. In VR, slight discrepancies between visual input and body movement can redirect users along modified paths without their knowledge[27]. Such a discrepancy between virtual and real-world movement is known as a redirection gain[2].
Redirection gains, which map user motions to virtual environments, were initially categorized by Steinicke et al.[2] into three types: translation, rotation, and curvature. Translation gain magnifies forward step distances, rotation gain increases or decreases the angle of the user's rotations, while curvature gain bends straight paths into curves.
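To make the three classical gains concrete, the sketch below shows how a per-frame redirection update might compose them in a top-down 2D view. This is a minimal illustration, not the implementation of any cited controller; all function and variable names are ours.

```python
import numpy as np

def apply_gains(d_pos, d_yaw, yaw_offset, g_t=1.0, g_r=1.0, g_c=0.0):
    """One-frame redirection update (top-down 2D view).

    d_pos      -- physical head displacement this frame (metres, shape (2,))
    d_yaw      -- physical yaw change this frame (radians)
    yaw_offset -- accumulated virtual-minus-physical yaw (radians)
    g_t, g_r   -- translation and rotation gains
    g_c        -- curvature gain in rad/m (1/g_c is the radius of the
                  physical arc a virtually straight walk follows)
    Returns (virtual displacement, updated yaw offset).
    """
    step = float(np.linalg.norm(d_pos))
    # Rotation gain scales real turns; curvature gain injects extra yaw
    # proportional to the distance walked this frame.
    yaw_offset += (g_r - 1.0) * d_yaw + g_c * step
    c, s = np.cos(yaw_offset), np.sin(yaw_offset)
    rot = np.array([[c, -s], [s, c]])
    # Translation gain scales the walked distance; the accumulated yaw
    # offset redirects it in the virtual frame.
    virt_d = g_t * (rot @ d_pos)
    return virt_d, yaw_offset
```

With all gains at their identity values (g_t = 1, g_r = 1, g_c = 0) the virtual displacement equals the physical one; any deviation from these values produces the redirection effects described above.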
Apart from these, more novel types of gains have also been discovered and measured. Fig.1 shows a comprehensive illustration of different redirection gain types.
Figure 1. Illustration of different types of redirection gains. (a) Translation. (b) Rotation. (c) Curvature. (d) Bending. (e) Deviation. (f) Strafing. (g) Jumping (height). (h) Jumping (distance). (i) Jumping (rotation angle). (j) Jumping (stair). (k) Slope. (l) Bidirectional Rotation Difference (BiRD). (m) Backward (translation). (n) Backward (curvature).

Bending Gain. Bending gain[28] is a variation of curvature gain, bending an already curved virtual path to achieve even greater physical curvature.
Strafing Gain. Strafing gain[29] adds incremental lateral movements when the user travels straight ahead, allowing users to travel diagonally while maintaining their orientation.
Deviation Gain. Deviation gain[30] transforms rotation into translation, shifting the user sideways when they rotate in place.
Bidirectional Rotation Gain Difference (BiRD). BiRD[31] is a special rotation gain applied when the user turns their head back and forth while walking forward. When reciprocating, BiRD applies a different gain value that causes a shift between virtual and physical headings, thus manipulating the user to travel in a different direction.
Non-Forward Gain. Although most gains were originally designed for forward movement, non-forward gains[32, 33] adapt and apply them to sideways and backward steps as well. So far, translation and curvature gains have both been shown to work for non-forward steps, with curvature gain even achieving a higher detection threshold.
Slope Gain. Slope gain[34, 35] is the difference in angle between the virtual and physical slopes, the latter of which can even be flat ground. This is achieved through visual cues and distance scaling in the virtual world[34, 35], or utilizing props such as slanted shoe soles to simulate a virtual slope different from the physical one[36, 37].
Vertical Gain. Vertical gain[38] maps the user's physical vertical movements, such as stretching and squatting, into the virtual world, and allows for novel methods to naturally control objects with vertical movement, such as drones.
Jumping Gain. Jumping gain[39-41] manipulates the height, distance, rotation angle, and curvature path of a jump. By manipulating peak timing and applying different gains to the ascending and descending phases of a jump, jumping gains can also create the experience of jumping onto virtual stairs, relocating users on the vertical axis[42].
2.2 Detection Thresholds of Redirection Gains
In RDW, the virtual camera can be manipulated by redirection gains, allowing it to move differently from the user's physical movements. To maintain subtlety (i.e., the user cannot detect the manipulation), these gains should remain within a specific range known as the detection threshold (DT). It serves as a crucial hyperparameter in RDW algorithm research.
Through psychophysical experiments, researchers have conducted comprehensive measurements and analyses to determine detection thresholds. Related concepts that further characterize the threshold, including the low detection threshold (LDT), the point of subjective equality (PSE), and the high detection threshold (HDT), have also been explored[43]. We compile the thresholds reported in the major existing papers, primarily covering translation, rotation, and curvature gains, together with thresholds for bending, jumping, non-forward, and interactive gains measured under different VR interactions. The compiled results are summarized in Tables 1-4.
Table 1. Detection Thresholds of Translation Gain

| Source | Threshold | Comment |
| --- | --- | --- |
| Steinicke et al., 2008[27] | 0.78–1.22 | – |
| Steinicke et al., 2009[2] | 0.86–1.26 | – |
| Bruder et al., 2012[44] | 0.8724–1.2896 | Walking |
| | 0.9378–1.3607 | Electric wheelchair |
| Zhang et al., 2018[45] | 0.942–1.097 | 360° video-based telepresence systems |
| Kruse et al., 2018[46] | 0.85823–1.26054 | No visible virtual feet in a high-fidelity visual environment |
| | 0.87583–1.15388 | Visible virtual feet in a high-fidelity visual environment |
| | 0.72745–1.25038 | Visible virtual feet in a low-cue VE |
| Reimer et al., 2020[47] | 0.911–1.278 | No self-avatar |
| | 0.891–1.216 | Visible self-avatar |
| Kim et al., 2021[48] | 0.88–1.19 | Larger VR room, reference translation gain: 1.0 |
| | 0.85–1.29 | Smaller VR room, reference translation gain: 1.0 |
| | 0.60–0.97 | Larger VR room, reference translation gain: 1.2 |
| | 0.68–1.16 | Smaller VR room, reference translation gain: 1.2 |
| Kim et al., 2023[49] | 0.91–1.22 | Large × empty (size/object) |
| | 0.85–1.12 | Medium × empty (size/object) |
| | 0.73–1.10 | Small × empty (size/object) |
| | 1.02–1.34 | Large × furnished (size/object) |
| | 0.92–1.23 | Medium × furnished (size/object) |
| | 0.96–1.24 | Small × furnished (size/object) |
| | 0.76–1.25 | Large × empty (size/layout) |
| | 0.86–1.25 | Large × centered (size/layout) |
| | 0.83–1.25 | Large × peripheral (size/layout) |
| | 0.84–1.35 | Large × scattered (size/layout) |
| | 0.80–1.25 | Small × empty (size/layout) |
| | 0.73–1.23 | Small × centered (size/layout) |
| | 0.78–1.25 | Small × peripheral (size/layout) |
| | 0.82–1.25 | Small × scattered (size/layout) |
| Luo et al., 2024[50] | 0.48–1.78 | With different zoomed-in FOVs |

Table 2. Detection Thresholds of Rotation Gain

| Source | Threshold | Comment |
| --- | --- | --- |
| Steinicke et al., 2008[27] | 0.59–1.10 | Discrimination between virtual and physical rotation |
| | 0.76–1.19 | Discrimination between two successive rotations |
| Steinicke et al., 2009[2] | 0.67–1.24 | – |
| Bruder et al., 2012[44] | 0.6810–1.2594 | Walking |
| | 0.7719–1.2620 | Electric wheelchair |
| Serafin et al., 2013[51] | 0.82–1.20 | Audio |
| Paludan et al., 2016[52] | 0.93–1.27 | Visual density, control |
| | 0.81–1.19 | Visual density, 4 objects |
| | 0.82–1.20 | Visual density, 16 objects |
| Nilsson et al., 2016[53] | 0.77–1.10 | No audio |
| | 0.80–1.11 | Static audio |
| | 0.79–1.08 | Moving audio |
| Zhang et al., 2018[45] | 0.877–1.092 | Rotations to the left |
| | 0.892–1.054 | Rotations to the right |
| Williams & Peck, 2019[43] | 0.5742–1.2829 | FOV 40°, without distractors, female |
| | 0.7382–1.1790 | FOV 40°, without distractors, male |
| | 0.5455–1.3198 | FOV 40°, with distractors, female |
| | 0.7619–1.2156 | FOV 40°, with distractors, male |
| | 0.6459–1.3218 | FOV 110°, without distractors, female |
| | 0.6999–1.5616 | FOV 110°, without distractors, male |
| | 0.3692–1.4772 | FOV 110°, with distractors, female |
| | 0.7242–1.6211 | FOV 110°, with distractors, male |
| Brument et al., 2020[54] | 1.13–1.32 | Rotation: 60°, vignetting (none, color) |
| | 1.11–1.29 | Rotation: 60°, vignetting (none, blur) |
| | 1.13–1.32 | Rotation: 60°, vignetting (horizontal, color) |
| | 1.10–1.29 | Rotation: 60°, vignetting (horizontal, blur) |
| | 1.13–1.38 | Rotation: 60°, vignetting (global, color) |
| | 1.08–1.35 | Rotation: 60°, vignetting (global, blur) |
| | 1.15–1.33 | Rotation: 90°, vignetting (none, color) |
| | 1.15–1.30 | Rotation: 90°, vignetting (none, blur) |
| | 1.12–1.40 | Rotation: 90°, vignetting (horizontal, color) |
| | 1.13–1.35 | Rotation: 90°, vignetting (horizontal, blur) |
| | 1.16–1.35 | Rotation: 90°, vignetting (global, color) |
| | 1.12–1.33 | Rotation: 90°, vignetting (global, blur) |
| Brument et al., 2021[55] | 0.64–1.35 | Rotation speed: 20°/s |
| | 0.58–1.36 | Rotation speed: 30°/s |
| | 0.72–1.19 | Rotation speed: 40°/s |
| Robb et al., 2022[56] | 0.803–1.242 | Week one |
| | 0.862–1.117 | Week two |
| | 0.874–1.128 | Week three |
| | 0.894–1.095 | Week four |
| Wang et al., 2022[57] | 0.89–1.28 | Seated |
| | 0.80–1.40 | Standing |
| Xu et al., 2024[31] | 0.84–1.28 | Bidirectional |
| Ogawa et al., 2023[58] | 0.81–1.27 | No sound |
| | 0.57–1.37 | Fixed sound |
| | 0.81–1.33 | Redirected sound |

Table 3. Detection Thresholds of Curvature Gain

| Source | Threshold | Comment |
| --- | --- | --- |
| Steinicke et al., 2008[27] | −π/50 < r < +π/52.94 | Scene rotation started immediately |
| | −π/69.23 < r < +π/85.71 | Scene rotation started after 2 meters |
| Steinicke et al., 2009[2] | r > −π/69.23 | Leftward bent paths |
| | r > +π/69.23 | Rightward bent paths |
| Bruder et al., 2012[44] | r ⩾ – | Walking |
| | r ⩾ 8.97 | Electric wheelchair |
| Neth et al., 2012[59] | r > 10.57 | v = 0.75 m/s |
| | r > 23.75 | v = 1.00 m/s |
| | r > 26.99 | v = 1.25 m/s |
| Serafin et al., 2013[51] | −25–30 | Audio |
| Grechkin et al., 2016[60] | r > 11.61 | Constant stimuli |
| | r > 6.41 | Maximum likelihood |
| Nguyen et al., 2018[61] | r > 10.7 | Male |
| | r > 8.63 | Female |
| Rietzler et al., 2018[62] | 5.2°/m | – |
| Bölling et al., 2019[63] | r > 97 | Day-1 (baseline) |
| | r > 12 | Day-2 (after first adaptation) |
| | r > 270 | Day-3 (re-test at the start) |
| | r > 14 | Day-3 (after second adaptation) |
| Reimer et al., 2020[47] | −5.518 < r < 4.124 | Without body |
| | −5.590 < r < 3.428 | With body |
| Nguyen et al., 2020[64] | 4.17 < r < 55.5 | – |
| Nguyen et al., 2020[65] | 4.06 < r < 38.11, r(mean) > 6.75 | Single task |
| | 4.06 < r < 38.11, r(mean) > 5.24 | Dual task |
| Li et al., 2021[66] | g_c = 0.128 ± 0.034 m⁻¹ | Left direction, total detection threshold |
| | g_c = 0.126 ± 0.036 m⁻¹ | Left direction, ascending order |
| | g_c = 0.130 ± 0.034 m⁻¹ | Left direction, descending order |
| | g_c = 0.098 ± 0.043 m⁻¹ | Right direction, total detection threshold |
| | g_c = 0.096 ± 0.043 m⁻¹ | Right direction, ascending order |
| | g_c = 0.101 ± 0.043 m⁻¹ | Right direction, descending order |
| | g_c = 0.079–0.132 m⁻¹ | Left-curved postorder path |
| | g_c = 0.055–0.108 m⁻¹ | Right-curved postorder path |
| Mostajeran et al., 2024[67] | DT = 0.078 (right) | Nature environments |
| | DT = −0.095 (left) | Nature environments |
| | DT = 0.069 (right) | Urban environments |
| | DT = −0.083 (left) | Urban environments |

Table 4. Detection Thresholds of Non-Forward and Interactive Gain

| Gain | Source | Threshold | Comment |
| --- | --- | --- | --- |
| Bending | Langbehn et al., 2017[28] | 3.25 | r_real = 1.25 m |
| | | 4.35 | r_real = 2.5 m |
| Jumping | Hayashi et al., 2019[39] | 0.68–1.44 | Distance |
| | | 0.09–2.16 | Height |
| | | 0.50–1.39 | Rotation |
| | Li et al., 2021[41] | 0.70–1.35 | Horizontal |
| | | 0.38–2.57 | Vertical |
| Non-forward vertical | Matsumoto et al., 2020[38] | 0.842–2.547 | Virtual environment: stretching up |
| | | 0.827–1.944 | Virtual environment: crouching |
| | | 2.576–34.096 | Drone telepresence system: stretching up |
| | | 1.121–3.410 | Drone telepresence system: crouching |
| Non-forward steps, translation | Cho et al., 2021[32] | 0.84–1.33 | Backward step |
| | | 0.87–1.16 | Leftward sidestep |
| | | 0.88–1.18 | Rightward sidestep |
| Non-forward steps, curvature | Cho et al., 2021[32] | −10.95–10.30 | Backward step |
| | | −6.02–13.19 | Leftward sidestep |
| | | −9.92–4.65 | Rightward sidestep |
| Strafing | You et al., 2022[29] | 4.68° | Left |
| | | 5.57° | Right |
| Interactive | Hoshikawa et al., 2022[68] | 0.74–1.73 | Push condition, using door prop |
| | | 0.66–2.39 | Push condition, using controller |
| | | 0.49–1.48 | Pull condition, using door prop |
| | | 0.31–2.68 | Pull condition, using controller |

2.2.1 Psychophysics
Since the detection threshold is a perceptual concept, its measurement is fundamentally rooted in psychophysics, which quantifies how changes in stimulus intensity are perceived by an observer. Psychophysical experiments expose participants to varying stimulus intensities and collect responses to model their perception via a psychometric function[69]. This is often done using the method of constant stimuli (MCS), where participants in a VE choose between two forced responses (indicating whether the perceived motion is faster or slower than in the real world) without a neutral option, so that uncertain participants answer correctly 50% of the time[69]. Alternatively, in the method of adjustment (MoA), participants dynamically adjust the stimulus intensity themselves, and the detection threshold is calculated as the average of the detected points[70]. Adaptive methods such as the staircase procedure[71] and parameter estimation by sequential testing (PEST) oscillate the stimulus intensity around the threshold to locate it rapidly[6, 72, 73].
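As a minimal sketch of how DTs are commonly extracted from such data, the snippet below fits a logistic psychometric function to pooled two-alternative forced-choice responses and reads off the PSE at the 50% point and, following the common convention, the lower and upper DTs at the 25% and 75% points. The response data here are invented purely for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(g, a, b):
    # Probability of a "virtual motion felt faster" response at gain g.
    return 1.0 / (1.0 + np.exp(-(a * g + b)))

# Invented 2AFC data: tested gains and the pooled fraction of "faster"
# responses at each gain (illustrative numbers only).
gains = np.array([0.6, 0.8, 0.9, 1.0, 1.1, 1.2, 1.4])
p_faster = np.array([0.05, 0.15, 0.30, 0.50, 0.70, 0.85, 0.95])

(a, b), _ = curve_fit(psychometric, gains, p_faster, p0=(5.0, -5.0))

def gain_at(p):
    # Invert the logistic: the gain g at which psychometric(g) = p.
    return (np.log(p / (1.0 - p)) - b) / a

pse = gain_at(0.50)                       # point of subjective equality
ldt, hdt = gain_at(0.25), gain_at(0.75)   # lower / upper detection thresholds
print(f"PSE = {pse:.3f}, DT range = [{ldt:.3f}, {hdt:.3f}]")
```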
2.2.2 Supplementary Metrics for Detection Threshold Measurement
Due to individual differences, standardizing results of measured DTs in RDW has been challenging. Therefore, various metrics are employed to understand and justify these differences. The Simulator Sickness Questionnaire (SSQ)[74] and the Fast Motion Sickness Scale (FMS)[75] help explain the range of DTs observed. Additionally, the NASA Task Load Index[76] assesses cognitive workload's impact on DT measurements, as explored in [77]. The VR Locomotion Experience Questionnaire (VRLEQ)[78] measures the user experience of VR locomotion techniques, while the Igroup Presence Questionnaire (IPQ)[79] evaluates the sense of presence within the VE. These tools provide insights into user perceptions and contribute to understanding the variability in DT measurements.
2.3 Factors Influencing Detection Thresholds
Detection thresholds may vary due to several factors. Understanding these influences is important for optimizing user experience and enhancing the effectiveness of RDW techniques. In this subsection, we categorize the factors that may influence detection thresholds into three primary lenses: human senses, cognitive sensitivity and behavior, and spatial scene perception. This tripartite division encompasses biological sensory factors, cognitive aspects, and spatial scene relationships, capturing the diverse influences on DTs.
2.3.1 Human Senses
In this subsection, we analyze how the human senses affect measured DTs in virtual environments, covering the visual, auditory, tactile, and vestibular systems.
Visual. Direct visual perception factors, such as optic flow and field of view (FOV), have significant impacts. In the early 2000s, Warren et al.[80] underscored the role of visual perception in human walking, highlighting the impact of optic flow on how users perceive self-motion in VEs. This inspired Jaekl et al.[81] to explore visual motion tolerance during head movements, revealing broader tolerances for translational than rotational movements in an earth-stable environment. Additionally, Rothacher et al.[82] found a positive correlation between higher visual dependency, as assessed via the rod-and-frame test, and curvature gain thresholds. Building on this, Bruder et al.[83] demonstrated that optical illusions can subtly alter users' self-motion perception in VEs, enhancing the VR experience without compromising user awareness.
In studies of the field of view (FOV), Bolte et al.[84] showed that amplified head rotation and an enlarged geometric field of view (GFOV) reduce actual head movement and increase exploration efficiency in VEs. Meanwhile, Williams and Peck[43] observed that a larger FOV broadens the range of rotation gain thresholds, with gender-specific responses: male thresholds increase, while female thresholds are more affected by distractors. Brument et al.[54] reported minimal overall impact of FOV restrictions on rotation gain thresholds, suggesting the utility of dynamic FOV adjustments for improved navigation and comfort. Luo et al.[50] found that reducing the FOV significantly widens translation gain thresholds, which requires careful control to avoid discomfort.
Auditory. Auditory signals can also influence the user's perception of the VE. Serafin et al.[51] initially determined that auditory cues could subtly influence user redirection, though their impact was generally limited. Extending this, Nogalski and Fohl[85] utilized wave field synthesis to show that auditory stimuli sensitivity varies with participants’ experience, particularly affecting their perception of rotational and curvature gains. Conversely, Nilsson et al.[53] reported minimal effects of varying audio signals (static, dynamic, or absent) on rotational gains, underscoring the predominance of visual cues. Gao et al.[86] further examined the reduction in sensitivity due to discordant visual-auditory signals, which diminished users’ detection threshold perception in VEs. More recently, Weller et al.[87] found that modifying auditory feedback related to footsteps could significantly enhance virtual space navigation, especially when combined with visual redirection strategies. Lee et al.[88] observed that higher optical flow tightened detection thresholds, albeit less so for VR-experienced users, emphasizing the significance of familiarity with the technology.
Tactile. Tactile (or haptic) interactions with the real world can provide additional sensory cues. Steinicke et al.[89] pioneered the integration of redirection technology using passive haptic feedback for more extensive navigation. They utilized a proxy object to guide users along a precomputed path to a registered target position. Similarly, Matsumoto et al.[90] showed that tactile cues could effectively reduce perceived curvature gain and enhance immersion in walking simulations on curved paths.
Vestibular System. The vestibular system plays a crucial role in the user's sense of balance and spatial orientation, which is pivotal in shaping perceptions of DTs. Matsumoto et al.[91] demonstrated that noisy galvanic vestibular stimulation (GVS) subtly extends curvature gain thresholds without notable impacts on walking speed or head swing. This highlights how sensory systems, alongside cognitive and behavioral factors, influence the acceptable range of DTs in RDW.
2.3.2 Cognitive Sensitivity and Behavior
We discuss cognitive sensitivity and behavior in four aspects, including awareness and perception sensitivity, cognitive load, embodiment, and posture. They summarize how sensitive a person is to changes in information in the virtual world, leading to a different set of DTs.
Awareness and Perception Sensitivity. Depending on the conditions under which redirection gains are applied, the user may be more or less aware of and sensitive to the manipulation. Sakono et al.[92] demonstrated that dynamic gain effectively lowers user sensitivity to gradual path curvature changes, enhances comfort, and extends curvature gain thresholds for longer virtual walking distances. Li et al.[66] revealed higher sensitivity to curvature gains on right-curved paths, with prior path curvature influencing sensitivity on subsequent paths, especially when the paths curve similarly. Robb et al.[56] noted increased sensitivity to rotation gains with more VR exposure, as initially high detection thresholds decreased over time. Nguyen et al.[61] observed a negative correlation between walking speed and curvature gain thresholds, more pronounced in male participants. Congdon and Steed[93] found that gradual gain adjustments are less perceptible than sudden changes, enhancing threshold acceptance. Furthermore, Zhang et al.[45] reported wider acceptable detection thresholds for translation and rotation gains when users engaged with 360° video content captured by robots in remote environments. Regarding perception sensitivity, previous work has also studied how detection thresholds are influenced when multiple gains are applied simultaneously. Brument et al.[55] found that rotation gains are not influenced by translational motion, although the speed of rotational motion heavily influences them. Grechkin et al.[60] likewise found that users' ability to detect curvature gains was not influenced by the presence of translation gain in their experiments.
Cognitive Load. Given that users have limited cognitive resources to manage various tasks, cognitive load can significantly influence their performance in spatial orientation and perception-related activities. Nguyen et al.[65] observed that increased cognitive load raises the user's curvature gain threshold, with men showing greater sensitivity to curvature gain compared with women. Elevating cognitive load thus enhances RDW effects by allowing higher gains. Schmelter et al.[94] found that interaction complexity in VR reduces the sensitivity to rotational direction, with perception thresholds varying across interaction scenarios. Mostajeran et al.[67] studied cognitive demands in forest and urban VR environments, finding no significant differences in detection thresholds or cognitive performance between these environments, despite the influence of curvature gain on lateral movement.
Embodiment. Embodiment refers to the user's perception of his/her own body within a VE. Factors such as the presence of virtual avatars and the execution of virtual actions may influence the perception of DTs. From this aspect, Kruse et al.[46] found that visible virtual feet minimally impact user perception of translation gain compared with the VE's visual complexity. Nguyen et al.[64] reported that perspective and action consistency enhance body perception, thus increasing sensitivity to redirection. Conversely, Reimer et al.[47] observed no significant effect of virtual bodies on gain detection, although most participants felt that visible virtual bodies facilitated the detection of gain changes.
Posture. By default, RDW operates when the user is standing and walking. However, adopting a different posture, such as sitting, can affect perception. To investigate the influence of sitting posture behavior, Wang et al.[57] designed an experiment to compare rotation gain thresholds under seated and standing conditions, revealing that people are more sensitive to rotational gain when seated.
2.3.3 Spatial Scene Perception
Spatial scene in VR enables users to perceive and understand spatial relationships between objects or themselves within the environment. The size and scale of objects and environments relative to the user often vary across different applications, resulting in different threshold sensitivity, and making standardizing DTs challenging. Thus, to generate guidelines, researchers have recently investigated the aspects of spatial structure and spatial information in VR scene content.
Spatial Structure. Explicit scene characteristics, such as texture, size, and architectural structure can influence visual perception. Nguyen et al.[95] reported that the absence of wall textures did not significantly affect curvature gain detection thresholds due to consistent optical flow in the VE. In contrast, Kim et al.[48] found that larger VEs increased users’ sensitivity to translation gain changes, offering broader adjustment ranges. Kim et al.[49, 96] further explored how room size, object presence, and user feedback within various spatial configurations substantially influence relative translation gain thresholds.
Spatial Information. Studies of implicit spatial information, such as visual density and optical flow, have also yielded interesting results. Paludan et al.[52] tested the hypothesis that high visual density alters perception thresholds for rotation gains, finding no significant effect. Similarly, Waldow et al.[97] explored how texture and global illumination affect translation gain perception, and also reported no significant differences.
3. RDW Controller Algorithms
In the development of RDW technologies, a variety of controller algorithms have been proposed to manage the complexities of guiding users through virtual environments while preventing collisions and maintaining immersion. These algorithms can be broadly categorized based on their operational strategies. This section provides an overview of the primary categories of RDW controller algorithms.
3.1 Reactive Methods
Once locomotion gains are measured, they must be applied using RDW controller algorithms to minimize collisions by keeping users away from obstacles and boundaries. One key category of these algorithms is reactive methods[118], which do not rely on predefined virtual paths but instead respond in real time to users’ positions, orientations, and dynamic states, guiding them towards specific turning targets or areas within the physical space.
Early reactive methods primarily use heuristic strategies. Razzaque proposed three foundational methods: steer-to-center (S2C), steer-to-orbit (S2O), and steer-to-multiple-targets (S2MT)[7]. S2C directs users to the physical center, S2O guides them along a circular path, and S2MT navigates them towards multiple predefined targets around the center. Hodgson and Bachmann introduced steer-to-multiple+center (S2MC), combining S2C and S2MT[9]. Azmandian et al.'s research further examined S2C and S2O in varying physical environments (PEs), finding S2C generally more effective, with S2O excelling in long straight paths[10].
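A minimal sketch of the steer-to-center idea is shown below: each frame, the controller injects a small scene rotation, capped by a curvature budget, that bends the user's physical path toward the room center. The constants are illustrative, and production controllers additionally exploit rotation gain when the user turns on the spot.

```python
import numpy as np

def steer_to_center(pos, heading, center, step_len, max_curv=0.13):
    """Yaw injection for one frame of steer-to-center (S2C).

    pos, center -- 2D physical positions (metres)
    heading     -- current physical walking direction (radians)
    step_len    -- distance walked this frame (metres)
    max_curv    -- curvature budget in rad/m (about 7.5 deg/m here)
    Returns the scene rotation (radians) to apply this frame.
    """
    to_center = center - pos
    desired = np.arctan2(to_center[1], to_center[0])
    # Signed angular error from the current heading to the center direction.
    err = (desired - heading + np.pi) % (2 * np.pi) - np.pi
    # Turn toward the center, never exceeding the imperceptible
    # curvature budget for the distance covered this frame.
    return float(np.clip(err, -max_curv * step_len, max_curv * step_len))
```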
Artificial Potential Field. As the field has advanced, RDW research has increasingly turned to more complex, intelligent algorithms, among which artificial potential field (APF) based approaches are prominent. Notably, Thomas and Rosenberg[11] introduced the Push/Pull Reactive (P2R) strategy, which uses attraction and repulsion forces to steer users. Bachmann et al.[12] and Messinger et al.[102] further extended APF strategies to multi-user and irregular spaces. Dong et al.[103] proposed Dynamic APF to better manage multi-user scenarios. Recently, Chen et al.[114] optimized traditional APF with APF Steer-to-Target (APF-S2T), guiding users towards more open spaces by identifying the lowest-scoring target sample points.
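At the heart of an APF steerer is a repulsive force summed over boundaries and other users; the sketch below illustrates this under assumed falloff exponents and weights, which vary across the cited methods[11, 12, 102, 103]. Steering then bends the user's path along the force vector, and the same field can supply reset directions (cf. R2G in Subsection 3.4.4).

```python
import numpy as np

def apf_force(pos, walls, other_users, eps=1e-6):
    """Repulsive force on a user at `pos` (2D, metres).

    walls       -- list of (a, b) segment endpoints for boundaries/obstacles
    other_users -- list of 2D positions of other tracked users
    The force points away from nearby geometry; an APF controller steers
    the user's physical path along it.
    """
    force = np.zeros(2)
    for a, b in walls:
        a, b = np.asarray(a, float), np.asarray(b, float)
        ab = b - a
        # Closest point on the segment to the user.
        t = np.clip(np.dot(pos - a, ab) / (np.dot(ab, ab) + eps), 0.0, 1.0)
        d = pos - (a + t * ab)
        dist = np.linalg.norm(d) + eps
        force += d / dist**3            # unit direction times 1/dist^2
    for u in other_users:
        d = pos - np.asarray(u, float)
        dist = np.linalg.norm(d) + eps
        force += 1.5 * d / dist**3      # weight other users more strongly
    return force
```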
Reactive Alignment. Based on the idea of aligning obstacles in the PE with those in the VE to avoid collisions, reactive alignment methods have gained popularity. Thomas et al.[106] introduced the reactive environment alignment (REA) method in 2020, and Williams et al.[15] proposed the alignment redirected controller (ARC) in 2021, both aiming to adjust user orientation to align PE obstacles with those in the VE, thereby reducing collisions. Williams et al.[108] further enhanced spatial alignment by calculating the similarity of visibility polygons in PE and VE. Additionally, Chen et al.[107] and Wang et al.[110] proposed strategies that integrate reactive alignment with reinforcement learning in 2021 and 2022, respectively, to improve effectiveness and applicability.
Reinforcement Learning. The development of reinforcement learning has significantly improved RDW algorithms, surpassing traditional logic-based methods. Lee et al.[101] introduced steer-to-optimal-target (S2OT), which uses RL to optimize user turning targets, reducing collisions. They later enhanced it to Multiuser-S2OT (MS2OT)[14], accommodating up to 32 users. RL was also applied to reactive RDW algorithms[105], improving gains in rotation, translation, and curvature.
Physical Trajectory Planning. In recent years, Xu et al.[50, 111, 112, 117] have developed advanced physical trajectory planning methods that incorporate curvature gain thresholds, enhancing redirection path optimization. These methods establish models of physical path reachability and address previously unmanageable constraints, such as preserving immersion by placing resets away from task targets[111], ensuring safe landings during jumps[117], and efficiently navigating obstacles in complex PEs[50].
Integration. Following the proposal of these strategies, researchers have integrated them to maximize their advantages. Azmandian et al.[109] laid the groundwork for combining multiple strategies into the Adaptive Redirection framework. Wu et al.[113] merged alignment-based and APF-based methods to create an integrated controller. Lee et al.[116] introduced the Selective Redirection Controller (SRC), which uses reinforcement learning to dynamically select and switch between optimal controllers based on the physical and virtual environments.
Additionally, with the diversification of VR applications, RDW techniques have been extended to multi-user interactions and specific scenarios. Bachmann et al.[98] and Azmandian et al.[119] explored collision prevention for two users. Chen et al.[99] proposed two greedy strategies (steer-to-farthest and trapezoidal roadmap) for irregular and dynamic environments. Dong et al.[100] and Dong et al.[13, 120] developed methods for multi-user redirection, including virtual scene mapping and density adjustments. Li and Fan[104] utilized Voronoi diagrams for navigable path mapping, minimizing collisions and distortions. Xu et al.[18] and Lee et al.[115] presented strategies for multi-user resets. The reactive methods we investigated and their characteristics are listed in Table 5.
Table 5. Reactive Methods

| Source | Multi-User Support | Physical Space Status | Virtual Space Status | Virtual Actions of Users |
| --- | --- | --- | --- | --- |
| Razzaque, 2005[7] | No | Static obstacles | Open spaces | No knowledge required |
| Bachmann et al., 2013[98] | Two users | Static obstacles | Open spaces | No knowledge required |
| Chen et al., 2018[99] | No | Dynamic obstacles | Open spaces | No knowledge required |
| Bachmann et al., 2019[12] | Yes | Static obstacles | Open spaces | No knowledge required |
| Dong et al., 2019[100] | Yes | Static obstacles | Open spaces | No knowledge required |
| Lee et al., 2019[101] | No | Static obstacles | Open spaces | No knowledge required |
| Messinger et al., 2019[102] | Yes | Static obstacles | Open spaces | No knowledge required |
| Thomas and Rosenberg, 2019[11] | No | Static obstacles | Open spaces | No knowledge required |
| Dong et al., 2020[103] | Yes | Static obstacles | Open spaces | No knowledge required |
| Lee et al., 2020[14] | Yes | Static obstacles | Open spaces | No knowledge required |
| Li and Fan, 2020[104] | No | Static obstacles | Fixed spaces | No knowledge required |
| Strauss et al., 2020[105] | No | Static obstacles | Open spaces | No knowledge required |
| Thomas et al., 2020[106] | No | Static obstacles | Fixed spaces | No knowledge required |
| Chen et al., 2021[107] | No | Static obstacles | Fixed spaces | No knowledge required |
| Dong et al., 2021[13] | Yes | Static obstacles | Open spaces | No knowledge required |
| Williams et al., 2021[15] | No | Static obstacles | Fixed spaces | No knowledge required |
| Williams et al., 2021[108] | No | Static obstacles | Fixed spaces | No knowledge required |
| Azmandian et al., 2022[109] | No | Static obstacles | Open spaces | Need virtual path |
| Wang et al., 2022[110] | No | Static obstacles | Fixed spaces | No knowledge required |
| Xu et al., 2022[18] | Yes | Static obstacles | Open spaces | Need next waypoint |
| Xu et al., 2022[111] | No | Static obstacles | Open spaces | Need next waypoint |
| Xu et al., 2022[112] | No | Static obstacles | Open spaces | No knowledge required |
| Wu et al., 2023[113] | No | Static obstacles | Fixed spaces | No knowledge required |
| Chen et al., 2024[114] | No | Static obstacles | Open spaces | No knowledge required |
| Lee et al., 2024[115] | Yes | Static obstacles | Open spaces | No knowledge required |
| Lee et al., 2024[116] | No | Static obstacles | Fixed spaces | No knowledge required |
| Xu et al., 2024[117] | No | Static obstacles | Open spaces | Need next waypoint |

3.2 Predictive Methods
To enhance the efficiency of redirection algorithms, prediction techniques have been integrated, resulting in two main types of predictive-based approaches. The first type involves predicting a user's future movements or intentions based on their current and historical movement patterns, enabling real-time optimization of the redirection process. The second type focuses on predefining redirection strategies and predicting the most suitable one by dynamically assessing the physical environment.
3.2.1 Prediction Method for RDW
Short-Term. When a user's future virtual path can be predicted accurately, the effectiveness of redirection algorithms can be significantly enhanced. Nitzsche et al.[121] categorized these algorithms into short-distance and long-distance predictions. Early methods predicted user paths from current movements or recent historical data[89, 122, 123]. Nescher and Kunz[16] used head tracking data to predict walking direction reliably for a few seconds. However, because free user exploration is hard to anticipate, these methods were constrained to narrow VEs. Hirt et al.[124] addressed this by introducing a drop-shaped trajectory prediction algorithm, represented by Bernoulli lemniscates, suitable for open areas. These short-term techniques, however, use only tracking-space movement data and do not incorporate VE information.
Long-Term. In contrast, long-term prediction leverages virtual target information. Zank and Kunz[125] introduced a skeleton graph search algorithm for redirection planning and potential waypoint prediction, enabling the use of earlier prediction algorithms in large-scale VEs. Several methods based on long-term path planning have been proposed more recently. Qi et al.[126] introduced a Voronoi-skeleton-based navigation method that plans virtual paths and employs locomotion gains for redirection. Thomas et al.[127] proposed a predictive environment alignment method using inverse kinematics to optimize locomotion gains, allowing physical interaction within the aligned environment. Similarly, Williams et al.[108] presented a path planning approach using visibility polygons to compute walkable spaces, guiding users along planned paths to minimize resets.
Eye-Gaze Data. Gandrud and Interrante[128] found that head movement and eye gaze data can predict a user's future direction, forming the basis for trajectory prediction. Subsequent studies used neural networks and recurrent networks to leverage eye movement data, improving trajectory predictions[129, 130].
Deep-Learning. Jeon et al.[131] proposed an advanced RDW algorithm leveraging the LSTM model, integrating spatial and eye tracking data to predict user behavior in VR. This approach, devoid of assumptions, forecasts future positions and applies these predictions to existing RDW methods, effectively minimizing resets and extending intervals between them, thereby enhancing RDW algorithm efficiency.
3.2.2 Planning Strategy
The advancements in predictive planning strategies for RDW are crucial for effective navigation in constrained VR environments. Zmuda et al. proposed the Fully Optimized Redirected Walking for Constrained Environments (FORCE) method[17], which predicts user paths using multi-step probability and a map of the tracking area and obstacles to find collision-free paths. Nescher et al. introduced the Model Predictive Control Redirection (MPCred) method[132], which incorporates user behavior to improve redirection strategies. However, both methods require cumbersome path marking.
To address this, Azmandian et al.[133] proposed an automated method using navigation grids for dynamic path prediction and environment annotation, removing the need for manual layout annotation. Hirt et al.[134] improved this by using APF to handle non-convex, dynamic tracking spaces, and support multi-user scenarios. Chen et al.[107] subsequently found that when the user interacts with virtual objects, the inconsistency between the user's virtual position and physical position hinders passive tactile feedback. They addressed this problem with a new RDW algorithm based on reinforcement learning (RL), ensuring obstacle avoidance and positioning accuracy. Recent advancements include Thomas et al.'s predictive alignment of virtual and physical spaces[127], and Congdon et al.'s use of the Monte-Carlo algorithm for optimal redirection gains[135]. These innovations enhance RDW by improving path prediction and adaptation to user behavior and physical space constraints. For multi-user scenarios, Jeon et al.[136] proposed the Optimal Space Partitioning (OSP) method, using deep RL to predict user movements and optimally divide physical space, minimizing boundary resets. The space conditions of the prediction methods mentioned above are shown in Table 6.
Table 6. Predictive Methods

| Source | Multi-User Support | Physical Space Status | Virtual Space Status | Virtual Tracking Data of Users |
| --- | --- | --- | --- | --- |
| Interrante et al., 2007[122] | No | Static obstacles | Open spaces | Gaze direction & previous displacement |
| Steinicke et al., 2008[89] | No | Static obstacles | Open spaces | Viewing direction |
| Su, 2007[123] | No | Static obstacles | Open spaces | None |
| Nescher and Kunz, 2013[16] | No | Static obstacles | Open spaces | Head tracking data |
| Hirt et al., 2019[124] | No | Static obstacles | Open spaces | None |
| Zank and Kunz, 2017[125] | No | Static obstacles | Open spaces | User's position |
| Qi et al., 2021[126] | No | Static obstacles | Open spaces & highly-structured maze | User's location & orientation |
| Thomas et al., 2022[127] | No | Static obstacles | Open spaces | User's location, orientation, & next waypoint |
| Gandrud and Interrante, 2016[128] | No | Static obstacles | Open spaces | Head orientation & gaze direction |
| Bremer et al., 2021[129] | No | Static obstacles | Open spaces | Positional, orientation, & eye-tracking data |
| Stein et al., 2022[130] | No | Static obstacles | Open spaces | Eye-tracking data |
| Jeon et al., 2024[131] | No | Static obstacles | Open spaces | User's spatial & eye-tracking data |
| Zmuda et al., 2013[17] | No | Static obstacles | Virtual store with aisles | User's location & orientation |
| Nescher et al., 2014[132] | No | Static obstacles | Open spaces | User disturbance caused by applied RET |
| Azmandian et al., 2016[133] | No | Static obstacles | Open spaces | Navigation meshes |
| Chen et al., 2021[107] | No | Static obstacles | Open spaces | User's current position |
| Hirt et al., 2019[134] | Yes | Dynamic spaces | Open spaces | User's current position |
| Jeon et al., 2022[136] | Yes | Dynamic spaces | Open spaces | User's current position |

3.3 Scripted Methods
Unlike reactive and predictive methods, scripted methods focus on mapping predetermined virtual paths to collision-free physical paths. These methods[7, 109, 137] require a predetermined virtual path and guide the user through the virtual space along it. They leverage map information effectively, yielding better results in specific scenarios.
The Zigzag method by Razzaque[7] requires users to follow a predefined zigzag path, converting real-world movement into a confined arc through rotational distortions. Engel et al.[137] expanded this by using locomotion gains to guide users along a meandering path in open space. Azmandian et al.[109] introduced the Combinatorially Optimized Pre-Planned Exploration Redirector (COPPER), which uses machine learning to optimize motion gain for these paths. These methods ensure realistic virtual exploration within physical constraints but are limited in adaptability for free exploration or dynamic environments.
3.4 Overt Redirection
To ensure a highly immersive VR experience, RDW techniques should remain imperceptible to users[8]. However, overt techniques become necessary when users reach physical boundaries, guiding them back safely within the tracking area while balancing user safety and immersion[20]. Suma et al.[138] classified overt methods into categories: repositioning or reorientation (manipulation of virtual position or orientation) and continuous or discrete (redirection through continuous motion or discrete steps).
3.4.1 Overt Continuous Repositioning
Continuous translation of the VE around the user's position can facilitate simple repositioning, allowing access to previously inaccessible areas within the virtual space. However, such unexpected translations can cause disorientation and instability[139]. To mitigate this, translations can be paired with familiar motion metaphors like elevators, escalators, moving walkways, or vehicles. Yu et al.[140] proposed cell-based redirection, dividing the virtual world into cells matching the tracking space. Their Bird technique raises users above obstacles, flies them to the target cell, and aligns their virtual and physical positions for seamless navigation (refer to Fig.2).
Figure 2. Yu et al.'s[140] Bookshelf (a)–(d) and Bird (e)–(h) redirection techniques. The black rectangle represents the PE, and the red and green rectangles represent two virtual rooms. (a) The user starts from the green cell, steps onto the bookshelf, and triggers redirection. (b) The bookshelf virtually rotates the user. (c) The physical space is now in consistent mapping with the red virtual room. The user rotates physically to face it. (d) The user can now step off the bookshelf and navigate the red virtual room. (e) The user starts from the green cell and selects the red cell as the target. (f) The bird approaches and picks the user up. (g) The bird flies the user to the destination cell. (h) The user is placed in the same relative position in the destination cell as they were in the original cell.

3.4.2 Overt Discrete Repositioning
Discrete repositioning, or teleportation, involves instant user translation within VEs to safe physical spaces[141]. Bolte et al.[142] used teleportation for long-distance navigation, triggered by a jumping gesture, but this can cause disorientation if users are unprepared[143]. To mitigate this, Bruder et al.[144] introduced portals inspired by science fiction for better spatial awareness. Bozgeyikli et al.[3] developed the Point & Teleport technique (Fig.3), allowing users to point and teleport after two seconds. Freitag et al.[145] enabled portal creation with a controller to keep users within tracked boundaries. Liu et al.[146] proposed redirected teleportation (Fig.4).
Figure 3. Bozgeyikli et al.'s[3] Point & Teleport technique. (a) The user points to their virtual target destination to get teleported there. (b) The user points to the target destination using the direction specification feature. After teleportation, the user will be facing the direction of the green arrow.

Figure 4. Liu et al.'s[146] redirected teleportation technique. (a) Users select their teleportation destinations using their controller's raycast. (b) After selection, a portal (blue circle) appears, showing a preview of the location the user will be teleported to.
The core of overt continuous reorientation techniques is the manipulation of virtual space to keep users within the tracking area. Su[123] and Nitzsche et al.[121] used motion compression to rotate the VE as users near boundaries. Simeone et al.'s[147] Space Bender bends virtual geometry near physical boundaries, while Han et al.'s[148] Foldable Spaces employs horizontal, vertical, and accordion folding transformations.
3.4.4 Overt Discrete Reorientation
Regardless of the redirection technique or steering algorithm employed, users sometimes need to halt their virtual experience and realign towards the center of the physical space. Solutions like the RDW toolkit use a stop-and-go method[149]: users walk until reaching the tracking space boundary, after which a curvature gain is applied and the user rotates until obstacles are cleared. Similarly, Razzaque et al.'s methods[6, 7] use prerecorded verbal messages that change the user's physical direction as they follow the audio instructions. Lee et al.'s[150] multi-user reset controller (MRC), trained with multi-agent reinforcement learning, accounts for physical obstacles, multi-user movement, and reset minimization. Yu et al.'s[140] Bookshelf technique uses cell-based redirection, rotating the user virtually on a spinning bookcase to face a destination room within the available physical space (refer to Fig.2).
Williams et al.[20] proposed three resetting methods (Fig.5): Freeze-Backup, Freeze-Turn, and 2:1-Turn. In Freeze-Backup, users are alerted upon reaching physical boundaries; the system freezes the virtual location, deactivates tracking, and prompts users to step backward physically before resuming the display and tracking. In Freeze-Turn, the system signals users to turn 180° when near a boundary, freezing the virtual location during the turn and then resuming. The 2:1-Turn method instructs users to rotate under a 2× rotation gain until their orientation realigns, so a 180° physical turn produces a 360° virtual turn.
Figure 5. Williams et al.'s[20] resetting methods. (a) The user walks straight within the VE. (b) The Freeze-Backup method relocates the user before resuming the display. (c) The Freeze-Turn and 2:1-Turn methods instruct the user to rotate in place.

Zhang et al.[151, 152] proposed efficient reset strategies (Fig.6). The One-Step Out-of-Place strategy directs users to optimal physical positions, maximizing movement freedom and minimizing resets. The adaptive algorithm segments physical boundaries into endpoints, interpolating reset vectors for efficient direction determination.
Figure 6. The One-Step Out-of-Place strategy[151] (a)–(d) and the adaptive optimization algorithm[152] (e). (a) Points are sampled at regular intervals within the tracked area. (b) Simulated walking is performed to evaluate suitability values for the optimal reset direction (blue for higher suitability and yellow for lower). (c) The user (orange avatar with brown orientation) collides with an obstacle in the real environment (gray rectangle). The executed reset path is illustrated by the dashed curve. (d) User's final position and orientation after the reset. (e) The PE is discretized for simulation-based reset optimization after the addition of reset endpoints.

Xie et al.[153] found that combining translation gain with the 2:1-Turn method is effective but carries cognitive costs. Kwon et al.[154] proposed Reset at Fixed Positions (RFP), integrating substitutional reality (SR) with the 2:1-Turn and featuring two methods: Generating Virtual Space (G-RFP) for creating infinite virtual spaces, and Implementing Given Virtual Space (I-RFP) for calculating optimal reset points.
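Since several of these resets are variants of the 2:1-Turn, a minimal per-frame sketch of that baseline may help: the scene rotates at twice the user's physical rate, so a 180° physical turn yields a 360° virtual turn and restores the original virtual heading. The function name and interface are illustrative.

```python
import math

def two_to_one_turn_step(d_phys_yaw, turned):
    """One frame of a 2:1-Turn reset.

    d_phys_yaw -- physical yaw change this frame (radians)
    turned     -- physical rotation accumulated since the reset began
    Returns (virtual yaw change, updated accumulator, reset finished?).
    """
    turned += abs(d_phys_yaw)
    d_virt_yaw = 2.0 * d_phys_yaw    # rotation gain of 2
    done = turned >= math.pi         # 180 deg physical = 360 deg virtual
    return d_virt_yaw, turned, done
```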
To enhance user reorientation, various reset strategies have been evaluated (Fig.7). The reset to center (R2C) strategy modifies the 2:1 Turn to always reset users facing the physical center. Thomas and Rosenberg[11] proposed modified reset to center (MR2C), reset to gradient (R2G), and step-forward reset to gradient (SFR2G). MR2C realigns users parallel to obstacles or towards the center. R2G resets users following the negative gradient of potential functions[12, 103]. SFR2G involves taking adjustable steps in the gradient descent direction when no attractive force exists, using the final position for resetting.
Figure 7. The gray area represents the PE, with the thick black lines being the boundaries. The red dot indicates the center of the tracked space and the black rectangles symbolize an obstacle. The red arrows specify the orientation to which users are reset. (a) R2C. (b) MR2C. (c) R2G. (d) SFR2G.

4. Integrating Sensory Cues and Environmental Manipulation
4.1 Saccadic and Blink Suppression
Saccadic and blink suppression, where perception is temporarily blocked during rapid eye movements or blinks, allows for imperceptible orientational and positional changes[155]. Studies have identified thresholds for undetectable manipulations: Bolte and Lappe[156] reported 0.5 m of translation and 5° of rotation, while Langbehn et al.[157] found thresholds of 4–9 cm for translations and 2°–5° for rotations during blinks. Other studies noted varying thresholds due to differences in hardware and experimental setups[158].
These suppression techniques have been integrated into RDW control methods. Langbehn et al.[157] discussed expanded gains, and Nguyen and Kunz[159] revised scene reorientation during blinks. Sun et al.[160] proposed a saccade-based technique adaptable to complex environments, while Pinson et al.[161] leveraged natural saccades and blinks.
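A common implementation pattern for these techniques is to gate the injected rotation on eye-tracker events. The sketch below uses an assumed gaze-velocity criterion for saccade detection and per-frame rotation caps loosely inspired by the per-event thresholds above; all constants are illustrative rather than taken from the cited studies.

```python
import math

def suppression_redirect(gaze_speed, blinking, wanted_yaw,
                         saccade_vel=math.radians(180),  # saccade criterion
                         saccade_cap=math.radians(0.5),  # per-frame caps,
                         blink_cap=math.radians(2.0)):   # illustrative values
    """Scene rotation (radians) to inject this frame.

    gaze_speed -- angular gaze velocity reported by the eye tracker (rad/s)
    blinking   -- True while a blink is detected
    wanted_yaw -- remaining yaw the RDW controller wants to inject (radians)
    Rotation is applied only while visual perception is suppressed.
    """
    if blinking:
        cap = blink_cap
    elif gaze_speed > saccade_vel:
        cap = saccade_cap
    else:
        return 0.0    # eyes open and stable: inject nothing
    return max(-cap, min(cap, wanted_yaw))
```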
4.2 Virtual Environment Manipulation
In the current literature, most studies emphasize subtle redirection gains and their applications, typically involving translations or rotations of VEs while retaining the same structure and content. However, research on virtual environment manipulation holds great potential for enhancing redirection performance. These techniques range from subtle to overt changes and include overlapping-based, generation-based, and mapping-based approaches, among others.
Change blindness (Fig.8(a)) is the phenomenon where significant changes go unnoticed[165]. Suma et al.[162] applied this concept to RDW by altering doors and corridors’ layouts, with most users not noticing these changes.
Figure 8. Illustrations of classic virtual environment manipulation methods. (a) Example of change blindness[162]. The door and the corridor are rotated when the user approaches the monitor. (b) Example of impossible spaces[163]. By overlapping the two rooms by up to 50%, much space can be saved. (c) Cases of flexible spaces[164]. Various layouts for two rooms and the corridors can be procedurally generated.

Impossible spaces[163] (Fig.8(b)) create self-overlapping architecture to expand virtual spaces. Studies show rooms can overlap by more than 30% without detection. Vasylevska and Kaufmann[166] explored the influence of various layouts on perception, while Langbehn et al.[167] combined impossible spaces with bending gains in RDW. Koltai et al.[168] created self-overlapping mazes as an application of impossible spaces.
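To illustrate the geometric idea behind impossible spaces, the sketch below computes the overlap ratio of two axis-aligned virtual rooms laid out in the same physical tracked space, so that a designer could keep the ratio under a chosen detection threshold (e.g., the roughly 30% reported above). The rectangle-based room model is an assumption for illustration.

```python
from dataclasses import dataclass

@dataclass
class Room:
    x: float   # min-corner coordinates in physical space (meters)
    y: float
    w: float   # width and depth
    d: float

def overlap_ratio(a: Room, b: Room) -> float:
    """Overlapping floor area divided by the smaller room's area."""
    ox = max(0.0, min(a.x + a.w, b.x + b.w) - max(a.x, b.x))
    oy = max(0.0, min(a.y + a.d, b.y + b.d) - max(a.y, b.y))
    return (ox * oy) / min(a.w * a.d, b.w * b.d)

room_a = Room(0.0, 0.0, 4.0, 3.0)
room_b = Room(3.0, 0.0, 4.0, 3.0)   # shares a 1 m wide strip with room_a
print(f"overlap: {overlap_ratio(room_a, room_b):.0%}")   # 25%, under ~30%
```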
Flexible spaces[164] (Fig.8(c)) focus on dynamically generating new layouts for larger or infinite virtual spaces. Cheng et al.[169] introduced VRoamer for real-time virtual room generation. Han et al.[148] introduced “folding” techniques, and Xu et al.[170] applied the Lorentz contraction theory to contract virtual space based on variations in user velocity. These studies have expanded the idea of flexible spaces.
Mapping virtual to physical space and applying distortions is another research focus. Sun et al.[171] proposed a static planner mapping approach, while Dong et al.[172] introduced Smooth Assembly Mapping. Dong et al.[100] extended this to multi-user RDW, combining dynamic mapping with bending gain. Dong et al.[173] developed a perception-aware restructuring method to compress the VE.
4.3 Distractors
In RDW, various sensory cues, such as visual, auditory, olfactory, gustatory, and tactile stimuli, can be configured to attract the user's attention. These cues serve as distractors, helping to divert the user's focus from the RDW manipulations. By integrating such distractors, RDW may increase its operational thresholds, enabling users to explore larger VEs within limited physical spaces and enhancing their overall experience in VR. Lee et al.[174] studied the effects of sensory attractors, including visual, auditory, and olfactory stimuli, on intermittent reorientation during VR locomotion (Fig.9). Their findings highlighted that auditory and olfactory cues significantly influence reorientation, underscoring the potential of non-visual cues to enrich VR locomotion experiences.
Figure 9. Illustrations of visual, auditory, and olfactory attractors[174]. (a) Initial state. (b) A visual stimulus flies to the desired object (a snack) behind the user. (c) An auditory stimulus behind the user. (d) An olfactory (scent) stimulus. These attractors guide the user away from the collision-detection area.

4.3.1 Visual Distractor
Visual distractors significantly impact users' perception of space in VR, enabling manipulation of perceived paths and extending virtual environments beyond physical limits. Chen and Fuchs[175] integrated redirection techniques and immersive distractors, like a fire-breathing dragon, into a VR game to enhance navigation in confined spaces. Peck et al.[22] demonstrated the superiority of Redirected Free Exploration with Distractors (RFED) over traditional methods, highlighting the importance of natural navigation. Chen and Fuchs[176] achieved imperceptible redirection by incorporating distractors into primary VR activities. Cools and Simeone[177] found that continuous RDW methods reduce user detection of manipulations, while Schmelter et al.[94] showed that interactions like picking up and throwing objects can discreetly alter walking paths. These studies underscore the potential of visual distractors to shape navigation and interactions in VR, laying the groundwork for further refinement of RDW techniques with additional sensory stimuli.
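A common pattern underlying these distractor techniques can be sketched as gaze-contingent redirection: extra world rotation is injected only while the user's gaze remains within an angular window around the distractor. The window size, rotation rates, and per-frame interface below are hypothetical values for illustration, not a specific published controller.

```python
import math

GAZE_WINDOW_DEG = 15.0        # assumed: gaze this close to the distractor counts
DISTRACTED_RATE_DEG_S = 8.0   # assumed: stronger rotation while distracted
BASELINE_RATE_DEG_S = 2.0     # assumed: subtle baseline rotation otherwise

def angle_to(src, dst):
    """Bearing from src to dst in degrees."""
    return math.degrees(math.atan2(dst[1] - src[1], dst[0] - src[0]))

def world_rotation_step(user_pos, gaze_deg, distractor_pos, dt):
    """Per-frame world rotation, stronger while the user watches the distractor."""
    offset = abs((angle_to(user_pos, distractor_pos) - gaze_deg + 180.0) % 360.0 - 180.0)
    rate = DISTRACTED_RATE_DEG_S if offset < GAZE_WINDOW_DEG else BASELINE_RATE_DEG_S
    return rate * dt

# The user stares at a distractor to their left for one second at 90 Hz.
rotation = sum(world_rotation_step((0.0, 0.0), 90.0, (0.0, 3.0), 1 / 90)
               for _ in range(90))
print(f"injected rotation in 1 s: {rotation:.1f} deg")
```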
4.3.2 Auditory Distractor
Auditory cues significantly enhance the immersive experience in VR, especially in RDW. Mahmud et al.[178] showed that auditory feedback mitigates gait disturbances, improving accessibility for mobility-impaired users. Nogalski and Fohl[85] introduced acoustic RDW (ARDW) using wave field synthesis for spatial audio cues, increasing perceptual thresholds and enabling more extensive navigation. Rewkowski et al.[179] demonstrated that auditory distractors aid complex navigation for both visual and non-visual users, maintaining performance quality. Weller et al.[87] found that auditory step feedback (ASRDW) can subtly alter walking paths, enhancing natural navigation. Ogawa et al.[58] highlighted that fixed auditory cues in real-world settings can widen detection thresholds. Collectively, these studies emphasize the potential of auditory cues to optimize RDW techniques, enhancing user navigation, spatial perception, and overall VR experience.
4.4 Haptic Cues
Innovative approaches in VR have enhanced user experiences through haptic cues, particularly in RDW techniques. Studies have shown that combining haptic feedback with visual cues creates immersive virtual environments. Haptic cues in RDW can be categorized into dynamic and stationary. Dynamic haptic cues change based on user movements or environmental stimuli, while stationary cues remain constant or semi-constant regardless of user actions or VE changes. This classification facilitates a nuanced exploration of the contribution of different haptic cues to the effectiveness of RDW strategies.
4.4.1 Stationary Haptic Cues
Matsumoto et al.[90] introduced curvature manipulation techniques with haptic cues to enhance RDW, showing that users perceive a linear path while walking along a convex wall, with perceived curvature reduced by 62%. Their “Unlimited Corridor” system[180] employs visuo-haptic interaction, allowing infinite virtual walking in confined spaces through dynamic curvature alteration (Fig.10). Steinicke et al.[89] provided a comprehensive overview of RDW, emphasizing passive haptic feedback and predictive user targeting. These studies highlight diverse approaches, from haptic illusions to novel redirection systems, that optimize spatial perception, reduce motion sickness, and enhance VR immersion. By leveraging haptic and visuo-haptic interactions, they offer valuable insights for future advancements in the RDW technology.
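The core geometric idea of the Unlimited Corridor can be sketched as a mapping from a circular physical path to a straight virtual corridor, with virtual forward progress equal to the physical arc length. The radius below is an assumed value, and the full system additionally handles the visuo-haptic hand contact with the curved wall or handrail.

```python
import math

R_PHYSICAL = 3.0   # assumed radius of the curved physical wall (meters)

def physical_to_virtual(phys_angle_rad):
    """Map a position on the circular physical path to a straight virtual
    corridor: virtual forward distance equals the physical arc length, so
    the user feels a straight walk while actually circling the wall."""
    arc_length = R_PHYSICAL * phys_angle_rad
    return arc_length, 0.0   # (forward, lateral) in the virtual corridor

# One full physical lap yields about 18.8 m of straight virtual corridor.
for deg in (90, 180, 360):
    forward, _ = physical_to_virtual(math.radians(deg))
    print(f"{deg:3d} deg around the wall -> {forward:5.2f} m of virtual corridor")
```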
Figure 10. Overview of Unlimited Corridor: Handrail Version[180]. A participant perceives straight handrails via the head-mounted display (HMD) and experiences the sensation of walking while holding onto what appears to be a straight handrail, despite actually holding onto a curved one. (a) Real space. (b) Virtual space.

4.4.2 Dynamic Haptic Cues
Haptic feedback and redirection techniques play a crucial role in enhancing VR experiences by optimizing spatial utilization and providing immersive user interactions. Askarbekkyzy et al.[181] introduced floor-based shape-changing displays to simulate larger terrains, addressing hardware limitations. Research on the hanger reflex for hand redirection[182] highlights its potential in upper limb rehabilitation and lower limb strategies. Weiss et al.[183] explored pseudo-stiffness to dynamically adjust object hardness, enriching haptic experiences.
The TableMorph system[184] uses movable tables controlled by robots to create visuo-haptic illusions, enhancing tactile VR experiences. Hoshikawa et al.[185] extended RDW to door-opening scenarios with dynamic haptic feedback, enhancing user guidance in limited spaces (Fig.11).
Figure 11. RedirectedDoors+ by Hoshikawa et al.[185]. (a) The system offers users haptic feedback simulating the act of opening doors in room-scale VR, achieved by dynamically controlling a few wheel robots with a doorknob prop. (b) As the user opens a door, the system rotates the entire VE. (c) The user is redirected away from the play area's boundary.

Hwang et al.[186] examined various vestibular stimulation methods, achieving spatial expansion without compromising immersion. Sassi et al.[187] integrated RDW techniques for wheelchair movement in VR, demonstrating its effectiveness in enhancing user experiences. Additionally, Hwang et al.[188] explored Bone-Conduction Vibration (BCV), extending detection thresholds and improving comfort with auditory stimuli.
These studies underscore the significance of haptic feedback and innovative vestibular cues in VR, paving the way for more engaging and seamless virtual experiences across various applications.
5. Evaluation of RDW Methods
Evaluating RDW techniques is crucial for assessing their effectiveness and driving further improvements. In this section, we will review key studies that analyze different RDW methods and their outcomes.
5.1 Evaluation of RDW
Several studies have systematically evaluated RDW techniques under different conditions and configurations. Hodgson and Bachmann[9] analyzed four general approaches (S2C, S2O, S2MT, and S2MC) through experiments involving various path types, using both simulations and user studies. Azmandian et al.[10] tested S2C, S2O, and non-redirection methods, as well as their modifications with translation gains, assessing performance across diverse physical space configurations. They introduced a waypoint-based virtual path model that is widely used for simulation. Similarly, Messinger et al.[102] focused on the influence of the size and shape of tracking spaces on APF and traditional RDW methods. Kim et al.[96] broadened the scope to include various virtual room dimensions and objects, exploring their impact on translation gain thresholds. Additionally, different multi-user redirection strategies were evaluated by Azmandian et al.[119]. OpenRDW[189] has become a key benchmarking tool, offering a configurable toolkit for testing advanced RDW techniques, including APF and reinforcement-learning-based methods, across a range of experimental conditions.
Recent studies have refined our understanding of simulation's role in evaluating RDW techniques. Azmandian et al.[190] affirmed simulation's validity through analyses of real and simulated user data. In contrast, Hirt et al.[191] criticized the approach, highlighting the complex and delicate nature of RDW that necessitates careful evaluation of simulation-based methods. These insights underscore the ongoing discourse on refining RDW research methodologies.
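A bare-bones version of such waypoint-driven simulation is sketched below: a simulated user walks through random waypoints inside a bounded room under a constant rotation gain, and resets are counted whenever the physical position would leave the room. The gain model, reset rule, and parameters are simplifying assumptions for illustration, not OpenRDW's actual interface.

```python
import math
import random

ROOM_HALF = 2.0       # half-extent of a 4 m x 4 m physical room
STEP = 0.1            # meters advanced per simulation step
ROTATION_GAIN = 0.9   # assumed constant gain for demonstration

def simulate(n_waypoints=50, seed=0):
    """Walk a simulated user through random virtual waypoints and count
    how often a boundary reset is needed."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    heading = 0.0
    resets = 0
    for _ in range(n_waypoints):
        target_heading = rng.uniform(0.0, 2.0 * math.pi)
        distance = rng.uniform(2.0, 6.0)   # virtual distance to the waypoint
        # Redirection: the physical turn is a gain-scaled virtual turn.
        heading += (target_heading - heading) * ROTATION_GAIN
        walked = 0.0
        while walked < distance:
            x += STEP * math.cos(heading)
            y += STEP * math.sin(heading)
            walked += STEP
            if abs(x) > ROOM_HALF or abs(y) > ROOM_HALF:
                resets += 1
                heading += math.pi         # simple turn-in-place reset
                x = max(-ROOM_HALF, min(ROOM_HALF, x))
                y = max(-ROOM_HALF, min(ROOM_HALF, y))
    return resets

print(f"resets over 50 waypoints: {simulate()}")
```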
5.2 Criteria for Evaluation
Throughout the history of RDW development, performance evaluation has utilized both objective and subjective criteria. Objective criteria include quantitative metrics such as the number of resets, distances to walls and the center, and the redirection rate, which are suitable for both simulations and user studies given accurate tracking. Subjective criteria involve user assessments of experience and practicality, gathered exclusively through user studies. Together, these criteria ensure comprehensive evaluation and continual improvement of RDW techniques.
Number of resets, or wall contacts, is a primary objective metric, quantifying how often a user collides with boundaries, obstacles, or other users. Fewer resets indicate better performance, showing a technique's ability to keep users within the physical space. Related metrics include the frequency of resets and the physical/virtual distances between resets. Numerous studies have used these criteria for evaluation[9-15, 18, 19, 102, 151].
Distances to walls and the center, along with similar metrics, offer an alternative perspective on redirection effectiveness. In obstacle-free spaces, RDW techniques typically guide users away from walls towards a central, safe area. These criteria are widely used in generalized methods[9, 12, 13, 21, 103, 191].
Rate of redirection measures the average injected rotation or translation over time, assessing the direct manipulation by a technique. While a higher rate can influence user perception and increase side effects, it does not necessarily indicate better performance. Relevant studies include [9, 12, 101, 102, 135].
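All three objective criteria can be computed directly from a logged physical trajectory, as in the sketch below, which derives the number of resets, the mean distance to the room center, and the mean redirection rate from per-frame samples. The minimal log format is an assumption for illustration.

```python
import math

def evaluate(trajectory, injected_rot_deg, reset_flags, dt):
    """Compute objective RDW metrics from per-frame logs.

    trajectory       : list of (x, y) physical positions in meters
    injected_rot_deg : per-frame injected rotation magnitudes in degrees
    reset_flags      : per-frame booleans, True when a reset was triggered
    dt               : frame time in seconds
    """
    n_resets = sum(reset_flags)
    mean_center_dist = sum(math.hypot(x, y) for x, y in trajectory) / len(trajectory)
    redirection_rate = sum(injected_rot_deg) / (len(trajectory) * dt)  # deg/s
    return n_resets, mean_center_dist, redirection_rate

# Tiny synthetic log: four frames at 90 Hz with one reset at the end.
traj = [(0.0, 0.0), (0.1, 0.0), (0.2, 0.1), (0.2, 0.1)]
rots = [0.02, 0.02, 0.0, 0.0]
flags = [False, False, False, True]
print(evaluate(traj, rots, flags, dt=1 / 90))
```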
In evaluating subjective criteria for RDW, the focus is on potential side effects, primarily simulator sickness and cognitive load. Simulator sickness may be exacerbated by RDW manipulations; the simulator sickness questionnaire (SSQ)[74] is commonly used to measure it. Comprehensive evaluation remains challenging due to variations in manipulation methods, hardware, and environments[8].
Spatial manipulations may also interfere with users' spatial recognition and increase cognitive load. General metrics such as the Igroup Presence Questionnaire (IPQ)[79] are used, but criteria and experimental designs for evaluating these effects are still developing. Some studies suggest that RDW does not negatively affect spatial memory and may even improve spatial understanding[21, 22]. However, other studies highlight potential negative effects, such as increased difficulty in maintaining orientation and impacts on task performance[20, 23].
Additional criteria, such as the system usability scale (SUS)[192], can also be valuable for the subjective evaluation of RDW systems and applications. Although general criteria remain limited, it is crucial to carefully assess the practicality of RDW in real-world scenarios.
5.3 Discussions of RDW as a Locomotion Technique
Several studies have shown RDW to be superior to other VR locomotion techniques. Peck et al.[22] found that RFED outperformed walking-in-place (WIP)[193] and joystick control in virtual navigation. Similarly, Langbehn et al.[4] demonstrated that RDW provides better spatial knowledge than joystick control and teleportation in room-scale VR. Prinz et al.[194] identified RDW as a prominent category in taxonomies of VR locomotion research. However, some studies challenge RDW's practicality. Paris et al.[195] concluded that WIP is preferable in small spaces. Additionally, Tseng et al.[196] highlighted potential risks, noting that RDW and similar perceptual manipulation methods could be exploited maliciously.
6. Future Directions in the RDW Technology
The advancements in the RDW technology have significantly enhanced the VR experience, yet several challenges and opportunities for future research remain. This section outlines potential future research directions that could revolutionize RDW and, by extension, VR experiences.
Enhancing Perceptual Gains and Redirection Techniques. Investigating the limits of human perceptual thresholds, especially under varied and complex VEs, can lead to more sophisticated redirection techniques. Machine learning algorithms, particularly deep learning, could be employed to dynamically adjust these gains in real time, based on user behavior and environmental context. Integrating eye-tracking and other physiological measures can provide more granular data to tailor these adjustments.
Integrating Multisensory Cues. While current RDW methods primarily focus on visual and auditory cues, there is significant potential in incorporating more comprehensive multisensory feedback, including haptic, olfactory, and vestibular stimuli. Research into how these additional sensory inputs can be synchronized and optimized for RDW can lead to a more immersive and convincing VR experience. For instance, dynamic haptic feedback through wearables or environmental props can enhance the illusion of real-world navigation within virtual spaces.
Adaptive and Personalized RDW Algorithms. As VR becomes more mainstream, adaptive RDW algorithms that can tailor redirection strategies to individual users’ movement patterns, perceptual sensitivities, and preferences will become essential. Machine learning and AI can play a crucial role in developing these adaptive systems. By analyzing user behavior data, these algorithms can predict user movements more accurately and adjust redirection techniques in real time, ensuring a seamless and personalized VR experience.
Multi-User and Collaborative VR Environments. As VR applications expand into collaborative and multi-user domains, the RDW technology must evolve to handle multiple users interacting within the same physical space. Future research should focus on developing algorithms that can manage complex interactions between users, ensuring seamless navigation and minimizing collisions. Techniques such as shared virtual spaces and adaptive redirection algorithms that account for the presence and movements of other users will be essential.
Integration with Other Locomotion Techniques. Combining RDW with other VR locomotion techniques, such as teleportation, WIP, and vehicle-based movement, can offer more flexible and user-friendly navigation options. Hybrid approaches can mitigate the limitations of individual techniques and provide users with multiple modes of movement, enhancing overall accessibility and comfort. Future VR systems may dynamically switch between these techniques based on the user's location, activity, and the VE's requirements.
Real-World Applications and Industry Adoption. Translating RDW research into practical applications requires close collaboration with industry partners. Future studies may focus on developing scalable and cost-effective RDW solutions that can be easily integrated into commercial VR systems. Exploring applications in fields such as medical training, psychological therapy, and professional development can demonstrate the broader impact of the RDW technology and drive its adoption across various sectors.
By addressing these future research directions, the RDW technology can continue to advance, making VR experiences more immersive, accessible, and practical across diverse applications. The integration of new technologies, ethical considerations, and industry collaborations will play a pivotal role in shaping the future landscape of VR navigation.
7. Conclusions
In this survey, we comprehensively explored the advancements in redirected walking (RDW) techniques aimed at overcoming spatial constraints in virtual reality (VR). We gave an in-depth analysis of gain perception mechanisms, various RDW control algorithms, and innovative methods extending beyond traditional gain-based techniques, and we highlighted the significant role RDW plays in enhancing user immersion and safety within limited physical spaces. While considerable progress has been made, this field continues to present opportunities for further research, particularly in refining perceptual gains, integrating multisensory feedback, and developing advanced predictive algorithms. By addressing these areas, the RDW technology can become even more effective, making VR experiences more immersive and practical across diverse applications.
Acknowledgements: The authors wish to thank Chen-Qi Jia and Chen-Fei Yuan from Tsinghua University, Beijing, for their valuable contributions to this paper.
-
Table 1. Detection Thresholds of Translation Gain

Source | Threshold | Comment
Steinicke et al., 2008[27] | 0.78–1.22 | –
Steinicke et al., 2009[2] | 0.86–1.26 | –
Bruder et al., 2012[44] | 0.8724–1.2896 | Walking
 | 0.9378–1.3607 | Electric wheelchair
Zhang et al., 2018[45] | 0.942–1.097 | 360° video-based telepresence systems
Kruse et al., 2018[46] | 0.85823–1.26054 | No visible virtual feet in a high-fidelity visual environment
 | 0.87583–1.15388 | Visible virtual feet in a high-fidelity visual environment
 | 0.72745–1.25038 | Visible virtual feet in a low cue VE
Reimer et al., 2020[47] | 0.911–1.278 | No self-avatar
 | 0.891–1.216 | Visible self-avatar
Kim et al., 2021[48] | 0.88–1.19 | Larger VR room, reference translation gain: 1.0
 | 0.85–1.29 | Smaller VR room, reference translation gain: 1.0
 | 0.60–0.97 | Larger VR room, reference translation gain: 1.2
 | 0.68–1.16 | Smaller VR room, reference translation gain: 1.2
Kim et al., 2023[49] | 0.91–1.22 | Large × empty (size/object)
 | 0.85–1.12 | Medium × empty (size/object)
 | 0.73–1.10 | Small × empty (size/object)
 | 1.02–1.34 | Large × furnished (size/object)
 | 0.92–1.23 | Medium × furnished (size/object)
 | 0.96–1.24 | Small × furnished (size/object)
 | 0.76–1.25 | Large × empty (size/layout)
 | 0.86–1.25 | Large × centered (size/layout)
 | 0.83–1.25 | Large × peripheral (size/layout)
 | 0.84–1.35 | Large × scattered (size/layout)
 | 0.80–1.25 | Small × empty (size/layout)
 | 0.73–1.23 | Small × centered (size/layout)
 | 0.78–1.25 | Small × peripheral (size/layout)
 | 0.82–1.25 | Small × scattered (size/layout)
Luo et al., 2024[50] | 0.48–1.78 | With different zoomed-in FOVs

Table 2. Detection Thresholds of Rotation Gain
Source | Threshold | Comment
Steinicke et al., 2008[27] | 0.59–1.10 | Discrimination between virtual and physical rotation
 | 0.76–1.19 | Discrimination between two successive rotations
Steinicke et al., 2009[2] | 0.67–1.24 | –
Bruder et al., 2012[44] | 0.6810–1.2594 | Walking
 | 0.7719–1.2620 | Electric wheelchair
Serafin et al., 2013[51] | 0.82–1.20 | Audio
Paludan et al., 2016[52] | 0.93–1.27 | Visual density, control
 | 0.81–1.19 | Visual density, 4 objects
 | 0.82–1.20 | Visual density, 16 objects
Nilsson et al., 2016[53] | 0.77–1.10 | No audio
 | 0.80–1.11 | Static audio
 | 0.79–1.08 | Moving audio
Zhang et al., 2018[45] | 0.877–1.092 | Rotations to the left
 | 0.892–1.054 | Rotations to the right
Williams & Peck, 2019[43] | 0.5742–1.2829 | FOV 40°, without distractors, female
 | 0.7382–1.1790 | FOV 40°, without distractors, male
 | 0.5455–1.3198 | FOV 40°, with distractors, female
 | 0.7619–1.2156 | FOV 40°, with distractors, male
 | 0.6459–1.3218 | FOV 110°, without distractors, female
 | 0.6999–1.5616 | FOV 110°, without distractors, male
 | 0.3692–1.4772 | FOV 110°, with distractors, female
 | 0.7242–1.6211 | FOV 110°, with distractors, male
Brument et al., 2020[54] | 1.13–1.32 | Rotation: 60°, vignetting (none, color)
 | 1.11–1.29 | Rotation: 60°, vignetting (none, blur)
 | 1.13–1.32 | Rotation: 60°, vignetting (horizontal, color)
 | 1.10–1.29 | Rotation: 60°, vignetting (horizontal, blur)
 | 1.13–1.38 | Rotation: 60°, vignetting (global, color)
 | 1.08–1.35 | Rotation: 60°, vignetting (global, blur)
 | 1.15–1.33 | Rotation: 90°, vignetting (none, color)
 | 1.15–1.30 | Rotation: 90°, vignetting (none, blur)
 | 1.12–1.40 | Rotation: 90°, vignetting (horizontal, color)
 | 1.13–1.35 | Rotation: 90°, vignetting (horizontal, blur)
 | 1.16–1.35 | Rotation: 90°, vignetting (global, color)
 | 1.12–1.33 | Rotation: 90°, vignetting (global, blur)
Brument et al., 2021[55] | 0.64–1.35 | Rotation type: 20°/s
 | 0.58–1.36 | Rotation type: 30°/s
 | 0.72–1.19 | Rotation type: 40°/s
Robb et al., 2022[56] | 0.803–1.242 | One week
 | 0.862–1.117 | Two weeks
 | 0.874–1.128 | Three weeks
 | 0.894–1.095 | Four weeks
Wang et al., 2022[57] | 0.89–1.28 | Seated
 | 0.80–1.40 | Standing
Xu et al., 2024[31] | 0.84–1.28 | Bidirectional
Ogawa et al., 2023[58] | 0.81–1.27 | No sound
 | 0.57–1.37 | Fixed sound
 | 0.81–1.33 | Redirected sound

Table 3. Detection Thresholds of Curvature Gain
Source | Threshold | Comment
Steinicke et al., 2008[27] | −π/50 < r < +π/52.94 | Scene rotation started immediately
 | −π/69.23 < r < +π/85.71 | Scene rotation started after 2 meters
Steinicke et al., 2009[2] | r > −π/69.23 | Leftward bended paths
 | r > +π/69.23 | Rightward bended paths
Bruder et al., 2012[44] | r ≥ 14.92 | Walking
 | r ≥ 8.97 | Electric wheelchair
Neth et al., 2012[59] | r > 10.57 | v = 0.75 m/s
 | r > 23.75 | v = 1.00 m/s
 | r > 26.99 | v = 1.25 m/s
Serafin et al., 2013[51] | 25–30 | Audio
Grechkin et al., 2016[60] | r > 11.61 | Constant stimuli
 | r > 6.41 | Maximum likelihood
Nguyen et al., 2018[61] | r > 10.7 | Male
 | r > 8.63 | Female
Rietzler et al., 2018[62] | 5.2°/m | –
Bölling et al., 2019[63] | r > 97 | Day-1 (baseline)
 | r > 12 | Day-2 (after first adaptation)
 | r > 270 | Day-3 (re-test at the start)
 | r > 14 | Day-3 (after second adaptation)
Reimer et al., 2020[47] | −5.518 < r < 4.124 | Without body
 | −5.590 < r < 3.428 | With body
Nguyen et al., 2020[64] | 4.17 < r < 55.5 | –
Nguyen et al., 2020[65] | 4.06 < r < 38.11, r(mean) > 6.75 | Single task
 | 4.06 < r < 38.11, r(mean) > 5.24 | Dual task
Li et al., 2021[66] | gc = 0.128 ± 0.034 m⁻¹ | Left direction, total detection threshold
 | gc = 0.126 ± 0.036 m⁻¹ | Left direction, ascending order
 | gc = 0.130 ± 0.034 m⁻¹ | Left direction, descending order
 | gc = 0.098 ± 0.043 m⁻¹ | Right direction, total detection threshold
 | gc = 0.096 ± 0.043 m⁻¹ | Right direction, ascending order
 | gc = 0.101 ± 0.043 m⁻¹ | Right direction, descending order
 | gc = 0.079–0.132 m⁻¹ | Left-curved postorder path
 | gc = 0.055–0.108 m⁻¹ | Right-curved postorder path
Mostajeran et al., 2024[67] | DT = 0.078 (right) | Nature environments
 | DT = −0.095 (left) | Nature environments
 | DT = 0.069 (right) | Urban environments
 | DT = −0.083 (left) | Urban environments

Table 4. Detection Thresholds of Non-Forward and Interactive Gain
Gain | Source | Threshold | Comment
Bending | Langbehn et al., 2017[28] | 3.25 | r_real = 1.25 m
 | | 4.35 | r_real = 2.5 m
Jumping | Hayashi et al., 2019[39] | 0.68–1.44 | Distance
 | | 0.09–2.16 | Height
 | | 0.50–1.39 | Rotation
 | Li et al., 2021[41] | 0.70–1.35 | Horizontal
 | | 0.38–2.57 | Vertical
Non-forward vertical | Matsumoto et al., 2020[38] | 0.842–2.547 | Virtual environment: stretching up
 | | 0.827–1.944 | Virtual environment: crouching
 | | 2.576–34.096 | Drone telepresence system: stretching up
 | | 1.121–3.410 | Drone telepresence system: crouching
Non-forward steps translation | Cho et al., 2021[32] | 0.84–1.33 | Backward step
 | | 0.87–1.16 | Leftward sidestep
 | | 0.88–1.18 | Rightward sidestep
Non-forward steps curvature | Cho et al., 2021[32] | −10.95–10.30 | Backward step
 | | −6.02–13.19 | Leftward sidestep
 | | −9.92–4.65 | Rightward sidestep
Strafing | You et al., 2022[29] | 4.68° | Left
 | | 5.57° | Right
Interactive | Hoshikawa et al., 2022[68] | 0.74–1.73 | Push condition, using door prop
 | | 0.66–2.39 | Push condition, using controller
 | | 0.49–1.48 | Pull condition, using door prop
 | | 0.31–2.68 | Pull condition, using controller

Table 5. Reactive Methods
Source | Multi-User Support | Physical Space Status | Virtual Space Status | Virtual Actions of Users
Razzaque, 2005[7] | No | Static obstacles | Open spaces | No knowledge required
Bachmann et al., 2013[98] | Two users | Static obstacles | Open spaces | No knowledge required
Chen et al., 2018[99] | No | Dynamic obstacles | Open spaces | No knowledge required
Bachmann et al., 2019[12] | Yes | Static obstacles | Open spaces | No knowledge required
Dong et al., 2019[100] | Yes | Static obstacles | Open spaces | No knowledge required
Lee et al., 2019[101] | No | Static obstacles | Open spaces | No knowledge required
Messinger et al., 2019[102] | Yes | Static obstacles | Open spaces | No knowledge required
Thomas and Rosenberg, 2019[11] | No | Static obstacles | Open spaces | No knowledge required
Dong et al., 2020[103] | Yes | Static obstacles | Open spaces | No knowledge required
Lee et al., 2020[14] | Yes | Static obstacles | Open spaces | No knowledge required
Li and Fan, 2020[104] | No | Static obstacles | Fixed spaces | No knowledge required
Strauss et al., 2020[105] | No | Static obstacles | Open spaces | No knowledge required
Thomas et al., 2020[106] | No | Static obstacles | Fixed spaces | No knowledge required
Chen et al., 2021[107] | No | Static obstacles | Fixed spaces | No knowledge required
Dong et al., 2021[13] | Yes | Static obstacles | Open spaces | No knowledge required
Williams et al., 2021[15] | No | Static obstacles | Fixed spaces | No knowledge required
Williams et al., 2021[108] | No | Static obstacles | Fixed spaces | No knowledge required
Azmandian et al., 2022[109] | No | Static obstacles | Open spaces | Need virtual path
Wang et al., 2022[110] | No | Static obstacles | Fixed spaces | No knowledge required
Xu et al., 2022[18] | Yes | Static obstacles | Open spaces | Need next waypoint
Xu et al., 2022[111] | No | Static obstacles | Open spaces | Need next waypoint
Xu et al., 2022[112] | No | Static obstacles | Open spaces | No knowledge required
Wu et al., 2023[113] | No | Static obstacles | Fixed spaces | No knowledge required
Chen et al., 2024[114] | No | Static obstacles | Open spaces | No knowledge required
Lee et al., 2024[115] | Yes | Static obstacles | Open spaces | No knowledge required
Lee et al., 2024[116] | No | Static obstacles | Fixed spaces | No knowledge required
Xu et al., 2024[117] | No | Static obstacles | Open spaces | Need next waypoint

Table 6. Predictive Methods
Source | Multi-User Support | Physical Space Status | Virtual Space Status | Virtual Tracking Data of Users
Interrante et al., 2007[122] | No | Static obstacles | Open spaces | Gaze direction & previous displacement
Steinicke et al., 2008[89] | No | Static obstacles | Open spaces | Viewing direction
Su, 2007[123] | No | Static obstacles | Open spaces | None
Nescher and Kunz, 2013[16] | No | Static obstacles | Open spaces | Head tracking data
Hirt et al., 2019[124] | No | Static obstacles | Open spaces | None
Zank and Kunz, 2017[125] | No | Static obstacles | Open spaces | User's position
Qi et al., 2021[126] | No | Static obstacles | Open spaces & highly-structured maze | User's location & orientation
Thomas et al., 2022[127] | No | Static obstacles | Open spaces | User's location, orientation, & next waypoint
Gandrud and Interrante, 2016[128] | No | Static obstacles | Open spaces | Head orientation & gaze direction
Bremer et al., 2021[129] | No | Static obstacles | Open spaces | Positional, orientation, & eye-tracking data
Stein et al., 2022[130] | No | Static obstacles | Open spaces | Eye-tracking data
Jeon et al., 2024[131] | No | Static obstacles | Open spaces | User's spatial & eye-tracking data
Zmuda et al., 2013[17] | No | Static obstacles | Virtual store with aisles | User's location & orientation
Nescher et al., 2014[132] | No | Static obstacles | Open spaces | User disturbance caused by applied RET
Azmandian et al., 2016[133] | No | Static obstacles | Open spaces | Navigation meshes
Chen et al., 2021[107] | No | Static obstacles | Open spaces | User's current position
Hirt et al., 2019[134] | Yes | Dynamic spaces | Open spaces | User's current position
Jeon et al., 2022[136] | Yes | Dynamic spaces | Open spaces | User's current position
-
[1] Burdea G C, Coiffet P. Virtual Reality Technology. John Wiley & Sons, 2003.
[2] Steinicke F, Bruder G, Jerald J, Frenz H, Lappe M. Estimation of detection thresholds for redirected walking techniques. IEEE Trans. Visualization and Computer Graphics, 2010, 16(1): 17–27. DOI: 10.1109/TVCG.2009.62.
[3] Bozgeyikli E, Raij A, Katkoori S, Dubey R. Point & teleport locomotion technique for virtual reality. In Proc. the 2016 Annual Symposium on Computer-Human Interaction in Play, Oct. 2016, pp.205–216. DOI: 10.1145/2967934.2968105.
[4] Langbehn E, Lubos P, Steinicke F. Evaluation of locomotion techniques for room-scale VR: Joystick, teleportation, and redirected walking. In Proc. the 2018 Virtual Reality International Conference-Laval Virtual, Apr. 2018, Article No. 4. DOI: 10.1145/3234253.3234291.
[5] Usoh M, Arthur K, Whitton M C, Bastos R, Steed A, Slater M, Brooks Jr F P. Walking > walking-in-place > flying, in virtual environments. In Proc. the 26th Annual Conference on Computer Graphics and Interactive Techniques, Jul. 1999, pp.359–364. DOI: 10.1145/311535.311589.
[6] Razzaque S, Kohn Z, Whitton M C. Redirected walking. In Proc. the 22nd Annual Conference of the European Association for Computer Graphics, Sept. 2001.
[7] Razzaque S. Redirected walking [Ph.D. Thesis]. The University of North Carolina at Chapel Hill, Chapel Hill, 2005.
[8] Nilsson N C, Peck T, Bruder G, Hodgson E, Serafin S, Whitton M, Steinicke F, Rosenberg E S. 15 years of research on redirected walking in immersive virtual environments. IEEE Computer Graphics and Applications, 2018, 38(2): 44–56. DOI: 10.1109/MCG.2018.111125628.
[9] Hodgson E, Bachmann E. Comparing four approaches to generalized redirected walking: Simulation and live user data. IEEE Trans. Visualization and Computer Graphics, 2013, 19(4): 634–643. DOI: 10.1109/TVCG.2013.28.
[10] Azmandian M, Grechkin T, Bolas M, Suma E. Physical space requirements for redirected walking: How size and shape affect performance. In Proc. the 25th International Conference on Artificial Reality and Telexistence and 20th Eurographics Symposium on Virtual Environments, Oct. 2015, pp.93–100.
[11] Thomas J, Rosenberg E S. A general reactive algorithm for redirected walking using artificial potential functions. In Proc. the 2019 IEEE Conference on Virtual Reality and 3D User Interfaces, Mar. 2019, pp.56–62. DOI: 10.1109/VR.2019.8797983.
[12] Bachmann E R, Hodgson E, Hoffbauer C, Messinger J. Multi-user redirected walking and resetting using artificial potential fields. IEEE Trans. Visualization and Computer Graphics, 2019, 25(5): 2022–2031. DOI: 10.1109/TVCG.2019.2898764.
[13] Dong T, Shen Y, Gao T, Fan J. Dynamic density-based redirected walking towards multi-user virtual environments. In Proc. the 2021 IEEE Virtual Reality and 3D User Interfaces, Mar. 27-Apr. 1, 2021, pp.626–634. DOI: 10.1109/VR50410.2021.00088.
[14] Lee D Y, Cho Y H, Min D H, Lee I K. Optimal planning for redirected walking based on reinforcement learning in multi-user environment with irregularly shaped physical space. In Proc. the 2020 IEEE Conference on Virtual Reality and 3D User Interfaces, Mar. 2020, pp.155–163. DOI: 10.1109/VR46266.2020.00034.
[15] Williams N L, Bera A, Manocha D. ARC: Alignment-based redirection controller for redirected walking in complex environments. IEEE Trans. Visualization and Computer Graphics, 2021, 27(5): 2535–2544. DOI: 10.1109/TVCG.2021.3067781.
[16] Nescher T, Kunz A. Using head tracking data for robust short term path prediction of human locomotion. In Transactions on Computational Science XVIII, Gavrilova M L, Tan C J K, Kuijper A (eds.), Springer, 2013, pp.172–191. DOI: 10.1007/978-3-642-38803-3_10.
[17] Zmuda M A, Wonser J L, Bachmann E R, Hodgson E. Optimizing constrained-environment redirected walking instructions using search techniques. IEEE Trans. Visualization and Computer Graphics, 2013, 19(11): 1872–1884. DOI: 10.1109/TVCG.2013.88.
[18] Xu S Z, Liu J H, Wang M, Zhang F L, Zhang S H. Multi-user redirected walking in separate physical spaces for online VR scenarios. IEEE Trans. Visualization and Computer Graphics, 2024, 30(4): 1916–1926. DOI: 10.1109/TVCG.2023.3251648.
[19] Fan C W, Xu S Z, Yu P, Zhang F L, Zhang S H. Redirected walking based on historical user walking data. In Proc. the 2023 IEEE Conference Virtual Reality and 3D User Interfaces, Mar. 2023, pp.53–62. DOI: 10.1109/VR55154.2023.00021.
[20] Williams B, Narasimham G, Rump B, McNamara T P, Carr T H, Rieser J, Bodenheimer B. Exploring large virtual environments with an HMD when physical space is limited. In Proc. the 4th Symposium on Applied Perception in Graphics and Visualization, Jul. 2007, pp.41–48. DOI: 10.1145/1272582.1272590.
[21] Hodgson E, Bachmann E, Waller D. Redirected walking to explore virtual environments: Assessing the potential for spatial interference. ACM Trans. Applied Perception (TAP), 2011, 8(4): Article No. 22. DOI: 10.1145/2043603.2043604.
[22] Peck T C, Fuchs H, Whitton M C. An evaluation of navigational ability comparing redirected free exploration with distractors to walking-in-place and joystick locomotion interfaces. In Proc. the 2011 IEEE Virtual Reality Conference, Mar. 2011, pp.55–62. DOI: 10.1109/VR.2011.5759437.
[23] Bruder G, Lubos P, Steinicke F. Cognitive resource demands of redirected walking. IEEE Trans. Visualization and Computer Graphics, 2015, 21(4): 539–544. DOI: 10.1109/TVCG.2015.2391864.
[24] Lappe M, Bremmer F, van den Berg A V. Perception of self-motion from visual flow. Trends in Cognitive Sciences, 1999, 3(9): 329–336. DOI: 10.1016/S1364-6613(99)01364-9.
[25] Hülemeier A G, Lappe M. Visual perception of travel distance for self-motion through crowds. Journal of Vision, 2023, 23(4): Article No. 7. DOI: 10.1167/jov.23.4.7.
[26] Hülemeier A G, Lappe M. Illusory percepts of curvilinear self-motion when moving through crowds. Journal of Vision, 2023, 23(14): Article No. 6. DOI: 10.1167/jov.23.14.6.
[27] Steinicke F, Bruder G, Jerald J, Frenz H, Lappe M. Analyses of human sensitivity to redirected walking. In Proc. the 2008 ACM Symposium on Virtual Reality Software and Technology, Oct. 2008, pp.149–156. DOI: 10.1145/1450579.1450611.
[28] Langbehn E, Lubos P, Bruder G, Steinicke F. Bending the curve: Sensitivity to bending of curved paths and application in room-scale VR. IEEE Trans. Visualization and Computer Graphics, 2017, 23(4): 1389–1398. DOI: 10.1109/TVCG.2017.2657220.
[29] You C, Benda B, Rosenberg E S, Ragan E, Lok B, Thomas J. Strafing gain: Redirecting users one diagonal step at a time. In Proc. the 2022 IEEE International Symposium on Mixed and Augmented Reality, Oct. 2022, pp.603–611. DOI: 10.1109/ISMAR55827.2022.00077.
[30] Mayor J, Raya L, Bayona S, Sanchez A. Multi-technique redirected walking method. IEEE Trans. Emerging Topics in Computing, 2022, 10(2): 997–1008. DOI: 10.1109/TETC.2021.3062285.
[31] Xu S Z, Chen F X Y, Gong R, Zhang F L, Zhang S H. BiRD: Using bidirectional rotation gain differences to redirect users during back-and-forth head turns in walking. IEEE Trans. Visualization and Computer Graphics, 2024, 30(5): 2693–2702. DOI: 10.1109/TVCG.2024.3372094.
[32] Cho Y H, Min D H, Huh J S, Lee S H, Yoon J S, Lee I K. Walking outside the box: Estimation of detection thresholds for non-forward steps. In Proc. the 2021 IEEE Virtual Reality and 3D User Interfaces, Mar. 27-Apr. 1, 2021, pp.448–454. DOI: 10.1109/VR50410.2021.00068.
[33] Dong T, Gao T, Dong Y, Wang L, Hu K, Fan J. FREE-RDW: A multi-user redirected walking method for supporting non-forward steps. IEEE Trans. Visualization and Computer Graphics, 2023, 29(5): 2315–2325. DOI: 10.1109/TVCG.2023.3247107.
[34] Hu L, Zhang Y, Wang R, Gao Z, Bao H, Hua W. Human sensitivity to slopes of slanted paths. In Proc. the 2019 IEEE Conference on Virtual Reality and 3D User Interfaces, Mar. 2019, pp.984–985. DOI: 10.1109/VR.2019.8798248.
[35] Matsumoto K, Narumi T, Tanikawa T, Hirose M. Walking uphill and downhill: Redirected walking in the vertical direction. In Proc. the 2017 ACM Special Interest Group on Computer Graphics and Interactive Techniques Conference, Jul. 30-Aug. 3, 2017, Article No. 37. DOI: 10.1145/3102163.3102227.
[36] Zhang Y, Wu J, Liu Q. The sloped shoes: Influence human perception of the virtual slope. In Proc. the 2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops, Mar. 2022, pp.826–827. DOI: 10.1109/VRW55335.2022.00264.
[37] Miyazaki K, Nishihara I, Nakata T. Investigation of the effectiveness by redirected walking with tilt presentation to the sole. In Proc. the 2023 International Workshop on Advanced Imaging Technology, Mar. 2023, pp.129–133. DOI: 10.1117/12.2666577.
[38] Matsumoto K, Langbehn E, Narumi T, Steinicke F. Detection thresholds for vertical gains in VR and drone-based telepresence systems. In Proc. the 2020 IEEE Conference on Virtual Reality and 3D User Interfaces, Mar. 2020, pp.101–107. DOI: 10.1109/VR46266.2020.00028.
[39] Hayashi D, Fujita K, Takashima K, Lindeman R W, Kitamura Y. Redirected jumping: Imperceptibly manipulating jump motions in virtual reality. In Proc. the 2019 IEEE Conference on Virtual Reality and 3D User Interfaces, Mar. 2019, pp.386–394. DOI: 10.1109/VR.2019.8797989.
[40] Jung S, Borst C W, Hoermann S, Lindeman R W. Redirected jumping: Perceptual detection rates for curvature gains. In Proc. the 32nd Annual ACM Symposium on User Interface Software and Technology, Oct. 2019, pp.1085–1092. DOI: 10.1145/3332165.3347868.
[41] Li Y J, Jin D R, Wang M, Chen J L, Steinicke F, Hu S M, Zhao Q. Detection thresholds with joint horizontal and vertical gains in redirected jumping. In Proc. the 2021 IEEE Virtual Reality and 3D User Interfaces, Mar. 27-Apr. 1, 2021, pp.95–102. DOI: 10.1109/VR50410.2021.00030.
[42] Ogawa K, Fujita K, Takashima K, Kitamura Y. PseudoJumpOn: Jumping onto steps in virtual reality. In Proc. the 2022 IEEE Conference on Virtual Reality and 3D User Interfaces, Mar. 2022, pp.635–643. DOI: 10.1109/VR51125.2022.00084.
[43] Williams N L, Peck T C. Estimation of rotation gain thresholds considering FOV, gender, and distractors. IEEE Trans. Visualization and Computer Graphics, 2019, 25(11): 3158–3168. DOI: 10.1109/TVCG.2019.2932213.
[44] Bruder G, Interrante V, Phillips L, Steinicke F. Redirecting walking and driving for natural navigation in immersive virtual environments. IEEE Trans. Visualization and Computer Graphics, 2012, 18(4): 538–545. DOI: 10.1109/TVCG.2012.55.
[45] Zhang J, Langbehn E, Krupke D, Katzakis N, Steinicke F. Detection thresholds for rotation and translation gains in 360° video-based telepresence systems. IEEE Trans. Visualization and Computer Graphics, 2018, 24(4): 1671–1680. DOI: 10.1109/TVCG.2018.2793679.
[46] Kruse L, Langbehn E, Steinicke F. I can see on my feet while walking: Sensitivity to translation gains with visible feet. In Proc. the 2018 IEEE Conference on Virtual Reality and 3D User Interfaces, Mar. 2018, pp.305–312. DOI: 10.1109/VR.2018.8446216.
[47] Reimer D, Langbehn E, Kaufmann H, Scherzer D. The influence of full-body representation on translation and curvature gain. In Proc. the 2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops, Mar. 2020, pp.154–159. DOI: 10.1109/VRW50115.2020.00032.
[48] Kim D, Shin J E, Lee J, Woo W. Adjusting relative translation gains according to space size in redirected walking for mixed reality mutual space generation. In Proc. the 2021 IEEE Virtual Reality and 3D User Interfaces, Mar. 27-Apr. 1, 2021, pp.653–660. DOI: 10.1109/VR50410.2021.00091.
[49] Kim D, Kim S, Shin J E, Yoon B, Kim J, Lee J, Woo W. The effects of spatial configuration on relative translation gain thresholds in redirected walking. Virtual Reality, 2023, 27(2): 1233–1250. DOI: 10.1007/s10055-022-00734-3.
[50] Luo E X, Tang K Y, Xu S Z, Tong Q, Zhang S H. Walking telescope: Exploring the zooming effect in expanding detection threshold range for translation gain. In Proc. the 12th International Conference on Computational Visual Media, Apr. 2024, pp.252–273. DOI: 10.1007/978-981-97-2095-8_14.
[51] Serafin S, Nilsson N C, Sikstrom E, De Goetzen A, Nordahl R. Estimation of detection thresholds for acoustic based redirected walking techniques. In Proc. the 2013 IEEE Virtual Reality, Mar. 2013, pp.161–162. DOI: 10.1109/VR.2013.6549412.
[52] Paludan A, Elbaek J, Mortensen M, Zobbe M, Nilsson N C, Nordahl R, Reng L, Serafin S. Disguising rotational gain for redirected walking in virtual reality: Effect of visual density. In Proc. the 2016 IEEE Virtual Reality, Mar. 2016, pp.259–260. DOI: 10.1109/VR.2016.7504752.
[53] Nilsson N C, Suma E, Nordahl R, Bolas M, Serafin S. Estimation of detection thresholds for audiovisual rotation gains. In Proc. the 2016 IEEE Virtual Reality, Mar. 2016, pp.241–242. DOI: 10.1109/VR.2016.7504743.
[54] Brument H, Marchal M, Olivier A H, Argelaguet F. Influence of dynamic field of view restrictions on rotation gain perception in virtual environments. In Proc. the 2020 Virtual Reality and Augmented Reality: the 17th EuroVR International Conference, Nov. 2020, pp.20–40. DOI: 10.1007/978-3-030-62655-6_2.
[55] Brument H, Marchal M, Olivier A H, Argelaguet Sanz F. Studying the influence of translational and rotational motion on the perception of rotation gains in virtual environments. In Proc. the 2021 ACM Symposium on Spatial User Interaction, Nov. 2021, Article No. 1. DOI: 10.1145/3485279.3485282.
[56] Robb A, Kohm K, Porter J. Experience matters: Longitudinal changes in sensitivity to rotational gains in virtual reality. ACM Trans. Applied Perception, 2022, 19(4): 16. DOI: 10.1145/3560818.
[57] Wang C, Zhang S H, Zhang Y, Zollmann S, Hu S M. On rotation gains within and beyond perceptual limitations for seated VR. IEEE Trans. Visualization and Computer Graphics, 2023, 29(7): 3380–3391. DOI: 10.1109/TVCG.2022.3159799.
[58] Ogawa K, Fujita K, Sakamoto S, Takashima K, Kitamura Y. Exploring visual-auditory redirected walking using auditory cues in reality. IEEE Trans. Visualization and Computer Graphics, 2024, 30(8): 5782–5794. DOI: 10.1109/TVCG.2023.3309267.
[59] Neth C T, Souman J L, Engel D, Kloos U, Bulthoff H H, Mohler B J. Velocity-dependent dynamic curvature gain for redirected walking. IEEE Trans. Visualization and Computer Graphics, 2012, 18(7): 1041–1052. DOI: 10.1109/TVCG.2011.275.
[60] Grechkin T, Thomas J, Azmandian M, Bolas M, Suma E. Revisiting detection thresholds for redirected walking: Combining translation and curvature gains. In Proc. the 2016 ACM Symposium on Applied Perception, Jul. 2016, pp.113–120. DOI: 10.1145/2931002.2931018.
[61] Nguyen A, Rothacher Y, Lenggenhager B, Brugger P, Kunz A. Individual differences and impact of gender on curvature redirection thresholds. In Proc. the 2018 ACM Symposium on Applied Perception, Aug. 2018, Article No. 5. DOI: 10.1145/3225153.3225155.
[62] Rietzler M, Gugenheimer J, Hirzle T, Deubzer M, Langbehn E, Rukzio E. Rethinking redirected walking: On the use of curvature gains beyond perceptual limitations and revisiting bending gains. In Proc. the 2018 IEEE International Symposium on Mixed and Augmented Reality, Oct. 2018, pp.115–122. DOI: 10.1109/ISMAR.2018.00041.
[63] Bölling L, Stein N, Steinicke F, Lappe M. Shrinking circles: Adaptation to increased curvature gain in redirected walking. IEEE Trans. Visualization and Computer Graphics, 2019, 25(5): 2032–2039. DOI: 10.1109/TVCG.2019.2899228.
[64] Nguyen A, Rothacher Y, Lenggenhager B, Brugger P, Kunz A. Effect of sense of embodiment on curvature redirected walking thresholds. In Proc. the 2020 ACM Symposium on Applied Perception, Sept. 2020, Article No. 16. DOI: 10.1145/3385955.3407932.
[65] Nguyen A, Rothacher Y, Efthymiou E, Lenggenhager B, Brugger P, Imbach L, Kunz A. Effect of cognitive load on curvature redirected walking thresholds. In Proc. the 26th ACM Symposium on Virtual Reality Software and Technology, Nov. 2020, Article No. 17. DOI: 10.1145/3385956.3418950.
[66] Li H, Bian Y, Yang C, Zhang F, Zhao Y, Liu J, Meng X, Fan L. Estimation of human sensitivity for curvature gain of redirected walking technology. In Proc. the 23rd International Conference on Mobile Human-Computer Interaction, Sept. 27-Oct. 1, 2021, Article No. 33. DOI: 10.1145/3447526.3472018.
[67] Mostajeran F, Schneider S, Bruder G, Kühn S, Steinicke F. Analyzing cognitive demands and detection thresholds for redirected walking in immersive forest and urban environments. In Proc. the 2024 IEEE Conference Virtual Reality and 3D User Interfaces, Mar. 2024, pp.61–71. DOI: 10.1109/VR58804.2024.00030.
[68] Hoshikawa Y, Fujita K, Takashima K, Fjeld M, Kitamura Y. RedirectedDoors: Redirection while opening doors in virtual reality. In Proc. the 2022 IEEE Conference on Virtual Reality and 3D User Interfaces, Mar. 2022, pp.464–473. DOI: 10.1109/VR51125.2022.00066.
[69] Wichmann F A, Hill N J. The psychometric function: I. Fitting, sampling, and goodness of fit. Perception & Psychophysics, 2001, 63(8): 1293–1313. DOI: 10.3758/bf03194544.
[70] Ehrenstein W H, Ehrenstein A. Psychophysical methods. In Modern Techniques in Neuroscience Research, Windhorst U, Johansson H (eds.), Springer, 1999, pp.1211–1241. DOI: 10.1007/978-3-642-58552-4_43.
[71] Congdon B J, Steed A. Sensitivity to rate of change in gains applied by redirected walking. In Proc. the 25th ACM Symposium on Virtual Reality Software and Technology, Nov. 2019, Article No. 3. DOI: 10.1145/3359996.3364277.
[72] Taylor M M, Creelman C D. PEST: Efficient estimates on probability functions. The Journal of the Acoustical Society of America, 1967, 41(4A): 782–787. DOI: 10.1121/1.1910407.
[73] Hutton C, Ziccardi S, Medina J, Rosenberg E S. Individualized calibration of rotation gain thresholds for redirected walking. In Proc. the 2018 International Conference on Artificial Reality and Telexistence Eurographics Symposium on Virtual Environments, Nov. 2018. DOI: 10.2312/egve.20181315.
[74] Kennedy R S, Lane N E, Berbaum K S, Lilienthal M G. Simulator sickness questionnaire: An enhanced method for quantifying simulator sickness. The International Journal of Aviation Psychology, 1993, 3(3): 203–220. DOI: 10.1207/s15327108ijap0303_3.
[75] Keshavarz B, Hecht H. Validating an efficient method to quantify motion sickness. Human Factors: The Journal of the Human Factors and Ergonomics Society, 2011, 53(4): 415–426. DOI: 10.1177/0018720811403736.
[76] Hart S G, Staveland L E. Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. Advances in Psychology, 1988, 52: 139–183. DOI: 10.1016/S0166-4115(08)62386-9.
[77] Kim H, Jeon S B, Lee I K. Locomotion techniques for dynamic environments: Effects on spatial knowledge and user experiences. IEEE Trans. Visualization and Computer Graphics, 2024, 30(5): 2184–2194. DOI: 10.1109/TVCG.2024.3372074.
[78] Boletsis C. A user experience questionnaire for VR locomotion: Formulation and preliminary evaluation. In Proc. the 7th International Conference, Augmented Reality, Virtual Reality, and Computer Graphics, Sept. 2020, pp.157–167. DOI: 10.1007/978-3-030-58465-8_11.
[79] Schubert T, Friedmann F, Regenbrecht H. The experience of presence: Factor analytic insights. Presence, 2001, 10(3): 266–281. DOI: 10.1162/105474601300343603.
[80] Warren W H Jr, Kay B A, Zosh W D, Duchon A P, Sahuc S. Optic flow is used to control human walking. Nature Neuroscience, 2001, 4(2): 213–216. DOI: 10.1038/84054.
[81] Jaekl P M, Allison R S, Harris L R, Jasiobedzka U T, Jenkin H L, Jenkin M R, Zacher J E, Zikovitz D C. Perceptual stability during head movement in virtual reality. In Proc. the 2002 IEEE Virtual Reality, Mar. 2002, pp.149–155. DOI: 10.1109/VR.2002.996517.
[82] Rothacher Y, Nguyen A, Lenggenhager B, Kunz A, Brugger P. Visual capture of gait during redirected walking. Scientific Reports, 2018, 8(1): Article No. 17974. DOI: 10.1038/s41598-018-36035-6.
[83] Bruder G, Steinicke F, Wieland P, Lappe M. Tuning self-motion perception in virtual reality with visual illusions. IEEE Trans. Visualization and Computer Graphics, 2012, 18(7): 1068–1078. DOI: 10.1109/TVCG.2011.274.
[84] Bolte B, Bruder G, Steinicke F, Hinrichs K, Lappe M. Augmentation techniques for efficient exploration in head-mounted display environments. In Proc. the 17th ACM Symposium on Virtual Reality Software and Technology, Nov. 2010, pp.11–18. DOI: 10.1145/1889863.1889865.
[85] Nogalski M, Fohl W. Acoustic redirected walking with auditory cues by means of wave field synthesis. In Proc. the 2016 IEEE Virtual Reality, Mar. 2016, pp.245–246. DOI: 10.1109/VR.2016.7504745.
[86] Gao P, Matsumoto K, Narumi T, Hirose M. Visual-auditory redirection: Multimodal integration of incongruent visual and auditory cues for redirected walking. In Proc. the 2020 IEEE International Symposium on Mixed and Augmented Reality, Nov. 2020, pp.639–648. DOI: 10.1109/ISMAR50242.2020.00092.
[87] Weller R, Brennecke B, Zachmann G. Redirected walking in virtual reality with auditory step feedback. The Visual Computer, 2022, 38(9): 3475–3486. DOI: 10.1007/S00371-022-02565-4.
[88] Lee J, Hwang S, Ataya A, Kim S. Effect of optical flow and user VR familiarity on curvature gain thresholds for redirected walking. Virtual Reality, 2024, 28(1): Article No. 35. DOI: 10.1007/s10055-023-00935-4.
[89] Steinicke F, Bruder G, Kohli L, Jerald J, Hinrichs K. Taxonomy and implementation of redirection techniques for ubiquitous passive haptic feedback. In Proc. the 2008 International Conference on Cyberworlds, Sept. 2008, pp.217–223. DOI: 10.1109/CW.2008.53.
[90] Matsumoto K, Ban Y, Narumi T, Tanikawa T, Hirose M. Curvature manipulation techniques in redirection using haptic cues. In Proc. the 2016 IEEE Symposium on 3D User Interfaces, Mar. 2016, pp.105–108. DOI: 10.1109/3DUI.2016.7460038.
[91] Matsumoto K, Aoyama K, Narumi T, Kuzuoka H. Redirected walking using noisy galvanic vestibular stimulation. In Proc. the 2021 IEEE International Symposium on Mixed and Augmented Reality, Oct. 2021, pp.498–507. DOI: 10.1109/ISMAR52148.2021.00067.
[92] Sakono H, Matsumoto K, Narumi T, Kuzuoka H. Redirected walking using continuous curvature manipulation. IEEE Trans. Visualization and Computer Graphics, 2021, 27(11): 4278–4288. DOI: 10.1109/TVCG.2021.3106501.
[93] Congdon B J, Steed A. Sensitivity to rate of change in gains applied by redirected walking. In Proc. the 25th ACM Symposium on Virtual Reality Software and Technology, Nov. 2019, Article No. 3. DOI: 10.1145/3359996.3364277.
[94] Schmelter T, Hernadi L, Störmer M A, Steinicke F, Hildebrand K. Interaction based redirected walking. Proceedings of the ACM on Computer Graphics and Interactive Techniques, 2021, 4(1): Article No. 9. DOI: 10.1145/3451264.
[95] Nguyen A, Rothacher Y, Kunz A, Brugger P, Lenggenhager B. Effect of environment size on curvature redirected walking thresholds. In Proc. the 2018 IEEE Conference on Virtual Reality and 3D User Interfaces, Mar. 2018, pp.645–646. DOI: 10.1109/VR.2018.8446225.
[96] Kim D, Kim J, Shin J E, Yoon B, Lee J, Woo W. Effects of virtual room size and objects on relative translation gain thresholds in redirected walking. In Proc. the 2022 IEEE Conference on Virtual Reality and 3D User Interfaces, Mar. 2022, pp.379–388. DOI: 10.1109/VR51125.2022.00057.
[97] Waldow K, Fuhrmann A, Grünvogel S M. Do textures and global illumination influence the perception of redirected walking based on translational gain? In Proc. the 2018 IEEE Conference on Virtual Reality and 3D User Interfaces, Mar. 2018, pp.717–718. DOI: 10.1109/VR.2018.8446587.
[98] Bachmann E R, Holm J, Zmuda M A, Hodgson E. Collision prediction and prevention in a simultaneous two-user immersive virtual environment. In Proc. the 2013 IEEE Virtual Reality, Mar. 2013, pp.89–90. DOI: 10.1109/VR.2013.6549377.
[99] Chen H, Chen S, Rosenberg E S. Redirected walking in irregularly shaped physical environments with dynamic obstacles. In Proc. the 2018 IEEE Conference on Virtual Reality and 3D User Interfaces, Mar. 2018, pp.523–524. DOI: 10.1109/VR.2018.8446563.
[100] Dong Z C, Fu X M, Yang Z, Liu L. Redirected smooth mappings for multiuser real walking in virtual reality. ACM Trans. Graphics (TOG), 2019, 38(5): Article No. 149. DOI: 10.1145/3345554.
[101] Lee D Y, Cho Y H, Lee I K. Real-time optimal planning for redirected walking using deep Q-learning. In Proc. the 2019 IEEE Conference on Virtual Reality and 3D User Interfaces, Mar. 2019, pp.63–71. DOI: 10.1109/VR.2019.8798121.
[102] Messinger J, Hodgson E, Bachmann E R. Effects of tracking area shape and size on artificial potential field redirected walking. In Proc. the 2019 IEEE Conference on Virtual Reality and 3D User Interfaces, Mar. 2019, pp.72–80. DOI: 10.1109/VR.2019.8797818.
[103] Dong T, Chen X, Song Y, Ying W, Fan J. Dynamic artificial potential fields for multi-user redirected walking. In Proc. the 2020 IEEE Conference on Virtual Reality and 3D User Interfaces, Mar. 2020, pp.146–154. DOI: 10.1109/VR46266.2020.00033.
[104] Li H, Fan L. Mapping various large virtual spaces to small real spaces: A novel redirected walking method for immersive VR navigation. IEEE Access, 2020, 8: 180210–180221. DOI: 10.1109/ACCESS.2020.3027985.
[105] Strauss R R, Ramanujan R, Becker A, Peck T C. A steering algorithm for redirected walking using reinforcement learning. IEEE Trans. Visualization and Computer Graphics, 2020, 26(5): 1955–1963. DOI: 10.1109/TVCG.2020.2973060.
[106] Thomas J, Hutton Pospick C, Suma Rosenberg E. Towards physically interactive virtual environments: Reactive alignment with redirected walking. In Proc. the 26th ACM Symposium on Virtual Reality Software and Technology, Nov. 2020, Article No. 10. DOI: 10.1145/3385956.3418966.
[107] Chen Z Y, Li Y J, Wang M, Steinicke F, Zhao Q. A reinforcement learning approach to redirected walking with passive haptic feedback. In Proc. the 2021 IEEE International Symposium on Mixed and Augmented Reality, Oct. 2021, pp.184–192. DOI: 10.1109/ISMAR52148.2021.00033.
[108] Williams N L, Bera A, Manocha D. Redirected walking in static and dynamic scenes using visibility polygons. IEEE Trans. Visualization and Computer Graphics, 2021, 27(11): 4267–4277. DOI: 10.1109/TVCG.2021.3106432.
[109] Azmandian M, Yahata R, Grechkin T, Rosenberg E S. Adaptive redirection: A context-aware redirected walking meta-strategy. IEEE Trans. Visualization and Computer Graphics, 2022, 28(5): 2277–2287. DOI: 10.1109/TVCG.2022.3150500.
[110] Wang M, Chen Z Y, Cai W C, Steinicke F. Transferable virtual-physical environmental alignment with redirected walking. IEEE Trans. Visualization and Computer Graphics, 2024, 30(3): 1696–1709. DOI: 10.1109/TVCG.2022.3224073.
[111] Xu S Z, Liu T Q, Liu J H, Zollmann S, Zhang S H. Making resets away from targets: POI aware redirected walking. IEEE Trans. Visualization and Computer Graphics, 2022, 28(11): 3778–3787. DOI: 10.1109/TVCG.2022.3203095.
[112] Xu S Z, Lv T, He G, Chen C H, Zhang F L, Zhang S H. Optimal pose guided redirected walking with pose score precomputation. In Proc. the 2022 IEEE Conference on Virtual Reality and 3D User Interfaces, Mar. 2022, pp.655–663. DOI: 10.1109/VR51125.2022.00086.
[113] Wu X L, Hung H C, Babu S V, Chuang J H. Novel design and evaluation of redirection controllers using optimized alignment and artificial potential field. IEEE Trans. Visualization and Computer Graphics, 2023, 29(11): 4556–4566. DOI: 10.1109/TVCG.2023.3320208.
[114] Chen J J, Hung H C, Sun Y R, Chuang J H. APF-S2T: Steering to target redirection walking based on artificial potential fields. IEEE Trans. Visualization and Computer Graphics, 2024, 30(5): 2464–2473. DOI: 10.1109/TVCG.2024.3372052.
[115] Lee H J, Jeon S B, Cho Y H, Lee I K. MARR: A multi-agent reinforcement resetter for redirected walking. IEEE Trans. Visualization and Computer Graphics, 2024. DOI: 10.1109/TVCG.2024.3368043. (early access)
[116] Lee H J, Jeon S B, Cho Y H, Lee I K. Redirection strategy switching: Selective redirection controller for dynamic environment adaptation. IEEE Trans. Visualization and Computer Graphics, 2024, 30(5): 2474–2484. DOI: 10.1109/TVCG.2024.3372056.
[117] Xu S Z, Huang K, Fan C W, Zhang S H. SafeRDW: Keep VR users safe when jumping using redirected walking. In Proc. the 2024 IEEE Conference on Virtual Reality and 3D User Interfaces, Mar. 2024, pp.365–375. DOI: 10.1109/VR58804.2024.00058.
[118] Li Y J, Steinicke F, Wang M. A comprehensive review of redirected walking techniques: Taxonomy, methods, and future directions. Journal of Computer Science and Technology, 2022, 37(3): 561–583. DOI: 10.1007/s11390-022-2266-7.
[119] Azmandian M, Grechkin T, Rosenberg E S. An evaluation of strategies for two-user redirected walking in shared physical spaces. In Proc. the 2017 IEEE Virtual Reality, Mar. 2017, pp.91–98. DOI: 10.1109/VR.2017.7892235.
[120] Dong T, Song Y, Shen Y, Fan J. Simulation and evaluation of three-user redirected walking algorithm in shared physical spaces. In Proc. the 2019 IEEE Conference on Virtual Reality and 3D User Interfaces, Mar. 2019, pp.894–895. DOI: 10.1109/VR.2019.8798319.
[121] Nitzsche N, Hanebeck U D, Schmidt G. Motion compression for telepresent walking in large target environments. Presence: Teleoperators and Virtual Environments, 2004, 13(1): 44–60. DOI: 10.1162/105474604774048225.
[122] Interrante V, Ries B, Anderson L. Seven league boots: A new metaphor for augmented locomotion through moderately large scale immersive virtual environments. In Proc. the 2007 IEEE Symposium on 3D User Interfaces, Mar. 2007. DOI: 10.1109/3DUI.2007.340791.
[123] Su J. Motion compression for telepresence locomotion. Presence: Teleoperators and Virtual Environments, 2007, 16(4): 385–398. DOI: 10.1162/pres.16.4.385.
[124] Hirt C, Zank M, Kunz A. Short-term path prediction for virtual open spaces. In Proc. the 2019 IEEE Conference on Virtual Reality and 3D User Interfaces, Mar. 2019, pp.978–979. DOI: 10.1109/VR.2019.8797709.
[125] Zank M, Kunz A. Optimized graph extraction and locomotion prediction for redirected walking. In Proc. the 2017 IEEE Symposium on 3D User Interfaces, Mar. 2017, pp.120–129. DOI: 10.1109/3DUI.2017.7893328.
[126] Qi M, Liu Y, Cui J. A novel redirected walking algorithm for VR navigation in small tracking area. In Proc. the 2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops, Mar. 27-Apr. 1, 2021, pp.518–519. DOI: 10.1109/VRW52623.2021.00141.
[127] Thomas J, Yong S, Rosenberg E S. Inverse kinematics assistance for the creation of redirected walking paths. In Proc. the 2022 IEEE International Symposium on Mixed and Augmented Reality, Oct. 2022, pp.593–602. DOI: 10.1109/ISMAR55827.2022.00076.
[128] Gandrud J, Interrante V. Predicting destination using head orientation and gaze direction during locomotion in VR. In Proc. the 2016 ACM Symposium on Applied Perception, Jul. 2016, pp.31–38. DOI: 10.1145/2931002.2931010.
[129] Bremer G, Stein N, Lappe M. Predicting future position from natural walking and eye movements with machine learning. In Proc. the 2021 IEEE International Conference on Artificial Intelligence and Virtual Reality, Nov. 2021, pp.19–28. DOI: 10.1109/AIVR52153.2021.00013.
[130] Stein N, Bremer G, Lappe M. Eye tracking-based LSTM for locomotion prediction in VR. In Proc. the 2022 IEEE Conference on Virtual Reality and 3D User Interfaces, Mar. 2022, pp.493–503. DOI: 10.1109/VR51125.2022.00069.
[131] Jeon S B, Jung J, Park J, Lee I K. F-RDW: Redirected walking with forecasting future position. IEEE Trans. Visualization and Computer Graphics, 2024. DOI: 10.1109/TVCG.2024.3376080. (early access)
[132] Nescher T, Huang Y Y, Kunz A. Planning redirection techniques for optimal free walking experience using model predictive control. In Proc. the 2014 IEEE Symposium on 3D User Interfaces, Mar. 2014, pp.111–118. DOI: 10.1109/3DUI.2014.6798851.
[133] Azmandian M, Grechkin T, Bolas M, Suma E. Automated path prediction for redirected walking using navigation meshes. In Proc. the 2016 IEEE Symposium on 3D User Interfaces, Mar. 2016, pp.63–66. DOI: 10.1109/3DUI.2016.7460032.
[134] Hirt C, Zank M, Kunz A. PReWAP: Predictive redirected walking using artificial potential fields. In Proc. the 2019 IEEE Conference on Virtual Reality and 3D User Interfaces, Mar. 2019, pp.976–977. DOI: 10.1109/VR.2019.8798118.
[135] Congdon B J, Steed A. Monte-Carlo redirected walking: Gain selection through simulated walks. IEEE Trans. Visualization and Computer Graphics, 2023, 29(5): 2637–2646. DOI: 10.1109/TVCG.2023.3247093.
[136] Jeon S B, Kwon S U, Hwang J Y, Cho Y H, Kim H, Park J, Lee I K. Dynamic optimal space partitioning for redirected walking in multi-user environment. ACM Trans. Graphics (TOG), 2022, 41(4): Article No. 90. DOI: 10.1145/3528223.3530113.
[137] Engel D, Curio C, Tcheang L, Mohler B, Bülthoff H H. A psychophysically calibrated controller for navigating through large environments in a limited free-walking space. In Proc. the 2008 ACM Symposium on Virtual Reality Software and Technology, Oct. 2008, pp.157–164. DOI: 10.1145/1450579.1450612.
[138] Suma E A, Bruder G, Steinicke F, Krum D M, Bolas M. A taxonomy for deploying redirection techniques in immersive virtual environments. In Proc. the 2012 IEEE Virtual Reality Workshops, Mar. 2012, pp.43–46. DOI: 10.1109/VR.2012.6180877.
[139] Teixeira J, Miellet S, Palmisano S. Effects of vection type and postural instability on cybersickness. Virtual Reality, 2024, 28(2): Article No. 82. DOI: 10.1007/s10055-024-00969-2.
[140] Yu R, Lages W S, Nabiyouni M, Ray B, Kondur N, Chandrashekar V, Bowman D A. Bookshelf and bird: Enabling real walking in large VR spaces through cell-based redirection. In Proc. the 2017 IEEE Symposium on 3D User Interfaces, Mar. 2017, pp.116–119. DOI: 10.1109/3DUI.2017.7893327.
[141] Langbehn E, Steinicke F. Redirected walking in virtual reality. In Encyclopedia of Computer Graphics and Games, Lee N (ed.), Springer, 2018, pp.1–11. DOI: 10.1007/978-3-319-08234-9_253-1.
[142] Bolte B, Bruder G, Steinicke F. The jumper metaphor: An effective navigation technique for immersive display setups. In Proc. the 2011 Virtual Reality International Conference, Apr. 2011.
[143] Rahimi K, Banigan C, Ragan E D. Scene transitions and teleportation in virtual reality and the implications for spatial awareness and sickness. IEEE Trans. Visualization and Computer Graphics, 2020, 26(6): 2273–2287. DOI: 10.1109/TVCG.2018.2884468.
[144] Bruder G, Steinicke F, Hinrichs K H. Arch-Explore: A natural user interface for immersive architectural walkthroughs. In Proc. the 2009 IEEE Symposium on 3D User Interfaces, Mar. 2009, pp.75–82. DOI: 10.1109/3DUI.2009.4811208.
[145] Freitag S, Rausch D, Kuhlen T. Reorientation in virtual environments using interactive portals. In Proc. the 2014 IEEE Symposium on 3D User Interfaces, Mar. 2014, pp.119–122. DOI: 10.1109/3DUI.2014.6798852.
[146] Liu J, Parekh H, Al-Zayer M, Folmer E. Increasing walking in VR using redirected teleportation. In Proc. the 31st Annual ACM Symposium on User Interface Software and Technology, Oct. 2018, pp.521–529. DOI: 10.1145/3242587.3242601.
[147] Simeone A L, Nilsson N C, Zenner A, Speicher M, Daiber F. The space bender: Supporting natural walking via overt manipulation of the virtual environment. In Proc. the 2020 IEEE Conference on Virtual Reality and 3D User Interfaces, Mar. 2020, pp.598–606. DOI: 10.1109/VR46266.2020.00082.
[148] Han J, Moere A V, Simeone A L. Foldable spaces: An overt redirection approach for natural walking in virtual reality. In Proc. the 2022 IEEE Conference on Virtual Reality and 3D User Interfaces, Mar. 2022, pp.167–175. DOI: 10.1109/VR51125.2022.00035.
[149] Azmandian M, Grechkin T, Bolas M, Suma E. The redirected walking toolkit: A unified development platform for exploring large virtual environments. In Proc. the 2nd IEEE Workshop on Everyday Virtual Reality, Mar. 2016, pp.9–14. DOI: 10.1109/WEVR.2016.7859537.
[150] Lee H J, Jeon S B, Cho Y H, Lee I K. Multi-user reset controller for redirected walking using reinforcement learning. arXiv:2306.11433, 2023. https://arxiv.org/abs/2306.11433, Aug. 2024.
[151] Zhang S H, Chen C, Zollmann S. One-step out-of-place resetting for redirected walking in VR. IEEE Trans. Visualization and Computer Graphics, 2023, 29(7): 3327–3339. DOI: 10.1109/TVCG.2022.3158609.
[152] Zhang S H, Chen C H, Zheng F, Yang Y L, Hu S M. Adaptive optimization algorithm for resetting techniques in obstacle-ridden environments. IEEE Trans. Visualization and Computer Graphics, 2023, 29(4): 2080–2092. DOI: 10.1109/TVCG.2021.3139990.
[153] Xie X, Lin Q, Wu H, Narasimham G, McNamara T P, Rieser J, Bodenheimer B. A system for exploring large virtual environments that combines scaled translational gain and interventions. In Proc. the 7th Symposium on Applied Perception in Graphics and Visualization, Jul. 2010, pp.65–72. DOI: 10.1145/1836248.1836260.
[154] Kwon S U, Jeon S B, Hwang J Y, Cho Y H, Park J, Lee I K. Infinite virtual space exploration using space tiling and perceivable reset at fixed positions. In Proc. the 2022 IEEE International Symposium on Mixed and Augmented Reality, Oct. 2022, pp.758–767. DOI: 10.1109/ISMAR55827.2022.00094.
[155] Ridder III W H, Tomlinson A. A comparison of saccadic and blink suppression in normal observers. Vision Research, 1997, 37(22): 3171–3179. DOI: 10.1016/S0042-6989(97)00110-7.
[156] Bolte B, Lappe M. Subliminal reorientation and repositioning in immersive virtual environments using saccadic suppression. IEEE Trans. Visualization and Computer Graphics, 2015, 21(4): 545–552. DOI: 10.1109/TVCG.2015.2391851.
[157] Langbehn E, Steinicke F, Lappe M, Welch G F, Bruder G. In the blink of an eye: Leveraging blink-induced suppression for imperceptible position and orientation redirection in virtual reality. ACM Trans. Graphics (TOG), 2018, 37(4): Article No. 66. DOI: 10.1145/3197517.3201335.
[158] Davis K, Hayase T, Humer I, Woodard B, Eckhardt C. A quantitative analysis of redirected walking in virtual reality using saccadic eye movements. In Proc. the 17th International Symposium on Visual Computing, Oct. 2022, pp.205–216. DOI: 10.1007/978-3-031-20716-7_16.
[159] Nguyen A, Kunz A. Discrete scene rotation during blinks and its effect on redirected walking algorithms. In Proc. the 24th ACM Symposium on Virtual Reality Software and Technology, Nov. 28-Dec. 1, 2018, Article No. 29. DOI: 10.1145/3281505.3281515.
[160] Sun Q, Patney A, Wei L Y, Shapira O, Lu J, Asente P, Zhu S, McGuire M, Luebke D, Kaufman A. Towards virtual reality infinite walking: Dynamic saccadic redirection. ACM Trans. Graphics (TOG), 2018, 37(4): Article No. 67. DOI: 10.1145/3197517.3201294.
[161] Pinson E, Pietroszek K, Sun Q, Eckhardt C. An open framework for infinite walking with saccadic redirection. In Proc. the 26th ACM Symposium on Virtual Reality Software and Technology, Nov. 2020, Article No. 41. DOI: 10.1145/3385956.3422091.
[162] Suma E A, Clark S, Krum D, Finkelstein S, Bolas M, Warte Z. Leveraging change blindness for redirection in virtual environments. In Proc. the 2011 IEEE Virtual Reality Conference, Mar. 2011, pp.159–166. DOI: 10.1109/VR.2011.5759455.
[163] Suma E A, Lipps Z, Finkelstein S, Krum D M, Bolas M. Impossible spaces: Maximizing natural walking in virtual environments with self-overlapping architecture. IEEE Trans. Visualization and Computer Graphics, 2012, 18(4): 555–564. DOI: 10.1109/TVCG.2012.47.
[164] Vasylevska K, Kaufmann H, Bolas M, Suma E A. Flexible spaces: Dynamic layout generation for infinite walking in virtual environments. In Proc. the 2013 IEEE Symposium on 3D User Interfaces, Mar. 2013, pp.39–42. DOI: 10.1109/3DUI.2013.6550194.
[165] Simons D J, Rensink R A. Change blindness: Past, present, and future. Trends in Cognitive Sciences, 2005, 9(1): 16–20. DOI: 10.1016/j.tics.2004.11.006.
[166] Vasylevska K, Kaufmann H. Towards efficient spatial compression in self-overlapping virtual environments. In Proc. the 2017 IEEE Symposium on 3D User Interfaces, Mar. 2017, pp.12–21. DOI: 10.1109/3DUI.2017.7893312.
[167] Langbehn E, Lubos P, Steinicke F. Redirected spaces: Going beyond borders. In Proc. the 2018 IEEE Conference on Virtual Reality and 3D User Interfaces, Mar. 2018, pp.767–768. DOI: 10.1109/VR.2018.8446167.
[168] Koltai B G, Husted J E, Vangsted R, Mikkelsen T N, Kraus M. Procedurally generated self overlapping mazes in virtual reality. In Proc. the 8th EAI International Conference on ArtsIT, and 4th EAI International Conference on DLI, Jul. 2020, pp.229–243. DOI: 10.1007/978-3-030-53294-9_16.
[169] Cheng L P, Ofek E, Holz C, Wilson A D. VRoamer: Generating on-the-fly VR experiences while walking inside large, unknown real-world building environments. In Proc. the 2019 IEEE Conference on Virtual Reality and 3D User Interfaces, Mar. 2019, pp.359–366. DOI: 10.1109/VR.2019.8798074.
[170] Xu S Z, Huang K, Fan C W, Zhang S H. Spatial contraction based on velocity variation for natural walking in virtual reality. IEEE Trans. Visualization and Computer Graphics, 2024, 30(5): 2444–2453. DOI: 10.1109/TVCG.2024.3372109.
[171] Sun Q, Wei L Y, Kaufman A. Mapping virtual and physical reality. ACM Trans. Graphics (TOG), 2016, 35(4): Article No. 64. DOI: 10.1145/2897824.2925883.
[172] Dong Z C, Fu X M, Zhang C, Wu K, Liu L. Smooth assembled mappings for large-scale real walking. ACM Trans. Graphics (TOG), 2017, 36(6): Article No. 211. DOI: 10.1145/3130800.3130893.
[173] Dong Z C, Wu W, Xu Z, Sun Q, Yuan G, Liu L, Fu X M. Tailored reality: Perception-aware scene restructuring for adaptive VR navigation. ACM Trans. Graphics (TOG), 2021, 40(5): Article No. 193. DOI: 10.1145/3470847.
[174] Lee J, Hwang S, Kim K, Kim S. Auditory and olfactory stimuli-based attractors to induce reorientation in virtual reality forward redirected walking. In Proc. the 2022 CHI Conference on Human Factors in Computing Systems Extended Abstracts, Apr. 29-May 5, 2022, Article No. 446. DOI: 10.1145/3491101.3519719.
[175] Chen H, Fuchs H. Supporting free walking in a large virtual environment: Imperceptible redirected walking with an immersive distractor. In Proc. the 2017 Computer Graphics International Conference, Jun. 2017, Article No. 22. DOI: 10.1145/3095140.3095162.
[176] Chen H, Fuchs H. Towards imperceptible redirected walking: Integrating a distractor into the immersive experience. In Proc. the 21st ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, Feb. 2017, Article No. 22. DOI: 10.1145/3023368.3036844.
[177] Cools R, Simeone A L. Investigating the effect of distractor interactivity for redirected walking in virtual reality. In Proc. the 2019 Symposium on Spatial User Interaction, Oct. 2019, Article No. 4. DOI: 10.1145/3357251.3357580.
[178] Mahmud M R, Stewart M, Cordova A, Quarles J. Auditory feedback to make walking in virtual reality more accessible. In Proc. the 2022 IEEE International Symposium on Mixed and Augmented Reality, Oct. 2022, pp.847–856. DOI: 10.1109/ISMAR55827.2022.00103.
[179] Rewkowski N, Rungta A, Whitton M, Lin M. Evaluating the effectiveness of redirected walking with auditory distractors for navigation in virtual environments. In Proc. the 2019 IEEE Conference on Virtual Reality and 3D User Interfaces, Mar. 2019, pp.395–404. DOI: 10.1109/VR.2019.8798286.
[180] Matsumoto K, Narumi T, Ban Y, Yanase Y, Tanikawa T, Hirose M. Unlimited corridor: A visuo-haptic redirection system. In Proc. the 17th International Conference on Virtual-Reality Continuum and its Applications in Industry, Nov. 2019, Article No. 18. DOI: 10.1145/3359997.3365705.
[181] Askarbekkyzy N, Lin Y, Moon K, Bianchi A, Je S. Designing visuo-haptic illusions for virtual reality applications using floor-based shape-changing displays. In IASDR 2023: Life-Changing Design, De Sainz Molestina D, Galluzzo L, Rizzo F, Spallazzo D (eds.), 2023. DOI: 10.21606/iasdr.2023.466.
[182] Tanaka K, Nakamura T, Matsumoto K, Kuzuoka H. Effect of hanger reflex on detection thresholds for hand redirection during forearm rotation. In Proc. the 2023 ACM Symposium on Applied Perception, Aug. 2023, Article No. 6. DOI: 10.1145/3605495.3605792.
[183] Weiss Y, Villa S, Schmidt A, Mayer S, Müller F. Using pseudo-stiffness to enrich the haptic experience in virtual reality. In Proc. the 2023 CHI Conference on Human Factors in Computing Systems, Apr. 2023, Article No. 388. DOI: 10.1145/3544548.3581223.
[184] Yamaguchi A, Yokoi S, Matsumoto K, Narumi T. TableMorph: Haptic experience with movable tables and redirection. In Proc. the 2023 SIGGRAPH Asia Emerging Technologies, Dec. 2023, Article No. 19. DOI: 10.1145/3610541.3614574.
[185] Hoshikawa Y, Fujita K, Takashima K, Fjeld M, Kitamura Y. RedirectedDoors+: Door-opening redirection with dynamic haptics in room-scale VR. IEEE Trans. Visualization and Computer Graphics, 2024, 30(5): 2276–2286. DOI: 10.1109/TVCG.2024.3372105.
[186] Hwang S, Lee J, Kim Y, Seo Y, Kim S. Electrical, vibrational, and cooling stimuli-based redirected walking: Comparison of various vestibular stimulation-based redirected walking systems. In Proc. the 2023 CHI Conference on Human Factors in Computing Systems, Apr. 2023, Article No. 767. DOI: 10.1145/3544548.3580862.
[187] Sassi V F P, Porcino T, Clua E W G, Trevisan D G. Redefining redirected movement for wheelchair based interaction for virtual reality. In Proc. the 11th IEEE International Conference on Serious Games and Applications for Health, Aug. 2023. DOI: 10.1109/SeGAH57547.2023.10253796.
[188] Hwang S, Kim Y, Seo Y, Kim S. Enhancing seamless walking in virtual reality: Application of bone-conduction vibration in redirected walking. In Proc. the 2023 IEEE International Symposium on Mixed and Augmented Reality, Oct. 2023, pp.1181–1190. DOI: 10.1109/ISMAR59233.2023.00135.
[189] Li Y J, Wang M, Steinicke F, Zhao Q. OpenRDW: A redirected walking library and benchmark with multi-user, learning-based functionalities and state-of-the-art algorithms. In Proc. the 2021 IEEE International Symposium on Mixed and Augmented Reality, Oct. 2021, pp.21–30. DOI: 10.1109/ISMAR52148.2021.00016.
[190] Azmandian M, Yahata R, Grechkin T, Thomas J, Rosenberg E S. Validating simulation-based evaluation of redirected walking systems. IEEE Trans. Visualization and Computer Graphics, 2022, 28(5): 2288–2298. DOI: 10.1109/TVCG.2022.3150466.
[191] Hirt C, Kompis Y, Holz C, Kunz A. The chaotic behavior of redirection: Revisiting simulations in redirected walking. In Proc. the 2022 IEEE Conference on Virtual Reality and 3D User Interfaces, Mar. 2022, pp.524–533. DOI: 10.1109/VR51125.2022.00072.
[192] Brooke J. SUS: A ‘quick and dirty’ usability scale. In Usability Evaluation in Industry, Jordan P W, Thomas B, McClelland I L, Weerdmeester B (eds.), CRC Press, 1996. DOI: 10.1201/9781498710411.
[193] Martinez E S, Wu A S, McMahan R P. Research trends in virtual reality locomotion techniques. In Proc. the 2022 IEEE Conference on Virtual Reality and 3D User Interfaces, Mar. 2022, pp.270–280. DOI: 10.1109/VR51125.2022.00046.
[194] Prinz L M, Mathew T, Weyers B. A systematic literature review of virtual reality locomotion taxonomies. IEEE Trans. Visualization and Computer Graphics, 2023, 29(12): 5208–5223. DOI: 10.1109/TVCG.2022.3206915.
[195] Paris R A, Buck L E, McNamara T P, Bodenheimer B. Evaluating the impact of limited physical space on the navigation performance of two locomotion methods in immersive virtual environments. In Proc. the 2022 IEEE Conference on Virtual Reality and 3D User Interfaces, Mar. 2022, pp.821–831. DOI: 10.1109/VR51125.2022.00104.
[196] Tseng W J, Bonnail E, McGill M, Khamis M, Lecolinet E, Huron S, Gugenheimer J. The dark side of perceptual manipulations in virtual reality. In Proc. the 2022 CHI Conference on Human Factors in Computing Systems, Apr. 29-May 5, 2022, Article No. 612. DOI: 10.1145/3491102.3517728.