[MUSIC] Physical navigation is the most natural way to move around in VR. In 1992, Fred Brooks pointed out that physical motion powerfully aids the illusion of presence, and that actual walking enables one to feel kinesthetically how large a space is. Studies since the 1990s have shown that real walking is better than virtual navigation or hybrid methods in terms of both place illusion and plausibility illusion. In these studies, participants typically reported a stronger feeling of presence, found the interface easier to use, and showed improved efficiency and spatial awareness of the space they explored when they could walk around an environment physically, as they do in real life. In addition, because there is no conflict between the signals received by the visual and vestibular systems, real walking is less likely to cause simulation sickness.

The biggest problem with real walking in VR is the limitation of the physical space the user is in. Firstly, it is hard to make assumptions about how much space users will have available when they run your application. So, to suit more users' needs, you might want to think about ways to keep the virtual space they have to navigate relatively small. For instance, in many VR applications we can have users seated in a chair or standing at a relatively fixed position, with virtual objects in front of them creating a barrier: they might be chatting with another person in VR, or driving a car. Secondly, as users are encouraged to use their bodies as naturally as possible to explore the virtual world, there can be health and safety concerns in the physical world, because with an HMD on, the real world is completely invisible. Some users get so engaged in the virtual world that they completely forget about the real barriers in the physical world. Others may be so worried about bumping into things they can't see that they become less engaged in the virtual world.
We would like to avoid both, so high-end VR systems that support physical navigation require users to indicate, when setting up the system, the available physical space that is free from obstacles. Then, during VR immersion, when users get too close to the boundaries of this predefined space, wireframe barriers become visible to remind them of the physical obstacles. In some cases the VR controllers will also vibrate to warn the user of a potential collision. Ideally, to further improve the experience, you might want to assign an avatar to the user, so that when they look down where they expect to see their body, they see a virtual body. However, this is not always easy to achieve when the user moves around the environment, as even most high-end devices only track the head and hands. The system can't really distinguish between me shifting my upper body and actually stepping to the side, because it does not know for sure what the rest of my body is doing. We might be able to use machine learning algorithms that learn from different patterns of upper-body movement to predict what the lower body is doing. A more reliable solution is to track other body parts with additional trackers, which come with some consumer VR systems. Or, if you can afford it, you can use a full-body motion-capture suit to enable a truthful representation of the user's body movement in the virtual world. But the problem is that you are then limited to a much smaller range of potential customers who have access to such a suit. [MUSIC]
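The boundary warning described above can be sketched in a few lines. This is an illustrative Python sketch, not the API of any actual VR SDK: the rectangular play area, its dimensions, and the fade and haptic thresholds are all assumed values, and a real system would run this check every frame against tracked head and controller positions.

```python
# Sketch of a boundary ("chaperone"-style) warning for a rectangular
# play area centred at the origin. All names and thresholds here are
# illustrative assumptions, not taken from any specific VR SDK.

FADE_START = 0.5   # metres from the boundary at which the wireframe fades in
HAPTIC_AT = 0.2    # metres from the boundary at which controllers vibrate

def distance_to_boundary(x, z, half_width, half_depth):
    """Distance from a tracked point (x, z) on the floor plane to the
    nearest wall of the play area; zero or negative means at/over it."""
    return min(half_width - abs(x), half_depth - abs(z))

def boundary_warning(x, z, half_width=1.5, half_depth=1.0):
    """Return (wireframe opacity, should controllers vibrate)."""
    d = distance_to_boundary(x, z, half_width, half_depth)
    # Opacity ramps from 0 (far from the boundary) up to 1 (at the boundary).
    alpha = min(max((FADE_START - d) / FADE_START, 0.0), 1.0)
    vibrate = d < HAPTIC_AT
    return alpha, vibrate
```

For example, standing in the middle of a 3 m by 2 m play area yields an invisible wireframe and no vibration, while a head position 10 cm from a wall yields a mostly opaque wireframe and a haptic warning.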