Movement and Embodiment in VR
When I arrived at CREW, Eric kept stressing the importance of free body movement in VR, of being able to roam around in large spaces. I didn’t quite pick up on it immediately. Sure, large spaces are nice, but in the end VR is about visuals and sound, no matter how you move about. After a while, I realized he was on to something. You see, the point is not the space but the movement. Our mind constructs space by means of our own movement.
I think it’s an insight from when CREW did a lot of 360° video “VR”. In this type of immersion, the only thing that is tracked is the rotation of your head; it’s not interactive in the obvious sense. CREW developed an entire language for tricking people into immersion. One key element: if the camera seems to move and the immersant moves in the same direction, people believe they see through the camera. If on top of that you’ve filmed a limb, and the limb moves at the same time as your own, a very strong sense of embodiment (more on that later) and immersion can be provoked.
So back to 3D VR. The set-up used for games is called roomscale: you can manoeuvre around in a space of about 4 by 4 meters. The system locates you not only by the rotation of your head but also by your position in space, the so-called six degrees of freedom. You’re in such a confined space that walking is not really possible, and you’re often tethered to a computer too. Game developers have of course found creative solutions to this. One of them is to use the controller as a joystick. It’s the VR equivalent of an electric wheelchair: it kind of works, but it causes simulator sickness. Another is teleporting: you point to a location in space and get swiftly transported there. It doesn’t make you sick, but it sins against one of my main principles: it violates the rules of time and space without consequences. The result is that you become detached from the immersion.
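Under the hood, the teleport mechanic is usually little more than a ray–plane intersection: cast a ray from the controller along its pointing direction, find where it hits the floor, and snap the player there. A minimal sketch, assuming a flat floor at a fixed height (function and parameter names are mine, not from any particular engine):

```python
def teleport_target(origin, direction, floor_y=0.0):
    """Intersect the controller's pointing ray with a flat floor plane.

    origin    -- (x, y, z) position of the controller
    direction -- (dx, dy, dz) pointing direction of the controller
    Returns the (x, y, z) landing spot on the floor, or None if the
    ray points level or upward and never reaches the floor.
    """
    ox, oy, oz = origin
    dx, dy, dz = direction
    if dy >= 0:  # pointing up or horizontal: no floor hit
        return None
    t = (floor_y - oy) / dy          # ray parameter at the floor plane
    return (ox + t * dx, floor_y, oz + t * dz)
```

Real implementations add an arc-shaped ray, a maximum range, and validity checks on the landing spot, but the core stays this simple, which is exactly why the transition feels so cheap: nothing in the mechanic accounts for the distance travelled.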
When you arrive in a physical space you don’t know, the first thing you want to do is walk around, take in the dimensions, reconstruct it for yourself. It’s behaviour we see a lot in children, unimpeded by rules of politeness and restraint. Why would it be different in the virtual? The perception of depth is constructed by movement, much more than by stereoscopy, as aptly demonstrated by CREW’s installation Collateral Rooms.
When testing our StrapTrack system, a device which locates the immersants in large spaces, we could pinpoint the moment the immersion really “stuck”. Immersants would pick up the pace and walk freely through the space, travelling long distances. As if awoken in Wonderland, they chase around; the virtual world is no longer something to look at, but a place to be in, to move in.
So we’re still missing the other component: embodiment. Embodiment is very strong in 360° video, but much less so in computer graphics VR. We no longer get the level of presence and embodiment that we had in earlier performances like ‘Eux’ and ‘U_Raging Standstill’. Why is that? It seems easier to identify with an environment, and with a view that includes your own body, when it is photographic.
The first thing people do in VR is look at their hands, then their feet. Without expensive and cumbersome motion-capture solutions, a VR headset only tracks your head. The controllers can give you hands, but very rudimentary ones. There have been some developments in this area: some systems now track your fingers, and we can fake a body with a technique called inverse kinematics, an estimation of the rest of the body’s pose from how your head and hands move. It’s not perfect, but it brings us closer to embodiment. You can now grab virtual objects, see your body in the mirror and make hand gestures. In this way, you attain agency in the virtual world. The next step is nonverbal communication.
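To give a feel for what inverse kinematics does, here is a minimal sketch of its most common building block: a two-bone solver that, given where the hand must be, works out how much the elbow has to bend, using nothing but the law of cosines. This is my own illustrative reduction (names and the clamping policy are mine); full-body avatar systems chain many such solvers together and add heuristics for the spine and legs.

```python
import math

def two_bone_elbow_angle(target_dist, upper_len, lower_len):
    """Analytic two-bone IK step: given the straight-line distance from
    shoulder to hand target and the lengths of the upper and lower arm,
    return the interior elbow angle in radians (pi = arm fully extended).

    The target distance is clamped to the reachable range so the solver
    never produces an impossible pose, mirroring what avatar IK does
    when you stretch toward something out of reach.
    """
    reach_min = abs(upper_len - lower_len)
    reach_max = upper_len + lower_len
    d = max(reach_min, min(target_dist, reach_max))
    # Law of cosines: d^2 = u^2 + l^2 - 2*u*l*cos(elbow)
    cos_elbow = (upper_len**2 + lower_len**2 - d**2) / (2 * upper_len * lower_len)
    return math.acos(max(-1.0, min(1.0, cos_elbow)))
```

The point of the example is how little information the solver actually has: one distance in, one joint angle out. Everything else about your virtual body is a plausible guess, which is why IK-driven avatars feel close to embodiment but not quite there.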
These principles are the prerequisites for our current work in VR, artistic and technical research which we are developing within the PRESENT Horizon 2020 EU project. In a framework of free movement and embodiment, we can construct experiences with real actors, virtual avatars and a mix between the two.