General Theoretical Background

What we perceive is a model of the world created by our brain on the basis of sensory signals. Those signals come from multiple senses and are processed together with information stored in memory. Our brain continuously makes predictions about upcoming sensory data. The model of the world is constantly being tested: if sensory signals do not agree with predictions, the model is corrected.
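The predict-test-correct loop described above can be sketched as a toy update rule. This is only an illustration of the idea, not a model of any actual neural computation; the numbers and the learning rate are arbitrary assumptions.

```python
# Illustrative sketch only: a toy "predict, compare, correct" loop.
# The learning rate and all values are arbitrary assumptions.

def update_model(estimate: float, sensory_signal: float,
                 learning_rate: float = 0.3) -> float:
    """Correct the internal estimate by a fraction of the prediction error."""
    prediction_error = sensory_signal - estimate
    return estimate + learning_rate * prediction_error

estimate = 0.0                          # initial internal model of some quantity
for signal in [1.0, 1.0, 1.0, 1.0]:    # repeated, consistent sensory evidence
    estimate = update_model(estimate, signal)
# with each step the estimate moves closer to the sensory evidence
```

Each time the prediction disagrees with the incoming signal, the model is nudged toward the evidence, which is the general correction mechanism the text describes.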

This is the general mechanism not only for perceiving the external world, but also for perceiving our own body. Information from several senses must be combined into one coherent model of the world. What we consciously experience is that model. We don't have direct access to the external world, or even to the raw data from our senses. Furthermore, we are usually not conscious of the process of “doing” perception; we only have access to the end product of this entire process, the model of the world which our brain has created. However, by creating unusual circumstances for our perceptual system, we can make this process of perception more explicit.
Click below for a detailed explanation of each scene:
Scene 1: Elongated limbs.
This scene demonstrates how your bodily experience depends on visual information about the body. When the virtual arm becomes longer, many people begin to feel that their real hand is elongated too. This illusion was first achieved using a transformed video signal, and later using virtual bodies. You can read more about those experiments.
There are at least two sensory modalities involved in this illusion. One is vision, and the other is proprioception: signals from muscles and tendons that inform you about your body's position in space. Proprioception is a separate sense, distinct from touch.
At the beginning of the experiment you move your hand to collect white spheres, and the virtual arm mirrors the movements of your physical arm. Your brain receives coherent signals from vision, proprioception, and also touch. In many people this evokes an illusion of embodiment of the virtual arm: a feeling as if the virtual limb somehow belongs to them.
Later, when the virtual limb becomes longer, your brain receives conflicting data: vision signals that the limb is longer, while proprioception does not signal any unusual stretching. In addition, you have knowledge about your body's shape stored in memory.
When your brain encounters conflicting information, it tries to resolve the conflict and create a coherent model of the world. This model-creation is a low-level, unconscious, and automatic process. People do not consciously experience how the model of the world is created. They only have access to the end product, the model itself, which is their perception of the world and of their body within that world.
The brain treats various sources of information differently: some are considered more reliable than others. For many people (or rather, for their brains) vision is a more reliable source of information than proprioception. When signals from vision and proprioception are not coherent, the brain may adjust the proprioceptive experience so that it matches the visual data. This can happen despite a person's conscious knowledge that a limb cannot suddenly become longer.
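This kind of reliability weighting is often modeled as a weighted average in which each cue counts in inverse proportion to its noisiness. The sketch below illustrates the idea with made-up numbers; the variances are assumptions chosen only to show how a more reliable cue dominates.

```python
def combine_cues(visual: float, proprioceptive: float,
                 visual_var: float, proprio_var: float) -> float:
    """Reliability-weighted average: each cue is weighted by the
    inverse of its variance (lower variance = more reliable cue)."""
    w_vision = 1.0 / visual_var
    w_proprio = 1.0 / proprio_var
    return (w_vision * visual + w_proprio * proprioceptive) / (w_vision + w_proprio)

# Vision says the hand is at 80 cm, proprioception says 60 cm.
# If vision is the more reliable cue (lower variance), the combined
# estimate - the perceived position - is pulled toward the visual signal.
perceived = combine_cues(visual=80.0, proprioceptive=60.0,
                         visual_var=1.0, proprio_var=4.0)
# perceived lands at 76 cm, much closer to vision than to proprioception
```

With these assumed variances, proprioception contributes only a quarter of vision's weight, so the felt position of the hand shifts most of the way toward what the eyes report, mirroring how the illusion works.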
Some people do not experience this illusion, or experience it only very slightly. You may try to intensify the illusion by feeding your brain more data supporting the “long arm” model. If someone stretches your physical arm while the virtual limb becomes longer, proprioception and vision together will signal that something is happening to your limb. This makes the “long arm” hypothesis more likely for your brain, and increases the chance that you will experience the illusion.
Scene 2: Shrinking body.
This scene shows how your perception of the external world is linked to the perception of your own body. We treat our bodies as a reference when judging the distances and sizes of objects around us. The first VR experiments studying the relationship between body size and the perception of objects used a video signal from a camera mounted over the head of a Barbie doll. When participants were embodied in this small body, they perceived the distances and sizes of objects as larger. The opposite effect was found when the embodiment illusion was created for a giant doll body. Similar phenomena were later found using virtual bodies instead of video. What you experience in this scene is a variation of those studies.
Scene 3: Being touched.
This scene demonstrates how your perception of touch is related to vision: how visual information about touch influences your tactile experience. When your physical hand is touched by a sponge or a pencil but you see the virtual hand being touched by another avatar's hand, your tactile experience may adjust to what you see. As in Scene 1, your brain encounters conflicting sensory signals. Tactile receptors in the skin may signal being touched by a rigid or rough object, while vision signals being touched by a hand. In many cases the visual data overrides the tactile experience and shapes the resulting multisensory perception: the model of the world which your brain creates.
In addition, as in Scene 1, by feeding your brain coherent multisensory signals related to the virtual arm, you can create the illusion of embodiment of that arm. Such illusions may have important and interesting applications: read more about experiments on reducing racial bias using VR and embodiment.
Scene 4: Out of body experience.
This scene demonstrates how your body perception is related to a first-person perspective. Does ownership of the virtual body change when you see it from the outside?
Scene 5: Third hand.
This scene demonstrates how motor knowledge is acquired, and how you can embody virtual bodies which deviate from the human template. The movements of both arms are translated into the movement of a third, middle arm according to abstract rules. People quickly learn to steer the third hand to hit white spheres, but they are not able to verbalize how they are doing it, that is, how the movements of their left or right arm translate into the movements of the third arm.
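The text does not specify the abstract rule the scene actually uses, but the idea of such a mapping can be sketched with one hypothetical rule: here, the third hand sits at the midpoint of the two physical hands, and its height depends on how far apart they are. Everything in this example is an assumption for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Vec2:
    x: float
    y: float

def third_hand_position(left: Vec2, right: Vec2) -> Vec2:
    """Hypothetical abstract rule (not the one used in the scene):
    the third hand tracks the midpoint of the two physical hands,
    and rises as the hands spread apart."""
    mid_x = (left.x + right.x) / 2.0
    mid_y = (left.y + right.y) / 2.0
    spread = abs(right.x - left.x)
    return Vec2(mid_x, mid_y + 0.5 * spread)

# Hands held symmetrically at the same height, 0.8 m apart:
pos = third_hand_position(Vec2(-0.4, 1.0), Vec2(0.4, 1.0))
# the third hand hovers above the midpoint between the two hands
```

A rule like this is easy to exploit through trial and error (move both hands, watch the third one respond) yet hard to state in words, which is exactly the dissociation between motor learning and verbal knowledge the scene demonstrates.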
Scene 6: Navigating two bodies.
This scene demonstrates that agency and ownership are two separate cognitive processes. It is possible to have a feeling of agency over a virtual body without the illusion of ownership. Conversely, it is possible to induce the illusion of ownership without agency. This was first demonstrated in an experiment using the classical rubber hand paradigm. The current scene is a variation on these themes using virtual bodies.
This experiment also shows how we can study cognition systematically and discover what the building blocks, or components, of our experiences are.