Semantic Avatars

One of the main claims of embodiment is that our understanding of the world comes from our bodies, that is, through our sensorimotor experiences of the world. In my previous research on the embodiment landscape, I explored different perspectives on embodiment in Virtual Reality (VR) and became quite interested in the role of the digital avatar in embodied sense-making. Specifically, I am interested in the following question:

If our understanding of the world depends on our bodies, how does it change when we change our bodies?

As a subset of this larger question, I was interested in deliberately designing digital avatars to generate certain bodily actions and highlight certain meanings that may not be as directly accessible with our physical bodies. In the paper, I introduced the idea of "semantic avatars", that is, digital avatars designed to highlight a specific meaning, explored through bodily actions. With semantic avatars, learners could spontaneously generate meaningful gestures that would be meaningless with their physical bodies alone.

This is a useful concept for learning, but it can also have applications beyond this context. For example, what about semantic avatars in dance, mental health, accessibility, or creativity research?

Concepts for semantic avatars

Four figures. First: a hand with five fingers. The first three fingers are marked in green, the last two in blue. The palm of the hand displays 3 + 2 = 5. Second: two five-fingered hands are shown; on each, the little finger is hidden from the avatar. The fingers are colored in order with 6 green dots, then 4 blue dots, first filling the two remaining fingers, then the first two green fingers. The palms of the hands display 6 + 4 = 2. Third: two hands are drawn, with the angle between the thumb and the index finger highlighted. On the first hand, the angle is about 20 degrees; on the second hand, it is 90 degrees and highlighted in green. Fourth: two pairs of hands. In the first pair, the hands are almost at the same level and highlighted in red. In the second pair, one hand is far above the other, and they are highlighted in green.

Hand-based semantic avatars can be used for finger counting. Moreover, as VR research explores how to control avatars with various numbers of fingers, one could design a semantic avatar with four fingers per hand to support embodied meaning-making of base-8 counting.
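To make the underlying arithmetic concrete, here is a minimal sketch (the function name is my own, not from any of the systems mentioned) of the base-8 representation an eight-finger avatar would let learners count out:

```python
def to_base8_digits(n: int) -> list[int]:
    """Convert a decimal count to base-8 digits, most significant first.

    With four fingers per hand (eight fingers total), each digit 0-7
    corresponds to a number of raised fingers; wrapping past the last
    finger carries into the next digit.
    """
    if n == 0:
        return [0]
    digits = []
    while n > 0:
        digits.append(n % 8)
        n //= 8
    return digits[::-1]

# Counting 6 fingers, then 4 more, wraps around the eight fingers once:
print(to_base8_digits(6 + 4))  # -> [1, 2], i.e. "12" in base 8
```

This mirrors the second figure above: adding 4 to 6 exhausts the eight fingers and wraps, which is exactly the carry that base-8 notation records.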

Hand-based semantic avatars can also be used to revisit previous embodied learning activities. For example, in The Hidden Village, the hand of the learner could become a tool to highlight angular behavior by displaying the nature of the angle between two fingers. This approach could also offer a first-person perspective and reduce the split-attention effect in the Mathematical Imagery Trainer, as the reference point would be embodied by the learner directly and the feedback would be displayed on the hands rather than on an external screen.
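The angle feedback described above reduces to a standard vector computation. A sketch, under the assumption that finger directions are available as 2D vectors (the function names are illustrative, not from the actual systems):

```python
import math

def finger_angle_deg(u: tuple[float, float], v: tuple[float, float]) -> float:
    """Angle in degrees between two finger-direction vectors."""
    dot = u[0] * v[0] + u[1] * v[1]
    norm_u = math.hypot(u[0], u[1])
    norm_v = math.hypot(v[0], v[1])
    return math.degrees(math.acos(dot / (norm_u * norm_v)))

def is_right_angle(u, v, tolerance_deg: float = 5.0) -> bool:
    """Whether the avatar should highlight the angle as a right angle."""
    return abs(finger_angle_deg(u, v) - 90.0) <= tolerance_deg

# Thumb pointing along x, index along y: a right angle, highlighted green.
print(finger_angle_deg((1.0, 0.0), (0.0, 1.0)))  # -> 90.0
```

In a real avatar the tolerance would be tuned so that the green highlight appears as learners approach the target angle, giving continuous embodied feedback.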

In addition, full-body semantic avatars can be explored. For example, using an avatar with stretchable arms, learners could embody a space's frame of reference and learn about 2D linear algebra. Such an avatar could activate the direct state induction mechanism of embodied learning: as learners move their arms to transform the space, they perform flexor and extensor movements and activate approach and avoidance processes.

A person has an arm folded and an arm extended. These two arms represent the x and y axis in a horizontal plane. The plane is distorted, as one arm is longer than the other. On the plane are 3D houses and trees to show the distortion.
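Mathematically, the two arms act as the columns of a 2x2 matrix: stretching or folding an arm rescales the corresponding basis vector, and every object in the scene is mapped through that basis. A minimal sketch of this idea (names and values are illustrative assumptions):

```python
def apply_arm_basis(arms: tuple[tuple[float, float], tuple[float, float]],
                    point: tuple[float, float]) -> tuple[float, float]:
    """Map grid coordinates (a, b) through the basis given by the arm vectors.

    arms = (x_arm, y_arm); this computes the linear map [x_arm | y_arm] @ (a, b).
    """
    (x1, y1), (x2, y2) = arms
    a, b = point
    return (a * x1 + b * x2, a * y1 + b * y2)

# One arm folded (half length), one fully extended (double length):
# the houses and trees at grid position (1, 1) shift accordingly.
arms = ((0.5, 0.0), (0.0, 2.0))
print(apply_arm_basis(arms, (1.0, 1.0)))  # -> (0.5, 2.0)
```

The distortion in the figure above is exactly this map: because the two arm vectors have different lengths, a square grid cell becomes a stretched rectangle, which the 3D houses and trees make visible.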

Related projects

Embodiment landscape, initial introduction of semantic avatars (link)

Digital Gloves, semantic avatars in context, for interaction meaning (link)

Publications

Chatain, Julia, Manu Kapur, and Robert W. Sumner. 2023. "Three Perspectives on Embodied Learning in Virtual Reality: Opportunities for Interaction Design." In CHI EA ’23: Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems. (link) (pdf)

Chatain, Julia, Danielle M. Sisserman, Lea Reichardt, Violaine Fayolle, Manu Kapur, Robert W. Sumner, Fabio Zünd, and Amit H. Bermano. 2020. "DigiGlo: Exploring the Palm as an Input and Display Mechanism through Digital Gloves." In Proceedings of the Annual Symposium on Computer-Human Interaction in Play (CHI PLAY ’20), November 2–4, 2020, Virtual Event, Canada. ACM, New York, NY, USA, 12 pages. (link) (pdf)