Semantic Avatars
Apr 2023

One of the main claims of embodiment is that our understanding of the world comes from our bodies, that is, through our sensorimotor experiences of the world. In my previous research on the embodiment landscape, I explored different perspectives on embodiment in Virtual Reality (VR) and became quite interested in the role of the digital avatar in embodied sense-making. Specifically, I am interested in the following question:
If our understanding of the world depends on our bodies, how does it change when we change our bodies?
As a subset of this larger question, I was interested in deliberately designing digital avatars to generate certain bodily actions and highlight certain meanings that may not be as directly accessible with our physical bodies. In the resulting paper, I introduced the idea of "semantic avatars": digital avatars designed to highlight a specific meaning, explored through bodily actions. With semantic avatars, learners could spontaneously generate meaningful gestures that would be meaningless with their physical bodies alone.
This is a useful concept for learning, but it can also have applications beyond this context. For example, what about semantic avatars in dance, mental health, accessibility or creativity research?
Concepts for semantic avatars
Hand-based semantic avatars can be used for finger counting. Moreover, as VR research explores how to control avatars with varying numbers of fingers, one could design a semantic avatar with four fingers per hand to support embodied meaning-making of base-8 counting, as sketched below.
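To make the base-8 idea concrete, here is a minimal Python sketch (my own illustration, not part of the paper) of how an eight-fingered avatar grounds octal counting: each counted item raises a finger, and a full set of eight fingers carries into the next place value, just as ten fingers ground base-10.

```python
# Minimal sketch (illustrative only): tallying items on a 4+4-finger
# avatar, carrying into the next place once all eight fingers are raised.

def count_in_base8(n_items: int) -> str:
    """Simulate counting n_items on two four-fingered hands and return
    the resulting base-8 numeral as a string."""
    fingers_up = 0         # fingers currently raised (0..7)
    completed_eights = 0   # times all eight fingers were raised and reset

    for _ in range(n_items):
        fingers_up += 1
        if fingers_up == 8:          # both four-fingered hands are full
            completed_eights += 1    # carry: one full "eight" counted
            fingers_up = 0           # lower all fingers and keep counting

    # oct() handles the higher places when completed_eights exceeds 7.
    return oct(completed_eights * 8 + fingers_up)[2:]


if __name__ == "__main__":
    for n in (5, 8, 13, 64):
        print(n, "items ->", count_in_base8(n), "in base 8")
```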
Hand-based semantic avatars can also be used to revisit previous embodied learning activities. For example, in The Hidden Village, the learner's hand could become a tool for highlighting angular behavior by displaying the nature of the angle formed between two fingers (see the sketch below). This approach could also offer a first-person perspective and reduce the split-attention effect in the Mathematical Imagery Trainer, as the reference point would be embodied by the learner directly and the feedback would be displayed on the hands rather than on an external screen.
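The following sketch shows one way such angular feedback could be computed, assuming the avatar exposes per-finger direction vectors (e.g. knuckle to fingertip). The function name and tolerance threshold are my own illustration, not an existing API.

```python
import numpy as np

def classify_finger_angle(finger_a: np.ndarray,
                          finger_b: np.ndarray,
                          tolerance_deg: float = 2.0) -> tuple[float, str]:
    """Return the angle between two finger direction vectors (in degrees)
    and a label the avatar could display on the hand."""
    a = finger_a / np.linalg.norm(finger_a)
    b = finger_b / np.linalg.norm(finger_b)
    angle = np.degrees(np.arccos(np.clip(np.dot(a, b), -1.0, 1.0)))

    if abs(angle - 90.0) <= tolerance_deg:
        label = "right"
    elif angle < 90.0:
        label = "acute"
    else:
        label = "obtuse"
    return angle, label


# Example: index finger pointing forward, thumb pointing mostly sideways.
angle, label = classify_finger_angle(np.array([0.0, 0.0, 1.0]),
                                     np.array([1.0, 0.0, 0.1]))
print(f"{angle:.1f} degrees -> {label} angle")
```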
In addition, full-body semantic avatars can be explored. For example, using an avatar with stretchable arms, learners could embody a space's reference frame (its basis vectors) and learn about 2D linear algebra, as in the sketch below. Such an avatar could activate the direct state induction mechanism of embodied learning: as learners move their arms to transform the space, they perform flexor and extensor movements and activate approach and avoidance processes.
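As a sketch of how such an avatar could map body state to mathematics, the snippet below (my illustration, with hypothetical arm-vector inputs) treats each stretched arm as the image of one basis vector, so the two arms together define the 2x2 matrix applied to the virtual space.

```python
# Minimal sketch, assuming each stretchable arm can be read as a 2D
# vector from the learner's torso to the corresponding hand.
import numpy as np

def arms_to_matrix(right_arm: np.ndarray, left_arm: np.ndarray) -> np.ndarray:
    """Interpret the right arm as the image of e1 = (1, 0) and the
    left arm as the image of e2 = (0, 1)."""
    return np.column_stack([right_arm, left_arm])

def transform_shape(matrix: np.ndarray, points: np.ndarray) -> np.ndarray:
    """Apply the learner-defined linear map to 2D points (one per row)."""
    return points @ matrix.T


# Example: arms stretched to (2, 0) and (0.5, 1) scale and shear the
# unit square the learner sees in the virtual space.
square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
M = arms_to_matrix(np.array([2.0, 0.0]), np.array([0.5, 1.0]))
print(transform_shape(M, square))
```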
Team
- Julia Chatain - Concept, Research
Related projects
- Embodiment landscape, initial introduction of semantic avatars (link)
- Digital Gloves, semantic avatars in context, for interaction meaning (link)
Publications
Chatain, Julia, Manu Kapur, Robert W. Sumner. 2023. "Three Perspectives on Embodied Learning in Virtual Reality: Opportunities for Interaction Design". In CHI EA ’23: Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems. (link) (pdf)
Chatain, Julia, Danielle M. Sisserman, Lea Reichardt, Violaine Fayolle, Manu Kapur, Robert W. Sumner, Fabio Zünd, Amit H. Bermano. 2020. DigiGlo: Exploring the Palm as an Input and Display Mechanism through Digital Gloves. In Proceedings of the Annual Symposium on Computer-Human Interaction in Play (CHI PLAY ’20), November 2–4, 2020, Virtual Event, Canada. ACM, New York, NY, USA, 12 pages. (link) (pdf)