Episodes

  • CHI 2025 A Placebo Concert: The Placebo Effect for Visualization of Physiological Audience Data during Experience Recreation in Virtual Reality
    2025/08/20

    Xiaru Meng, Yulan Ju, Christopher Changmok Kim, Yan He, Giulia Barbareschi, Kouta Minamizawa, Kai Kunze, and Matthias Hoppe. 2025. A Placebo Concert: The Placebo Effect for Visualization of Physiological Audience Data during Experience Recreation in Virtual Reality. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (CHI '25). Association for Computing Machinery, New York, NY, USA, Article 807, 1–16. https://doi.org/10.1145/3706598.3713594

    A core use case for Virtual Reality applications is recreating real-life scenarios for training or entertainment. Eliciting physiological responses in VR users that match those of real-life spectators can maximize engagement and foster a stronger sense of co-presence. Current research focuses on visualizations and measurements of physiological data to ensure experience accuracy. However, placebo effects are known to influence performance and self-perception in HCI studies, creating a need to investigate how visualizing different types of data (real, unmatched, and fake) affects user perception during event recreation in VR. We investigate these conditions through a balanced between-groups study (n=44) of uninformed and informed participants. The informed group was told that the data visualizations represented previously recorded human physiological data. Our findings reveal a placebo effect: the informed group demonstrated enhanced engagement and co-presence. Additionally, the fake data condition in the informed group evoked a positive emotional response.

    https://doi.org/10.1145/3706598.3713594

    12 min
  • CHI 2025 Heartbeat Resonance: Inducing Non-contact Heartbeat Sensations in the Chest
    2025/08/08

    Perceiving and altering the sensation of internal physiological states, such as heartbeats, is key for biofeedback and interoception. Yet, wearable devices used for this purpose can feel intrusive and typically fail to deliver stimuli aligned with the heart’s location in the chest. To address this, we introduce Heartbeat Resonance, which uses low-frequency sound waves to create non-contact haptic sensations in the chest cavity, mimicking heartbeats. We conduct two experiments to evaluate the system’s effectiveness. The first experiment shows that the system created realistic heartbeat sensations in the chest, with 78.05 Hz being the most effective frequency. In the second experiment, we evaluate the effects of entrainment by simulating faster and slower heart rates. Participants perceived the intended changes and reported high confidence in their perceptions for +15% and -30% heart rates. This system offers a non-intrusive solution for biofeedback while creating new possibilities for immersive VR environments.
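
    As a rough illustration of the stimulus described above, the sketch below synthesizes a 78.05 Hz carrier gated by a pulse train at a target heart rate; the envelope shape, burst length, and playback chain are assumptions for illustration, not the authors' implementation.

      import numpy as np

      def heartbeat_stimulus(bpm=70.0, carrier_hz=78.05, pulse_s=0.15,
                             duration_s=10.0, fs=48000):
          """Synthesize a low-frequency pulse train mimicking heartbeats.

          A carrier tone (78.05 Hz, the most effective frequency reported
          above) is gated by short Hanning-windowed bursts spaced at the
          target heart rate. Envelope shape and burst length are
          illustrative assumptions, not the paper's parameters.
          """
          t = np.arange(int(duration_s * fs)) / fs
          carrier = np.sin(2 * np.pi * carrier_hz * t)

          envelope = np.zeros_like(t)
          burst = np.hanning(int(pulse_s * fs))   # one "beat" envelope
          period = int(fs * 60.0 / bpm)           # samples between beats
          for start in range(0, len(t) - len(burst), period):
              envelope[start:start + len(burst)] = burst

          return carrier * envelope

      # Entrainment conditions from the study, relative to a 70 bpm baseline:
      faster = heartbeat_stimulus(bpm=70 * 1.15)   # +15%
      slower = heartbeat_stimulus(bpm=70 * 0.70)   # -30%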

    Waseem Hassan, Liyue Da, Sonia Elizondo, and Kasper Hornbæk. 2025. Heartbeat Resonance: Inducing Non-contact Heartbeat Sensations in the Chest. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (CHI '25). Association for Computing Machinery, New York, NY, USA, Article 913, 1–22. https://doi.org/10.1145/3706598.3713959

    13 min
  • CHI 2025 Living Bento: Heartbeat-Driven Noodles for Enriched Dining Dynamics
    2025/08/01

    To enhance focused eating and dining socialization, previous Human-Food Interaction research has indicated that external devices can support these dining objectives and immersion. However, methods that focus on the food itself and on the diners themselves have remained underdeveloped. In this study, we integrated biofeedback with food, using diners’ heart rates to drive the food’s appearance and thereby promote focused eating and dining socialization. By employing LED lights, we dynamically displayed diners’ real-time physiological signals through the transparency of the food. Results revealed significant effects on various aspects of dining immersion, such as awareness perceptions, attractiveness, attentiveness to each bite, and emotional bonds with the food. Furthermore, to promote dining socialization, we established a “Sharing Bio-Sync Food” dining system to strengthen emotional connections between diners. Based on these findings, we developed tableware that integrates biofeedback into the culinary experience.
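
    A minimal sketch of the kind of mapping this implies, assuming heart rate is sampled continuously and drives the LED brightness under the food; the value range, linear mapping, and hardware interface are assumptions, not the paper's implementation.

      def heart_rate_to_led(bpm, bpm_min=50.0, bpm_max=120.0):
          """Map a diner's heart rate to an LED brightness level (0-255).

          The paper displays real-time physiological signals through the
          food's transparency via LEDs; the BPM range and linear mapping
          here are assumed for illustration.
          """
          norm = (bpm - bpm_min) / (bpm_max - bpm_min)
          norm = max(0.0, min(1.0, norm))      # clamp to [0, 1]
          return int(round(norm * 255))        # e.g. a PWM duty value

      # e.g. a resting diner vs. an excited one
      print(heart_rate_to_led(62))    # dimmer
      print(heart_rate_to_led(105))   # brighter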

    Weijen Chen, Qingyuan Gao, Zheng Hu, Kouta Minamizawa, and Yun Suen Pai. 2025. Living Bento: Heartbeat-Driven Noodles for Enriched Dining Dynamics. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (CHI '25). Association for Computing Machinery, New York, NY, USA, Article 353, 1–18. https://doi.org/10.1145/3706598.3713108

    16 min
  • CHI 2025 NeuResonance: Exploring Feedback Experiences for Fostering the Inter-brain Synchronization
    2025/07/25

    When several individuals collaborate on a shared task, their brain activities often synchronize. This phenomenon, known as Inter-brain Synchronization (IBS), is notable for inducing prosocial outcomes such as enhanced interpersonal feelings, including closeness, trust, empathy, and more. Further strengthening the IBS with the aid of external feedback would be beneficial for scenarios where those prosocial feelings play a vital role in interpersonal communication, such as rehabilitation between a therapist and a patient, motor skill learning between a teacher and a student, and group performance art. This paper investigates whether visual, auditory, and haptic feedback of the IBS level can further enhance its intensity, offering design recommendations for feedback systems in IBS. We report findings when three different types of feedback were provided: IBS level feedback by means of on-body projection mapping, sonification using chords, and vibration bands attached to the wrist.
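
    One way to reduce two EEG streams to a single "IBS level" that such feedback could be driven by is the phase-locking value in a frequency band; this is a common synchronization metric, not necessarily the one the authors use, and the band and filter settings below are assumptions.

      import numpy as np
      from scipy.signal import butter, filtfilt, hilbert

      def ibs_level(eeg_a, eeg_b, fs=256, band=(8.0, 13.0)):
          """Phase-locking value between two EEG channels (one per person).

          Alpha-band PLV is one common inter-brain synchronization
          measure; the metric, band, and filter order are assumptions
          for illustration.
          """
          b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
          phase_a = np.angle(hilbert(filtfilt(b, a, eeg_a)))
          phase_b = np.angle(hilbert(filtfilt(b, a, eeg_b)))
          return float(np.abs(np.mean(np.exp(1j * (phase_a - phase_b)))))  # 0..1

      # The resulting 0..1 level could then scale the projection-mapped
      # visuals, the chord sonification, or the wrist vibration intensity.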

    Jamie Ngoc Dinh, Snehesh Shrestha, You-Jin Kim, Jun Nishida, and Myungin Lee. 2025. NeuResonance: Exploring Feedback Experiences for Fostering the Inter-brain Synchronization. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (CHI '25). Association for Computing Machinery, New York, NY, USA, Article 363, 1–16. https://doi.org/10.1145/3706598.3713872

    18 min
  • CHI 2025 Haptic Empathy: Investigating Individual Differences in Affective Haptic Communications
    2025/07/14

    Yulan Ju, Xiaru Meng, Harunobu Taguchi, Tamil Selvan Gunasekaran, Matthias Hoppe, Hironori Ishikawa, Yoshihiro Tanaka, Yun Suen Pai, and Kouta Minamizawa. 2025. Haptic Empathy: Investigating Individual Differences in Affective Haptic Communications. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (CHI '25). Association for Computing Machinery, New York, NY, USA, Article 501, 1–25. https://doi.org/10.1145/3706598.3714139

    Nowadays, touch remains essential for emotional conveyance and interpersonal communication as more interactions are mediated remotely. While many studies have discussed the effectiveness of using haptics to communicate emotions, incorporating affect into haptic design still faces challenges due to individual user tactile acuity and preferences. We assessed the conveying of emotions using a two-channel haptic display, emphasizing individual differences. First, 24 participants generated 187 haptic messages reflecting their immediate sentiments after watching 8 emotionally charged film clips. Afterwards, 19 participants were asked to identify emotions from haptic messages designed by themselves and others, yielding 593 samples. Our findings suggest potential links between haptic message decoding ability and emotional traits, particularly Emotional Competence (EC) and Affect Intensity Measure (AIM). Additionally, qualitative analysis revealed three strategies participants used to create touch messages: perceptive, empathetic, and metaphorical expression.
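
    To make the "two-channel haptic display" concrete, the sketch below renders a haptic message as two amplitude envelopes modulating a vibrotactile carrier; the message format and the 200 Hz carrier are assumptions for illustration, not the device described in the paper.

      import numpy as np

      def render_haptic_message(env_left, env_right, carrier_hz=200.0, fs=8000):
          """Render a two-channel haptic message as vibrotactile waveforms.

          Each channel is a list of (time_s, amplitude 0-1) breakpoints
          that modulate a vibration carrier; the breakpoint format and
          carrier frequency are illustrative assumptions.
          """
          duration = max(env_left[-1][0], env_right[-1][0])
          t = np.arange(int(duration * fs)) / fs
          carrier = np.sin(2 * np.pi * carrier_hz * t)

          def interp(env):
              times, amps = zip(*env)
              return np.interp(t, times, amps)

          return np.stack([carrier * interp(env_left),
                           carrier * interp(env_right)], axis=1)

      # e.g. a short, sharp pulse on one channel and a slow swell on the other
      msg = render_haptic_message(
          env_left=[(0.0, 0.0), (0.1, 1.0), (0.3, 0.0), (2.0, 0.0)],
          env_right=[(0.0, 0.0), (1.0, 0.2), (1.8, 0.8), (2.0, 0.0)])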

    https://dl.acm.org/doi/10.1145/3706598.3714139

    27 min
  • TEI 2025: Ambient Display Utilizing Anisotropy of Tatami
    2025/03/30

    Riku Kitamura, Kenji Yamada, Takumi Yamamoto, and Yuta Sugiura. 2025. Ambient Display Utilizing Anisotropy of Tatami. In Proceedings of the Nineteenth International Conference on Tangible, Embedded, and Embodied Interaction (TEI '25). Association for Computing Machinery, New York, NY, USA, Article 3, 1–15. https://doi.org/10.1145/3689050.3704924

    Recently, digital displays such as liquid crystal displays and projectors have enabled high-resolution and high-speed information transmission. However, their artificial appearance can sometimes detract from natural environments and landscapes. In contrast, ambient displays, which transfer information to the entire physical environment, have gained attention for their ability to blend seamlessly into living spaces. This study aims to develop an ambient display that harmonizes with traditional Japanese tatami rooms by proposing an information presentation method using tatami mats. By leveraging the anisotropic properties of tatami, which change their reflective characteristics according to viewing angles and light source positions, various images and animations can be represented. We quantitatively evaluated the color change of tatami using color difference. Additionally, we created both static and dynamic displays as information presentation methods using tatami.
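
    The quantitative evaluation mentioned above is based on color difference; a minimal sketch, assuming the common CIE76 formula over CIELAB measurements (the paper may use a different formula, and the Lab values below are placeholders, not measured data).

      import math

      def delta_e_cie76(lab1, lab2):
          """CIE76 color difference between two CIELAB colors."""
          return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

      # Tatami viewed along vs. against the weave direction
      # (placeholder Lab values, not measurements from the paper)
      along = (62.0, -4.5, 28.0)
      against = (48.0, -2.0, 22.0)
      print(delta_e_cie76(along, against))   # larger = more visible change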

    https://doi.org/10.1145/3689050.3704924

    25 min
  • DIS 2025 ELEGNT: Expressive and Functional Movement Design for Non-anthropomorphic Robot
    2025/02/20

    Yuhan Hu, Peide Huang, Mouli Sivapurapu, and Jian Zhang. 2025. ELEGNT: Expressive and Functional Movement Design for Non-anthropomorphic Robot. arXiv preprint arXiv:2501.12493.

    https://arxiv.org/abs/2501.12493

    Nonverbal behaviors such as posture, gestures, and gaze are essential for conveying internal states, both consciously and unconsciously, in human interaction. For robots to interact more naturally with humans, robot movement design should likewise integrate expressive qualities—such as intention, attention, and emotions—alongside traditional functional considerations like task fulfillment, spatial constraints, and time efficiency. In this paper, we present the design and prototyping of a lamp-like robot that explores the interplay between functional and expressive objectives in movement design. Using a research-through-design methodology, we document the hardware design process, define expressive movement primitives, and outline a set of interaction scenario storyboards. We propose a framework that incorporates both functional and expressive utilities during movement generation, and implement the robot behavior sequences in different function- and social-oriented tasks. Through a user study comparing expression-driven versus function-driven movements across six task scenarios, our findings indicate that expression-driven movements significantly enhance user engagement and perceived robot qualities. This effect is especially pronounced in social-oriented tasks.
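
    As a sketch of what "incorporating both functional and expressive utilities during movement generation" could look like, the snippet below scores candidate movements with a weighted sum; the linear combination and the weight are illustrative assumptions, not the authors' formulation.

      from dataclasses import dataclass

      @dataclass
      class Movement:
          name: str
          functional_utility: float   # e.g. task fulfillment, efficiency
          expressive_utility: float   # e.g. legibility of intention, emotion

      def select_movement(candidates, w_expressive=0.5):
          """Pick the movement maximizing a weighted sum of the two utilities."""
          def score(m):
              return ((1 - w_expressive) * m.functional_utility
                      + w_expressive * m.expressive_utility)
          return max(candidates, key=score)

      candidates = [
          Movement("direct reach", functional_utility=0.9, expressive_utility=0.2),
          Movement("glance at user, then reach", 0.7, 0.8),
      ]
      print(select_movement(candidates, w_expressive=0.6).name)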

    12 min
  • ISMAR 2024 Do you read me? (E)motion Legibility of Virtual Reality Character Representations
    2025/02/07

    K. Brandstätter, B. J. Congdon and A. Steed, "Do you read me? (E)motion Legibility of Virtual Reality Character Representations," 2024 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Bellevue, WA, USA, 2024, pp. 299-308, doi: 10.1109/ISMAR62088.2024.00044.

    We compared the body movements of five virtual reality (VR) avatar representations in a user study (N=53) to ascertain how well these representations could convey body motions associated with different emotions: one head-and-hands representation using only tracking data, one upper-body representation using inverse kinematics (IK), and three full-body representations using IK, motion capture, and the state-of-the-art deep-learning model AGRoL. Participants’ emotion detection accuracies were similar for the IK and AGRoL representations, highest for the full-body motion-capture representation, and lowest for the head-and-hands representation. Our findings suggest that from the perspective of emotion expressivity, connected upper-body parts that provide visual continuity improve clarity, and that current techniques for algorithmically animating the lower body are ineffective. In particular, the deep-learning technique studied did not produce more expressive results, suggesting the need for training data specifically made for social VR applications.

    https://ieeexplore.ieee.org/document/10765392

    11 min