HCI Deep Dives

HCI Deep Dives is your go-to podcast for exploring the latest trends, research, and innovations in Human-Computer Interaction (HCI). Each episode is AI-generated from recent publications in the field and offers an in-depth discussion of topics like wearable computing, augmented perception, cognitive augmentation, and digitalized emotions. Whether you’re a researcher, a practitioner, or just curious about the intersection of technology and human senses, this podcast offers thought-provoking insights and ideas to keep you at the forefront of HCI.

Listen on:

  • Apple Podcasts
  • YouTube
  • Podbean App
  • Spotify
  • Amazon Music
  • iHeartRadio
  • PlayerFM
  • Podchaser
  • BoomPlay

Episodes

Friday Oct 11, 2024

We use cross-modal correspondence, the interaction between two or more sensory modalities, to create an engaging user experience. We present atmoSphere, a system that provides users with immersive music experiences using spatial audio and haptic feedback. We focus on the cross-modality of auditory and haptic sensations to augment the sound environment. atmoSphere consists of spatialized music and a sphere-shaped device that provides haptic feedback. It gives users the impression of a large sound environment, even though the haptic sensation is felt only in their hands. First user feedback is very encouraging: according to participants, atmoSphere creates an engaging experience.
https://dl.acm.org/doi/10.1145/3084822.3084845
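
The paper does not publish its audio-to-haptic mapping, but the core coupling idea can be sketched. Below is a minimal, hypothetical Python example that derives a per-frame haptic intensity from the music's amplitude envelope; the frame size, normalization, and function names are illustrative assumptions, not the authors' implementation.

    # Hypothetical sketch: one plausible audio-haptic coupling, mapping an
    # audio signal's RMS envelope to actuator intensities in [0, 1].
    import numpy as np

    def haptic_envelope(audio: np.ndarray, sr: int, frame_ms: float = 10.0) -> np.ndarray:
        """Map an audio signal to per-frame haptic intensities in [0, 1]."""
        frame = max(1, int(sr * frame_ms / 1000))
        n = len(audio) // frame
        rms = np.sqrt(np.mean(audio[: n * frame].reshape(n, frame) ** 2, axis=1))
        peak = rms.max()
        return rms / peak if peak > 0 else rms  # normalize for the actuator

    # Example: a 440 Hz tone with a slow tremolo yields a pulsing envelope.
    sr = 44100
    t = np.linspace(0, 2, 2 * sr, endpoint=False)
    tone = np.sin(2 * np.pi * 440 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 2 * t))
    print(haptic_envelope(tone, sr)[:5])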

Friday Oct 11, 2024

Navigating naturally in augmented reality (AR) and virtual reality (VR) spaces is a major challenge. To this end, we present ArmSwingVR, a locomotion solution for AR/VR spaces that preserves immersion while being low profile compared to current solutions, particularly walking-in-place (WIP) methods. The user simply swings their arms naturally to navigate in the direction the arms are swung, without any foot or head movement. The benefits of ArmSwingVR are that arm swinging feels natural for bipeds, second only to leg movement; no additional peripherals or sensors are required; swinging our arms is less obtrusive than WIP methods; and it requires less energy, allowing prolonged AR/VR use. A user study found that our method does not sacrifice immersion while being lower profile and consuming less energy than WIP.
https://dl.acm.org/doi/10.1145/3152832.3152864
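
As a rough illustration of the arm-swing idea (not the paper's implementation), the sketch below computes a horizontal displacement from two hand-controller velocities and forward vectors: speed comes from the unsigned swing speed of both hands, so alternating swings do not cancel, and direction comes from the averaged controller orientation. The gain and dead-zone values are invented.

    import numpy as np

    def locomotion_step(left_vel, right_vel, left_fwd, right_fwd, dt,
                        gain=1.3, dead_zone=0.25):
        """Horizontal (x, z) step from hand velocities and controller forward vectors."""
        # Speed: average unsigned swing speed, so alternating swings don't cancel.
        speed = (np.linalg.norm(left_vel) + np.linalg.norm(right_vel)) / 2.0
        if speed < dead_zone:                    # ignore idle hand jitter
            return np.zeros(2)
        # Direction: horizontal projection of the averaged forward vectors.
        fwd = (np.asarray(left_fwd, float) + np.asarray(right_fwd, float)) / 2.0
        horiz = np.array([fwd[0], fwd[2]])
        norm = np.linalg.norm(horiz)
        if norm == 0.0:
            return np.zeros(2)
        return (horiz / norm) * speed * gain * dt

    # Hands swinging while both controllers point roughly forward (-z).
    step = locomotion_step([0.0, 1.2, 0.1], [0.0, -1.1, 0.05],
                           [0.1, -0.2, -1.0], [-0.1, -0.2, -1.0], dt=1 / 90)
    print(step)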

Friday Oct 11, 2024

This paper presents a new approach to implementing wearable haptic devices using Shape Memory Alloy (SMA) wires. The proposed concept allows building silent, soft, flexible, and lightweight wearable devices capable of producing a sense of pressure on the skin without any bulky mechanical actuators. We explore possible design considerations and applications for such devices, present user studies proving the feasibility of delivering meaningful information, and use nonlinear autoregressive neural networks to compensate for inherent drawbacks of SMAs, such as delayed onset, enabling us to characterize and predict the physical behavior of the device.
https://dl.acm.org/doi/10.1145/3267242.3267257
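
To make the modeling step concrete, here is a toy NARX-style setup: a small neural network predicts the SMA response from lagged drive inputs and lagged outputs, which is the general shape of a nonlinear autoregressive model with exogenous input. The synthetic data, lag counts, and network size are assumptions for illustration; the paper's actual training data and architecture are not reproduced here.

    # Illustrative NARX-style sketch: predict the response y[t] from past
    # responses and past drive inputs u, approximating delayed-onset dynamics.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def narx_features(u, y, nu=5, ny=5):
        """Stack lagged inputs u[t-nu..t-1] and outputs y[t-ny..t-1] as features."""
        start = max(nu, ny)
        X = [np.r_[u[t - nu:t], y[t - ny:t]] for t in range(start, len(y))]
        return np.array(X), y[start:]

    # Synthetic stand-in for drive current u and a delayed, saturating response y.
    rng = np.random.default_rng(0)
    u = rng.uniform(0, 1, 2000)
    y = np.zeros_like(u)
    for t in range(1, len(u)):
        y[t] = 0.9 * y[t - 1] + 0.1 * np.tanh(3 * u[t - 1])  # first-order lag + saturation

    X, target = narx_features(u, y)
    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, target)
    print("fit R^2:", round(model.score(X, target), 3))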

Sunday Oct 06, 2024

As the population ages, many people will acquire visual impairments. To improve design for these users, it is essential to build awareness of their perspective during everyday routines, especially among design students. Although several visual impairment simulation toolkits exist in academia and as commercial products, analog and static simulation tools do not reproduce effects tied to the user’s eye movements. Meanwhile, VR and video see-through AR simulation methods are constrained by fields of view smaller than the natural human visual field and suffer from vergence-accommodation conflict (VAC), which correlates with visual fatigue, headache, and dizziness. In this paper, we enable an on-the-go, VAC-free visually impaired experience with our optical see-through glasses. The field of view of our glasses is approximately 160 degrees horizontally and 140 degrees vertically, and participants can experience both loss of central vision and loss of peripheral vision at different severities. Our evaluation (n = 14) indicates that the glasses can significantly and effectively reduce visual acuity and visual field without causing typical motion sickness symptoms such as headaches or visual fatigue. Questionnaires and qualitative feedback also showed how the glasses helped increase participants’ awareness of visual impairment.
https://dl.acm.org/doi/10.1145/3526113.3545687
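
The glasses themselves are optical see-through hardware, but the two loss patterns are easy to preview in software. The following sketch is an assumption for illustration, not part of the paper: it builds radial transmission masks for central-vision loss (a scotoma) and peripheral-vision loss (tunnel vision) at a chosen severity.

    import numpy as np

    def vision_mask(h, w, severity, central_loss=True):
        """Return an h-by-w transmission mask in [0, 1]; severity in [0, 1]."""
        yy, xx = np.mgrid[0:h, 0:w]
        cy, cx = h / 2, w / 2
        r = np.hypot(yy - cy, xx - cx) / np.hypot(cy, cx)   # normalized radius
        if central_loss:
            return np.clip((r - severity) / 0.05, 0, 1)     # dark center, clear rim
        return np.clip(((1 - severity) - r) / 0.05, 0, 1)   # clear center, dark rim

    mask = vision_mask(480, 640, severity=0.5, central_loss=False)
    print(mask[240, 320], mask[0, 0])  # center transparent, corner occluded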

Saturday Oct 05, 2024

What we wear (our clothes and wearable accessories) can represent our mood at the moment. We developed Emolleia to explore how to make aesthetic wearables more expressive, turning them into a novel form of non-verbal communication for emotional feelings. Emolleia is an open wearable kinetic display in the form of three 3D-printed flowers that can dynamically open and close at different speeds. With our open-source platform, users can define their own animated motions. In this paper, we describe the prototype design, hardware considerations, and user surveys (n = 50) evaluating the expressiveness of eight pre-defined animated motions. Our initial results showed that animated motions can communicate different emotional feelings, especially along the valence and arousal dimensions. Based on these findings, we mapped the eight pre-defined animated motions to user-perceived valence, arousal, and dominance, and discuss possible directions for future work.
https://dl.acm.org/doi/10.1145/3490149.3505581
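
The paper's open-source motion format is not reproduced here; as a hypothetical illustration, an animated motion for one flower could be described as keyframes of petal opening sampled over time, as in this sketch.

    # Hypothetical motion description: keyframes of petal opening
    # (0.0 = fully closed, 1.0 = fully open), linearly interpolated.
    from dataclasses import dataclass

    @dataclass
    class Keyframe:
        t: float        # seconds from motion start
        opening: float  # 0.0 closed .. 1.0 open

    def sample(motion, t):
        """Linearly interpolate the petal opening at time t."""
        if t <= motion[0].t:
            return motion[0].opening
        for a, b in zip(motion, motion[1:]):
            if a.t <= t <= b.t:
                f = (t - a.t) / (b.t - a.t)
                return a.opening + f * (b.opening - a.opening)
        return motion[-1].opening

    # A gentle "breathing" motion: slow open, brief hold, slow close.
    breathe = [Keyframe(0, 0.1), Keyframe(2, 0.9), Keyframe(2.5, 0.9), Keyframe(5, 0.1)]
    print([round(sample(breathe, t), 2) for t in (0, 1, 2.25, 5)])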

Friday Oct 04, 2024

Robotic avatars can help disabled people extend their reach in interacting with the world. Technological advances make it possible for individuals to embody multiple avatars simultaneously. However, existing studies have been limited to laboratory conditions and did not involve disabled participants. In this paper, we present a real-world implementation of a parallel control system allowing disabled workers in a café to embody multiple robotic avatars at the same time to carry out different tasks. Our data corpus comprises semi-structured interviews with workers, customer surveys, and videos of café operations. Results indicate that the system increases workers’ agency, enabling them to better manage customer journeys. Parallel embodiment and transitions between avatars create multiple interaction loops where the links between disabled workers and customers remain consistent, but the intermediary avatar changes. Based on our observations, we theorize that disabled individuals possess specific competencies that increase their ability to manage multiple avatar bodies.
https://dl.acm.org/doi/10.1145/3544548.3581124

Friday Oct 04, 2024

Wheelchair dance is an important form of disability art that is still subject to significant ableism and artistic exclusion. Wheelchair dancers face challenges in finding teachers and choreographers who can accommodate their needs and in documenting and sharing choreographies that suit their body shapes and assistive technologies; in turn, this hinders their ability to share creative expression. Accessible resources and communication tools could help address these challenges. The goal of this research is the development of a visualization system, grounded in Laban Movement Analysis (LMA), that notates movement quality while opening new horizons on perceptions of disabled bodies and the artistic legitimacy of wheelchair dance. The system uses video to identify the body landmarks of the dancer and wheelchair and extracts key features to create visualizations of expressive qualities from LMA Basic Effort. The current evaluation includes a pilot study with the general public and an online questionnaire targeting professionals to gather feedback supporting practical implementation and real-world deployment. Results from the general-public evaluation showed that the visualization was effective in conveying Basic Effort movement qualities even to a novice audience. Experts consulted via the questionnaire stated that the tool could be employed for reflective evaluation as well as performance augmentation. The LMA visualization tool can support the artistic legitimization of wheelchair dance through education, communication, performance, and documentation.
https://dl.acm.org/doi/10.1145/3597628
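
In the spirit of the described pipeline (and not the paper's actual feature set), the sketch below derives two simple proxies from tracked landmark trajectories: mean speed as a stand-in for the Weight factor (strong vs. light) and mean acceleration magnitude as a stand-in for the Time factor (sudden vs. sustained).

    import numpy as np

    def effort_proxies(landmarks, fps):
        """landmarks: (frames, points, 2) pixel coordinates from a pose tracker."""
        vel = np.diff(landmarks, axis=0) * fps           # per-point velocity
        acc = np.diff(vel, axis=0) * fps                 # per-point acceleration
        speed = np.linalg.norm(vel, axis=-1).mean()      # Weight proxy
        jerkiness = np.linalg.norm(acc, axis=-1).mean()  # Time proxy
        return {"weight_proxy": speed, "time_proxy": jerkiness}

    # Smooth circular motion of a single tracked point at 60 fps.
    t = np.linspace(0, 2 * np.pi, 120)
    smooth = np.stack([np.cos(t), np.sin(t)], axis=-1)[:, None, :] * 100
    print(effort_proxies(smooth, fps=60))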

Friday Oct 04, 2024

In this paper, we propose a method for using musical artifacts and physiological data to create a new form of live music experience rooted in the physiology of the performers and audience members. By taking physiological data (namely Electrodermal Activity (EDA) and Heart Rate Variability (HRV)) and applying it to musical artifacts including a robotic koto (a traditional 13-string Japanese instrument fitted with solenoids and linear actuators), a Eurorack synthesizer, and Max/MSP software, we aim to develop a new form of semi-improvisational and significantly indeterminate performance practice. The method has since evolved into a multi-modal methodology that honors improvisational performance practices and uses physiological data to offer both performers and audiences an ever-changing and intimate experience. In our first exploratory phase, we focused on developing a means of controlling a bespoke robotic koto in conjunction with a Eurorack synthesizer system and Max/MSP software for handling the incoming data. We relied on physiological data to infuse more directly human elements into this artifact system. This allows a significant portion of the decision-making to be driven by the incoming physiological data in real time, affording a sense of performativity within this non-living system. Our aim is to continue developing this method to strike a novel balance between intentionality and impromptu performative results.
https://dl.acm.org/doi/10.1145/3623509.3633356
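
One link in such a pipeline can be sketched: turning recent RR intervals into an HRV measure (RMSSD) and scaling it to a 0..1 control value that could drive a synthesizer parameter. The scaling range and sample data are invented assumptions; the paper's actual Max/MSP mappings are not reproduced here.

    import numpy as np

    def rmssd(rr_ms):
        """Root mean square of successive RR-interval differences (ms)."""
        d = np.diff(np.asarray(rr_ms, float))
        return float(np.sqrt(np.mean(d ** 2)))

    def to_control(value, lo=10.0, hi=80.0):
        """Clamp an RMSSD reading into a 0..1 parameter range."""
        return float(np.clip((value - lo) / (hi - lo), 0.0, 1.0))

    rr = [812, 830, 795, 860, 840, 805, 875]   # synthetic RR intervals (ms)
    ctrl = to_control(rmssd(rr))
    print(f"RMSSD={rmssd(rr):.1f} ms -> control {ctrl:.2f}")
    # In a live rig this value would be sent on, e.g. over OSC to Max/MSP.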

Friday Oct 04, 2024

Running and jogging are popular activities for many visually impaired individuals thanks to relatively low barriers to entry. Research in HCI and beyond has focused primarily on leveraging technology to enable visually impaired people to run independently. However, depending on their residual vision and personal preferences, many choose to run with a sighted guide. This study presents a comprehensive analysis of the partnership between visually impaired runners and sighted guides. Using a combination of interaction and thematic analysis on video and interview data from six pairs of runners and guides, we unpack the complexity and directionality of three layers of vocal communication (directive, contextual, and recreational) and distinguish between intentional and unintentional corporeal communication. Building on the understanding of the importance of synchrony, we also present exploratory data on physiological synchrony between two pairs of runners with different levels of experience and articulate recommendations for the HCI community.
https://dl.acm.org/doi/10.1145/3613904.3642388

Thursday Oct 03, 2024

Detecting interpersonal synchrony in the wild through ubiquitous wearable sensing invites promising new social insights, as well as the possibility of new interactions between humans and between humans and agents. We present the Offset-Adjusted SImilarity Score (OASIS), a real-time method of detecting similarity, which we demonstrate on visual detection of Duchenne smiles between a pair of users. We conducted a user study survey (N = 27) to measure a user-based interoperability score on smile similarity and compared the user score with OASIS as well as with rolling-window Pearson correlation and the Dynamic Time Warping (DTW) method. Our results indicate that our algorithm has intrinsic qualities comparable to the user score and measures well against the statistical correlation methods. It takes the temporal offset between the input signals into account, with the added benefit of being adaptable to run in real time with less computational intensity than traditional time-series correlation methods.
https://dl.acm.org/doi/10.1145/3544549.3585709
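
The OASIS algorithm itself is not reproduced here. To ground the comparison, the sketch below implements the rolling-window Pearson baseline the paper mentions, plus a naive "offset-adjusted" variant that simply takes the best rolling correlation over a small range of lags, illustrating why accounting for temporal offset matters.

    import numpy as np

    def rolling_pearson(a, b, win=30):
        """Pearson r of two equal-length signals over a sliding window."""
        out = []
        for i in range(len(a) - win + 1):
            x, y = a[i:i + win], b[i:i + win]
            out.append(np.corrcoef(x, y)[0, 1])
        return np.array(out)

    def offset_adjusted(a, b, win=30, max_lag=10):
        """Best rolling correlation over lags in [-max_lag, max_lag]."""
        scores = [rolling_pearson(a[max(k, 0):len(a) + min(k, 0)],
                                  b[max(-k, 0):len(b) + min(-k, 0)], win)
                  for k in range(-max_lag, max_lag + 1)]
        m = min(map(len, scores))
        return np.max([s[:m] for s in scores], axis=0)

    # Two similar signals with a constant time offset: the lag-aware score
    # recovers the similarity that plain rolling correlation underestimates.
    t = np.linspace(0, 10, 300)
    a = np.sin(t)
    b = np.sin(t - 0.4) + 0.1 * np.random.default_rng(1).standard_normal(300)
    print(rolling_pearson(a, b).mean(), offset_adjusted(a, b).mean())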

Copyright 2024 All rights reserved.
