HCI Deep Dives
HCI Deep Dives is your go-to podcast for exploring the latest trends, research, and innovations in Human-Computer Interaction (HCI). Each episode is AI-generated from recent publications in the field and offers an in-depth discussion of topics like wearable computing, augmented perception, cognitive augmentation, and digitalized emotions. Whether you’re a researcher, a practitioner, or just curious about the intersection of technology and the human senses, this podcast offers thought-provoking insights and ideas to keep you at the forefront of HCI.
Episodes

Friday Oct 04, 2024
Robotic avatars can help disabled people extend their reach in interacting with the world. Technological advances make it possible for individuals to embody multiple avatars simultaneously. However, existing studies have been limited to laboratory conditions and did not involve disabled participants. In this paper, we present a real-world implementation of a parallel control system allowing disabled workers in a café to embody multiple robotic avatars at the same time to carry out different tasks. Our data corpus comprises semi-structured interviews with workers, customer surveys, and videos of café operations. Results indicate that the system increases workers’ agency, enabling them to better manage customer journeys. Parallel embodiment and transitions between avatars create multiple interaction loops where the links between disabled workers and customers remain consistent, but the intermediary avatar changes. Based on our observations, we theorize that disabled individuals possess specific competencies that increase their ability to manage multiple avatar bodies.
https://dl.acm.org/doi/10.1145/3544548.3581124
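
The paper itself reports a qualitative deployment, but the core routing idea can be illustrated in a few lines. Below is a minimal sketch, assuming a hypothetical ParallelController that forwards a pilot's commands to whichever avatar is currently embodied while the others hold their tasks; none of these names come from the paper.

```python
# Minimal sketch of parallel embodiment: one pilot, several avatars, and
# explicit transitions between them. All class and method names are
# hypothetical illustrations, not the authors' actual control system.

class Avatar:
    def __init__(self, name: str):
        self.name = name
        self.current_task = "idle"

    def act(self, command: str) -> None:
        self.current_task = command
        print(f"{self.name} -> {command}")


class ParallelController:
    def __init__(self, avatars: list[Avatar]):
        self.avatars = {a.name: a for a in avatars}
        self.active = avatars[0].name  # the avatar the pilot embodies now

    def switch(self, name: str) -> None:
        # Transition between avatars: the worker-customer link persists,
        # only the intermediary body changes.
        self.active = name

    def command(self, command: str) -> None:
        self.avatars[self.active].act(command)


controller = ParallelController([Avatar("counter"), Avatar("table-service")])
controller.command("greet customer")   # embodied at the counter
controller.switch("table-service")     # transition to the second body
controller.command("deliver coffee")   # now embodied at the table
```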

Friday Oct 04, 2024
Wheelchair dance is an important form of disability art that is still subject to significant levels of ableism and art exclusion. Wheelchair dancers face challenges in finding teachers and choreographers who can accommodate their needs and in documenting and sharing choreographies that suit their body shapes and assistive technologies. In turn, this hinders their ability to share creative expressions. Accessible resources and communication tools could help address these challenges. The goal of this research is the development of a visualization system grounded in Laban Movement Analysis (LMA) that notates movement quality while opening new horizons on perceptions of disabled bodies and the artistic legitimacy of wheelchair dance. The system uses video to identify the body landmarks of the dancer and wheelchair and extracts key features to create visualizations of expressive qualities from LMA basic effort. The current evaluation includes a pilot study with the general public and an online questionnaire targeting professionals to gain feedback supporting practical implementation and real-world deployment. Results from the general-public evaluation showed that the visualization was effective in conveying basic effort movement qualities even to a novice audience. Experts consulted via the questionnaire stated that the tool could be employed for reflective evaluation as well as performance augmentation. The LMA visualization tool can support the artistic legitimization of wheelchair dancing through education, communication, performance, and documentation.
https://dl.acm.org/doi/10.1145/3597628
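
The pipeline described in the abstract (video, landmark tracking, feature extraction, visualization) lends itself to a compact illustration. The sketch below assumes landmark trajectories from any pose tracker and uses common kinematic proxies for Laban effort factors; the authors' actual feature set is not specified in the abstract.

```python
# A minimal sketch (not the paper's implementation) of turning tracked body
# landmarks into simple proxies for Laban "basic effort" qualities. The input
# is assumed to be an array of shape (frames, landmarks, 2) from any pose
# tracker; the kinematics-to-effort mapping is a common heuristic.
import numpy as np

def effort_features(landmarks: np.ndarray, fps: float = 30.0) -> dict:
    dt = 1.0 / fps
    vel = np.diff(landmarks, axis=0) / dt   # frame-to-frame velocity
    acc = np.diff(vel, axis=0) / dt         # acceleration
    jerk = np.diff(acc, axis=0) / dt        # jerk (suddenness)
    speed = np.linalg.norm(vel, axis=-1)    # shape (frames-1, landmarks)
    return {
        # Time effort: sudden vs. sustained, proxied by mean jerk magnitude.
        "time": float(np.mean(np.linalg.norm(jerk, axis=-1))),
        # Weight effort: strong vs. light, proxied by acceleration magnitude.
        "weight": float(np.mean(np.linalg.norm(acc, axis=-1))),
        # Flow effort: bound vs. free, proxied by variability of speed.
        "flow": float(np.std(speed)),
        # Space effort: direct vs. indirect, proxied by path straightness
        # (net displacement over total path length, averaged over landmarks).
        "space": float(
            np.linalg.norm(landmarks[-1] - landmarks[0], axis=-1).mean()
            / (speed.sum(axis=0) * dt + 1e-9).mean()
        ),
    }

# Example: 90 frames of 33 tracked 2-D landmarks (random stand-in data).
print(effort_features(np.random.rand(90, 33, 2)))
```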

Friday Oct 04, 2024
In this paper, we propose a method for utilizing musical artifacts and physiological data to create a new form of live music experience that is rooted in the physiology of the performers and audience members. By utilizing physiological data (namely Electrodermal Activity (EDA) and Heart Rate Variability (HRV)) and applying this data to musical artifacts including a robotic koto (a traditional 13-string Japanese instrument fitted with solenoids and linear actuators), a Eurorack synthesizer, and Max/MSP software, we aim to develop a new form of semi-improvisational and significantly indeterminate performance practice. The method has since evolved into a multi-modal methodology that honors improvisational performance practices and utilizes physiological data, offering both performers and audiences an ever-changing and intimate experience. In our first exploratory phase, we focused on developing a means for controlling a bespoke robotic koto in conjunction with a Eurorack synthesizer system and Max/MSP software for handling the incoming data. We integrated a reliance on physiological data to infuse a more directly human element into this artifact system. This allows a significant portion of the decision-making to be controlled directly by the incoming physiological data in real time, thereby affording a sense of performativity within this non-living system. Our aim is to continue developing this method to strike a novel balance between intentionality and impromptu performative results.
https://dl.acm.org/doi/10.1145/3623509.3633356
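
As a concrete illustration of this kind of mapping, the sketch below streams (simulated) EDA and HRV readings to a Max/MSP patch over OSC. The OSC addresses, scaling ranges, and control rate are assumptions for illustration, not the authors' actual mapping; it requires the python-osc package.

```python
# Illustrative mapping of physiological data to musical control via OSC.
# The addresses and ranges below are hypothetical; a Max/MSP patch listening
# with [udpreceive 7400] would route them to the koto and synthesizer.
import time
import random
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 7400)

def scale(value: float, lo: float, hi: float) -> float:
    """Clamp and normalize a raw sensor reading to the 0..1 range."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

for _ in range(100):                   # 10 seconds at a 10 Hz control rate
    eda = random.uniform(0.1, 12.0)    # stand-in for live EDA (microsiemens)
    hrv = random.uniform(20.0, 120.0)  # stand-in for HRV (e.g., RMSSD in ms)

    # Arousal (EDA) -> strike density of the solenoid-driven koto.
    client.send_message("/koto/density", scale(eda, 0.0, 15.0))
    # Relaxation (HRV) -> a synthesizer parameter such as filter cutoff.
    client.send_message("/synth/cutoff", scale(hrv, 20.0, 150.0))
    time.sleep(0.1)
```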

Friday Oct 04, 2024
Running and jogging are popular activities for many visually impaired individuals thanks to their relatively low entry barriers. Research in HCI and beyond has focused primarily on leveraging technology to enable visually impaired people to run independently. However, depending on their residual vision and personal preferences, many choose to run with a sighted guide. This study presents a comprehensive analysis of the partnership between visually impaired runners and sighted guides. Using a combination of interaction and thematic analysis on video and interview data from 6 pairs of runners and guides, we unpack the complexity and directionality of three layers of vocal communication (directive, contextual, and recreational) and distinguish between intentional and unintentional corporeal communication. Building on this understanding of the importance of synchrony, we also present exploratory data on physiological synchrony between 2 pairs of runners with different levels of experience and articulate recommendations for the HCI community.
https://dl.acm.org/doi/10.1145/3613904.3642388
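
For readers curious what the exploratory synchrony analysis might look like computationally, here is a minimal sketch using a rolling-window Pearson correlation over two heart-rate traces. The window length and the synthetic data are assumptions; the paper's own analysis method is not detailed in the abstract.

```python
# Rolling-window Pearson correlation as a simple physiological-synchrony
# measure between a runner's and a guide's heart-rate series.
import numpy as np

def rolling_pearson(x: np.ndarray, y: np.ndarray, window: int) -> np.ndarray:
    """Pearson r in each sliding window; values near 1 suggest synchrony."""
    out = np.full(len(x) - window + 1, np.nan)
    for i in range(len(out)):
        xs, ys = x[i:i + window], y[i:i + window]
        if xs.std() > 0 and ys.std() > 0:
            out[i] = np.corrcoef(xs, ys)[0, 1]
    return out

# Stand-in heart-rate traces sampled at 1 Hz over ten minutes of a run.
t = np.arange(600)
runner = 150 + 10 * np.sin(t / 60) + np.random.randn(600)
guide = 140 + 10 * np.sin(t / 60 + 0.2) + np.random.randn(600)
sync = rolling_pearson(runner, guide, window=60)  # 60 s windows
print(f"median rolling r = {np.nanmedian(sync):.2f}")
```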

Thursday Oct 03, 2024
Detecting interpersonal synchrony in the wild through ubiquitous wearable sensing invites promising new social insights as well as the possibility of new human-human and human-agent interactions. We present the Offset-Adjusted SImilarity Score (OASIS), a real-time method of detecting similarity, which we demonstrate on visual detection of Duchenne smiles between a pair of users. We conduct a user study survey (N = 27) to measure a user-based interoperability score on smile similarity and compare the user score with OASIS as well as with the rolling-window Pearson correlation and the Dynamic Time Warping (DTW) method. Ultimately, our results indicate that our algorithm has intrinsic qualities comparable to the user score and compares well with the statistical correlation methods. It takes the temporal offset between the input signals into account, with the added benefit of being adaptable to run in real time with less computational intensity than traditional time-series correlation methods.
https://dl.acm.org/doi/10.1145/3544549.3585709
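
The abstract does not give the OASIS formula, so the sketch below illustrates only the general idea it names: search over temporal offsets, align the two signals at the best lag, and score their similarity on the aligned overlap. It is an assumption-laden stand-in, not the published algorithm.

```python
# Generic offset-adjusted similarity: z-score both signals, try each lag in
# a bounded range, and keep the best mean product (a Pearson-like score)
# over the aligned overlap. This is an illustration, not the published OASIS.
import numpy as np

def offset_adjusted_similarity(x: np.ndarray, y: np.ndarray,
                               max_lag: int = 30) -> float:
    x = (x - x.mean()) / (x.std() + 1e-9)
    y = (y - y.mean()) / (y.std() + 1e-9)
    best = -np.inf
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = x[lag:], y[:len(y) - lag]
        else:
            a, b = x[:len(x) + lag], y[-lag:]
        if len(a) == 0:
            continue
        best = max(best, float(np.dot(a, b) / len(a)))
    return best

# Two smile-intensity traces: the same expression, slightly delayed.
t = np.linspace(0, 10, 500)
smile_a = np.sin(t)
smile_b = np.sin(t - 0.5)  # ~25-sample delay, well within max_lag
print(f"{offset_adjusted_similarity(smile_a, smile_b, max_lag=50):.2f}")  # near 1
```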

Thursday Oct 03, 2024
The use of wearable sensor technology opens up exciting avenues for both art and HCI research. To be effective, such work requires close collaboration between performers and researchers. In this article, we report on the co-design process and research insights from our work integrating physiological sensing and live performance.
https://dl.acm.org/doi/10.1145/3557887