HCI Deep Dives

HCI Deep Dives is your go-to podcast for exploring the latest trends, research, and innovations in Human-Computer Interaction (HCI). Each episode is AI-generated from recent publications in the field and offers an in-depth discussion of topics like wearable computing, augmented perception, cognitive augmentation, and digitalized emotions. Whether you’re a researcher, a practitioner, or simply curious about the intersection of technology and human senses, this podcast offers thought-provoking insights and ideas to keep you at the forefront of HCI.

Listen on:

  • Apple Podcasts
  • YouTube
  • Podbean App
  • Spotify
  • Amazon Music
  • iHeartRadio
  • PlayerFM
  • Podchaser
  • BoomPlay

Episodes

Sunday Oct 13, 2024

Today we take a deep dive into the ISWC 2024 Best Paper Award winner.
 
Tactile feedback mechanisms enhance the user experience of modern wearables by stimulating the sense of touch and enabling intuitive interactions. Electro-tactile stimulation-based tactile interfaces stand out due to their compact form factor and ability to deliver localized tactile sensations. Integrating force sensing with electro-tactile stimulation creates more responsive bidirectional systems that are beneficial in applications requiring precise control and feedback. However, current research often relies on separate sensors for force sensing, increasing system complexity and raising challenges in system scalability. We propose a novel approach that utilizes 3D-printed modified surfaces as the electro-tactile electrode interface to sense applied force and deliver feedback simultaneously without additional sensors. This method simplifies the system, maintains flexibility, and leverages the rapid prototyping capabilities of 3D printing. The functionality of this approach is validated through a user study (N=10), and two practical applications are proposed, both incorporating simultaneous sensing and tactile feedback.
https://dl.acm.org/doi/10.1145/3675095.3676612
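
To make the idea of one electrode doing double duty more concrete, here is a minimal Python sketch (our own illustration, not the authors' implementation): it assumes the applied force lowers the contact resistance of the 3D-printed electrode interface, estimates force from a hypothetical ADC reading, and scales the stimulation current accordingly. read_electrode_voltage() and set_stimulation_current() are invented stand-ins for the hardware interface.

import random
import time

def read_electrode_voltage() -> float:
    """Hypothetical ADC read of the electrode interface (volts)."""
    return 1.0 + 0.5 * random.random()  # stand-in for real hardware

def set_stimulation_current(milliamps: float) -> None:
    """Hypothetical driver call for the stimulation front end."""
    print(f"stimulation current: {milliamps:.2f} mA")

def estimate_force(voltage: float, v_rest: float = 1.5, gain: float = 4.0) -> float:
    """Toy calibration: assume the measured voltage drops as the applied force rises."""
    return max(0.0, (v_rest - voltage) * gain)  # force in newtons (illustrative)

if __name__ == "__main__":
    for _ in range(5):
        force = estimate_force(read_electrode_voltage())
        # Map the estimated force to a stimulation intensity within a safe range.
        set_stimulation_current(min(0.5 + 0.3 * force, 2.0))
        time.sleep(0.05)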

Sunday Oct 13, 2024

Today we take a deep dive into an ISWC 2024 Honorable Mention.
Self-recording eating behaviors is a step towards a healthy lifestyle recommended by many health professionals. However, the current practice of manually recording eating activities using paper records or smartphone apps is often unsustainable and inaccurate. Smart glasses have emerged as a promising wearable form factor for tracking eating behaviors, but existing systems primarily identify when eating occurs without capturing details of the eating activities (e.g., what is being eaten). In this paper, we present EchoGuide, an application and system pipeline that leverages low-power active acoustic sensing to guide head-mounted cameras to capture egocentric videos, enabling efficient and detailed analysis of eating activities. By combining active acoustic sensing for eating detection with video captioning models and large language models for retrieval augmentation, EchoGuide intelligently clips and analyzes videos to create concise, relevant records of eating activity. We evaluated EchoGuide with 9 participants in naturalistic settings involving eating activities, demonstrating high-quality summarization and significant reductions in the video data needed, paving the way for practical, scalable eating activity tracking.
https://dl.acm.org/doi/10.1145/3675095.3676611
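
As a rough illustration of the pipeline described in the abstract (not EchoGuide's actual code), the Python sketch below gates video clips with an acoustic eating detector, captions the kept clips, and condenses the captions into an activity record. detect_eating(), caption_clip(), and summarize() are hypothetical placeholders for the acoustic classifier, captioning model, and language model.

from dataclasses import dataclass
from typing import List

@dataclass
class Clip:
    start_s: float
    end_s: float

def detect_eating(scores: List[float], threshold: float = 0.7) -> List[Clip]:
    """Hypothetical detector: one clip per run of 1-second frames above threshold."""
    clips, start = [], None
    for i, score in enumerate(scores + [0.0]):
        if score >= threshold and start is None:
            start = i
        elif score < threshold and start is not None:
            clips.append(Clip(start, i))
            start = None
    return clips

def caption_clip(clip: Clip) -> str:
    # Stand-in for a video captioning model run on the egocentric clip.
    return f"person eating between {clip.start_s}s and {clip.end_s}s"

def summarize(captions: List[str]) -> str:
    # Stand-in for an LLM-based retrieval-augmented summarizer.
    return "; ".join(captions)

if __name__ == "__main__":
    acoustic_scores = [0.1, 0.8, 0.9, 0.2, 0.75, 0.8, 0.1]  # toy per-second eating scores
    record = summarize([caption_clip(c) for c in detect_eating(acoustic_scores)])
    print(record)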
 

Sunday Oct 13, 2024

Today we take a deep dive into an ISWC 2024 Honorable Mention.
We present RetailOpt, a novel opt-in, easy-to-deploy system for tracking customer movements offline in indoor retail environments. The system uses readily accessible information from customer smartphones and retail apps, including motion data, store maps, and purchase records. This eliminates the need for additional hardware installation and maintenance and ensures customers retain full control of their data. Specifically, RetailOpt first uses inertial navigation to recover relative trajectories from smartphone motion data. The store map and purchase records are then cross-referenced to identify a list of visited shelves, providing anchors that localize the relative trajectories in the store through continuous and discrete optimization. We demonstrate the effectiveness of our system in five diverse environments. The system, if successful, would produce accurate customer movement data, essential for a broad range of retail applications including customer behavior analysis and in-store navigation.
https://dl.acm.org/doi/pdf/10.1145/3675095.3676623
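
The anchoring step can be pictured with a toy Python sketch (our own illustration, not RetailOpt itself): given a relative trajectory from inertial navigation and the store-map positions of shelves known from purchase records to have been visited, a least-squares rigid alignment places the trajectory in store coordinates. All coordinates and indices below are made up.

import numpy as np

def align_to_anchors(traj: np.ndarray, idx: np.ndarray, anchors: np.ndarray) -> np.ndarray:
    """Kabsch-style least-squares rigid alignment of traj[idx] onto anchors (2D)."""
    p, q = traj[idx], anchors
    p_c, q_c = p - p.mean(0), q - q.mean(0)
    u, _, vt = np.linalg.svd(p_c.T @ q_c)
    r = u @ vt
    if np.linalg.det(r) < 0:   # guard against a reflection
        u[:, -1] *= -1
        r = u @ vt
    t = q.mean(0) - p.mean(0) @ r
    return traj @ r + t        # relative trajectory expressed in store coordinates

if __name__ == "__main__":
    traj = np.array([[0, 0], [1, 0], [2, 0], [2, 1]], float)  # relative path from inertial navigation
    shelf_idx = np.array([0, 2, 3])                           # samples where visited shelves were passed
    shelf_pos = np.array([[5, 5], [5, 7], [4, 7]], float)     # store-map positions of those shelves
    print(align_to_anchors(traj, shelf_idx, shelf_pos).round(2))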

Saturday Oct 12, 2024

Today we take a deep dive into a publication that received a UbiComp 2024 Distinguished Paper Award.
Applying customized epidermal electronics closely onto the human skin offers the potential for biometric sensing and unique, always-available on-skin interactions. However, iterating designs of an on-skin interface from schematics to physical circuit wiring can be time-consuming, even with tiny modifications; it is also challenging to preserve skin wearability after repeated alteration. We present SkinLink, a reconfigurable on-skin fabrication approach that allows users to intuitively explore and experiment with the circuitry adjustment on the body. We demonstrate SkinLink with a customized on-skin prototyping toolkit comprising tiny distributed circuit modules and a variety of streamlined trace modules that adapt to diverse body surfaces. To evaluate SkinLink's performance, we conducted a 14-participant usability study to compare and contrast the workflows with a benchmark on-skin construction toolkit. Four case studies targeting a film makeup artist, two beauty makeup artists, and a wearable computing designer further demonstrate different application scenarios and usages.
https://dl.acm.org/doi/10.1145/3596241

Saturday Oct 12, 2024

Today we take a deep dive into a publication that received a UbiComp 2024 Distinguished Paper Award.
We present MoCaPose, a novel wearable motion capture (MoCap) approach that continuously tracks the wearer's dynamic upper-body poses through multi-channel capacitive sensing integrated into fashionable, loose-fitting jackets. Unlike conventional wearable IMU MoCap based on inverse dynamics, MoCaPose decouples the sensor position from the pose system. MoCaPose uses a deep regressor to continuously predict 3D upper-body joint coordinates from 16-channel textile capacitive sensors, unbound by specific applications. The concept is implemented through two prototyping iterations, first solving the technical challenges, then establishing the textile integration through fashion-technology co-design towards a design-centric smart garment. A 38-hour dataset of synchronized video and capacitive data from 21 participants was recorded for validation. The motion tracking result was validated on multiple levels, from statistics (R² ≈ 0.91) and motion tracking metrics (MPJPE ≈ 86 mm) to usability in pose and motion recognition (0.9 F1 for 10-class classification with unsupervised class discovery). The design guidelines impose few technical constraints, allowing the wearable system to be design-centric and use-case-specific. Overall, MoCaPose demonstrates that textile-based capacitive sensing, with its unique advantages, can be a promising alternative for wearable motion tracking and other relevant wearable motion recognition applications.
https://dl.acm.org/doi/10.1145/3580883
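
For listeners who want a concrete picture of the regression step, here is an illustrative PyTorch sketch that maps 16 capacitive channels to 3D joint positions. The network size, the number of joints (10 here), and the toy training loop are our assumptions, not the MoCaPose architecture.

import torch
import torch.nn as nn

NUM_CHANNELS, NUM_JOINTS = 16, 10  # 16 capacitive channels; joint count is assumed

class CapacitivePoseRegressor(nn.Module):
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NUM_CHANNELS, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, NUM_JOINTS * 3),  # x, y, z per joint
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).view(-1, NUM_JOINTS, 3)

if __name__ == "__main__":
    model = CapacitivePoseRegressor()
    optim = torch.optim.Adam(model.parameters(), lr=1e-3)
    cap = torch.randn(32, NUM_CHANNELS)      # synthetic capacitive frames
    joints = torch.randn(32, NUM_JOINTS, 3)  # synthetic ground-truth joint positions
    for _ in range(3):                       # toy training steps
        loss = nn.functional.mse_loss(model(cap), joints)
        optim.zero_grad()
        loss.backward()
        optim.step()
    print(model(cap[:1]).shape)  # torch.Size([1, 10, 3])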

Friday Oct 11, 2024

We use cross-modal correspondence (the interaction between two or more sensory modalities) to create an engaging user experience. We present atmoSphere, a system that provides users with immersive music experiences using spatial audio and haptic feedback. We focus on the cross-modality of auditory and haptic sensations to augment the sound environment. atmoSphere consists of spatialized music and a sphere-shaped device that provides haptic feedback. It gives users the impression of a large sound environment even though they feel the haptic sensation only in their hands. Initial user feedback is very encouraging: according to participants, atmoSphere creates an engaging experience.
https://dl.acm.org/doi/10.1145/3084822.3084845
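
One simple way to couple music and haptics in the spirit of the abstract (not the atmoSphere implementation) is to map the low-frequency energy envelope of the audio to a vibration intensity for the handheld sphere, as in the Python sketch below. The audio is synthetic and drive_haptics() is a hypothetical actuator call.

import numpy as np

SAMPLE_RATE = 44_100
FRAME = 1_024  # audio samples per haptic update

def bass_envelope(audio: np.ndarray, cutoff_hz: float = 150.0) -> np.ndarray:
    """Per-frame RMS energy of the spectrum below cutoff_hz."""
    frames = audio[: len(audio) // FRAME * FRAME].reshape(-1, FRAME)
    spectrum = np.abs(np.fft.rfft(frames, axis=1))
    freqs = np.fft.rfftfreq(FRAME, d=1.0 / SAMPLE_RATE)
    low = spectrum[:, freqs < cutoff_hz]
    return np.sqrt((low ** 2).mean(axis=1))

def drive_haptics(intensity: float) -> None:
    print(f"haptic intensity: {intensity:.2f}")  # stand-in for an actuator driver

if __name__ == "__main__":
    t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
    audio = np.sin(2 * np.pi * 60 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 2 * t))  # 1 s of synthetic bass
    env = bass_envelope(audio)
    for level in env[:5] / env.max():  # normalized haptic levels for the first frames
        drive_haptics(float(level))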

Friday Oct 11, 2024

Navigating naturally in augmented reality (AR) and virtual reality (VR) spaces is a major challenge. To this end, we present ArmSwingVR, a locomotion solution for AR/VR spaces that preserves immersion while having a lower profile than current solutions, particularly walking-in-place (WIP) methods. The user simply swings their arms naturally to navigate in the direction of the swing, without any foot or head movement. The benefits of ArmSwingVR are that arm swinging feels natural for bipedal organisms (second only to leg movement), no additional peripherals or sensors are required, swinging one's arms is less obtrusive than WIP methods, and it requires less energy, allowing prolonged use in AR/VR. A user study found that our method does not sacrifice immersion while being lower profile and consuming less energy than WIP.
https://dl.acm.org/doi/10.1145/3152832.3152864
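
As a back-of-the-envelope illustration (not the authors' implementation), the Python sketch below derives a movement speed from how vigorously both hands swing and a heading from the horizontal component of the hand velocities, which in practice would come from the VR runtime.

import numpy as np

def locomotion_step(left_vel: np.ndarray, right_vel: np.ndarray,
                    gain: float = 0.8, max_speed: float = 3.0) -> np.ndarray:
    """Return a horizontal (x, z) velocity from the two 3D hand velocities (x, y, z)."""
    swing = 0.5 * (np.abs(left_vel) + np.abs(right_vel))
    speed = min(gain * float(swing[1]), max_speed)  # vertical swing magnitude sets speed
    heading = (left_vel + right_vel)[[0, 2]]        # combined horizontal direction of the swing
    norm = np.linalg.norm(heading)
    return speed * heading / norm if norm > 1e-6 else np.zeros(2)

if __name__ == "__main__":
    left = np.array([0.10, 1.2, 0.9])    # toy per-frame hand velocities in m/s
    right = np.array([0.05, 1.0, 0.8])
    print(locomotion_step(left, right))  # horizontal velocity applied to the player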

Friday Oct 11, 2024

This paper presents a new approach to implement wearable haptic devices using Shape Memory Alloy (SMA) wires. The proposed concept allows building silent, soft, flexible and lightweight wearable devices, capable of producing the sense of pressure on the skin without any bulky mechanical actuators. We explore possible design considerations and applications for such devices, present user studies proving the feasibility of delivering meaningful information and use nonlinear autoregressive neural networks to compensate for SMA inherent drawbacks, such as delayed onset, enabling us to characterize and predict the physical behavior of the device.
https://dl.acm.org/doi/10.1145/3267242.3267257
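
To give a flavor of the nonlinear autoregressive (NARX) idea mentioned in the abstract (not the paper's model), the PyTorch sketch below predicts the actuator's next response from its recent responses and recent drive currents, one way to anticipate the SMA's delayed onset. The synthetic data and lag sizes are invented for illustration.

import torch
import torch.nn as nn

LAGS = 3  # number of past responses / drive currents fed to the model

class NarxModel(nn.Module):
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * LAGS, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, y_past: torch.Tensor, u_past: torch.Tensor) -> torch.Tensor:
        # Autoregressive terms (past responses) and exogenous terms (past currents).
        return self.net(torch.cat([y_past, u_past], dim=-1))

if __name__ == "__main__":
    # Synthetic first-order lag: the "SMA response" follows the drive current slowly.
    u = torch.rand(200, 1)
    y = torch.zeros(200, 1)
    for t in range(1, 200):
        y[t] = 0.9 * y[t - 1] + 0.1 * u[t - 1]
    feats_y = torch.cat([y[t - LAGS:t].T for t in range(LAGS, 200)])
    feats_u = torch.cat([u[t - LAGS:t].T for t in range(LAGS, 200)])
    target = y[LAGS:]
    model = NarxModel()
    optim = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(200):  # toy full-batch training
        loss = nn.functional.mse_loss(model(feats_y, feats_u), target)
        optim.zero_grad()
        loss.backward()
        optim.step()
    print(f"final training loss: {loss.item():.5f}")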

Sunday Oct 06, 2024

As the population ages, many will acquire visual impairments. To improve design for these users, it is essential to build awareness of their perspective during everyday routines, especially for design students. Although several visual impairment simulation toolkits exist in both academia and as commercial products, analog and static visual impairment simulation tools do not simulate effects related to the user’s eye movements. Meanwhile, VR and video see-through-based AR simulation methods are constrained by smaller fields of view compared with the natural human visual field and also suffer from the vergence-accommodation conflict (VAC), which correlates with visual fatigue, headache, and dizziness. In this paper, we enable an on-the-go, VAC-free, visually impaired experience by leveraging our optical see-through glasses. The field of view of our glasses is approximately 160 degrees horizontally and 140 degrees vertically, and participants can experience both loss of central vision and loss of peripheral vision at different severities. Our evaluation (n = 14) indicates that the glasses can significantly and effectively reduce visual acuity and visual field without causing typical motion sickness symptoms such as headaches or visual fatigue. Questionnaires and qualitative feedback also showed how the glasses helped increase participants’ awareness of visual impairment.
https://dl.acm.org/doi/10.1145/3526113.3545687
 

Saturday Oct 05, 2024

What we wear (our clothes and wearable accessories) can represent our mood in the moment. We developed Emolleia to explore how to make aesthetic wearables more expressive, turning them into a novel form of non-verbal communication for our emotional feelings. Emolleia is an open wearable kinetic display in the form of three 3D-printed flowers that can dynamically open and close at different speeds. With our open-source platform, users can define their own animated motions. In this paper, we describe the prototype design, hardware considerations, and user surveys (n=50) evaluating the expressiveness of 8 pre-defined animated motions of Emolleia. Our initial results show that animated motions can communicate different emotional feelings, especially along the valence and arousal dimensions. Based on these findings, we mapped the eight pre-defined animated motions to user-reported valence, arousal, and dominance, and discuss possible directions for future work.
https://dl.acm.org/doi/10.1145/3490149.3505581
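
As a small illustration of how user-defined animated motions might be expressed (the angles, timings, and set_petal_angle() driver call are invented, not Emolleia's actual API), the Python sketch below encodes each motion as a list of (petal angle, hold time) keyframes that a driver steps through.

import time
from typing import Dict, List, Tuple

Motion = List[Tuple[float, float]]  # (target petal angle in degrees, seconds to hold)

MOTIONS: Dict[str, Motion] = {
    "calm_bloom": [(10, 1.0), (60, 2.0), (90, 2.5)],  # slow, wide opening
    "excited":    [(90, 0.3), (20, 0.3), (90, 0.3)],  # quick open/close pulses
}

def set_petal_angle(flower: int, angle: float) -> None:
    """Hypothetical servo driver for one of the three 3D-printed flowers."""
    print(f"flower {flower}: petals -> {angle:.0f} deg")

def play(motion: Motion, flowers: Tuple[int, ...] = (0, 1, 2)) -> None:
    for angle, hold in motion:
        for flower in flowers:
            set_petal_angle(flower, angle)
        time.sleep(hold)

if __name__ == "__main__":
    play(MOTIONS["excited"])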

Copyright 2024 All rights reserved.
