Publications by Prof. Dr. Michael Rohs


  • Can You Ear Me? A Comparison of Different Private and Public Notification Channels for the Earlobe
    Dennis Stanke, Tim Dünte, Kerem Can Demir and Michael Rohs
    Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies - IMWUT '23
    The earlobe is a well-known location for wearing jewelry, but might also be promising for electronic output, such as presenting notifications. This work elaborates the pros and cons of different notification channels for the earlobe. Notifications on the earlobe can be private (only noticeable by the wearer) as well as public (noticeable in the immediate vicinity in a given social situation). A user study with 18 participants showed that the reaction times for the private channels (Poke, Vibration, Private Sound, Electrotactile) were on average less than 1 s with an error rate (missed notifications) of less than 1 %. Thermal Warm and Cold took significantly longer and Cold was least reliable (26 % error rate). The participants preferred Electrotactile and Vibration. Among the public channels the recognition time did not differ significantly between Sound (738 ms) and LED (828 ms), but Display took much longer (3175 ms). At 22 % the error rate of Display was highest. The participants generally felt comfortable wearing notification devices on their earlobe. The results show that the earlobe indeed is a suitable location for wearable technology, if properly miniaturized, which is possible for Electrotactile and LED. We present application scenarios and discuss design considerations. A small field study in a fitness center demonstrates the suitability of the earlobe notification concept in a sports context.
  • Colorful Electrotactile Feedback on the Wrist
    Tim Dünte, Justin Schulte, Malte Lucius and Michael Rohs
    Proceedings of the 22nd International Conference on Mobile and Ubiquitous Multimedia - MUM '23
    Providing rich feedback on small devices, like smartwatches, can be difficult. We propose colorful electrotactile feedback on the back of a smartwatch. Colorful electrotactile feedback provides private notifications, is energy efficient, and can express various sensations in different qualities. In a first study, 13 participants explored 49 different combinations of frequency and pulse width regarding the perceived “colorfulness” of electrotactile feedback. We investigated what sensations can be expressed with electrotactile feedback and which qualities of these sensations are conveyed. To describe the sensations, participants chose the best fitting terms from a list of 21 terms. The three most frequently selected terms were prickling (177), vibrating (163), and irritating (112). The three least frequently selected ones were twitching (31), tickling (29), and itching (28). In a second study with 17 participants we evaluated a reduced set of 9 sensations that we selected and refined based on the results of study 1. We evaluated these sensations regarding recognition rates and achieved recognition rates of up to 84% without prior learning. Furthermore, we investigated the acceptance of colorful electrotactile feedback and present a method for an easier and faster calibration of electrotactile feedback.


  • EnvironZen: Immersive Soundscapes via Augmented Footstep Sounds in Urban Areas
    Maximilian Schrapel, Janko Happe and Michael Rohs
    i-com: Journal of Interactive Media, Volume 21, Issue 2
    Urban environments are often characterized by loud and annoying sounds. Noise-cancelling headphones can suppress negative influences and superimpose the acoustic environment with audio-augmented realities (AAR). So far, AAR has exhibited limited interactivity, e.g., being influenced only by the location of the listener. In this paper we explore the superimposition of synchronized, augmented footstep sounds in urban AAR environments with noise-cancelling headphones. In an online survey, participants rated different soundscapes and sound augmentations. This served as a basis for selecting and designing soundscapes and augmentations for a subsequent in-situ field study in an urban environment with 16 participants. We found that the synchronous footstep feedback of our application EnvironZen contributes to creating a relaxing and immersive soundscape. Furthermore, we found that slightly delaying footstep feedback can be used to slow down walking and that particular footstep sounds can serve as intuitive navigation cues.
  • Sign H3re: Symbol and X-Mark Writer Identification Using Audio and Motion Data from a Digital Pen
    Maximilian Schrapel, Dennis Grannemann and Michael Rohs
    Proceedings of Mensch Und Computer 2022 - MuC '22
    Although in many cases contracts can be made or ended digitally, laws require handwritten signatures in certain cases. Forgeries are a major challenge with digital contracts, as their validity is not always immediately apparent without forensic methods. Illiteracy or disabilities may result in a person being unable to write their full name. In such cases x-mark signatures are used, which require a witness for validity. In cases of suspected fraud, the relationship of the witnesses must be questioned, which involves a great amount of effort. In this paper we use audio and motion data from a digital pen to identify users via handwritten symbols. We evaluated the performance of our approach for 19 symbols in a study with 30 participants. We found that x-marks offer fewer individual features than other symbols like arrows or circles. By training on three samples and averaging three predictions we reach a mean F1-score of F1 = 0.87, using statistical and spectral features fed into SVMs.
  • Ubiquitous Work Assistant: Synchronizing a Stationary and a Wearable Conversational Agent to Assist Knowledge Work
    Shashank Ahire, Michael Rohs and Benjamin Simon
    2022 Symposium on Human-Computer Interaction for Work - CHIWORK '22
    Recent research in Human-Computer Interaction for work has shown that conversational agents (CAs) are beneficial for supporting focused work and well-being while at work. Knowledge workers struggle to maintain focus, their work schedule, and their well-being. Typically, they rely on multiple tools and services for work productivity, scheduling tasks, and break reminders. To tackle these problems, we propose the concept of a ubiquitous work assistant (UWA), which consists of two components: a stationary CA (S-CA) and a wearable CA (W-CA). The S-CA is meant to be placed on the user’s work desk while the W-CA is fixed on the user’s wrist. The UWA interface is distributed between the S-CA and the W-CA. We initiated our study by conducting semi-structured interviews with knowledge workers (N = 14). We identified their expectations of conversational agents (CAs) that would assist them in their daily work life. From the interview findings, we developed a UWA prototype that could assist users by briefing them on their daily schedule, monitoring their schedule, and reminding them to take breaks. We conducted a lab study simulating a home-office environment. The findings of the study show that knowledge workers see potential in the UWA system. Further, we discuss the implications of a distributed user interface (DUI) for UWA design.
  • Exploring the Design Space of Headphones as Wearable Public Displays
    Dennis Stanke, Pia Brandt and Michael Rohs
    CHI Conference on Human Factors in Computing Systems Extended Abstracts - CHI EA '22
    The need for online meetings increased drastically during the COVID-19 pandemic. Wearing headphones for this purpose makes it difficult to know when a headphone-wearing person is available or in a meeting. In this work, we explore the design possibilities of headphones as wearable public displays to show the current status or additional information of the wearer to people nearby. After two brainstorming sessions and specifying the design considerations, we conducted an online survey with 63 participants to collect opinions of potential users. Besides a preference for the colors red and green as well as for using text to indicate availability, we found that only 54 % of our participants would actually wear headphones with public displays attached. The benefit of seeing the current availability status of a headphone-wearing person in an online meeting or phone call scenario was nonetheless mentioned even by participants that would not use such headphones.
  • TrackballWatch: Trackball and Rotary Knob as a Non-Occluding Input Method for Smartwatches in Map Navigation Scenarios
    Dennis Stanke, Peer Schroth and Michael Rohs
    Proceedings of the ACM on Human-Computer Interaction, Volume 6, Issue MHCI - MobileHCI '22
    A common problem of touch-based smartwatch interaction is the occlusion of the display. Although some models provide solutions like the Apple "digital crown" or the Samsung rotatable bezel, these are limited to only one degree of freedom (DOF). Performing complex tasks like navigating on a map is still problematic as the additional input option helps to zoom, but touching the screen to pan the map is still required. In this work, we propose using a trackball as an additional input device that adds two DOFs to prevent the occlusion of the screen. We created several prototypes to find a suitable placement and evaluated them in a typical map navigation scenario. Our results show that the participants were significantly faster (15.7%) with one of the trackball setups compared to touch input. The results also show that the idle times are significantly higher with touch input than with all trackball prototypes, presumably because users have to reorient themselves after panning with finger occlusion.


  • Attjector: an Attention-Following Wearable Projector
    Sven Kratz, Michael Rohs, Felix Reitberger and Jörg Moldenhauer
    Kinect Workshop at Pervasive 2012
    Mobile handheld projectors in small form factors, e.g., integrated into mobile phones, are getting more common. However, managing the projection puts a burden on the user as it requires holding the hand steady over an extended period of time and draws attention away from the actual task to solve. To address this problem, we propose a body-worn projector that follows the user's locus of attention. The idea is to take the user's hand and dominant fingers as an indication of the current locus of attention and focus the projection on that area. Technically, a wearable and steerable camera-projector system positioned above the shoulder tracks the fingers and follows their movement. In this paper, we justify our approach and explore further ideas on how to apply steerable projection for wearable interfaces. Additionally, we describe a Kinect-based prototype of the wearable and steerable projector system we developed.
  • PalmSpace: Continuous Around-device Gestures vs. Multitouch for 3D Rotation Tasks on Mobile Devices
    Sven Kratz, Michael Rohs, Dennis Guse, Jörg Müller, Gilles Bailly and Michael Nischt
    Proceedings of the International Working Conference on Advanced Visual Interfaces - AVI '12
    Rotating 3D objects is a difficult task on mobile devices, because the task requires 3 degrees of freedom and (multi-)touch input only allows for an indirect mapping. We propose a novel style of mobile interaction based on mid-air gestures in proximity of the device to increase the number of DOFs and alleviate the limitations of touch interaction with mobile devices. While one hand holds the device, the other hand performs mid-air gestures in proximity of the device to control 3D objects on the mobile device's screen. A flat hand pose defines a virtual surface which we refer to as the PalmSpace for precise and intuitive 3D rotations. We constructed several hardware prototypes to test our interface and to simulate possible future mobile devices equipped with depth cameras. Pilot tests show that PalmSpace hand gestures are feasible. We conducted a user study to compare 3D rotation tasks using the most promising two designs for the hand location during interaction - behind and beside the device - with the virtual trackball, which is the current state-of-the-art technique for orientation manipulation on touchscreens. Our results show that both variants of PalmSpace have significantly lower task completion times in comparison to the virtual trackball.
  • ShoeSense: A New Perspective on Gestural Interaction and Wearable Applications
    Gilles Bailly, Jörg Müller, Michael Rohs, Daniel Wigdor and Sven Kratz
    Proceedings of the SIGCHI Conference on Human Factors in Computing Systems - CHI '12
    When the user is engaged with a real-world task it can be inappropriate or difficult to use a smartphone. To address this concern, we developed ShoeSense, a wearable system consisting in part of a shoe-mounted depth sensor pointing upward at the wearer. ShoeSense recognizes relaxed and discreet as well as large and demonstrative hand gestures. In particular, we designed three gesture sets (Triangle, Radial, and Finger-Count) for this setup, which can be performed without visual attention. The advantages of ShoeSense are illustrated in five scenarios: (1) quickly performing frequent operations without reaching for the phone, (2) discreetly performing operations without disturbing others, (3) enhancing operations on mobile devices, (4) supporting accessibility, and (5) artistic performances. We present a proof-of-concept, wearable implementation based on a depth camera and report on a lab study comparing social acceptability, physical and mental demand, and user preference. A second study demonstrates a 94-99% recognition rate of our recognizers.
  • Sketch-a-TUI: Low Cost Prototyping of Tangible Interactions Using Cardboard and Conductive Ink
    Alexander Wiethoff, Hanna Schneider, Michael Rohs, Andreas Butz and Saul Greenberg
    Proceedings of the Sixth International Conference on Tangible, Embedded and Embodied Interaction - TEI '12
    Graspable tangibles are now being explored on the current generation of capacitive touch surfaces, such as the iPad and the Android tablet. Because this size and form factor are relatively new, early and low-fidelity prototyping of these TUIs is crucial in getting the right design. The problem is that it is difficult for the average interaction designer to develop such physical prototypes. They require a substantial amount of time and effort to physically model the tangibles, and expertise in electronics to instrument them. Thus, prototyping is sometimes handed off to specialists, or is limited to only a few design iterations and alternative designs. Our solution contributes a low-fidelity prototyping approach that is time and cost effective, and that requires no electronics knowledge. First, we supply non-specialists with cardboard forms to create tangibles. Second, we have them draw lines on it via conductive ink, which makes their objects recognizable by the capacitive touch screen. They can then apply routine programming to recognize these tangibles and thus iterate over various designs.