Publications of Prof. Dr. Michael Rohs

2012

  • Attjector: an Attention-Following Wearable Projector
    Michael Rohs, Sven Kratz, Felix Reitberger and Jörg Moldenhauer
    Kinect Workshop at Pervasive 2012
    Mobile handheld projectors in small form factors, e.g., integrated into mobile phones, are becoming more common. However, managing the projection puts a burden on the user, as it requires holding the hand steady over an extended period of time and draws attention away from the actual task at hand. To address this problem, we propose a body-worn projector that follows the user's locus of attention. The idea is to take the user's hand and dominant fingers as an indication of the current locus of attention and focus the projection on that area. Technically, a wearable and steerable camera-projector system positioned above the shoulder tracks the fingers and follows their movement. In this paper, we justify our approach and explore further ideas on how to apply steerable projection for wearable interfaces. Additionally, we describe a Kinect-based prototype of the wearable and steerable projector system we developed.
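    To make the steering step concrete, the following is a minimal geometric sketch (not the authors' code; function and parameter names are hypothetical) of how a shoulder-mounted steerable unit could convert a tracked hand position into pan/tilt angles:

    ```python
    import math

    def pan_tilt_towards(target, deadband_m=0.02):
        """Compute pan/tilt angles (radians) that point a shoulder-mounted
        steerable projector at a tracked hand position. `target` is the hand
        centroid (x, y, z) in metres in the projector's frame (x right, y up,
        z forward). Returns None inside a small dead band to avoid jitter
        when the hand is almost on-axis."""
        x, y, z = target
        if math.hypot(x, y) < deadband_m:
            return None  # close enough: keep the current orientation
        pan = math.atan2(x, z)                  # rotate left/right
        tilt = math.atan2(y, math.hypot(x, z))  # rotate up/down
        return pan, tilt

    # Example: hand tracked 10 cm right, 30 cm below, 50 cm in front
    print(pan_tilt_towards((0.10, -0.30, 0.50)))
    ```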
  • PalmSpace: Continuous Around-device Gestures vs. Multitouch for 3D Rotation Tasks on Mobile Devices
    Michael Rohs, Sven Kratz, Dennis Guse, Jörg Müller, Gilles Bailly and Michael Nischt
    Proceedings of the International Working Conference on Advanced Visual Interfaces
    Rotating 3D objects is a difficult task on mobile devices, because the task requires 3 degrees of freedom and (multi-)touch input only allows for an indirect mapping. We propose a novel style of mobile interaction based on mid-air gestures in proximity of the device to increase the number of DOFs and alleviate the limitations of touch interaction with mobile devices. While one hand holds the device, the other hand performs mid-air gestures in proximity of the device to control 3D objects on the mobile device's screen. A flat hand pose defines a virtual surface which we refer to as the PalmSpace for precise and intuitive 3D rotations. We constructed several hardware prototypes to test our interface and to simulate possible future mobile devices equipped with depth cameras. Pilot tests show that PalmSpace hand gestures are feasible. We conducted a user study to compare 3D rotation tasks using the two most promising designs for the hand location during interaction - behind and beside the device - with the virtual trackball, which is the current state-of-the-art technique for orientation manipulation on touchscreens. Our results show that both variants of PalmSpace have significantly lower task completion times in comparison to the virtual trackball.
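    As an illustration of the kind of mapping such an interface needs (the paper does not prescribe this exact formula), a flat hand pose can be turned into an object rotation by computing the axis-angle rotation that carries the initial palm normal onto the current one:

    ```python
    import numpy as np

    def rotation_from_palm(normal_ref, normal_now):
        """Rotation matrix that carries the initial palm-plane normal onto
        the current one (Rodrigues' formula). One plausible way to map a
        mid-air flat-hand pose to a 3D object rotation; illustrative only."""
        a = normal_ref / np.linalg.norm(normal_ref)
        b = normal_now / np.linalg.norm(normal_now)
        axis = np.cross(a, b)
        s = np.linalg.norm(axis)
        if s < 1e-9:               # normals parallel: no rotation
            return np.eye(3)
        axis /= s
        angle = np.arctan2(s, np.dot(a, b))
        K = np.array([[0, -axis[2], axis[1]],
                      [axis[2], 0, -axis[0]],
                      [-axis[1], axis[0], 0]])
        return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)
    ```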
  • ShoeSense: A New Perspective on Gestural Interaction and Wearable Applications
    Michael Rohs, Gilles Bailly, Jörg Müller, Daniel Wigdor and Sven Kratz
    Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
    When the user is engaged with a real-world task it can be inappropriate or difficult to use a smartphone. To address this concern, we developed ShoeSense, a wearable system consisting in part of a shoe-mounted depth sensor pointing upward at the wearer. ShoeSense recognizes relaxed and discreet as well as large and demonstrative hand gestures. In particular, we designed three gesture sets (Triangle, Radial, and Finger-Count) for this setup, which can be performed without visual attention. The advantages of ShoeSense are illustrated in five scenarios: (1) quickly performing frequent operations without reaching for the phone, (2) discreetly performing operations without disturbing others, (3) enhancing operations on mobile devices, (4) supporting accessibility, and (5) artistic performances. We present a proof-of-concept wearable implementation based on a depth camera and report on a lab study comparing social acceptability, physical and mental demand, and user preference. A second study demonstrates a 94-99% recognition rate for our recognizers.
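    A recognition rate in the reported 94-99% range typically requires smoothing the noisy per-frame output of the depth pipeline. The sketch below (an assumption about one plausible smoothing stage, not the authors' recognizer) debounces per-frame finger counts for the Finger-Count set by majority vote over a sliding window:

    ```python
    from collections import Counter, deque

    class FingerCountRecognizer:
        """Debounce noisy per-frame finger counts from an upward-pointing
        depth sensor by majority vote over a sliding window of frames."""

        def __init__(self, window=15, min_votes=10):
            self.frames = deque(maxlen=window)
            self.min_votes = min_votes

        def update(self, finger_count):
            """Feed one frame; returns a stable count or None."""
            self.frames.append(finger_count)
            value, votes = Counter(self.frames).most_common(1)[0]
            return value if votes >= self.min_votes else None
    ```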
  • Sketch-a-TUI: Low Cost Prototyping of Tangible Interactions Using Cardboard and Conductive Ink
    Michael Rohs, Alexander Wiethoff, Hanna Schneider, Andreas Butz and Saul Greenberg
    Proceedings of the Sixth International Conference on Tangible, Embedded and Embodied Interaction
    Graspable tangibles are now being explored on the current generation of capacitive touch surfaces, such as the iPad and Android tablets. Because the size and form factor are relatively new, early and low-fidelity prototyping of these TUIs is crucial in getting the right design. The problem is that it is difficult for the average interaction designer to develop such physical prototypes. They require a substantial amount of time and effort to physically model the tangibles, and expertise in electronics to instrument them. Thus prototyping is sometimes handed off to specialists, or is limited to only a few design iterations and alternative designs. Our solution contributes a low-fidelity prototyping approach that is time- and cost-effective, and that requires no electronics knowledge. First, we supply non-specialists with cardboard forms to create tangibles. Second, we have them draw lines on the tangibles with conductive ink, which makes the objects recognizable by the capacitive touch screen. They can then apply routine programming to recognize these tangibles and thus iterate over various designs.
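    One common way to recognize such tangibles (assumed here for illustration; the paper's recognition code may differ) is to match the sorted pairwise distances of the simultaneous touch points created by the conductive-ink feet against known signatures:

    ```python
    import itertools, math

    # object id -> sorted pairwise distances of its ink feet (millimetres);
    # the registry below is made up for the example
    KNOWN_TANGIBLES = {
        "slider_knob": [30.0, 40.0, 50.0],
        "game_piece":  [25.0, 25.0, 40.0],
    }

    def identify(points, tolerance_mm=3.0):
        """Match a set of touch points (x, y) in mm against the registry."""
        dists = sorted(math.dist(p, q)
                       for p, q in itertools.combinations(points, 2))
        for name, signature in KNOWN_TANGIBLES.items():
            if len(signature) == len(dists) and all(
                    abs(d - s) <= tolerance_mm
                    for d, s in zip(dists, signature)):
                return name
        return None

    print(identify([(0, 0), (30.5, 0), (30.5, 39.5)]))  # -> "slider_knob"
    ```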

2009

  • Squeezing the Sandwich: A Mobile Pressure-Sensitive Two-Sided Multi-Touch Prototype
    Michael Rohs, Georg Essl and Sven Kratz
    Demonstration at the 22nd Annual ACM Symposium on User Interface Software and Technology (UIST), Victoria, BC, Canada
    Two-sided pressure input is common in everyday interactions such as grabbing, sliding, twisting, and turning an object held between thumb and index finger. We describe and demonstrate a research prototype which allows for two-sided multi-touch sensing with continuous pressure input at interactive rates, and we explore early ideas of interaction techniques that become possible with this setup. The advantage of two-sided pressure interaction is that it enables high degree-of-freedom input locally. Hence, rather complex yet natural interactions can be designed using little finger motion and device space.
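    As a sketch of the local high-DOF input this enables (data model and names are hypothetical, not the prototype's API), opposing front and back touch samples can be reduced to a squeeze force and a shear offset:

    ```python
    def squeeze_and_shear(front, back):
        """Combine opposing touch samples from a two-sided pressure-sensitive
        device into a squeeze magnitude (how hard thumb and index finger
        press towards each other) and a shear vector (how far their contact
        points are offset). Each sample: {'x': mm, 'y': mm, 'pressure': 0..1}."""
        squeeze = min(front["pressure"], back["pressure"])
        shear = (front["x"] - back["x"], front["y"] - back["y"])
        return squeeze, shear

    s, (dx, dy) = squeeze_and_shear(
        {"x": 10.0, "y": 20.0, "pressure": 0.7},
        {"x": 14.0, "y": 20.5, "pressure": 0.9})
    print(f"squeeze={s:.1f}, shear=({dx:.1f}, {dy:.1f}) mm")
    ```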
  • PhotoMap: Using Spontaneously Taken Images of Public Maps for Pedestrian Navigation Tasks on Mobile Devices
    Michael Rohs, Johannes Schöning, Antonio Krüger, Keith Cheverst, Markus Löchtefeld and Faisal Taher
    Proceedings of the 11th International Conference on Human-Computer Interaction with Mobile Devices and Services
    In many mid- to large-sized cities public maps are ubiquitous. One can also find a great number of maps in parks or near hiking trails. Public maps help to facilitate orientation and provide specialized information not only to tourists but also to locals who just want to look up an unfamiliar place while on the go. These maps offer many advantages compared to mobile maps from services like Google Maps Mobile or Nokia Maps. They often show local landmarks and sights that are not shown on standard digital maps. Often these 'You are here' (YAH) maps are adapted to a special use case, e.g. a zoo map or a hiking map of a certain area. Being designed for a specific purpose, these maps are often aesthetically well designed, and their usage is therefore more pleasant. In this paper we present a novel technique and application called PhotoMap that uses images of 'You are here' maps taken with a GPS-enhanced mobile camera phone as background maps for on-the-fly navigation tasks. We discuss different approaches to the main challenge, namely helping the user to properly georeference the taken image with sufficient accuracy to support pedestrian navigation tasks. We present a study that discusses the suitability of various public maps for this task and we evaluate whether these georeferenced photos can be used for navigation on GPS-enabled devices.
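    In the simplest case (a north-up, axis-aligned photo; a simplification of the georeferencing approaches the paper discusses), two reference points with known coordinates are enough to map a GPS fix onto the photographed map:

    ```python
    def make_georeference(px1, gps1, px2, gps2):
        """Build a GPS-to-pixel mapping for a photographed 'You are here' map
        from two reference points, assuming the photo is roughly north-up
        and axis-aligned. px = (x, y) pixels, gps = (lat, lon) degrees."""
        sx = (px2[0] - px1[0]) / (gps2[1] - gps1[1])  # pixels per degree lon
        sy = (px2[1] - px1[1]) / (gps2[0] - gps1[0])  # pixels per degree lat

        def gps_to_pixel(lat, lon):
            return (px1[0] + (lon - gps1[1]) * sx,
                    px1[1] + (lat - gps1[0]) * sy)
        return gps_to_pixel

    # Two landmarks tapped on the photo, with known coordinates (made up):
    to_pixel = make_georeference((120, 860), (52.5160, 13.3770),
                                 (980, 140), (52.5210, 13.3900))
    print(to_pixel(52.5185, 13.3835))  # current GPS fix -> map pixel
    ```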
  • Impact of Item Density on Magic Lens Interactions
    Michael Rohs, Georg Essl, Johannes Schöning, Anja Naumann, Robert Schleicher and Antonio Krüger
    Proceedings of the 11th International Conference on Human-Computer Interaction with Mobile Devices and Services
    We conducted a user study to investigate the effect of visual context in handheld augmented reality interfaces. A dynamic peephole interface (without visual context beyond the device display) was compared to a magic lens interface (with video see-through augmentation of external visual context). The task was to explore objects on a map and look for a specific attribute shown on the display. We tested different sizes of visual context as well as different numbers of items per area, i.e. different item densities. We found that visual context is most effective for sparse item distributions and the performance benefit decreases with increasing density. User performance in the magic lens case approaches the performance of the dynamic peephole case the more densely spaced the items are. In all conditions, subjective feedback indicates that participants generally prefer visual context over the lack thereof. The insights gained from this study are relevant for designers of mobile AR and dynamic peephole interfaces by suggesting when external visual context is most beneficial.
  • LittleProjectedPlanet: An Augmented Reality Game for Camera Projector Phones
    Michael Rohs, Markus Löchtefeld, Johannes Schöning and Antonio Krüger
    Workshop on Mobile Interaction with the Real World (MIRW at MobileHCI 2009), Bonn, Germany, September 15, 2009
    With the miniaturization of projection technology the integration of tiny projection units, normally referred to as pico projectors, into mobile devices is no longer fiction. Such integrated projectors in mobile devices could make mobile projection ubiquitous within the next few years. These phones will soon have the ability to project large-scale information onto any surface in the real world. By doing so, the interaction space of the mobile device can be expanded to physical objects in the environment, and this can support interaction concepts that are not even possible on modern desktop computers today. In this paper, we explore the possibilities of camera projector phones with a mobile adaptation of the PlayStation 3 game LittleBigPlanet. The camera projector unit is used to augment the hand drawings of a user with an overlay displaying physical interaction of virtual objects with the real world. Players can sketch a 2D world on a sheet of paper or use an existing physical configuration of objects and let the physics engine simulate physical processes in this world to achieve game goals.
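    The core game loop such an augmentation needs can be sketched as follows (an assumption about the approach, not the authors' engine): dark pixels in the camera image become solid terrain, and a projected ball falls under gravity until it rests on drawn ink:

    ```python
    SOLID_THRESHOLD = 80  # 8-bit grey level below which a pixel counts as ink

    def is_solid(gray, x, y):
        """True if (x, y) lies on a drawn stroke in the camera image."""
        return (0 <= y < len(gray) and 0 <= x < len(gray[0])
                and gray[y][x] < SOLID_THRESHOLD)

    def step_ball(gray, x, y, vy, gravity=1.0):
        """Advance the projected ball one frame; stop on drawn ink."""
        vy += gravity
        for _ in range(int(vy)):
            if is_solid(gray, int(x), int(y) + 1):
                return x, y, 0.0  # landed on a stroke
            y += 1
        return x, y, vy
    ```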
  • HoverFlow: Expanding the Design Space of Around-Device Interaction
    Michael Rohs and Sven Kratz
    Proceedings of the 11th International Conference on Human-Computer Interaction with Mobile Devices and Services
    In this paper we explore the design space of around-device interaction (ADI). This approach seeks to expand the interaction possibilities of mobile and wearable devices beyond the confines of the physical device itself to include the space around it. This enables rich 3D input, comprising coarse movement-based gestures, as well as static position-based gestures. ADI can help to solve occlusion problems and scales down to very small devices. We present a novel around-device interaction interface that allows mobile devices to track coarse hand gestures performed above the device's screen. Our prototype uses infrared proximity sensors to track hand and finger positions in the device's proximity. We present an algorithm for detecting hand gestures and provide a rough overview of the design space of ADI-based interfaces.
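    For intuition, a coarse sweep over a row of proximity sensors can be classified from the order in which the sensors see their peak reflection. The following stand-in (simplified; not the paper's algorithm) assumes one peak timestamp per sensor, left to right:

    ```python
    def swipe_direction(peak_times):
        """Classify a hand sweep from per-sensor peak timestamps (seconds),
        ordered left to right; None means the sensor never fired."""
        times = [t for t in peak_times if t is not None]
        if len(times) < 2:
            return "none"
        if times == sorted(times):
            return "left-to-right"
        if times == sorted(times, reverse=True):
            return "right-to-left"
        return "hover"  # no monotonic order: hand lingered over the device

    print(swipe_direction([0.02, 0.05, 0.09, 0.14]))  # -> "left-to-right"
    ```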
  • Bridging the gap between the Kodak and the Flickr generations: A novel interaction technique for collocated photo sharing
    Michael Rohs, Christian Kray, Jonathan Hook and Sven Kratz
    Passing around stacks of paper photographs while sitting around a table is one of the key social practices defining what is commonly referred to as the ‘Kodak Generation’. Due to the way digital photographs are stored and handled, this practice does not translate well to the ‘Flickr Generation’, where collocated photo sharing often involves the (wireless) transmission of a photo from one mobile device to another. In order to facilitate ‘cross-generation’ sharing without enforcing either practice, it is desirable to bridge this gap in a way that incorporates familiar aspects of both. In this paper, we discuss a novel interaction technique that addresses some of the constraints introduced by current communication technology, and that enables photo sharing in a way that resembles the passing of stacks of paper photographs. This technique is based on dynamically generated spatial regions around mobile devices and has been evaluated through two user studies. The results we obtained indicate that our technique is easy to learn and as fast as, or faster than, current technology such as transmitting photos between devices using Bluetooth. In addition, we found evidence of different sharing techniques influencing social practice around photo sharing. The use of our technique resulted in more inclusive and group-oriented behavior, in contrast to Bluetooth photo sharing, which resulted in a more fractured setting composed of sub-groups.
  • Impact of item density on the utility of visual context in magic lens interactions
    Michael Rohs, Robert Schleicher, Johannes Schöning, Georg Essl, Anja Naumann and Antonio Krüger
    This article reports on two user studies investigating the effect of visual context in handheld augmented reality interfaces. A dynamic peephole interface (without visual context beyond the device display) was compared to a magic lens interface (with video see-through augmentation of external visual context). The task was to explore items on a map and look for a specific attribute. We tested different sizes of visual context as well as different numbers of items per area, i.e. different item densities. Hand motion patterns and eye movements were recorded. We found that visual context is most effective for sparsely distributed items and gets less helpful with increasing item density. User performance in the magic lens case is generally better than in the dynamic peephole case, but approaches the performance of the latter the more densely the items are spaced. In all conditions, subjective feedback indicates that participants generally prefer visual context over the lack thereof. The insights gained from this study are relevant for designers of mobile AR and dynamic peephole interfaces, involving spatially tracked personal displays or combined personal and public displays, by suggesting when to use visual context.
  • Interactivity for Mobile Music-Making
    Michael Rohs and Georg Essl
    Mobile phones offer an attractive platform for interactive music performance. We provide a theoretical analysis of the sensor capabilities via a design space and show concrete examples of how different sensors can facilitate interactive performance on these devices. These sensors include cameras, microphones, accelerometers, magnetometers and multi-touch screens. The interactivity through sensors in turn informs aspects of live performance as well as composition through persistence, scoring, and mapping to musical notes or abstract sounds.
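    As one concrete example of such a sensor-to-sound mapping (illustrative; the article surveys the design space rather than prescribing this mapping), device tilt measured by the accelerometer can be quantized to a MIDI note:

    ```python
    import math

    def tilt_to_midi(ax, ay, az, low=48, high=72):
        """Map accelerometer readings (m/s^2) to a MIDI note number by
        quantizing the device's pitch angle into the given note range."""
        angle = math.atan2(-ax, math.hypot(ay, az))  # -pi/2 .. pi/2
        t = (angle + math.pi / 2) / math.pi          # normalize to 0..1
        return round(low + t * (high - low))

    print(tilt_to_midi(0.0, 0.0, 9.81))  # device held flat -> note 60
    ```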
  • Using Hands and Feet to Navigate and Manipulate Spatial Data
    Michael Rohs, Johannes Schöning, Florian Daiber and Antonio Krüger
    Proceedings of the 27th international conference extended abstracts on Human factors in computing systems
    We demonstrate how multi-touch hand gestures in combination with foot gestures can be used to perform navigation tasks in interactive systems. The geospatial domain is an interesting example to show the advantages of the combination of both modalities, because the complex user interfaces of common Geographic Information Systems (GIS) require a high degree of expertise from their users. Recent developments in interactive surfaces that enable the construction of low-cost multi-touch displays, and relatively cheap sensor technology to detect foot gestures, allow the deep exploration of these input modalities for GIS users with medium or low expertise. In this paper, we provide a categorization of multi-touch hand and foot gestures for the interaction with spatial data on a large-scale interactive wall. In addition, we show with an initial evaluation how these gestures can improve the overall interaction with spatial information.
  • Map Torchlight: A Mobile Augmented Reality Camera Projector Unit
    Michael Rohs, Johannes Schöning, Sven Kratz, Markus Löchtefeld and Antonio Krüger
    Proceedings of the 27th international conference extended abstracts on Human factors in computing systems
    The advantages of paper-based maps have been utilized in the field of mobile Augmented Reality (AR) in the last few years. Traditional paper-based maps provide high-resolution, large-scale information with zero power consumption. There are numerous implementations of magic lens interfaces that combine high-resolution paper maps with dynamic handheld displays. From an HCI perspective, the main challenge of magic lens interfaces is that users have to switch their attention between the magic lens and the information in the background. In this paper, we attempt to overcome this problem by using a lightweight mobile camera projector unit to augment the paper map directly with additional information. The "Map Torchlight" is tracked over a paper map and can precisely highlight points of interest, streets, and areas to give directions or other guidance for interacting with the map.
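    Highlighting a point of interest in the right spot requires mapping paper-map coordinates into projector pixels; with a tracked planar map this is a homography. A minimal sketch (estimation of H from the tracking pipeline is not shown; names and values are illustrative):

    ```python
    import numpy as np

    def project_poi(H, map_xy):
        """Map a point of interest from paper-map coordinates to projector
        pixels using a 3x3 homography H obtained from tracking."""
        x, y = map_xy
        p = H @ np.array([x, y, 1.0])
        return p[0] / p[2], p[1] / p[2]

    H = np.array([[1.2, 0.03, 40.0],    # example homography (made up)
                  [0.01, 1.18, 25.0],
                  [1e-5, 2e-5, 1.0]])
    print(project_poi(H, (120.0, 300.0)))  # map mm -> projector pixels
    ```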
  • Unobtrusive Tabletops: Linking Personal Devices with Regular Tables
    Michael Rohs and Sven Kratz
    Workshop Multitouch and Surface Computing at CHI'09
    In this paper we argue that for wide deployment, interactive surfaces should be embedded in real environments as unobtrusively as possible. Rather than deploying dedicated interactive furniture, in environments such as pubs, cafés, or homes it is often more acceptable to augment existing tables with interactive functionality. One example is the use of robust camera-projector systems in real-world settings in combination with spatially tracked touch-enabled personal devices. This retains the normal usage of tabletop surfaces, solves privacy issues, and allows for storage of media items on the personal devices. Moreover, user input can easily be tracked with high precision and low latency and can be attributed to individual users.
  • Improving the Communication of Spatial Information in Crisis Response by Combining Paper Maps and Mobile Devices
    Michael Rohs, Johannes Schöning, Antonio Krüger and Christoph Stasch
    Mobile Response
    Efficient and effective communication between mobile units and the central emergency operation center is a key factor to respond successfully to the challenges of emergency management. Nowadays, the only ubiquitously available modality is a voice channel through mobile phones or radio transceivers. This makes it often very difficult to convey exact geographic locations and can lead to misconceptions with severe consequences, such as a fire brigade heading to the right street address in the wrong city. In this paper we describe a handheld augmented reality approach to support the communication of spatial information in a crisis response scenario. The approach combines mobile camera devices with paper maps to ensure a quick and reliable exchange of spatial information.
