Telepresence: integrating shared task and person spaces

Buxton, W. (1992). Telepresence: integrating shared task and person spaces. In Proceedings of Graphics Interface ’92, pages 123–129, Vancouver, B.C., Canada. Canadian Human-Computer Communications Society. [pdf]

—————-

This paper argues that current research on telepresence is dichotomized between work aiming to establish consistent shared task spaces and work supporting a sense of shared presence. The paper concludes that integrating these two spaces is important, particularly the smoothness of transitions between them. Collaboration scenarios vary widely, and technology should be adapted to support this variety.

The impact of increased awareness while face-to-face

DiMicco, J. M., Hollenbach, K. J., Pandolfo, A., and Bender, W. (2007). The impact of increased awareness while face-to-face. Human-Computer Interaction, 22(1). [pdf]

——————

The experimental results presented in this paper demonstrate that a display showing real-time participation levels, which imposes a norm of equal participation on the group, causes those at the highest levels of participation to decrease the amount they speak. Reviewing turn-taking patterns with a visualization causes those who spoke the least to increase the amount they speak in a subsequent discussion.

This paper presents Second Messenger, a system of dynamic awareness displays that reveal speaker participation patterns in face-to-face meetings, increasing individuals’ awareness of their own and others’ participation in discussion. Experimental results indicate that these displays influence the amount an individual participates in a discussion and the process of information sharing used during a decision-making task. These findings suggest that awareness applications bring about systematic changes in group communication styles, highlighting the potential for such applications to be designed to improve group interactions.


Microsoft Surface and Map applications

Microsoft Surface seems to be a fine piece of interactive furniture. What I like about the project is that it puts together many years of academic research into a corporate product.

The name Surface comes from “surface computing,” and Microsoft envisions the coffee-table machine as the first of many such devices. Surface computing uses a blend of wireless protocols, special machine-readable tags and shape recognition to seamlessly merge the real and the virtual world — an idea the Milan team refers to as “blended reality.” The table can be built with a variety of wireless transceivers, including Bluetooth, Wi-Fi and (eventually) radio frequency identification (RFID) and is designed to sync instantly with any device that touches its surface.

[more on popularmechanics]

What attracted me was the video showing how the table can be used to interact with maps. The user places some landmarks on the map just by pointing at the relevant spots, then selects some extra items from a side menu that offers contextual information on those points. Finally, he asks the system to calculate the shortest path between the landmarks. What this scenario lacks is how Surface can support collaboration, and particularly remote collaboration.


Copyright notice: the present content was taken from the following URL; copyright is reserved by the respective author(s).


Clearboard: A seamless medium for shared drawing and conversation with eye contact

Ishii, H. and Kobayashi, M. (1992). Clearboard: A seamless medium for shared drawing and conversation with eye contact. In Proceedings of the SIGCHI conference on Human factors in computing systems, pages 525–532, Monterey, CA, USA. ACM Press. [pdf]

————-

This paper presents the design of ClearBoard, a system that allows users to collaboratively sketch on a shared display while maintaining eye contact. The main point of the paper is that eye contact is very important for regulating interaction: “eyes are as eloquent as the tongue”. Eye contact allows the users to switch their focus smoothly from one to the other according to the task content.

The transparent display allows the users to draw at the same time and to point at parts of the drawing, replicating features of face-to-face interaction.

The paper also contains a nice task for collaborative work. It is called the “river crossing problem”.


Semantic telepointers for groupware

Greenberg, S., Gutwin, C., and Roseman, M. (1996). Semantic telepointers for groupware. In Proceedings of OzCHI’96, Sixth Australian Conference on Computer-Human Interaction, pages 54–61, Hamilton, New Zealand. IEEE Computer Society Press. [pdf]

—————-

This paper presents a seminal work on the use of telepointers in the relaxed-WYSIWIS framework. The authors list several factors that limit the use of telepointers when the shared screens are not kept identical. Their solution consists of overloading the telepointer with semantic information and/or mapping the telepointer’s absolute coordinates to coordinates relative to each participant’s display.

Real time groupware systems often display telepointers (multiple cursors) of all participants in the shared visual workspace. Through the simple mechanism of telepointers, participants can communicate their location, movement, and probable focus of attention within the document, and can gesture over the shared view. Yet telepointers can be improved. First, they can be applied to groupware where people’s view of the work surface differs—through viewport, object placement, or representation variation—by mapping telepointers to the underlying objects rather than to Cartesian coordinates. Second, telepointers can be overloaded with semantic information to provide participants a stronger sense of awareness of what is going on, with little consumption of screen real estate.
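The relative-coordinate idea can be illustrated with a small sketch (the function names and viewport representation below are my own assumptions, not the paper’s implementation): each participant’s viewport is a rectangle over the shared document, and a telepointer position is routed through document coordinates rather than raw screen coordinates, so it lands on the same underlying content in every view.

```python
def to_document_coords(local_xy, viewport):
    """Map a normalized screen position (0..1) to document coordinates,
    given the participant's viewport (x, y, width, height in doc space)."""
    x, y = local_xy
    vx, vy, vw, vh = viewport
    return (vx + x * vw, vy + y * vh)

def to_local_coords(doc_xy, viewport):
    """Map a document position back into another participant's view."""
    dx, dy = doc_xy
    vx, vy, vw, vh = viewport
    return ((dx - vx) / vw, (dy - vy) / vh)

# Alice views the top-left quarter of the document; Bob views it all.
alice_view = (0.0, 0.0, 0.5, 0.5)
bob_view = (0.0, 0.0, 1.0, 1.0)
doc_pos = to_document_coords((0.5, 0.5), alice_view)  # centre of Alice's view
bob_pos = to_local_coords(doc_pos, bob_view)          # where Bob draws the cursor
```

Mapping to underlying objects (as the paper also proposes) would go one step further, anchoring the pointer to the object under `doc_pos` so it survives representation differences, not just viewport differences.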

Action as language in a shared visual space

D. Gergle, R. E. Kraut, and S. R. Fussell. Action as language in a shared visual space. In Proceedings of the Computer Supported Cooperative Work (CSCW’04), pages 487–496, Chicago, IL, USA, November 6-10 2004. Association for Computing Machinery. [pdf]

—————

A shared visual workspace allows multiple people to see similar views of objects and environments. Prior empirical literature demonstrates that visual information helps collaborators understand the current state of their task and enables them to communicate and ground their conversations efficiently. We present an empirical study that demonstrates how action replaces explicit verbal instruction in a shared visual workspace. Pairs performed a referential communication task with and without a shared visual space. A detailed sequential analysis of the communicative content reveals that pairs with a shared workspace were less likely to explicitly verify their actions with speech. Rather, they relied on visual information to provide the necessary communicative and coordinative cues.


Fluid annotations through open hypermedia: Using and extending emerging web standards

N. O. Bouvin, P. T. Zellweger, K. Grønbæk, and J. D. Mackinlay. Fluid annotations through open hypermedia: Using and extending emerging web standards. In Proceedings of WWW 2002, Honolulu, HI, USA, May 7-11 2002. Association for Computing Machinery. [pdf]

———

This paper describes a system called Fluid Annotations that is used to annotate web pages. The authors report an extensive rationale for why they chose this particular interaction mechanism for their system. The paper contains no evaluation.

Typographic conventions such as footnotes and marginalia have long been used to place supporting information on a static page without disrupting the primary information. Computer-based documents have recently augmented these conventions with hypertext links to supporting documents. Compared to static typography, hypertext has fewer limits on the size or complexity of an annotation, but at the cost of removing the supporting information from its context on the page.

We are exploring a new technique for annotation, called Fluid Documents, which uses lightweight interactive animation to incorporate annotations in their context. Our approach initially uses the space on a page for primary information, indicating the presence of supporting material with small visual cues. When a user expresses interest in a cue, its annotation gradually expands nearby. Meanwhile, the surrounding information alters its typography and/or layout to create the needed visual space.

We have demonstrated the value of Fluid Documents in two prototype applications. Fluid Links use animated glosses to support informed and incremental hypertext browsing, and Fluid Spreadsheets use animated graphics to make formulas and cell dependencies visible in a spreadsheet. We have also developed a “negotiation architecture” to support Fluid Document applications. This architecture allows the primary and supporting information to adjust their typography and layout appropriately.

Results of a recent observational study of subjects using Fluid Links indicate that the basic concepts underlying fluid documents can be effective: users can process moving text even in a serious reading situation, and providing information close to the anchor seems to be beneficial. Subjective preferences were varied, which suggests that architectures like our negotiation architecture, which supports multiple fluid techniques, may be crucial to user acceptance.

More on the project web site.


Khashee: a mobile phone software to assist religious practices

Khashee is specially designed to assist Muslims all over the world to get accurate prayer time alerts through their mobile handsets and to keep the mobile muted during prayer time, irrespective of location or time zone. Khashee also provides additional features like qibla direction, supplications, an events scheduler, fatwas, etc. All the features, like mute delay, duration for each prayer, location and time zone, are fully customizable.


Timeline visualization of workspace gazes

Premise: for an overview of the experiment I am currently analyzing, refer to this technical report.

Following my previous work on visualizing eye movements on a shared map during collaborative work at a distance, I came up with another visualization that shows how gazes alternate across the different components of the interface. I divided the workspace into three functional areas: (1) the shared map; (2) the composition pane of the chat window; (3) the history pane of the chat window. Then I aggregated the gaze movements into sequences of two or more fixations (100 ms or more in the same area) within the same functional area.

A sequence of fixations in the map window was colored orange on the timeline below. A sequence of two or more fixations in the composition pane was colored blue, and a sequence of two or more fixations in the history pane was colored red. As in the ShoutSpace condition (ss) there was no difference between the history and the composition pane (these two functions share the same space in the interface), I used an alternative color: yellow. Small marks indicate the posting of utterances, with the relative coding information.
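The aggregation step can be sketched roughly as follows (the data layout, function names, and thresholds are my assumptions for illustration, not the actual analysis code): fixations shorter than 100 ms are discarded, and consecutive retained fixations in the same functional area are merged into colored timeline segments.

```python
def aggregate_sequences(fixations, min_duration=100, min_run=2):
    """Merge consecutive same-area fixations into timeline segments.

    fixations: list of (timestamp_ms, duration_ms, area) tuples, where
    area is one of "map", "composition", "history".
    Returns (area, start_ms, end_ms) triples for runs of at least
    min_run fixations, each lasting at least min_duration ms.
    """
    sequences, run = [], []
    for fix in fixations:
        t, dur, area = fix
        if dur < min_duration:
            continue  # drop fixations below the 100 ms threshold
        if run and run[-1][2] == area:
            run.append(fix)  # extend the current same-area run
        else:
            if len(run) >= min_run:
                sequences.append((run[0][2], run[0][0], run[-1][0] + run[-1][1]))
            run = [fix]  # start a new run in a different area
    if len(run) >= min_run:
        sequences.append((run[0][2], run[0][0], run[-1][0] + run[-1][1]))
    return sequences

# Timeline colors per functional area, as described above.
COLORS = {"map": "orange", "composition": "blue", "history": "red"}

fix = [(0, 120, "map"), (120, 150, "map"), (270, 90, "map"),
       (360, 200, "composition"), (560, 110, "composition")]
segments = aggregate_sequences(fix)  # [("map", 0, 270), ("composition", 360, 670)]
```

Each returned segment maps directly to one colored bar on the timeline; the ss condition would simply merge "composition" and "history" into a single yellow category before aggregating.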

At a macro level, it is possible to see that in the MSN condition participants alternate between interface components at a higher frequency than in the other two conditions. This is because they need to rely on text to express positions in space and to analyze their partner’s intentions, while in the ss and cc conditions participants work mostly inside the map space, with immediate access to references and referent information. This has implications for task performance.

[Timeline bar graphs for experiments 6, 23, and 42]


CHI conference report, day 4

On the last day of the conference I attended many interesting talks. The first session was kids and family. The first paper was presented by J. A. Kientz and was titled: “Grow and Know: Understanding Record-Keeping Needs for Tracking the Development of Young Children“. The main idea presented was a platform supporting parents and other caregivers in recording relevant facts about the child, a sort of interactive baby book for storing relevant information.

Jonas Landgren presented “A Study of Emergency Response Work: Patterns of Mobile Phone Interaction“. The author presented an ethnographic account of the role of mobile phones in time-critical organizing, with some inspiration for designers of systems and applications for time-critical settings. Mobile phones are the common technological denominator for crisis response actors. Instead of thinking about other pieces of technology to give to these workers, we should think about designing better services that run on mobile networks.

In the afternoon I attended the session on programming by and with end-users. Jeffrey Wong presented a system called Marmite that helps users easily build mash-ups. There is much information on the web that is not always combined in a useful manner. The solution is mash-ups, but unfortunately these are difficult to build (e.g., programmableweb.org). Marmite is an environment for programming using examples.

In the same vein, J. Zimmerman presented “VIO: A Mixed-Initiative Approach to Learning and Automating Procedural Update Tasks“. The authors started from the same assumption: many mundane tasks are repetitive and learnable. Their system should learn these tasks and help the user perform them more efficiently. Their paper also contained a great literature review on forms and end-user programming.

Finally, I attended a session on social influence. Brooke Foucault presented a paper titled “Provoking Sociability“. The authors’ point was that negative social behavior might provoke positive social outcomes. They built a system augmenting gossip to enhance bonding and community formation. Loki is an agent that likes to gossip about its coworkers.
