Eye-Tracking Interface: Gamers’ Looks Can Kill

Stephen Vickers, of De Montfort University, UK, is working on this fascinating research project to allow eye-tracking control of real-time 3D games. There are many reasons why this kind of research is relevant today. Personally, I like two: first, these solutions can give disabled people the opportunity to participate in 3D online communities on an equal footing with able-bodied people; second, this technology can further extend the possibilities of able-bodied people in conjunction with other input modalities.

While we are beginning to build an extensive knowledge of eye-tracking interaction for 2D interfaces, few attempts have been made with 3D interfaces. This is because of the increased complexity of the interaction possibilities. In virtual worlds we need to perform a large suite of commands in order to move a character or avatar, change the viewpoint of the scene, and manipulate objects. Finally, we need an extra set of commands to communicate with other players.

The software developed by Vickers and his collaborators tackles this complexity by splitting these commands across different input modes, each of which restricts the range of possible commands. Glancing momentarily off-screen in a particular direction switches between these modes (e.g., to a mode that rotates the avatar or the viewpoint). The software also lets the user define gaze gestures for specific commands that do not belong to a particular mode, such as turning off eye-gaze control altogether, to avoid unintentional selections.
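Just to make the idea concrete, here is a rough sketch of how off-screen glances could drive such mode switching. This is not Vickers's implementation: the mode names, the dwell threshold, and the edge-to-mode mapping are all my own assumptions.

```python
# Hypothetical sketch of off-screen glance mode switching -- an illustration
# of the idea, not the software developed by Vickers and colleagues.

MODES = ["locomotion", "camera", "object_manipulation", "communication"]

SCREEN_W, SCREEN_H = 1920, 1080
MARGIN = 60      # pixels beyond the screen edge that count as an off-screen glance
DWELL_MS = 300   # how long the gaze must stay off-screen to trigger a switch


class ModeSwitcher:
    def __init__(self):
        self.mode = MODES[0]
        self._off_screen_since = None  # timestamp of the first off-screen sample

    def update(self, gaze_x, gaze_y, timestamp_ms):
        """Feed one gaze sample; returns the (possibly new) active mode."""
        direction = self._off_screen_direction(gaze_x, gaze_y)
        if direction is None:
            self._off_screen_since = None  # back on screen, reset the dwell timer
            return self.mode

        if self._off_screen_since is None:
            self._off_screen_since = timestamp_ms
        elif timestamp_ms - self._off_screen_since >= DWELL_MS:
            self.mode = self._mode_for(direction)
            self._off_screen_since = None
        return self.mode

    @staticmethod
    def _off_screen_direction(x, y):
        if x < -MARGIN:
            return "left"
        if x > SCREEN_W + MARGIN:
            return "right"
        if y < -MARGIN:
            return "top"
        if y > SCREEN_H + MARGIN:
            return "bottom"
        return None

    @staticmethod
    def _mode_for(direction):
        # Each screen edge is mapped to one input mode (an arbitrary mapping here).
        return {"left": "locomotion", "right": "camera",
                "top": "object_manipulation", "bottom": "communication"}[direction]
```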

Finally, this work was presented by Howell Istance at the Eye Tracking Research & Applications Symposium 2008 in Savannah, GA, USA, where I chaired the session on Gaze Interfaces.

PhD thesis submitted: Annotations of Maps in Collaborative Work at a Distance

The thesis is finally done. The defense is scheduled for the 2nd of June. Darren Gergle, Sebastiano Bagnara, and Denis Gillet are on the committee. I feel relieved now …

This thesis investigates how map annotations can be used to sustain remote collaboration. Maps condense the interplay of space and communication, resolving linguistic references by linking conversational content to the actual places to which it refers. This is a mechanism people are accustomed to: when we are face-to-face, we can point to things around us. At a distance, however, we need to recreate a context that can help disambiguate what we mean. A map can help recreate this context, but other technological solutions are required to allow deictic gestures over a shared map when collaborators are not co-located. This mechanism is here termed Explicit Referencing.

Two field experiments were conducted to investigate the production of collaborative map annotations with mobile devices, looking at why people might want to produce these notes and how they might do so. Both studies led to very disappointing results. The reasons for this failure are attributed to the lack of a critical mass of users (a social network), the lack of useful content, and limited social awareness. More importantly, the studies identified a compelling effect of the way messages were organized in the tested application, which caused participants to refrain from engaging in content-driven explorations and synchronous discussions.

This last qualitative observation was refined in a controlled experiment where remote participants had to solve a problem collaboratively, using chat tools that differed in the way a user could relate an utterance to a shared map. Results indicated that team performance is improved by the Explicit Referencing mechanisms. However, when this is implemented in a way that is detrimental to the linearity of the conversation, resulting in the visual dispersion or scattering of messages, its use has negative consequences for collaborative work at a distance. Additionally, an analysis of the eye movements of the participants over the map helped to ascertain the interplay of deixis and gaze in collaboration. A primary relation was found between the pair’s recurrence of eye movements and their task performance.

Finally, this thesis presents an algorithm that detects misunderstandings in collaborative work at a distance. It analyses the movements of collaborators’ eyes over the shared map, their utterances containing references to this workspace, and the availability of ‘remote’ deictic gestures. The algorithm associates the distance between the gazes of the emitter and gazes of the receiver of a message with the probability that the recipient did not understand the message.

Keywords: Computer-Mediated Communication, Computer-Supported Cooperative Work (CSCW), Deictic Gestures, Eye-Tracking, Human-Computer Interaction (HCI), Location-Based Services, Map Annotations, Remote Deixis, Spatial Cognition.

Deixis and gaze in collaborative work at a distance (over a shared map)

The paper I presented in Savannah, GA, at the Eye-Tracking Research and Applications symposium (ETRA) has now been published in the ACM Digital Library.

Cherubini, M., Nüssli, M.-A., and Dillenbourg, P. Deixis and gaze in collaborative work at a distance (over a shared map): a computational model to detect misunderstandings. In Proceedings of the International Symposium on Eye Tracking Research & Applications (ETRA2008) (Savannah, GA, USA, March 26-28 2008), Association for Computing Machinery, ACM Press. [url]

This paper presents an algorithm that detects misunderstandings in collaborative work at a distance. It analyses the movements of collaborators’ eyes on the shared workspace, their utterances containing references to this workspace, and the availability of ‘remote’ deictic gestures. This method is based on two findings: 1. participants look at the points they are talking about in their messages; 2. their gazes are denser around these points than random looks in the same timeframe. The algorithm associates the distance between the gazes of the emitter and the gazes of the receiver of a message with the probability that the recipient did not understand the message.
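To illustrate the heuristic, here is my own simplified sketch, not the algorithm published in the paper: the fixation window, the distance measure, and the logistic mapping are all assumptions.

```python
# Hypothetical sketch of the gaze-distance heuristic described above -- a
# simplification for illustration, not the authors' published algorithm.
import math


def gaze_divergence(emitter_fixations, receiver_fixations):
    """Mean distance between the emitter's and the receiver's fixations
    around the time a referring message is sent.

    Each argument is a list of (x, y) map coordinates recorded in the
    window surrounding the message."""
    if not emitter_fixations or not receiver_fixations:
        return None
    # For each emitter fixation, take the distance to the closest receiver fixation.
    dists = [min(math.dist(e, r) for r in receiver_fixations)
             for e in emitter_fixations]
    return sum(dists) / len(dists)


def misunderstanding_probability(divergence, scale=150.0):
    """Map a divergence (in pixels) to a probability-like score in [0, 1].

    The logistic shape and the `scale` parameter are assumptions: the larger
    the distance between the two gaze clouds, the more likely the recipient
    did not resolve the reference."""
    if divergence is None:
        return 0.5  # no gaze data: stay agnostic
    return 1.0 / (1.0 + math.exp(-(divergence - scale) / (scale / 4)))


# Example: the emitter looked around (100, 120) while the receiver looked
# around (400, 500) -- a large divergence, hence a high misunderstanding score.
emitter = [(100, 120), (110, 130), (95, 118)]
receiver = [(400, 500), (390, 510)]
print(misunderstanding_probability(gaze_divergence(emitter, receiver)))
```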

More

International Motor Show, Geneva

Two weeks ago, I visited the 78th International Motor Show in Geneva. It was a very interesting experience. I was particularly interested in seeing all the hybrid cars being proposed as innovative. To my surprise, there were only a few. Lots of SUVs and extremely expensive, shiny, polluting cars instead. 🙁


For instance, my attention was captured by the great color of the new Alfa Romeo cars on display. They spent a great deal of money developing a metallic paint that shines like hell. They did not care about the cars’ fuel consumption, though.


Then I visited the stand of TATA Motors, an Indian group that is taking a different approach to car design. Instead of striving for style, they look at consumption and costs. They are producing a car called the Tata Nano, which is going to market at an initial price of 2,000 euros. The car sports a 750 cc engine that cuts the usual fuel consumption of a city car by a quarter. I consider this a killer product.


Finally, it is worth mentioning the PIVO2, a concept car from Nissan. First of all, the car sports a hybrid engine, something rare for its size. The design also looks really futuristic. The car has only one door, at the front, and the cabin and the wheels turn in all directions, making it perfect for parking in densely urbanized areas.


Feral Robots and Environmental Health Clinic

I have always been fascinated by the work of Natalie Jeremijenko. She is an artist whose background includes studies in biochemistry, physics, neuroscience and precision engineering.

Lately she has been working on installations and projects to make people aware of their ecological footprints. One of her projects I reviewed in the past was OneTrees: a large set of cloned trees that she planted around the San Francisco area. Because of their genetic similarity, their differences in growth can be attributed to the different levels of CO2 to which they are exposed. She therefore thought of using the trees as CO2 sensors.

More recently she worked on the Feral Robots project, an open-source robotics project providing resources and support for upgrading the raison d’être of commercially available robotic dog toys. Because the dogs follow concentration gradients of the contaminants they are equipped to sniff, their release renders information legible to diverse participants, provides the opportunity for evidence-driven discussion, and facilitates public participation in environmental monitoring and remediation.


Last in class

A couple of days ago, I received a message on the MIT mailing list. Some journalists are making a documentary film about people who finished “last in class” and the lives they have lived since. I found it extremely interesting, as many people who drop out of formal education end up doing great things as well. This is because, I think, each person has different cognitive abilities and ways of acquiring knowledge (a.k.a. learning).

For instance, I think I have good mathematical skills; however, I failed the calculus course twice during my first year in the physics program. I was working hard, but my brain just refused to see things the way the teacher was pushing me to see them.

Anyway, they are still looking for people who fit the bill. If you know somebody who might help, just pass along the call.

Everyone remembers the college valedictorian, but what about the students who ranked last in the class?

I know what it’s like to be near the bottom (notice the C+ in Archery) and I can tell you that hanging on for dear life is every bit as challenging as finishing first.

Join us as we explore the lives of people not originally destined for greatness. We’re seeking those who finished at – or very near – the bottom of their classes. Was that ranking an indicator of things to come or was it an aberration? I know my story, but what about the others? With your help, we can uncover the secrets to finishing last and learn what they tell us about the future.

[more]

iPoint Presenter: gesture-driven computers

At the CeBIT conference in Hannover, Germany, Fraunhofer Institute researchers will present new human-computer interfaces that demonstrate how computers can be operated by gesturing or pointing a finger. The iPoint Presenter uses a series of cameras to observe a person standing in front of a projection screen. When users start moving their hands, the computer reacts without being touched. Users can point to buttons or use gestures to manipulate virtual objects. Multipointing interaction enables users to issue commands using multiple fingers for tasks such as rotating, enlarging, or minimizing objects. Fraunhofer scientist Paul Chojecki says the iPoint Presenter is unique because it is entirely contact-free, making it ideal for use in an operating theater or during a presentation in a large auditorium.
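To give a flavour of what a multipointing command boils down to, here is a purely illustrative sketch that has nothing to do with Fraunhofer’s actual software: the fingertip tracking itself is assumed to be done by the cameras, and only the step from fingertip coordinates to a zoom command is shown.

```python
# Illustrative sketch only: deriving an enlarge/minimize command from two
# tracked fingertips. The camera-based tracking is assumed to exist upstream.
import math


def zoom_factor(prev_tips, curr_tips):
    """prev_tips and curr_tips are ((x1, y1), (x2, y2)) fingertip pairs.
    Returns >1 when the fingers spread apart (enlarge), <1 when they pinch."""
    prev_spread = math.dist(*prev_tips)
    curr_spread = math.dist(*curr_tips)
    if prev_spread == 0:
        return 1.0
    return curr_spread / prev_spread


# Fingers move from 100 px apart to 150 px apart -> enlarge the object by 1.5x.
print(zoom_factor(((200, 300), (300, 300)), ((175, 300), (325, 300))))
```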

Meanwhile, researchers from the Fraunhofer Institute for Digital Media Technology (IDMT) are teaching computers to understand human gestures and are developing a method for automatically recognizing different hand signals. A prototype containing an intelligent camera connected to a computer running IDMT pattern recognition software will be at the conference where it will record and analyze visitors’ gestures, converting the hand signals into machine commands.

[More]


Copyright notice: the present content was taken from the following URL; the copyrights are reserved by the respective author(s).

SNiFTAG: datalogging for your pet

SNIF Tag is a matchbook-sized wearable computer for your dog. Small, comfortable, and stylish, the SNIF Tag clips securely to your dog’s collar. Using the latest wireless technology, the Tag records and transmits a record of your pup’s activities and encounters with other SNIF dogs to a sleek base station in your home.


The SNIF Tag is made of durable plastic composites, is water-resistant, and easily clips on and off your dog’s collar for once-a-week charging. The Tag and Base Station aren’t just cutting edge technology, they’re cutting edge design, too: the Tag’s faceplate can be customized to suit your personal style and the discreet, streamlined Base Station is as much sculpture as it is cutting edge technology.

The SNIF website is intuitive and easy to use. When you log in, you’ll find all the information your dog’s SNIF Tag has recorded: the other dogs he’s met; his activities and exercise logs; and all kinds of helpful tools to help you understand the life of your pet better. For the first time, you can check in on your pet’s overall health from anywhere in the world with an Internet connection.


[more]

Copyright notice: the present content was taken from the following URL; the copyrights are reserved by the respective author(s).