A Nice Gesture by Jeroen Arendsen

Various personal interests and public info, gesture, signs, language, social robotics, healthcare, innovation, music, publications, etc.

Month: February 2009

Eila Goldhahn

Eila Goldhahn:
What can be learnt from the MoverWitness Exchange for the development of gesture-based human-computer interfaces?

Goldhahn gives a very cloudy talk about people being movers and witnesses, and holds up Dürer’s famous woodcut of perspective drawing. I am totally missing the point. We engineers should all be ‘movers’ as well? So we can share a more embodied knowledge with each other or with our ‘subjects’? Really, no idea what she is trying to get at. But it must be my limited engineer’s point of view or something.

Fortunately she is going to show us some videos. Perhaps it will become clearer now.
– A man is licking a wall, apparently enjoying a very deep sensory, haptic, embodied experience…
– A woman is looking like she needs to go to the bathroom…
– Ah, a nice one with people falling/flying. She mentions how associations and imagination can play a role in our perceptions (really?) and how these can mediate between the mover and the witness. Good point.

Asked for a more concrete example of what is missing in ‘our methods’, she points out how, in Stoessel’s talk on the elderly, the researchers could have engaged with the movements of the elderly in a more open way. One could let the elderly talk about how they had experienced the movement and then see whether this coincides with the witness’s observation of the movement. Hmm, interesting.

Christian Stoessel Helps the Elderly

Christian Stoessel, Hartmut Wandke & Lucienne Blessing:
Gestural interfaces for elderly users: Help or hindrance?
Publications here

Christian starts out by pointing towards the changing demographics (Dutch ‘vergrijzing’). There will be many elderly people. Or perhaps better: many people over 65, because these people may be healthier in body and mind than previous generations of elderly (is that true?).

There is a somewhat optimistic view of the potential of gesture technology, in the sense that he thinks it is possible to identify sets of ‘intuitive’ gestures for a gestural interface. In the end, though, he is measuring the accuracy with which people performed gestures, rather than whether they were intuitive or not.

Regarding the feasibility of creating a set of ‘intuitive’ gestures: he expands nicely on ‘intuitiveness’ as something fuzzy. They don’t mean that a gesture is intuitive from the start, but perhaps that it will be easier to remember.

Frédéric Landragin Puts-That-There Again

Frédéric Landragin:
Effective and spurious ambiguities due to some co-verbal gestures in multimodal dialogue
Publications here and here

Landragin talks about ‘put-that-there‘, the classic multimodal interface developed at MIT. He also developed a similar application.

He did his PhD in Nancy, but a Dutch Professor of Computational Linguistics, called Henk Zeevat, was one of his promotors. Zeevat is at the UvA… “The Institute for Logic, Language and Computation (ILLC) is a research institute of the University of Amsterdam, in which researchers from the Faculty of Science and the Faculty of Humanities collaborate”.

The content of his presentation revolves around a single idea that I find puzzling. He treats the transitional movement between the that-deictic and the there-deictic as a gesture that says something about the manner in which ‘that’ is supposed to be ‘put’ ‘there’. I would contend that, normally speaking, no meaning resides in the transitional movement.

It starts getting interesting though, as he introduces ‘move that there’ as an indication that a path is intended with the transitional movement. I can imagine the difference between ‘put’ and ‘move’. Moreover, he says that the nature of ‘there’ depends on the nature of ‘that’. If ‘that’ is a carpet, then ‘there’ may be broad. If ‘that’ is a nail, then ‘there’ is probably quite precise. Good point, if you’ll excuse the expression.
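
Just to make that last idea concrete for myself, here is a minimal sketch of a ‘put-that-there’ style resolver in which the tolerance of the ‘there’ location scales with the size of the referent. This is entirely my own toy illustration, not Landragin’s system; all names and numbers are made up.

```python
from dataclasses import dataclass
import math

@dataclass
class Obj:
    name: str
    x: float
    y: float
    size: float  # rough diameter in metres

def resolve_there(pointed_x, pointed_y, referent: Obj, free_spots):
    """Pick a target spot for 'there': the tolerance radius around the
    pointed-at location grows with the size of the referent ('that')."""
    tolerance = max(0.05, referent.size)  # a nail gets ~5 cm, a carpet metres
    candidates = [
        (x, y) for (x, y) in free_spots
        if math.hypot(x - pointed_x, y - pointed_y) <= tolerance
    ]
    # return the candidate closest to where the user pointed, if any
    return min(candidates,
               key=lambda p: math.hypot(p[0] - pointed_x, p[1] - pointed_y),
               default=None)

carpet = Obj("carpet", 0.0, 0.0, 2.0)
nail = Obj("nail", 0.0, 0.0, 0.01)
spots = [(1.0, 1.2), (0.3, 0.1)]
print(resolve_there(1.0, 1.0, carpet, spots))  # broad 'there': (1.0, 1.2) is fine
print(resolve_there(1.0, 1.0, nail, spots))    # precise 'there': nothing close enough -> None
```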

GW2009 Keynote: Antonio Camurri

Keynote: Antonio Camurri (also here)
Toward computational models of empathy and emotional entrainment

Casa Paganini, InfoMus, EyesWeb

Camurri has already done a lot of interesting work on movement and gesture, all of it in the ‘expressive corner’, working with dance and with music.

He just talked about a really nice application: he created a system to paint with your body movements, but it does so only if you move without hesitation. So patients with hesitant movements (Parkinson’s?) get a stimulus to move more fluently.
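
I have no idea how his system actually detects hesitation, but here is a toy sketch of the principle, purely my own guess with an arbitrary smoothness threshold: estimate how fluent the hand trajectory is, for instance via its jerk, and only let the brush paint when the movement is smooth enough.

```python
import numpy as np

def fluency_gate(positions, dt=1/30, jerk_threshold=50.0):
    """My own toy version of 'paint only when moving without hesitation':
    estimate the mean jerk magnitude over a window of hand positions and
    open the gate only when the movement is smooth enough.
    positions: (N, 2) array of hand coordinates; the threshold is arbitrary
    and only tuned to the fabricated data below."""
    p = np.asarray(positions, dtype=float)
    vel = np.gradient(p, dt, axis=0)
    acc = np.gradient(vel, dt, axis=0)
    jerk = np.gradient(acc, dt, axis=0)
    mean_jerk = np.mean(np.linalg.norm(jerk, axis=1))
    return mean_jerk < jerk_threshold  # True -> the stroke gets painted

# a smooth sweep vs. a hesitant, stop-and-go movement (fabricated data)
t = np.linspace(0, 1, 30)
smooth = np.c_[t, np.sin(np.pi * t)]
hesitant = np.c_[np.floor(t * 6) / 6, np.zeros_like(t)]
print(fluency_gate(smooth), fluency_gate(hesitant))  # True False
```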

Next, something that is part of Humaine: the visibility of emotion in musicians’ movements (not in the sounds). There were previous talks in this area:

Florian Grond, Thomas Hermann, Vincent Verfaille & Marcelo Wanderley:
Methods for effective ancillary gesture sonification of clarinetists

Rolf Inge Godøy, Alexander Refsum Jensenius & Kristian Nymoen:
Chunking by coarticulation in music-related gestures

Next, work with Gina Castellana (?): influencing the way you listen to music through movement and gesture. Nice video.

There is also work on robotic interfaces: a ‘concert for trombone and robot’ (Stockhausen, Milano). The robot had a radio and drove around, so both spatially and in its playing it had to be in tune with the trombone player. This was a collaboration with S. Hashimoto and K. Suzuki (Waseda University). See here for a publication.

He also worked together with Klaus Scherer from Geneva. Gael talked about Scherer’s work on the emotions as being quite good.

Camurri seems to be involved in many European networks and projects.

He is now explaining a project on synchronization. Quite interesting stuff about violin players (treated as coupled oscillators) trying to get synchronized with a manipulated signal or with each other. It is going too fast to write much about it, but it all looks really nice: violinists synchronizing their movements. And he is making much of a concept called ’emotional entrainment’. There is a decent explanation of the term here, but I’ll quote it:

A Quote by Daniel Goleman on emotional entrainment, influence, charisma, and power
Setting the emotional tone of an interaction is, in a sense, a sign of dominance at a deep and intimate level: it means driving the emotional state of the other person. This power to determine emotion is akin to what is called in biology a zeitgeber (literally, “time grabber”), a process (such as the day-night cycle or the monthly phases of the moon) that entrains biological rhythms. For a couple dancing, the music is a bodily zeitgeber. When it comes to personal encounters, the person who has the more forceful expressivity – or the most power – is typically the one whose emotions entrain the other. Dominant partners talk more, while the subordinate partner watches the other’s face more – a setup for the transmission effect. By the same token, the forcefulness of a good speaker – a politician or an evangelist, say – works to entrain the emotions of the audience. That is what we mean by, “He had them in the palm of his hand.” Emotional entrainment is the heart of influence.
Daniel Goleman (Harvard PhD, author, behavioral science journalist for The New York Times)
Source: Emotional Intelligence: Why It Can Matter More Than IQ, p. 117

An interesting remark about the violinists who synchronized with an adjusted signal: they did not hear their own sound but a pitch-manipulated version of it, so what they played did not match what they heard. At some point these players got motion sickness…
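
For my own understanding of the ‘players as oscillators’ framing, here is a toy Kuramoto-style sketch. It is my caricature, not Camurri’s model: each oscillator nudges its phase toward the group, and with enough coupling they entrain; coupling to a detuned or manipulated reference signal could be modelled in the same way.

```python
import numpy as np

def kuramoto(n_players=50, coupling=1.5, steps=2000, dt=0.01, seed=0):
    """Toy Kuramoto model: phases theta_i with slightly different natural
    tempi omega_i; each oscillator nudges its phase toward the others.
    Returns the order parameter r in [0, 1]; r near 1 means entrainment."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi, n_players)
    omega = 2 * np.pi * (1.0 + 0.05 * rng.standard_normal(n_players))
    for _ in range(steps):
        # each oscillator is pulled by every other one
        dtheta = omega + (coupling / n_players) * np.sum(
            np.sin(theta[None, :] - theta[:, None]), axis=1)
        theta = theta + dt * dtheta
    return np.abs(np.mean(np.exp(1j * theta)))  # phase coherence

print(kuramoto(coupling=0.0))  # no coupling: little synchrony
print(kuramoto(coupling=2.0))  # strong coupling: r close to 1
```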

Now there is a weird video from the opera, where a man and a woman use a chair to communicate (?). He lost me there for a moment.

Announcement: eNTERFACE 2009, European Workshop on Multimodal Interfaces, 13 July – 7 Aug, Casa Paganini (here)

Questions:
– About publications: you can download them from ftp.infomus.org/pub/camurri

Matthieu Aubry and others on Movement Synthesis from France

Matthieu Aubry, Frédéric Julliard & Sylvie Gibet:
Modeling joint synergies to synthesize realistic movements

Daniel Raunhardt (here) & Ronan Boulic (here):
Controlling gesture with time dependent motion synergy constraints

This morning started with two talks from French researchers, both about movement synthesis, and both from a computer science perspective. I was late for the first and didn’t catch the point of the second, so I cannot say much about them. It does appear that the French, especially around Sylvie Gibet (gesture) and Annelies Braffort (LSF), are concentrating on synthesis rather than on recognition. And they seem to be making good progress with a series of good computer science students (Segouat was another). They do, however, appear to be very strong on computer skills but perhaps less strong on gesture knowledge. But I could be mistaken; it is only a first impression.

The next Gesture Workshop

There is already an offer from Eleni Efthimiou to host the next Gesture Workshop in Athens, in 2011.
This is her page at the ILSP / R.C. “Athena”.

The 2010 ISGS conference

There was a good announcement from the ISGS, voiced by secretary Judith Holler.

The 4th conference of the ISGS will take place in Frankfurt an der Oder and Berlin, Germany.
Cornelia Mueller, Ellen Fricke, Hedda Lausberg and Katja Liebal are organizing it, 20–25 July 2010.

Ellen Fricke

Ellen Fricke:
Sign or non-sign? Abstraction and concretization in an Uexküll-based model for multimodal interaction
her page

Her talk inspired me to read up on Jakob von Uexküll and biosemiotics. Interesting stuff, though somewhat elusive. I can’t say much about it yet; I’m still chewing on it. But there are some thoughts brewing about gesturing plants and gesturing animals, and those I find quite interesting.

Alexis Heloir on Languages to Describe Gestures

Alexis Heloir & Michael Kipp:
Requirements for a gesture specification language
His page

Alexis is an old online acquaintance; see for example this story concerning vervets that he pointed out to me. He has been working on the synthesis of gesture in a project called Samsara. He recently finished his PhD at the Université de Bretagne Sud and started working in Saarbrücken, Germany.

Sadly enough, I missed his lecture this morning (sorry Alexis), as I was busy catching up on sleep after a few weeks of too little. He talked about different gesture description languages: BML, the Behavior Markup Language; MURML, a markup language developed by Stefan Kopp & co. for Max and other ECAs; and something he calls LV, which was developed for French Sign Language. Apparently he pointed out some shortcomings of MURML, which sparked comments from the home crowd here. But well, I guess I should have been there to say anything about it. 🙁
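
Since I was not there, just a note to self on what such specification languages are about: they describe units of behaviour (gesture strokes, speech, gaze) plus synchronisation constraints between them. Below is a made-up, language-agnostic sketch of that idea; it is not the actual syntax of BML, MURML, or LV, and all names are my own.

```python
from dataclasses import dataclass, field

@dataclass
class Behaviour:
    """One unit of behaviour for a virtual agent (names are illustrative)."""
    kind: str                        # e.g. "speech", "gesture", "gaze"
    content: str                     # e.g. a word string or a gesture lexeme
    start_after: str | None = None   # id of the behaviour this one waits for
    id: str = ""

@dataclass
class Utterance:
    behaviours: list = field(default_factory=list)

    def schedule(self):
        # naive linear scheduler: unconstrained behaviours start at t = 0,
        # constrained ones start right after the behaviour they reference
        times = {}
        for b in self.behaviours:
            start = times.get(b.start_after, 0.0)
            times[b.id] = start + 1.0  # pretend everything lasts one second
            print(f"{b.kind:8s} '{b.content}' starts at t={start:.1f}s")
        return times

u = Utterance([
    Behaviour("speech", "Put that there", id="s1"),
    Behaviour("gesture", "deictic-point", start_after="s1", id="g1"),
])
u.schedule()
```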

Isabel Galhano Rodrigues

Isabel Galhano Rodrigues:
Gesture choreography and gesture space in European vs. African Portuguese interactions
her page.

She talked about cultural differences regarding proxemics and ‘touch gestures’.
Hmmm, she just showed two really nice videos, one with Portuguese students and one with Angolan students, who also speak Portuguese. They differ quite a lot in their gesturing, particularly in their use of space.
Rodrigues points out that the Africans have little problem with gesturing close to other people, whereas the Europeans appear to maintain a larger ‘body buffer’.

Question: Were they representative of their ‘group’? Answer: this was just a first recording that I analysed.

