A Nice Gesture by Jeroen Arendsen

Various personal interests and public info, gesture, signs, language, social robotics, healthcare, innovation, music, publications, etc.

Category: Academia

Stuart Battersby on Interaction Studies Experiments

Patrick G. T. Healey, Chris Frauenberger, Marco Gillies & Stuart Battersby:
Experimenting with non-verbal interaction

For me personally, one of the more interesting posters was explained to me by Stuart Battersby (his page), of Queen Mary, University of London. He was working with theory from interaction studies, which I would also like to apply in robotics. He did not work on robotics himself, but he wanted to create an experimental environment to test observations from interaction studies. This would be done with virtual environments where people communicate not directly but through avatars. Then he could tinker with the avatars' behavior without the participants knowing. Very clever.

Some literature he was using:
– Most of Kendon’s work (Conducting Interaction, papers from the ’90s): F-formations
– Furuyama (here)
– Early work by Asli Özyürek (regarding spatial relationships)
– Older stuff from Goffman, Scheflen…

Do I Share Common Ground with Holler?

Judith Holler & Katie Wilkin:
Gesture use on common ground
Her page.

Thank god, she starts off by stating that the grand theories about why we gesture are not necessarily mutually exclusive. Hear, hear!

A nice explanation of common ground: what it is, how people build up common ground between them, and how people adjust their speech as they share more or less common ground, e.g. experts versus novices amongst each other. Nice reference to the 2004 Gerwing and Bavelas study.

Common ground versus non-common ground conditions. Surprisingly, participants did not show a reduced gesture rate when they shared common ground, contrary to what Holler expected. Gestures also did not become smaller or less informative; they stayed rather large and informative. Perhaps, because the speakers were grounding the conversation, they kept much information in gestures so the speech could become more ‘elliptical’ (?)

Yoshioka on Gesture in Different Languages

Keiko Yoshioka:
Gestural reference to space by speakers of typologically different languages
some publication here and here

This talk was introduced by De Ruiter as a nice follow-up to Asli’s talk. Keiko introduces Talmy’s theory of ‘satellite-framed languages’ (English, Dutch, Chinese) versus ‘verb-framed languages’ (Spanish, etc.). She explains how, in a verb-framed language, it is less easy to ‘compact Ground elements’. These are new concepts to me, and I have a hard time staying with her. She quotes Slobin (1996) a lot, who showed that speakers of a verb-framed language allocate more attention to Ground, which shows in their preference for a certain rhetorical style.

She compared Dutch and Japanese speakers, who retold a story called ‘Frog, where are you?’. She is also comparing head-initial and head-final languages. She compares speech with and without gestures, and whether the Ground was referenced in the verb phrase (VP) or in the non-verb phrase. Damn, if only I had studied comparative linguistics better…

Take-home message: because our languages differ, we differ in the placement and the content of our gestures.

Keynote by Asli Özyürek

Keynote: Asli Özyürek
The role of gesture in production and comprehension of language: Insights from behavior and brain

Asli is giving a good keynote, presenting a solid overview of current gesture theory. I do have some trouble following her presentation of two different ‘grand hypotheses’ about gestures and speech. She seems to polarize the work on gestures into two views, which may not be necessary in most cases. Poor old Krauss is still being held up as a straw man who thought gestures were not intended to communicate. In my own presentation I skipped this part, assuming everyone would already agree on gestures being movements intended to communicate, but it turns out that is not the case yet.

For example, Hedda talks about fidgeting as ‘self-touch gestures’, disregarding the difference between movements that communicate and movements that are intended to communicate. Other people I talk to also sometimes question whether gestures are intended to communicate. Quite a surprise, I must say. The overall level of knowledge of the current state of gesture research is surprisingly low. Kendon’s 2004 book, for example, is not something you can rely on as a shared source of knowledge. McNeill’s 1992 book is more or less common knowledge, but that includes some of the misconceptions that have arisen from it. For example, people are not properly aware of its limited scope: it was solely about certain types of co-speech gestures, not about all gestures. And the difference is very important if you are talking, as I did, about emblems. Or if you consider, like Kendon (1995), the differences between emblems and other gestures to be graded.

Anyway, Asli is using the opportunity to recount all the ‘Sylvester and Tweety Bird’ work that followed McNeill’s work. Bit by bit she is demonstrating the extent to which gestures and speech are intertwined. This is really a classic collection of work, performed by her and by her colleagues and other co-researchers.

Some quotes:
“gestural differences (in study 1) are not due to deep culture and language-specific representation”
“gesturers from all language backgrounds used an SOV order when asked to pantomime something without speech…. We sometimes refer to it as the cognitive default language…”
“what can be packaged semantically in a clause determines the gesture… the effect is not absolute!”

She also presents the work on brain research, done at the FC Donders centre with Hagoort and Willems.
Quite a few questions from different directions.

References for Jérémie Segouat

Jérémie Segouat & Annelies Braffort:
Toward modeling sign language coarticulation

I promised to refer Mister Segouat to early work on sign languages that also treats the rich aspects of signed languages:

1. The historical overview in the first chapters of Kendon (2004), Gesture, visible action as utterance
2. Wilhelm Wundt, The Language of Gestures (e.g. 1973 English edition)
3. Tylor, Edward B. (1870) Researches into the Early History of Mankind. London, John Murray.

The book by Adam Kendon is also an excellent source to read up on the relationships between sign language and other kinds of gestures. It even has a specific chapter on it. Alternatively you could check Kendon’s recent paper on this subject in Gesture 8(3).

The French are, by the way, present in some numbers. Their work on sign language synthesis is quite interesting, and Segouat’s work on coarticulation is also quite interesting. Their treatment of other gestures is, however, in my view, not in line with most current insights in the general nature of gesture. But perhaps my treatment of sign language is, in their view, not in line with most current insights either 🙂

NEUROGES by Hedda and Uta

Hedda Lausberg, Uta Sassenberg:
The Neuropsychological Gesture Coding System (NGCS)

Uta and Hedda are presenting a system for annotating gestures that was developed for psychiatry. Hedda referred to coding the beginning and the end of the movement, and how this leads to a certain interrater agreement. It reminds me of the time I also needed to code movement phases. In my 2007 paper (here) I gave an overview of the reliability between raters with regard to movement phases. I found that raters can sometimes disagree by about 200 ms on the beginning of the preparation or the stroke, for example. But most of the time the difference is within about 80 ms, or two frames.
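The kind of boundary comparison I mean could be sketched roughly as follows. The numbers here are made up for illustration (not my 2007 data), and the 40 ms per frame assumes 25 fps (PAL) video, which is what makes 80 ms come out as two frames:

```python
# Sketch: compare two raters' annotated phase onsets (times in ms).
# Illustrative numbers only; 40 ms per frame assumes 25 fps video.
FRAME_MS = 40

rater_a = {"preparation": 120, "stroke": 480, "retraction": 900}
rater_b = {"preparation": 160, "stroke": 440, "retraction": 1100}

def boundary_disagreement(a, b, frame_ms=FRAME_MS):
    """Absolute onset difference per movement phase, in ms and in frames."""
    return {
        phase: {"ms": abs(a[phase] - b[phase]),
                "frames": abs(a[phase] - b[phase]) / frame_ms}
        for phase in a
    }

for phase, d in boundary_disagreement(rater_a, rater_b).items():
    print(f"{phase}: {d['ms']} ms ({d['frames']:.1f} frames)")
```

With per-phase differences like these, you can then report how often raters stay within a tolerance of, say, two frames.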

An important paper in this area, which I also used, is:
Kita, S., Gijn, I. v., & Hulst, H. v. d. (1998). Movement Phases in Signs and Co-speech Gestures, and Their Transcription by Human Coders. Lecture Notes in Computer Science, 1371, 23.

A good thing about this NEUROGES, or NGCS, is that it is implemented as a template for ELAN, so you can just use ELAN to code according to NGCS.

– De Ruiter mentions that he developed a method for assessing interrater reliability for temporal intervals. I should perhaps also look at that.
– Miss ? asks about ‘self-touch gestures’, which I call fidgeting. Hedda mentioned that self-touch gestures are indicative of mental stress, and that they serve self-regulation. In other words, they are important to watch for in psychiatry. Well, I know of this view of fidgeting, but I don’t support it. There is probably a correlation somewhere, but I don’t think it can be used very productively. A colleague of mine in our lab tried looking for fidgeting movements to learn something about product experience, but I think it was all way too subtle to draw any conclusions.

Link: At Noldus, Hedda also talked about NGCS (here)

What’s going on at the GW2009

Hello, dear reader. Did you know there is an interesting gesture conference going on at the moment? Or are you in fact reading this blog from the very ‘plenarsaal’ in Bielefeld’s ZiF, this year’s home of the Gesture Workshop (for previous occasions of the workshop, see here), where I am typing this blog entry? It could be, because all around me I see people who are interested in gesture and sign language using their laptops while the discussions are going on.

It’s quite a stimulating little workshop 🙂

At the moment Kirsten Bergmann is taking questions from Jan Peter de Ruiter and Asli Özyürek about how she and her colleague Stefan Kopp implemented their theories about gesture and speech in an Embodied Conversational Agent (ECA). De Ruiter just complimented her on the ‘best model of gesture production he has seen so far’, and it is indeed quite an impressive and comprehensive treatment of gesture and speech synthesis.

Time for a coffee now, but I will be posting some entries in the coming days about the conference, highlighting stuff that I find to be particularly interesting. If you couldn’t make it but are interested then check back regularly to get a bit of the flavor of the workshop.

Me at the FG2008

I would almost forget, but I also presented some work at the FG2008 conference: Acceptability Ratings by Humans and Automatic Gesture Recognition for Variations in Sign Productions.

Abstract: In this study we compare human and machine acceptability judgments for extreme variations in sign productions. We gathered acceptability judgments of 26 signers and scores of three different Automatic Gesture Recognition (AGR) algorithms that could potentially be used for automatic acceptability judgments, in which case the correlation between human ratings and AGR scores may serve as an ‘acceptability performance’ measure. We found high human-human correlations, high AGR-AGR correlations, but low human-AGR correlations. Furthermore, in a comparison between acceptability and classification performance of the different AGR methods, classification performance was found to be an unreliable predictor of acceptability performance.

Snapshots of the three signs used in the experiment.

Examples of three manipulations of the sign SAW. We tested about 68 sign manipulations in total. These were run through the automatic recognition algorithms we had been working on and they were rated by human signers. The paper is about how humans and machines can be compared.
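The ‘acceptability performance’ measure from the abstract is simply a correlation between human ratings and AGR scores per manipulation. A minimal sketch, with made-up numbers (not the data from the paper):

```python
# Sketch of the 'acceptability performance' idea: correlate mean human
# acceptability ratings with one AGR algorithm's scores, per manipulation.
# All numbers below are invented for illustration.
import math

human_ratings = [4.2, 3.8, 1.5, 2.9, 4.5, 1.1]  # mean human rating per manipulation
agr_scores = [0.9, 0.7, 0.6, 0.4, 0.8, 0.5]     # AGR score per manipulation

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# A high r would mean the AGR score tracks human acceptability well;
# in the paper, the human-AGR correlations came out low.
print(f"acceptability performance: r = {pearson(human_ratings, agr_scores):.2f}")
```

The same routine, run human-vs-human and AGR-vs-AGR, gives the within-group correlations the abstract contrasts with the low human–AGR ones.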

Nadia Magnenat-Thalmann at the FG2008

One of the more interesting lectures at the FG2008 conference was a keynote speech delivered by Nadia Magnenat-Thalmann, director of the MIRALab in Geneva. She talked about Communicating with a Virtual Human or a Robot that has Emotions, Memory and Personality. She went far beyond the simplistic notion of expressing ‘the six basic emotions’ and talked about how mood, personality and relationships may affect our facial expressions.

The talk by Magnenat-Thalmann focused on facial expression. (source)

By coincidence I got an invitation to write a paper for another conference, organized by Anton Nijholt and Nadia Magnenat-Thalmann (and others), called the Conference on Computer Animation and Social Agents (CASA 2009). It is organized by people from the University of Twente but held in Amsterdam. Call for papers: deadline February 2009.

Nadia also mentioned a researcher at Utrecht University called Arjan Egges. He got his PhD at the MIRALab and is now working on “the integration of motion capture animation with navigation and object manipulation”.

Gestures in language development

Gesture 8:2 came out recently. It is a special issue on ‘Gestures in language development’. Amanda Brown, a friend who stayed at the MPI doing PhD research, published a paper on Gesture viewpoint in Japanese and English: Cross-linguistic interactions between two languages in one speaker. Marianne Gullberg, Kees de Bot and Virginia Volterra wrote an introductory chapter ‘Gestures and some key issues in the study of language development‘. Kees de Bot (LinkedIn) is a professor in Groningen working on (second) language acquisition.

