A Nice Gesture by Jeroen Arendsen

Various personal interests and public info, gesture, signs, language, social robotics, healthcare, innovation, music, publications, etc.

Month: January 2007

iPhone: Add a Touch of Gesture

I cannot buy my iPhone yet
They tell me it may come in June
A Zune perhaps I’ll buy instead?
Or sing along with Apple’s Tune?

The iPhone user interface is a fabulous piece of design as one might expect. They got many details right which is harder than it looks. And the screen space doubles as a touchscreen that allows multi-finger gestures in various flavours.

People say: Apple kidnapped the Fingerworks inventors and used their ideas…

Which is my favourite gesture amongst the tapping, scrolling, dragging, and pinching that tumbles over the touchscreen? I think I like the speed-sensitive scrolling best, because it reminds me of spinning a big wheel of fortune (I know it isn’t a new idea but I like how they did it here).

The zooming through two-fingered pinching is technically nice, but may not be more usable than tapping. For really nice multi-touch interaction, future iPhone designers may well profit from Han’s demo. Custom gesture shortcuts would be nice too.

Sutton-Spence Unravels Sign Language Poetry

For those of you who want to know more about sign language poetry:

Cover of Book by Sutton-Spence
Rachel Sutton-Spence wrote a book (2004) on the topic with Paddy Ladd and Gillian Rudd (source: Amazon).

Can’t be bothered to get the book?

There is also a good analysis (28-page PDF) of sign language poetry available online by the same Sutton-Spence (of Bristol University). She wrote it for the European Cultural Heritage Online project (ECHO), and it is dated December 2003 (so just before the book).

A short preview from the online document (from the introduction):

Sign language poetry is the ultimate form of aesthetic signing, in which the form of language used is as important as – or even more important than – the message. Like so much poetry in any language, sign language poetry is a means of expressing ideas unusually succinctly, through means of heightened “art” language. It uses specific language devices to maximise the significance of the poem, just as in the poetry of spoken languages, although the language devices are rather different from the rhymes and alliteration that are familiar to most hearing audiences. The metaphors and images used in sign language poems may also be different from those in spoken language poems. In general, though, the basic idea of maximising the message through specially heightened language is the same in poetry in all languages, whether signed or spoken.

The ECHO site also contains a big collection of online European cultural heritage, in the form of sign language videos. They contain stories, poetry, interviews, lexicons, etc. Just check their ‘data’ link in the sidebar. There is NGT poetry by Wim Emmerik and BSL poetry by two poets.

Sign Language Music Video of Zombie

There is a wonderful sign language video of the song Zombie on YouTube.
I started a collection of other Sign Language Poetic Performances around it.

Seeing these videos strengthens my opinion that the poetic mechanisms available in sign language are quite rich and not all available in written, spoken, or sung language.

Fidgeting is to Gesture as ‘Ehm’ is to Speech?

Some people are actively interested in the stuff I am doing in my PhD studies, or at least they ask me questions about it. I usually tell them about my first experiment. That experiment was entirely about the difference between meaningless movements I call fidgeting and meaningful gestures, in this case sign language signs.

“Press the spacebar as soon as you see a sign”

It struck me then, and it still strikes me, that people talking together respond to each other so appropriately. Many, many times I saw people reacting to gestures of all sorts: maybe just a little head nod, or a palm-up gesture, or a raising of the eyebrows. And how often do you see anyone accidentally responding to a movement that was not intended to communicate after all?

Imagine the following chitchat:
You: “Nice weather huh?”
Her: “Yeah” (and makes some sort of movement)
You: “What do you mean, you think I am crazy?” (misinterpreting the movement)
Her: “I didn’t do anything, what are you talking about?” (now starts thinking you are crazy)

Rather unlikely? It just doesn’t happen. No matter how much we talk and interact, it hardly ever goes wrong. I will take the exceptional examples as exemplifying the rule.

So, I set out to see if I could test this in a lab. How fast can people make judgements about the status of a movement? I used sign language signs and fidgeting, and told people to press a button as soon as they saw a sign. And I found people could do that very well and very fast. Even non-signers could do it. (In case you want to read more: the journal Gesture recently accepted my publication of these results, hooray!)

If you want you can repeat the experiment in real life whenever you (and a friend) watch a conversation. Just put up your finger as soon as you see the talking people make a gesture. I bet you will both skip the fidgeting and spot the gestures.

Now, imagine a gesture-recognizing computer trying to do the same trick and ignore fidgeting. Currently, computers that are programmed to recognize gestures simply assume any movement is a gesture candidate, and will try to classify it against their vocabulary.

In speech recognition one might see a similar problem. People say things like “ehm” or “ehr..” during an utterance. They may also cough, sneeze or clear their throat. But is that really comparable to fidgeting? I am tempted to think that they are quite different. Coughing or sneezing is a bodily function, whereas fidgeting is usually just a ritualized, watered-down version of some bodily function, if any. The reason behind it is quite different. Saying “ehm” is mostly a way to fill the gap, or keep the floor, in a poorly planned utterance. It is in a way as much a deliberate part of the communication as the words used. Nevertheless, the computer’s task is more or less the same: it must withstand the disruptions and continue recognizing the words (or gestures) as if nothing happened. Both “ehm” and fidgeting should be ignored without damaging other processes. And that is quite a challenge as it is.

In speech recognition several techniques have been invented to cope with “ehm” and out-of-vocabulary (OOV) words, most importantly ‘word spotting’ and ‘filler and garbage models’. Perhaps gesture recognition would do well to take a closer look at those techniques and start safely ignoring fidgeting?
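The garbage-model idea can be sketched in a few lines: score a movement against every gesture in the vocabulary and against a generic ‘garbage’ template, and discard it as fidgeting when the garbage template fits best. This is only a toy illustration, not any real recognizer: the feature vectors, vocabulary, and garbage template below are all made up, and a real system would train the garbage model on observed fidgeting rather than hard-code it.

```python
# Toy sketch of a 'filler/garbage model' for gesture recognition.
# Assumption: each movement has been reduced to a small feature
# vector (here: speed, extent, symmetry). All values are invented.

def euclidean(a, b):
    """Plain Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Hypothetical gesture vocabulary: one template vector per gesture.
VOCABULARY = {
    "wave": [0.8, 0.9, 0.2],
    "point": [0.3, 0.7, 0.0],
    "beckon": [0.6, 0.5, 0.1],
}

# Garbage template standing in for fidgeting: small, slow,
# unstructured movement.
GARBAGE = [0.1, 0.1, 0.5]

def classify(movement):
    """Return the best vocabulary label, or None when the garbage
    model fits better (i.e. the movement is ignored as fidgeting)."""
    best_label = min(VOCABULARY, key=lambda g: euclidean(movement, VOCABULARY[g]))
    best_dist = euclidean(movement, VOCABULARY[best_label])
    if euclidean(movement, GARBAGE) < best_dist:
        return None  # closer to garbage: treat as fidgeting
    return best_label

print(classify([0.75, 0.85, 0.25]))  # near the "wave" template
print(classify([0.12, 0.15, 0.45]))  # near the garbage template
```

The design choice mirrors the speech-recognition trick: instead of forcing every movement into the vocabulary, the garbage class gives the recognizer a principled way to say “this is nothing”.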

Nelson Goodman on Labanotation

In his book Languages of Art (1st ed. 1969), Nelson Goodman treats Labanotation as the most important dance notation system. It gets a thorough examination based on Goodman’s theory of symbol systems. All in all, Goodman considers Labanotation a fairly successful notational system, comparable to a music score.

Unfortunately, the philosophical concepts Goodman uses in his book are often beyond my current comprehension (and I was not interested enough to dig further). Perhaps I will return to the book later, since it did seem to offer some solid ground.

He does not say much about gesture and nothing about sign language. The only bit I recall is about the use of stylized gestures and Mudras in dance, as opposed to modern (disco) dancing, which does not denote anything (but the rhythm of the music and prior performances of the moves?). Languages of Art is based on Goodman’s John Locke Lectures of 1961/62.

Sign Language Poetry and Theatre from Draadloos

I went to a cultural event today. There was poetry, there was music and even a short play. But the reason I went there was because someone pointed out to me that there was going to be sign language poetry. I have seen some signed poems in videos at Slope and ASL Quest, and even in a book (Gebarentaal, 1993, Koenen has an illustrated version of ‘Amsterdam’ by Wim Emmerik) but this was a nice opportunity to get some hands-on perceptual experience.

What culture looks like?

The event was part of Utrecht’s Culturele Zondag, Nieuwjaarsduik (I live near Utrecht), which is a start of the cultural life of the new year I believe. Stichting Nadorst (a group of semi-professional poets and friends) organized poetry theatre, featuring the group Draadloos with sign language poetry.

Draadloos consists of Suzanne Pach, Cora Mulder, Lotte Bijloo, Tina van Dijk, Hanrike Berkhof, Judith Vogels, and Elke Wildenborg. The group is attached to the Sign Language education program at the Hogeschool Utrecht, where Mulder (who is an interpreter as well) teaches drama part-time. The members are mostly students in the program for NGT interpreter or teacher.

What is sign language poetry? Let me skip a truly fundamental discussion and just give you some thoughts. First, poetry employs many means to be poetic. One of them is through associations with imagery. We can say some poets paint with words. As mentioned on HandSpeak, it may then be an advantage to use a visual/manual language like ASL. This sort of reasoning is rather abstract and not unlike saying that all roses are red and therefore everything red is well suited to being a rose. I did find that the signing poets were able to create very rich poetry by using imagery. Especially the dormant iconicity of signs can be awakened and put to full poetic use. A butterfly can be made to fly. Rain can keep dripping down on the butterfly’s wings.

A sign can be iconic or, as Els van der Kooij says, motivated when its form is (partly) caused by its meaning. The term transparent is also used in these cases. 

Also, signs can be shaped to resemble each other without hurting their original meaning. A heart can beat (with two hands) like a butterfly, and still be a heart. That is a power that I find difficult to imagine with spoken words. Can I say a word in a different manner without hurting its original meaning? The closest I can think of is using a compound or new word, like butterfly-heart, which is not nearly as nice. Can a signed poem be translated into another (spoken) language? HandSpeak suggests it is not possible. I disagree. It just takes another skilled poet, as in the Flying Words duo of Peter Cook and Kenny Lerner. But that is true for all poetry translations; a purely literal translation is not enough.

Flying Words: Two poets for the price of one?

The poems from Draadloos all existed first in Dutch (written or spoken). They were all ‘translated’ into NGT. In general you can say that the meaning was enriched during the process, especially in ‘Liedje van de vlinder’ and ‘Tuin’. The literal meanings of the words were practically all included, and then things were added, modulated, or combined (as far as I can tell with my limited sign language knowledge). Was all of the imagery associated with the original words still there in the translation? Probably not, but you always lose something. There is no reason to assume that any original sign language poem cannot be translated into written form in the same manner. I see no objection whatsoever, as long as it is clear that a translated poem is another poem on its own. Rather than being clones, the two versions are more like brother and (Deaf) sister.

MS13 and other Gangs’ Gestures

In the long winter night I wasted an hour watching a documentary on an American gang called MS13, or Mara Salvatrucha. The entire documentary reeks of US-style fear mongering and is not to be recommended. But at some point a girl called Brenda Paz started explaining some of the gestures, or hand signals, of the gang. I got really interested when she said the gestures can be combined, which she called “stacking”. Did they create a nice gesture system of their own? Maybe even an alternate gangster sign language?

After about one minute in the movie Brenda Paz shows how the signs are ‘stacked’.

There is some info online at the Gang Hand Signs Index. It has small collections of signs from MS, the Bloods, Nortenos, Surenos, and a few other gangs. Another small collection is at Gang Signs. Dear old Wikipedia has a bit on Gang Hand Signs, where “stacking” is explained as “throwing up a gang sign”. It also states “Individual letters can be used to tell stories when flashed in rapid succession, each representing a word beginning with that letter”. No details are available. So, besides what Miss Paz showed, I actually saw only a few gestures. And I have not seen any sort of systematic structure in them. The signs appear to be used mostly to identify oneself as a member of a gang, or of a specific group. Flashing such a sign may serve as a warning or threat as well, more than as a hidden form of communication. For MS13, the system of tattoos may be much more elaborate. But then again, perhaps they simply succeeded in keeping their sign language secret?


Nevertheless, if a language is a dialect with its own army, then I will grant the estimated 50,000 gangsters using these signals their sign language. The words they speak do not form a more memorable contribution to mankind’s legacy anyway.

Demo Video Visicast

As an illustration of the use of sign language synthesis this video was made for the Visicast project:

(More videos are available on the Visicast site)

It is a combination of speech recognition, automatic translation, and sign language synthesis. The avatar is Tessa. Unfortunately, it does not work the other way around, so if the signer needs to ask a question it gets a bit difficult. But then again, it is better than nothing. As these things go, it is but a demo video; I believe no such system exists at the moment.
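As a sketch of what such a one-way pipeline looks like, here is a toy chain of the three stages the video illustrates. Every function is a hypothetical stand-in I made up for illustration; the real Visicast components are not public APIs and are vastly more complex.

```python
# Toy sketch of a one-way speech-to-signing pipeline:
# speech recognition -> automatic translation -> sign synthesis.
# All three stages are fake stand-ins with hard-coded output.

def recognize_speech(audio):
    # Stand-in: pretend the audio was transcribed to English text.
    return "hello, can I help you?"

def translate_to_sign_gloss(text):
    # Stand-in: pretend the text was translated into sign glosses.
    return ["HELLO", "HELP", "YOU", "QUESTION"]

def synthesize_signing(glosses):
    # Stand-in: an avatar would render each gloss as animation;
    # here we just mark each gloss as a rendered sign.
    return " ".join(f"<sign:{g}>" for g in glosses)

def speech_to_signing(audio):
    # The three stages compose in one direction only, which is why
    # the signer cannot ask a question back through this pipeline.
    return synthesize_signing(translate_to_sign_gloss(recognize_speech(audio)))

print(speech_to_signing(b"..."))
```

The composition makes the limitation visible: information flows from audio to avatar, and nothing in the chain runs in reverse.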

Grammar, Gesture, and Meaning in ASL

I recently read (or glanced through parts) of the 2003 book by Scott K. Liddell: Grammar, Gesture, and Meaning in American Sign Language

The main message of the book is one that I would have found trivial if I did not know anything about linguistics. It must take a linguist to surprise a linguist, I guess. Liddell basically points out that there is more to talking than just what is said. I wonder if there are really any structural linguistics professors out there who would argue against this?

Using many examples he shows how in ASL there are many processes of meaning-making at work. And he suggests that the same is true for spoken languages. When we speak we use words and grammar, but we also use intonation, and we gesture, raise our eyebrows, roll our eyes, etc, etc. Not surprisingly, the same is true for American Sign Language, and undoubtedly for all spoken and signed languages across the globe. When we sign we also gesture, use space in different ways, raise our eyebrows in different flavours, and roll our eyes in all directions. Every language has lexical items (signs and words) and grammatical processes to combine and alter them, but there is always so much more going on when we express ourselves.

Deaf or Hearing?

Liddell is, however, probably now the foremost figure in the Sign Language research community to move to a new agenda. The old (or current) agenda is proving that Sign Language is on a par with spoken/written languages at all levels (such as categorical perception of phonological properties). Alongside runs research showing (dis)similarities in neurological processing between so-called non-linguistic gestures and linguistic gestures (further proof that sign language is like ‘real’ language and not like gesturing).

Signers or Talkers?

When I started reading about sign language and gestures I found it difficult to believe how little interaction there was between research on both topics. Gesture researchers were finding out that gestures and speech are not separated by a fence called ‘linguistic status’, while at the same time Sign Language researchers kept on proving the inferior nature of “gesticulation”. Did they choose to be blind to normal gestures of hearing people? Is there still fear of not being taken seriously? Perhaps there is, and I cannot fathom whether such fear is warranted nor whether ASL status still requires defence beyond reason.

ASL or English?

I heartily recommend Liddell’s book to anyone interested in the similarities between signed and spoken languages and the similarities between sign language and gestures. Rest assured that Liddell provides a wealth of wonderful material on ASL meaning-making mechanisms, which will cure anyone of the notion that it is a poor or primitive language. The richness he documents is testimony to what matters most: people’s enormous potential to communicate effectively with each other, through any and all means available.

Two out of four of the above pictures contain people who are ‘signers’; the others are mere ‘talkers and gesturers’. Can you spot them?

