Another nice gesture story in the news, although, sadly, it is once again about a performer giving the finger. This time it is the artist M.I.A.
M.I.A. giving everyone the finger during Super Bowl 46 (Source: BBC News – Getty Images)
Wikipedia (always good with the bare facts):
While performing with Madonna at the Super Bowl 46, M.I.A. gave the middle finger to a camera just before a cutaway during the halftime show. The gesture came during a performance of Madonna’s new single, “Give Me All Your Luvin’.” At the end of her lines, M.I.A. sang, “I don’t give a ***t.” The incident prompted apologies to be issued by NBC and the NFL.
One somewhat interesting element of the story is that the gesture apparently wasn’t picked up by the people responsible for detecting inappropriate material in the ‘delay system’. Well, it is fairly quick, but still easy to see. But then again, there is so much to see on the stage that perhaps they missed it because they were looking at other things.
An interesting story in the news (here) and on YouTube today about gestures made by Robin van Persie. Best to watch the video first:
The video containing the gesture (for as long as it stays online…)
Apparently some people interpreted his gesture combination as the Roman/Fascist/Hitler greeting. He himself tweeted in response:
Persie_Official Robin van Persie: “It has been brought to my attention of some ridiculous allegations concerning my celebration of one of my goals yesterday. It is totally ludicrous to suggest that. My action of brushing my shoulder and pointing to my fans could be construed as anything else but of a showing of joy and celebration. To suggest this meant anything to the contrary is insulting and absolutely absurd as nothing else came into my mind.”
Apart from his grammar, I support his explanation of the gestures. “Brushing your shoulders” is indeed a Dutch gesture performed after a great feat to indicate “that only ruffled my suit a bit” or “that hardly cost any effort”. It is often accompanied by a grin or smirk and a brash composure (as displayed here as well). And in this case he uses a salute to direct the gesture towards the audience, which I would interpret as an additional “and I do it all for you”.
This is, however, also a wonderful example of the importance of context, the perception of intentions, and the sensitivities of observers when it comes to interpreting the meaning of gestures. Someone who is suspicious of Van Persie (for whatever reason) or otherwise prone to ascribe ill intentions to him may actually look at these gestures, in this situation, quite differently than most people. In this case, however, it would mean they think extremely lowly of him and of the Arsenal fans. Their line of thinking would run roughly as follows (and just to be certain: I do not agree with it): “I hate fascists/Nazis. Van Persie may well be a secret fascist/Nazi. There are more like him in the Arsenal audience that he wishes to salute. He is using the pretext of cheering after a goal to make a (badly) camouflaged fascist salute. But he won’t get away with it, because I saw what I saw.” Well, I pity the one who thinks like that, sorry.
Just to end on a positive note: congrats to Van Persie for a wonderful performance. My hat’s off to you. You indeed make it look so easy sometimes.
Here is a video with the comedian Adam Hills about the funny side of a couple of BSL signs. It is a nice illustration of ambiguity, iconicity, distinctiveness, and of how people can play with signs, gestures and language.
Avatar Kinect is a new social entertainment experience on Xbox LIVE bringing your avatar to life! Control your avatar’s movements and expressions with the power of Avatar Kinect. When you smile, frown, nod, and speak, your avatar will do the same.
Ah, new developments on the Kinect front, the premier platform for vision-based human action recognition if we were to judge by the frequency of geeky news stories. For a while we have been seeing various gesture recognition ‘hacks’ (such as here). In a way, you could call all the interaction people have with their Xbox games through the Kinect gesture recognition. After all, they communicate their intentions to the machine through their actions.
What is new about Avatar Kinect? Well, the technology appears to pay specific attention to facial movements, and possibly to specific facial gestures such as raising your eyebrows, smiling, etc. The subsequent display of your facial movements on the face of your avatar is also a new kind of application for Kinect.
The Tech Behind Avatar Kinect
So, to what extent can smiles, frowns, nods and similar expressions be recognized by a system like Kinect? Well, judging from the demo movies, the movements appear to have to be quite big, even exaggerated, to be handled correctly. The speakers all use exaggerated expressions, in my opinion. This limitation of the technology would not be surprising, because typical facial expressions consist of small (combinations of) movements. With the current state of the art in tracking and in learning to recognize gestures, making the right distinctions while ignoring unimportant variation is still a big challenge in any kind of gesture recognition. For facial gestures this is probably especially true, given the subtlety of the movements.
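To make that signal-versus-noise point a bit more concrete, here is a toy sketch of my own (with made-up millimetre figures; it has nothing to do with the actual Kinect pipeline): if the tracked movement of a facial landmark is about as big as the tracking jitter, even a simple threshold detector will miss a subtle expression much of the time, while an exaggerated one sails through.

```python
# Toy illustration (not the Kinect pipeline) of why subtle facial
# movements are hard: the useful signal is small relative to tracking noise.
import random

random.seed(42)

TRACKING_NOISE = 2.0   # assumed landmark jitter, in millimetres
THRESHOLD = 4.0        # movement needed before we dare call it a gesture

def observed_displacement(true_movement_mm):
    """Simulate a tracked landmark displacement with measurement noise."""
    return true_movement_mm + random.gauss(0, TRACKING_NOISE)

def classify(displacement_mm):
    """Call it a 'smile' only if the displacement clears the threshold."""
    return "smile" if displacement_mm > THRESHOLD else "neutral"

def detection_rate(true_movement_mm, trials=10_000):
    """Fraction of trials in which the movement is detected as a smile."""
    hits = sum(
        classify(observed_displacement(true_movement_mm)) == "smile"
        for _ in range(trials)
    )
    return hits / trials

# An exaggerated smile (big movement) is detected almost every time;
# a subtle one, sitting right at the noise floor, only about half the time.
print(f"exaggerated (10 mm): {detection_rate(10.0):.0%}")
print(f"subtle      ( 4 mm): {detection_rate(4.0):.0%}")
```

Of course a real system uses far more than one landmark and one threshold, but the basic trade-off is the same: the smaller the movement relative to the sensor noise, the harder it is to separate a gesture from ordinary variation.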
A playlist with Avatar Kinect videos.
So, what is to be expected of Avatar Kinect? Well, first of all, a lot of exaggerating demonstrators, who make a point of gesturing big and smiling big. Second, the introduction of Second Life-style gesture routines for the avatar, just to spice up your avatar’s behaviour (compare here and here). That would be logical. I think there are already a few in the demo movies, like the guy waving the giant hand in a cheer and doing a little dance.
Will this be a winning new feature of the Kinect? I am inclined to think it will not be, but perhaps this stuff can be combined with social media features into some new hype. Who knows nowadays?
In any case it is nice to see the Kinect giving a new impulse to gesture and face recognition, simply by showcasing what can already be done and by doing it in a good way.
A pleasurable pastime it is: browsing through Garfield comics and smiling at the nice gestures Jim Davis draws to convey Garfield’s communication. I created a couple of lists earlier (here and here), and here is another list:
Here is a must-see video for anyone who is interested in gestures and body language and has a sense of humour. Be warned, it may force you to rethink some of your ideas about the conventionality of body language and the extent to which interpreting it can be taught (should you be a communications trainer).
In any case, it’s good for a laugh 🙂
Here is a collection of the sort of body language instruction that the above video is a parody of (with the exception of the fifth which again is a parody):
In 2007 an interesting book was published that I believe is also relevant to gesture researchers:
Imitation and social learning in robots, humans and animals: behavioural, social and communicative dimensions.
Chrystopher L. Nehaniv, Kerstin Dautenhahn (Eds.). Cambridge University Press, 2007 – 479 pages (available online in a limited way, here)
The book is an excellent volume with many interesting chapters, some with contributions by the editors themselves but also by many other authors. Personally, I found the following chapters most interesting (of 21 chapters):
1. Imitation: thoughts about theories (Bird & Heyes)
2. Nine billion correspondence problems (Nehaniv)
7. The question of ‘what to imitate’: inferring goals and intentions from demonstrations (Carpenter & Call)
8. Learning of gestures by imitation in a humanoid robot (Calinon & Billard)
10. Copying strategies by people with autistic spectrum disorder: why only imitation leads to social cognitive development (Williams)
11. A Bayesian model of imitation in infants and robots (Rao et al.)
12. Solving the correspondence problem in robotic imitation across embodiments: synchrony, perception and culture in artifacts (Alissandrakis et al.)
15. Bullying behaviour, empathy and imitation: an attempted synthesis (Dautenhahn et al.)
16. Multiple motivations for imitation in infancy (Nielsen & Slaughter)
21. Mimicry as deceptive resemblance: beyond the one-trick ponies (Norman & Tregenza)
I’ll probably update this post with more in-depth review remarks later… But at least chapter 21 has connections to earlier posts here regarding animal gestures, such as here.
ScienceDaily (Feb. 3, 2011) — Surgeons of the future might use a system that recognizes hand gestures as commands to control a robotic scrub nurse or tell a computer to display medical images of the patient during an operation.
Purdue industrial engineering graduate student Mithun Jacob uses a prototype robotic scrub nurse with graduate student Yu-Ting Li. Researchers are developing a system that recognizes hand gestures to control the robot or tell a computer to display medical images of the patient during an operation. (Credit: Purdue University photo/Mark Simons)
I have noticed similar projects earlier, where surgeons in the OR were target users of gesture recognition. The basic idea behind this niche application area for gesture recognition is fairly simple: A surgeon wants to control an increasing battery of technological systems and he does not want to touch them, because that would increase the chance of infections. So, he can either gesture or talk to the machines (or let other people control them).
In this case the surgeon is supposed to control a robotic nurse with gestures (see more about the robotic nurse here). You can also view a nice video about this story here; it is a main story of the latest Communications of the ACM.
Well, I have to say I doubt whether this is a viable niche for gesture recognition. So far, speech recognition has been used with some success to dictate operating reports during the procedure. I don’t know if it has been used to control computers in the OR. Frankly, it sounds a bit scary and also a bit slow. Gesture and speech recognition are known for their lack of reliability and speed. Compared to pressing a button, for example, they produce more errors and longer delays. Anything that is mission-critical during the operation should therefore not depend on gesture or speech control, in my opinion.
However, the real question is what the alternatives for gesture or speech control are and how reliable and fast those alternatives are. For example, if the surgeon has to tell another human what to do with the computer, for example displaying a certain image, then this can also be unreliable (because of misinterpretations) and slow.
The article mentions several challenges: “… providing computers with the ability to understand the context in which gestures are made and to discriminate between intended gestures versus unintended gestures”. It sounds as if they also run into problems with fidgeting or similar incidental movements that surgeons make.
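One common, simple way to filter out such incidental movements is a dwell requirement: only act on a recognized gesture if the recognizer reports the same label for several consecutive frames. The sketch below is purely my own illustration of that idea (it is not the Purdue system; the label names and frame count are made up):

```python
# A minimal dwell filter: a per-frame gesture label only becomes a command
# once it has been held steadily for HOLD_FRAMES consecutive frames.
# Brief flickers of a label (e.g. a surgeon fidgeting) are ignored.
from collections import deque

HOLD_FRAMES = 5  # assumed dwell requirement; a real system would tune this

class DwellFilter:
    """Emit a command only after the same gesture label persists."""

    def __init__(self, hold_frames=HOLD_FRAMES):
        self.hold_frames = hold_frames
        self.history = deque(maxlen=hold_frames)

    def update(self, label):
        """Feed one per-frame recognition result; return a command or None."""
        self.history.append(label)
        if (len(self.history) == self.hold_frames
                and len(set(self.history)) == 1
                and label is not None):
            return label
        return None

f = DwellFilter()
frames = [None, "next_image", "next_image", None,   # a brief fidget: ignored
          "next_image", "next_image", "next_image",
          "next_image", "next_image"]               # held long enough: accepted
commands = [f.update(x) for x in frames]
print(commands)  # only the final, sustained gesture produces a command
```

The cost of such a filter is exactly the trade-off discussed above: fewer false triggers, but every genuine command is delayed by the dwell time, which matters in a setting where speed counts.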
In sum, it will be interesting to see if surgeons will be using gesture recognition in the future, but I wouldn’t bet on it.