Still a long way to go before he can try for the air guitar world championship, I think. But the product is interesting to consider. At first I thought it looked quite nice and cool. But then I wondered: why would anyone actually want to have an AirGuitar? Isn’t the point of playing air guitar that you don’t have to have the damn thing? If I am going to buy something to play guitar, I might just as well, or even better, buy a real (toy) guitar, right?
Is this going to be cheaper than a real guitar? I would guess that the additional electronics will not be cheaper than the bits of extra wood, metal or plastic needed for a physical guitar. But then again, microelectronics can be cheap if they are sold in large quantities.
So, is this going to provide a better experience? I think that by definition that is impossible. The point of playing air guitar is to imitate the actual playing, to go through the motions and almost ‘feel like’ you are really playing. In other words, it can never be better than the real thing. Or can it?
Maybe it can. Maybe it can help people who cannot play guitar ‘feel more like’ they are playing guitar. Maybe the AirGuitar can take care of the difficult stuff, like putting your fingers in the right position on the strings and remembering the chords and licks, and leave the exciting stuff to you, like strumming wildly, creating vibrato or smashing it.
That would be neat. Ronald, if you read this, can you make it so it can be smashed?
Reviving an ancient art: students from the University’s Faculty of Music worked with theatre director Helga Hill to present a fully-staged and gestured season of Eccles’ The Judgment of Paris: Above, Paul Bentley as Paris and Janelle Hopman as Venus. [Photo: Mark Wilson] (source)
If we go further back in time, the work of Quintilian (and Cicero) is related. They wrote for orators, who were actors as much as they were politicians and lawyers. Wittgenstein is also referenced a lot.
Microsoft is making a big deal out of their Surface. Basically, it is a regular computer with some fancy software that works together with a new type of table-sized touchscreen. It enables people to work with the ten fingers of two hands or with artefacts (multi-touch), and it is sensitive to pressure. This idea was presented most eloquently by Jeff Han earlier; maybe Microsoft bought the idea?
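Out of curiosity, here is how I imagine the software side of ‘multi-touch with pressure’ might look. This is only a minimal Python sketch with names I made up (Contact, TouchTracker), not Microsoft’s actual API: the table reports a set of contacts per frame, each with a stable id, a position and a pressure value, and the software simply tracks all of them at once.

```python
from dataclasses import dataclass

# Hypothetical sketch: assume the hardware delivers (id, x, y, pressure)
# per contact per frame. None of these names come from Microsoft.

@dataclass
class Contact:
    touch_id: int    # stable id while the finger/object stays down
    x: float         # position in table coordinates
    y: float
    pressure: float  # normalized: 0.0 (light touch) to 1.0 (hard press)

class TouchTracker:
    """Tracks simultaneous contacts, so gestures can span two hands."""

    def __init__(self) -> None:
        self.active: dict[int, Contact] = {}

    def update(self, frame: list[Contact]) -> None:
        # Replace the active set with this frame's contacts; ids that
        # disappear correspond to lifted fingers or removed objects.
        self.active = {c.touch_id: c for c in frame}

    def hard_presses(self, threshold: float = 0.7) -> list[Contact]:
        # Pressure sensitivity lets 'press hard' mean something
        # different from 'touch lightly'.
        return [c for c in self.active.values() if c.pressure >= threshold]

# Ten fingers on the table is just ten entries in the active set.
tracker = TouchTracker()
tracker.update([Contact(1, 0.2, 0.5, 0.9), Contact(2, 0.6, 0.4, 0.3)])
print(len(tracker.active), len(tracker.hard_presses()))  # -> 2 1
```

The interesting design questions only start after that, of course: which combinations of contacts should count as a gesture?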
Anyway, here it is, one of the most expensive tables you will ever desire:
A Dutch local newspaper, Leidsch Dagblad, has written a good report of the annual holiday gathering organized by the national foundation for the Deafblind (De Nederlandse Stichting voor Doofblinden). About 70 deafblind people (and their interpreters) apparently had a good time there.
So, wouldn’t it be great to set up some research on this haptic sign language? There are plenty of people who are interested in sign language because it provides insight into the human language capacity. They compare how people listen and talk (and gesture) to how they watch and sign (and gesture). General human language processing must be separated from modality-dependent processing (though the contrast is actually more like oral/auditory+visual/gestural vs. visual/gestural). Very interesting nevertheless. Lots of brain research with fMRI scanners…
Just imagine what we could learn by studying deafblind people while letting them ‘talk’ or ‘listen’ in haptic sign language. They should probably go two-by-two? Or else, what would be the stimulus material to which they must respond? Prepared haptic sign language material? Hmm, maybe some observations should be the first step, or recordings using video or perhaps datagloves?
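The recording step itself would be the easy part. Here is a minimal Python sketch of what a dataglove logger could look like; read_frame() is a stand-in I invented for whatever driver a real glove would ship with, and the output is just timestamped lines that an annotator could later align with video.

```python
import json
import random
import time

def read_frame() -> list[float]:
    # Stand-in for a real dataglove driver: in practice this would
    # return flex-sensor and position readings from the hardware.
    return [random.random() for _ in range(10)]

def record(path: str, seconds: float, rate_hz: float = 60.0) -> None:
    """Log timestamped sensor frames for later transcription/annotation."""
    interval = 1.0 / rate_hz
    end = time.time() + seconds
    with open(path, "w") as f:
        while time.time() < end:
            frame = {"t": time.time(), "sensors": read_frame()}
            f.write(json.dumps(frame) + "\n")
            time.sleep(interval)

record("haptic_signing.jsonl", seconds=2.0)
```

The hard part, obviously, is everything after the logging: segmenting, transcribing and annotating the recorded movement.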
Anyway, I would love to see more of it. Investigate how deafblind people manage to defy the odds and together create a language of their own. They are apparently already telling jokes. When shall we see/feel the first haptic sign language poem? And how can it be captured, transcribed or annotated? What sort of grammar does it have? Does iconicity play a role in sign formation and language use? Is iconicity achieved using similar strategies as in gesture and sign language? An ambitious man could write a research proposal for a nice post-doc position about it.
Sometimes you don’t have to go to small villages in Africa or the Middle East to find interesting languages. Sometimes you just need to hold out your hand.
A wonderful bit of news has been hitting the headlines:
BBC News: Technique links words to signing: Technology that translates spoken or written words into British Sign Language (BSL) has been developed by researchers at IBM. The system, called SiSi (Say It Sign It) was created by a group of students in the UK. SiSi will enable deaf people to have simultaneous sign language interpretations of meetings and presentations. It uses speech recognition to animate a digital character or avatar.
IBM says its technology will allow for interpretation in situations where a human interpreter is not available. It could also be used to provide automatic signing for television, radio and telephone calls.
Serendipity. Just this week a man called Thomas Stone inquired whether he could get access to the signing avatars of the eSign project. I passed him on to Inge Zwitserlood. She first passed him on to the eSign coordinator at Hamburg University, which was a dead end. Finally, he was pointed to the University of East Anglia, to John Glauert. And who is the man behind the sign synthesis in SiSi?
From the press release from IBM:
John Glauert, Professor of Computing Sciences, UEA, said: “SiSi is an exciting application of UEA’s avatar signing technology that promises to give deaf people access to sign language services in many new circumstances.”
This project is an example of IBM’s collaboration with non-commercial organisations on worthy social and business projects. The signing avatars and the award-winning technology for animating sign language from a special gesture notation were developed by the University of East Anglia and the database of signs was developed by RNID (Royal National Institute for Deaf People).
Well done, Professor Glauert, and thank you for keeping the dream alive.
Now for some criticism: the technology is not very advanced yet. It is not at a level where I think it is wise to make promises about useful applications. The signing is not very natural, and I think much still needs to be done to achieve a basic level of acceptability for users. But it is good to see that the RNID is on board, although they chose their words of praise carefully.
It is amazing how a nice technology story gets so much media attention so quickly. Essentially these students have just linked a speech recognition module to a sign synthesis module. The inherent problems with machine translation (between any two languages) are not even discussed. And speech recognition only works under very limited conditions and produces limited results.
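To make that point concrete: the pipeline has roughly this shape. These are my own stubs, not IBM’s code; each stage feeds the next, so recognition errors become translation errors become wrong signs, and the end-to-end quality is bounded by the weakest module.

```python
def recognize_speech(audio: bytes) -> str:
    # Stub for a speech recognizer; real ones work well only under
    # limited conditions (clean audio, careful speech, one speaker).
    return "hello everyone welcome to the meeting"

def translate_to_gloss(text: str) -> list[str]:
    # Stub for English -> BSL translation. This is full machine
    # translation between two unrelated languages, not word lookup.
    return ["HELLO", "ALL", "WELCOME", "MEETING"]

def animate(glosses: list[str]) -> None:
    # Stub for the avatar: play the stored animation for each sign.
    for gloss in glosses:
        print(f"signing: {gloss}")

def sisi_like_pipeline(audio: bytes) -> None:
    # Chain the modules; any error upstream flows straight into
    # the signed output, with no interpreter to catch it.
    animate(translate_to_gloss(recognize_speech(audio)))

sisi_like_pipeline(b"...")
```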
IBM says: “This type of solution has the potential in the future to enable a person giving a presentation in business or education to have a digital character projected behind them signing what they are saying. This would complement the existing provision, allowing for situations where a sign language interpreter is not available in person”.
First, speech recognition is incredibly poor in a live event like a business presentation (just think of interruptions, sentences being rephrased, all the gesturing that is linked to the speech, etc.) and second, the idea that it will be (almost) as good as an interpreter is ludicrous for at least the next 50 years. The suggestion alone will probably be enough to put off some Deaf people. They might (rightly?) see it as a way for hearing people to try to avoid the costs of good interpreters.
I think the media just fell in love at first sight with the signing avatar and the promises it makes. I also love SiSi, but as I would like to say to her and to all the avatars I’ve loved before: My love is not unconditional. If you hear what I say, will you show me a sign?
Here is a very entertaining video (nice music) that tells the tale of gesture and the origins of language in a nutshell. Much has been written about how the language capability may have evolved in humans with gesture as a stepping stone, or how Man’s first language may have been a signed language. Recent brain research findings (gesture+speech, mirror neurons, lateralization, sign language aphasia) have added more indirect ‘evidence’ for these theories. It is still hard to really prove anything about prehistoric events though…
One thing that struck me is how the author talks about how people might be aided in their thinking when they gesture, doodle or fidget. A reference to fidgeting! Hooray! Should I point out that I think gesture and fidgeting are quite different? No, I will just let it be.
Here is the Air Guitar World Champion 2007, Ochi “Dainoji” Yosuke (Japan) performing at Air Guitar World Championships 2007, Oulu, Finland:
What a nice gesture performance: the pantomime, the gestures, the emotional expressions, the mimicry of the actual guitar play, and of course the dramatic gestures of a lead guitar player on stage. It makes me realize that a language may be found in every hidden corner of human activity. In this case Dainoji shows a hilarious command of the body language of lead guitarists.
It also makes me wonder what exactly would remain of ‘musical gestures’ if all of a musician’s ‘body language’ were hidden from the audience. I guess something would remain, and that would then be the real musical gesture.
Here is a wonderful flash animation from Babystrology, featuring a signing baby:
Lord knows, I am not the world’s biggest fan of baby signing, but this is positively funny. I hope the creators keep treating baby sign with the same sense of humor. It is far too important a subject to ever talk seriously about.
The videos were recorded in 1930 in Browning, Montana, when sign talkers from 14 different Plains nations gathered as participants in a conference organized by General Hugh L. Scott for the purpose of demonstrating their use of sign language.
The first four videos (see this playlist) contain material from the participants at the conference themselves: Indians telling stories.
Another six videos are a video version of a dictionary of the language (see this playlist).
Following the 1930 Plains Indian Sign Language Conference, General Scott intended to produce a cinematic dictionary of over thirteen hundred signs. Due to the Great Depression it proved too difficult to get a second appropriation bill passed through Congress to finish the cinematic dictionary. He did manage to get over three hundred signs filmed. (Note from Tommy Foley)
An important documenter of the Plains Indian Sign Language was Col. Garrick Mallery. He wrote ‘Sign Language Among North American Indians Compared With That Among Other Peoples And Deaf-Mutes’, a report for the Smithsonian Institution published in 1881, which is available for free download as an e-book via Project Gutenberg.
The people with whom I am working in a project on Automatic Sign Language Recognition are organising a workshop. It is a national event, so the language is Dutch. We organised one workshop before, also called ‘Een Mooi Gebaar’. The workshop is open to the public. All it takes is to register by sending an email to Anja van den Berg. Here is the program (pdf) and the full invitation (translated from the Dutch):
Dear Sir or Madam, We would hereby like to invite you to the second workshop ‘Een Mooi Gebaar’, organised by the Nederlandse Stichting voor het Dove en Slechthorende Kind, Delft University of Technology and the Koninklijke Auris Groep. In this second workshop the results of the ELo project will be presented. In this project we have worked on an Electronic Learning Environment (Elektronische Leeromgeving) for the learning of sign vocabulary by young deaf and severely hard-of-hearing children.
The project was carried out as a collaboration between the NSDSK, TU Delft and the Koninklijke Auris Groep, and was funded by the VSB fonds. Within the project a multimedia learning environment was developed to effectively help young deaf and severely hard-of-hearing children learn active and passive sign vocabulary. The project has run for three years, and in this workshop we would like to present the results to, and discuss them with, the field (researchers, teachers, care professionals, etc., working in sign language education for deaf and hard-of-hearing children).
In addition to speakers from the project, dr. Hans van Balkom (Viataal) has agreed to talk about another interactive learning environment for disabled children, and dr. Els van der Kooij (Radboud Universiteit Nijmegen) about her research into variation in the production of signs. The chair of the day is prof. Don Bouwhuis of TU Eindhoven, department of Human-Technology Interaction. The workshop will be held on Friday afternoon, 2 November 2007, in lecture hall D of the EWI faculty of TU Delft, Mekelweg 4, 2626 CD Delft. The working language is Dutch. An interpreter for NGT (Sign Language of the Netherlands) can be arranged; if you would like to make use of this, please indicate so when registering. The workshop programme is attached. If you would like to accept our invitation, please send an e-mail to Ms Anja van den Berg mentioning ‘deelname workshop Een Mooi Gebaar 2007’. There is no charge for attending the workshop. Coffee and sandwiches will be provided before the workshop.
A report of the first workshop ‘Een Mooi Gebaar’ can be found here.
We hope to welcome you on Friday 2 November 2007. With kind regards, Dr. Emile Hendriks, TU Delft; Dr. Connie Fortgens, Koninklijke Auris Groep; Dr. Gerard Spaai, NSDSK.