A wonderful bit of news has been hitting the headlines:

BBC News: Technique links words to signing: Technology that translates spoken or written words into British Sign Language (BSL) has been developed by researchers at IBM. The system, called SiSi (Say It Sign It), was created by a group of students in the UK. SiSi will enable deaf people to have simultaneous sign language interpretations of meetings and presentations. It uses speech recognition to animate a digital character or avatar.
IBM says its technology will allow for interpretation in situations where a human interpreter is not available. It could also be used to provide automatic signing for television, radio and telephone calls.

Read the full story at IBM: IBM Research Demonstrates Innovative ‘Speech to Sign Language’ Translation System

Demo or scripted scenario?

Serendipity. Just this week a man called Thomas Stone inquired whether he could get access to the signing avatars of the eSign project. I passed him on to Inge Zwitserlood. She first passed him on to the eSign coordinator at Hamburg University, which was a dead end. Finally, he was pointed to the University of East Anglia, to John Glauert. And who is the man behind the sign synthesis in SiSi?

From IBM's press release:

John Glauert, Professor of Computing Sciences, UEA, said: “SiSi is an exciting application of UEA’s avatar signing technology that promises to give deaf people access to sign language services in many new circumstances.”
This project is an example of IBM’s collaboration with non-commercial organisations on worthy social and business projects. The signing avatars and the award-winning technology for animating sign language from a special gesture notation were developed by the University of East Anglia and the database of signs was developed by RNID (Royal National Institute for Deaf People).

Well done, Professor Glauert, and thank you for keeping the dream alive.

Now for some criticism: the technology is not yet very advanced. It is not at a level where I think it is wise to make promises about useful applications. The signing is not very natural, and much still needs to be done to achieve a basic level of acceptability for users. But it is good to see that the RNID is on board, although they choose their words of praise carefully.

It is amazing how quickly a nice technology story gets so much media attention. Essentially, these students have just linked a speech recognition module to a sign synthesis module. The inherent problems with machine translation (between any two languages) are not even discussed. And speech recognition only works under very limited conditions and produces limited results.
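To make the criticism concrete, here is a minimal sketch of what such a pipeline amounts to. All function names and the toy lexicon are hypothetical (nothing here reflects IBM's or UEA's actual code); the point is that simply chaining recognition to synthesis, with only a word-for-word gloss lookup in between, skips the hard machine translation step, since BSL has its own grammar and word order.

```python
def recognise_speech(audio: bytes) -> str:
    """Stand-in for a speech-recognition module; returns an English transcript."""
    return "hello world"  # placeholder transcript for illustration

def translate_to_gloss(transcript: str) -> list:
    """Naive word-for-word mapping to sign glosses -- the weak link.
    Real BSL translation would require reordering and restructuring."""
    lexicon = {"hello": "HELLO", "world": "WORLD"}  # toy lexicon, hypothetical
    return [lexicon.get(word, word.upper()) for word in transcript.split()]

def synthesise_signs(glosses: list) -> str:
    """Stand-in for the avatar animation step (driven by a gesture notation)."""
    return " -> ".join(glosses)

def speech_to_sign(audio: bytes) -> str:
    # The whole "system": three modules glued end to end.
    return synthesise_signs(translate_to_gloss(recognise_speech(audio)))

print(speech_to_sign(b""))  # HELLO -> WORLD
```

Every stage here degrades the result: recognition errors feed into a lookup that ignores grammar, which feeds an avatar that signs whatever it is given.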

IBM says: “This type of solution has the potential in the future to enable a person giving a presentation in business or education to have a digital character projected behind them signing what they are saying. This would complement the existing provision, allowing for situations where a sign language interpreter is not available in person”.

First, speech recognition is incredibly poor in a live event like a business presentation (just think of interruptions, sentences being rephrased, all the gesturing that is linked to the speech, etc.) and second, the idea that it will be (almost) as good as an interpreter is ludicrous for at least the next 50 years. The suggestion alone will probably be enough to put off some Deaf people. They might (rightly?) see it as a way for hearing people to try to avoid the costs of good interpreters.

I think the media just fell in love at first sight with the signing avatar and the promises it makes. I also love SiSi, but as I would like to say to her and to all the avatars I’ve loved before: My love is not unconditional. If you hear what I say, will you show me a sign?