Various enterprises and personal interests, such as Man-Machine Interaction (MMI), gesture studies, signs, language, social robotics, healthcare, innovation, music, publications, etc.

Category: My Own Research

A ‘young doctor’ was born

Well, it’s official.
I have shed my fur and emerged with a newfound dignity.
My PhD days came to an end last Monday.
You are reading a weblog by dr. ir. Arendsen.

I suddenly don’t know what to blog anymore…

Ah yes, some pictures of the day are here, and you can also listen to an audio recording of the layman's talk and the defense itself.

DWDD about our ‘computer die kinderen helpt gebaren te leren’ (‘computer that helps children learn signs’)

So, how did ‘De Wereld Draait Door’, a big Dutch TV news show, interpret our press release?
Watch it at roughly a minute into the clip…

Press release and media attention for PhD defenses

The TU Delft sent out a press release about my PhD work and my PhD defense next Monday, and about that of Jeroen Lichtenauer, who is defending this afternoon at 15:00 in the Aula of TU Delft. Gineke ten Holt is the third PhD candidate on the project; she started later and is still working on it.

TU Delft news item (in Dutch): Computer helpt dove kinderen met leren gebarentaal
The English press release: Computer helps deaf children learn sign language
A short version on the TU Delft EWI website

Search Google for Dutch news: here
Search Google for English news: here
Search Google for German news: here

Dutch news
TU Delta (good background story)
Onderwijs Nieuwsdienst
De Telegraaf
De Wereld Draait Door
Kennisnet / Speciaal onderwijs
Omroep West – Westonline (telephone interview)
Blik op het nieuws
Delft Nieuws
Gevoelloos (with a comparison to speech recognition)

English-language news
Science daily
eScience News

German / Deutsch
Pressetext (good story, email contact, revisions)
Innovations report

Me at the FG2008

I almost forgot, but I also presented some work at the FG2008 conference: ‘Acceptability Ratings by Humans and Automatic Gesture Recognition for Variations in Sign Productions’.

Abstract: In this study we compare human and machine acceptability judgments for extreme variations in sign productions. We gathered acceptability judgments of 26 signers and scores of three different Automatic Gesture Recognition (AGR) algorithms that could potentially be used for automatic acceptability judgments, in which case the correlation between human ratings and AGR scores may serve as an ‘acceptability performance’ measure. We found high human-human correlations, high AGR-AGR correlations, but low human-AGR correlations. Furthermore, in a comparison between acceptability and classification performance of the different AGR methods, classification performance was found to be an unreliable predictor of acceptability performance.

Snapshots of the three signs used in the experiment.

Examples of three manipulations of the sign SAW. We tested about 68 sign manipulations in total. These were run through the automatic recognition algorithms we had been working on, and they were rated by human signers. The paper is about how humans and machines can be compared.
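The ‘acceptability performance’ measure from the abstract boils down to correlating human acceptability ratings with recognizer scores over the same set of sign manipulations. Here is a minimal sketch of that idea in Python; the rating and score values are made up for illustration and are not the data from the paper:

```python
import math

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length lists.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data for five sign manipulations:
human_mean = [4.8, 4.1, 2.0, 1.2, 3.5]        # mean acceptability over signers
agr_score  = [0.91, 0.85, 0.80, 0.76, 0.83]   # recognizer score per manipulation

# 'Acceptability performance' of the recognizer = its correlation
# with the human ratings across manipulations.
print(round(pearson(human_mean, agr_score), 3))
```

The same function can be applied signer-to-signer (human-human) or recognizer-to-recognizer (AGR-AGR), which is how the high within-group and low between-group correlations in the abstract can be compared.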

User Experience

I have worked as an interaction designer and usability specialist for many years. At the moment, the fashionable thing to be designing for is ‘user experience’. Several colleagues here at TU Delft are trying to define user experience to help designers and researchers. Arnold Vermeeren organized a workshop on the definition of user experience at the last SIGCHI conference, and he gave me this link to some reading material: MAUSE (Towards the MAturation of Information Technology USability Evaluation). Paul Hekkert and Rick Schifferstein, also colleagues, wrote a book called Product Experience, which contains many interesting chapters but is a bit of an investment at $170.

Product Experience, the book

Arnold is also working on a nice graphic that contains the elements of User Experience. He is not the only one though, see this collection of images that try to capture user experience.

Emotion is the core
Utility/Usability, Personal Social Meaning, and Aesthetics are the first contributors
Finally, there is a timeline involved, starting with anticipation and ending with reflection

But other people have other opinions. Here is another nice graph, and here is a nice paper on the subject.

The link between user experience and gesture is that through gesture control one might expect an increased aesthetic appraisal. It might appeal to your senses because you like to move. Or because you like to move in ways that you are familiar with.

Een Mooi Gebaar 2007

The people with whom I am working in a project on Automatic Sign Language Recognition are organising a workshop. It is a national event, so the language is Dutch. We organised one workshop before, also called ‘Een Mooi Gebaar‘. The workshop is open to the public. All it takes is for you to register by sending an email to Anja van den Berg. Here is the program (pdf) and the full invitation (originally in Dutch):

Dear Sir or Madam, We would like to invite you to the second workshop ‘Een Mooi Gebaar’, organized by the Nederlandse Stichting voor het Dove en Slechthorende Kind (NSDSK), TU Delft, and the Koninklijke Auris Groep. This second workshop will present the results of the ELo project, which developed an Electronic Learning Environment for teaching sign vocabulary to young deaf and severely hearing-impaired children.

The project was carried out as a collaboration between NSDSK, TU Delft, and the Koninklijke Auris Groep, and was funded by the VSB fonds. It developed a multimedia learning environment to effectively help young deaf and severely hearing-impaired children learn active and passive sign vocabulary. The project ran for three years, and in this workshop we would like to present the results to, and discuss them with, the field (researchers, teachers, care professionals, etc., working in sign language education for deaf and hearing-impaired children).

In addition to speakers from the project, dr. Hans van Balkom (Viataal) has agreed to talk about another interactive learning environment for disabled children, and dr. Els van der Kooij (Radboud Universiteit Nijmegen) about her research into variation in sign production. The chair is prof. Don Bouwhuis of TU Eindhoven, department of Human-Technology Interaction. The workshop takes place on Friday afternoon, 2 November 2007, in lecture hall D of the EWI faculty of TU Delft, Mekelweg 4, 2626 CD Delft. The language is Dutch. An NGT (Sign Language of the Netherlands) interpreter can be arranged; please indicate this when registering if you would like to make use of one. The workshop program is attached. If you would like to accept our invitation, please send an e-mail to Ms. Anja van den Berg mentioning ‘deelname workshop Een Mooi Gebaar 2007’. Participation is free of charge. Coffee and sandwiches will be served before the workshop.

A report of the first workshop ‘Een Mooi Gebaar’ can be found here.

We hope to welcome you on Friday 2 November 2007. Kind regards, Dr. Emile Hendriks, TU Delft; Dr. Connie Fortgens, Koninklijke Auris Groep; Dr. Gerard Spaai, NSDSK.

Workshop on Visual Prosody in Language Communication

I will be at the MPI in Nijmegen on Thursday May 10 and Friday May 11. There is an international Workshop on Visual Prosody in Language Communication, organized by Alexandra Jesse and others (program). I am giving a talk called ‘When Does Sign Recognition Start?’ on Friday around 10:30.

Maybe I will see you there?

My patent on ‘electronic call assistants’

When I worked at KPN Telecom in 2001 we developed and patented an idea for a Personal Call Assistant: ELECTRONIC CALL ASSISTANTS WITH SHARED DATABASE (Wipo, Esp@ce)

Patent picture

A telephone exchange (1) arranged to communicate with communication units (2, 3, 4, 5, 7) and to provide a plurality of electronic call assistants to the communication units, a first electronic call assistant (12) being provided to a first communication unit (2) and a second electronic call assistant (14) to a second communication unit (4), the first electronic call assistant (12) having access to a distinct first database (36), the second electronic call assistant (14) having access to a distinct second database (38), wherein the first (12) and second (14) electronic call assistants share a common database (40).
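The structure of the claim, each assistant holding a distinct private database while all assistants share one common database, can be sketched in a few lines of Python. This is purely illustrative (the names and numbers are made up, and the patent describes telephone-exchange hardware, not code):

```python
class CallAssistant:
    """One electronic call assistant: a distinct private database
    plus a database shared by all assistants on the exchange."""

    def __init__(self, private_db, shared_db):
        self.private_db = private_db  # e.g. this user's personal contacts
        self.shared_db = shared_db    # e.g. a directory common to all users

    def lookup(self, name):
        # A private entry takes precedence over the shared directory.
        return self.private_db.get(name) or self.shared_db.get(name)

# The exchange provides one assistant per communication unit,
# all referring to the same shared database.
shared = {"helpdesk": "070-1234567"}
assistant_a = CallAssistant({"mom": "06-11111111"}, shared)
assistant_b = CallAssistant({"mom": "06-22222222"}, shared)

print(assistant_a.lookup("mom"))       # each assistant resolves its own entry
print(assistant_b.lookup("helpdesk"))  # both resolve the shared entry
```

The design point of the claim is exactly this split: personalization lives in the distinct databases, while data common to all users is stored once.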

Alex Vieira, who works at the EPO, pointed out to me that the patent had become available; for a long time, I think, it was just kept ‘pending’. But now it appears to be filed worldwide (WO0150720), in Europe (EP1247392), and in the Netherlands (NL1014528C).

Does anyone remember Wildfire? It seems to still be around here, and here, or listen to a demo here. It was the leading PCA at the time. They had filed original patents, to which we had to refer.

Unfortunately, the patent is not mine; it is owned by the company KPN Telecom. They did pay me a bonus for it: one silver US dollar. I thought it was a nice gesture.

Bio Info

You can read my story here or check my profile at LinkedIn

I graduated in 1998 on speech recognition in a multi-modal user interface for a Personal Digital Assistant (PDA), and continued to work at KPN on speech recognition, mainly for phone (voice) services. Back then it was a nice time to be at KPN Research, but its home, the dr. Neherlab in Leidschendam, is no longer in use as a research facility. KPN Research was sold by its parent company KPN Telecom and became TNO Telecom in Delft. The commercial activities on speech recognition then came to an almost complete stop, apart from some small projects. The research activities became the core of a new startup within TNO Telecom called Dutchear.

I graduated at KPN Research with Jans Aasman, Oscar Rietkerk and Angelien Sanderman. From TU Delft, Jans was my professor together with Klaas Robers, and Annelieke van der Sluijs was a tutor as well.

Later, I joined the short-lived Competence Center for Speech-driven Services (CCSD). When that Center was closed I joined Value Added Services at KPN Telecom. It was during the CCSD time that we developed and patented the idea for a Personal Call Assistant: ELECTRONIC CALL ASSISTANTS WITH SHARED DATABASE.

Workshop Visual Prosody

On May 10-11, Alexandra Jesse and Elizabeth Johnson from the Max Planck Institute for Psycholinguistics in Nijmegen are organizing a Workshop on Visual Prosody in Language Communication.

I am invited to participate with a talk and enter discussions with fellow researchers. The list of participants is quite nice and I am proud to be amongst them.

Talking is visual too? (MPI)

I am a little worried about the title though, in particular the phrase ‘Visual Prosody’. It seems to suggest that the main role of visual information in language is prosodic, which, at least for sign language and gestures, is not the case in my opinion. But the abstract does mention other aspects of visual information in language, so it should be all right if I add my perspective.

The deadline for abstract submission is February 23, and the programme will be made available after that, I guess. Update 2 April ’07: the program is out, with my talk ‘When Does Sign Recognition Start?’. Registration is required but free.

