Those who study them like to call them Pen Gestures. The best-known example is perhaps Graffiti on the Palm, which a helpful chap managed to reimplement in Flash using his own pattern recognition. Sharon Oviatt is the great champion of multimodal HCI with pen and speech interfaces. A tell-tale sign of the trouble this technique has to tackle can be found in the Palm UI itself: Graffiti is used to ‘type’, not to ‘command’. For commands, the UI provides tap-buttons.
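The pattern recognition behind a Graffiti-style recognizer can be sketched quite minimally: resample each stroke to a fixed number of points, normalize for position and scale, and pick the nearest stored template. This is a toy sketch of that general template-matching idea, not the actual Graffiti or Flash implementation; all function names and the template alphabet are my own illustration.

```python
import math

def resample(points, n=32):
    # Resample a stroke to n roughly evenly spaced points along its path.
    total = sum(math.dist(points[i], points[i + 1]) for i in range(len(points) - 1))
    if total == 0:
        return [points[0]] * n
    step = total / (n - 1)
    pts = list(points)
    new_pts = [pts[0]]
    d = 0.0
    i = 1
    while i < len(pts):
        seg = math.dist(pts[i - 1], pts[i])
        if d + seg >= step and seg > 0:
            t = (step - d) / seg
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            new_pts.append(q)
            pts.insert(i, q)  # continue measuring from the interpolated point
            d = 0.0
        else:
            d += seg
        i += 1
    while len(new_pts) < n:       # guard against floating-point shortfall
        new_pts.append(pts[-1])
    return new_pts[:n]

def normalize(points):
    # Translate the centroid to the origin and scale to a unit box.
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    pts = [(x - cx, y - cy) for x, y in points]
    w = max(p[0] for p in pts) - min(p[0] for p in pts)
    h = max(p[1] for p in pts) - min(p[1] for p in pts)
    s = max(w, h) or 1.0
    return [(x / s, y / s) for x, y in pts]

def recognize(stroke, templates):
    # Nearest template by mean point-to-point distance.
    probe = normalize(resample(stroke))
    best, best_d = None, float('inf')
    for name, tmpl in templates.items():
        t = normalize(resample(tmpl))
        d = sum(math.dist(a, b) for a, b in zip(probe, t)) / len(probe)
        if d < best_d:
            best, best_d = name, d
    return best
```

With a template set of one horizontal and one vertical stroke, a slightly tilted horizontal input still comes back as the horizontal gesture. Real recognizers add rotation invariance and far richer template sets, but the principle is this simple.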

Some Examples (source: InkGesture by Jumping Minds)

Pen gesturing may well be a form of semiotically challenged HCI. Graffiti is actually writing in camouflage. Tapping buttons equals mouse clicks equals practical actions. Symbolic pen strokes that convey commands or modulate them come closest to being a ‘movement intended to communicate’. But then to whom? If it’s to the computer, then what is the message? The message is that a certain command is to be executed. Nothing new. I see no a priori advantage for ‘pen gestures’ over clicking buttons or even a DOS prompt. All that matters is the same old set of pros and cons of UI means; of usability in a given task with given users.

With some leniency, any and all of the above actions can be seen as communicative acts towards the computer. In this sense Human-Computer Interaction is always a communicative dialogue. But I don’t like it like that. I like to think there is a difference between talking and gesturing to addressees (be they computers or not) and using them as tools. And I’ll make this promise: at the first sign that my computer is actually interested in what I have to say, I will tell it all my dreams and ambitions for its future.