A Nice Gesture by Jeroen Arendsen

Various personal interests and public info, gesture, signs, language, social robotics, healthcare, innovation, music, publications, etc.

Search results: "Wii"

Ninja Strike Airtime


Ninja Strike, a killer application for gesture recognition?

This is certainly an interesting development. We have previously seen mobile phones using motion and acceleration sensors for gesture control (see here and here). There have also been applications where the camera simply captured optical flow: something in front of the camera is moving or turning in direction A, therefore the phone must be moving or turning in direction A + 180 degrees (here). In this case the gesture recognition appears to go a step further, and at least the hand appears to be extracted from the image. Or does it simply assume all movement is the hand? And is the position of the motion then perhaps categorized into left-middle-right? Maybe the velocity is calculated as well, but I don’t think so.
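The optical-flow trick mentioned above (infer the phone's motion as the opposite of the scene's apparent motion) can be sketched with plain numpy block matching. This is only a toy illustration; the frame contents, frame size and search range are made up:

```python
import numpy as np

def dominant_shift(prev, curr, max_shift=3):
    """Estimate the dominant (dy, dx) shift of the scene between two
    grayscale frames by brute-force search over small translations."""
    best, best_err = (0, 0), np.inf
    h, w = prev.shape
    m = max_shift
    core = prev[m:h - m, m:w - m]
    for dy in range(-m, m + 1):
        for dx in range(-m, m + 1):
            cand = curr[m + dy:h - m + dy, m + dx:w - m + dx]
            err = np.mean((core - cand) ** 2)
            if err < best_err:
                best_err, best = err, (dy, dx)
    return best

# A scene that appears to move right implies the camera moved left:
frame1 = np.zeros((32, 32))
frame1[10:20, 10:20] = 1.0
frame2 = np.roll(frame1, 2, axis=1)   # scene shifted 2 px to the right

dy, dx = dominant_shift(frame1, frame2)
camera_motion = (-dy, -dx)            # "A + 180 degrees"
```

Distinguishing a moving hand from a moving phone would require more than this global estimate, which is exactly the question raised above.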

Update: I do like the setup of how people can hold their phone with the camera in one hand, throw with the other and check their virtual throw on the display. The virtual throwing hand on the display is more or less in the same position as your physical hand, which I think is nice.

EyeSight is a techno start-up, founded in 2004 in the Kingdom of Heaven (Tel Aviv), aspiring to use nothing but Air and a Camera to achieve a divine interaction between true techno-believers and their mobile phones. They prophesy that their technology will ‘offer users, including those who are less technologically-adept, a natural and intuitive way to input data, play games and use their mobile phone for new applications’. Heaven on Earth. Mind you, nothing is carved in stone these days. Besides, human nature and intuition are all too often deified these days anyway. Human nature is what usually gets us into trouble (not least in the Middle East).

Anyway, one of their angels called Amnon came to me in the night bearing the following message:

Hello Jeroen,
First Allow me to introduce myself. I’m Amnon Shenfeld, RND projects manager for eyeSight Mobile Technologies.
I’ve been following (and enjoying) your BLOG reports for a while, and I thought that the following news from my company, eyeSight Mobile Technologies, may make for an interesting post.
eyeSight has just launched “Ninja Strike”, an innovative mobile game featuring a unique touch-free user interface technology we call eyePlay™. Allow me to provide some background information about eyeSight, eyePlay and Ninja Strike: I’m sure you are aware of the popularity and attention innovative user interfaces are getting since the introduction of Apple’s iPhone and Nintendo’s Wii… My company’s vision is to bring this technology into the mobile market, and our first products are focused on changing the way mobile gamers play. Our new game, “Ninja Strike”, does exactly this.
You play a ninja warrior with Ninja Stars as your primary weapon. Your stars are thrown by making a throwing motion in front of the phone’s camera. Much like training in real life, during the game you will learn how to throw your weapon correctly, and improve your aim. Your enemies, the evil Kurai ninjas, will also gain strength as the game advances…
Looking forward to hear from you, I hope to see a new post in your blog soon, you’ve been quiet for a while… J
Amnon Shenfeld

Amnon, will you heed my calls? Have you answers to my burning questions above?

eyeSight
game demo

ZCam 3D Gesture Recognition

The ZCam, from 3DV Systems, features in several recent video demos on YouTube. It is a camera with onboard technology (it emits infrared pulses and catches the reflections) that provides not only RGB values for a matrix of pixels but also a Z-value, i.e. the depth of that pixel. It is reported to be quite accurate. It is also reported to be getting cheaper, and therefore more widely available to, for example, game developers.
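If a camera delivers a per-pixel depth value, extracting a hand no longer requires clever image analysis: the hand is typically simply the nearest object. A toy numpy sketch, with the depth map, distances and tolerance entirely made up:

```python
import numpy as np

def nearest_object_mask(depth, tolerance=0.1):
    """Segment the object closest to the camera from a depth map:
    everything within `tolerance` metres of the minimum depth."""
    nearest = depth.min()
    return depth <= nearest + tolerance

# Toy depth map: background wall at 2.0 m, a "hand" at 0.5 m.
depth = np.full((8, 8), 2.0)
depth[2:5, 3:6] = 0.5
mask = nearest_object_mask(depth)   # True only on the 3x3 "hand"
```

A real system would of course need to cope with noise and with multiple foreground objects, but the contrast with color-based segmentation is the point here.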

ZCam
The ZCam, small but deep?

So far, there are demos shown of a squash game, a boxing game, and a flight simulator. But the website from the company has a more extensive video gallery.

Here is a nice review on Gizmodo, where they always keep an eye out for interesting gadgets like this. Crowd response has been positive so far but it remains to be seen whether this technology will be adopted in the market. It is at the moment a bit hard to say how well the technology works, what sort of drawbacks there are, etc.

But if it does work well, then I think it has fewer disadvantages than stereo cameras. I have some experience with a stereo camera (two synchronized cameras with a software algorithm that integrates their output to obtain depth). Such a setup is expensive, costs processing power, takes time to configure, is unreliable (it should not be moved or even touched after configuration), and it relies on skin-color segmentation to track faces and hands.

But what about the ZCam: what are users required to do, or not to do? That is the question.

Medal of Honor Heroes 2 – Gesture Trailer

There is a new release in the game series Medal of Honor (see also Medal of Honor Vanguard), which again bets heavily on the Wii gesture control.

Trailer: Experience WWII with an all new Wii-mote control scheme!

GameTrailers commenter: For once a Wii control scheme that actually looks intuitive and intricate as opposed to gimmicky. Ill be picking this up tomorrow

Gestures are Sterile, Touching is not

At the renowned Fraunhofer institute they may have built a killer gesture app: Gesture control that lets surgeons control a 3D-display of a head (for example) during surgery while remaining sterile (touching buttons would break sterility, I guess).


Rotate the 3D image by gesturing (source)

Press Release: Non-contact image control

As if by magic, the three-dimensional CAT scan image rotates before the physician’s eyes – merely by pointing a finger. This form of non-contact control is ideal in an operating room, where it can deliver useful information without compromising the sterile work environment.

The physician leans back in a chair and studies the three-dimensional image floating before his eyes. After a little reflection, he raises a finger and points at a virtual button, likewise floating in the air. At the physician’s command, the CAT scan image rotates from right to left or up and down – precisely following the movement of his finger. In this way, he can easily detect any irregularities in the tissue structure. With another gesture, he can click on to the next image. Later, in the operating room, the surgeon can continue to refer to the scanner images. Using gesture control to rotate the images, he can look at the scan of the patient’s organs from the same perspective as he sees them on the operating table. There is no risk of contaminating his sterile gloves, because there is no mouse or keyboard involved.

But how does the system know which way the finger is pointing? “There are two cameras installed above the display that projects the three-dimensional image,” explains Wolfgang Schlaak, who heads the department at the Fraunhofer Institute for Telecommunications, Heinrich-Hertz-Institut HHI in Berlin that developed the display. “Since each camera sees the pointing finger from a different angle, image processing software can then identify its exact position in space.” The cameras record one hundred frames per minute. A third camera, integrated in the frame of the display, scans the user’s face and eyes at the same frequency. The associated software immediately identifies the inclination of the person’s head and the direction in which the eyes are focused, and generates the appropriate pair of stereoscopic images, one for the left eye and one for the right. If the person moves their head a couple of inches to the side, the system instantly adapts the images. “In this way, the user always sees a high-quality three-dimensional image on the display, even while moving about. This is essential in an operating theater, and allows the physician to act naturally when carrying out routine tasks,” says Schlaak. “The unique feature of this system is that it combines a 3-D display screen with a non-contact user interface.” The three-dimensional display costs significantly less than conventional 3-D screens of comparable quality. Schlaak is convinced that “this makes our gesture-controlled 3-D display an affordable option even for smaller medical practices.” The research team will be presenting its prototype at the MEDICA trade fair from November 14 to 17, 2007, in Düsseldorf (Hall 16, Stand D55). Schlaak hopes to be able to commercialize the system within a year or so.
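The two-camera arrangement in the press release boils down to classic stereo triangulation: each camera sees the fingertip from a different angle, and the difference between the two image positions fixes its depth. A minimal sketch under the simplest possible assumption (pinhole cameras with parallel optical axes, so depth follows from horizontal disparity; all numbers are hypothetical):

```python
def triangulate_depth(x_left, x_right, focal_px, baseline_m):
    """Recover depth from two horizontally displaced cameras
    (pinhole model, parallel optical axes): Z = f * B / disparity."""
    disparity = x_left - x_right   # pixel offset between the two views
    if disparity <= 0:
        raise ValueError("point must have positive disparity")
    return focal_px * baseline_m / disparity

# Hypothetical numbers: 800 px focal length, cameras 20 cm apart,
# fingertip imaged 320 px apart in the two views.
z = triangulate_depth(500, 180, focal_px=800, baseline_m=0.2)
# z is the fingertip's distance from the cameras in metres
```

The Fraunhofer system presumably uses calibrated cameras at an angle rather than this idealized geometry, but the principle of locating the finger from two views is the same.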

Things like this are probably the best bet for the near future of gesture recognition. Niche applications that exploit some specific benefit of using gestures instead of (or besides) other, more mundane interface technology. The biggest hit in gesture land is without a doubt the Nintendo Wii, which exploits another unique selling point of gestures: a higher (or more representative) physical involvement leading to a better ‘experience’ of a game. It specifically targets gamers who are interested in fun and exercise in a social context. I doubt that hardcore gamers, intent on getting to higher levels of killing sprees, will be very keen on the Wii.

And so it will probably remain for the near future. Like with speech recognition, gesture recognition will have to find some nice niches to live in and multiply. Maybe one day, the general conditions will change (ubiquitous camera viewpoints? intention-aware machines?) and gesture can become the dominant form of HCI, driving buttons to niche applications. I wouldn’t bet on it right now, though.

Is Cube-Flopping Gesturing?

Fellow PhD student at TU Delft Miguel Bruns-Alonso created a nice video of his Music Cube (his graduation project; see the paper). Then Jasper van Kuijk (another colleague) blogged it for usability. And here I come wandering in, wondering whether moving this Cube in certain ways to control music playback can or should be considered gesturing.

 

Perhaps this is a highly irrelevant question. I am pretty sure Miguel could barely care less. But that’s me, always worrying about silly gesture stuff.

In a way the question is similar to a previously unanswered one: Is Sketching Gesturing?

As with sketching, it is not the movement itself that matters; rather, it is the effect that the movement causes. The case of “shuffling” may be an exception, because the “shaking” movement is registered fairly directly. Other commands are given by changing which side of the Cube faces up (playlists), by pressing buttons (next, turn off), or by turning the speaker-that-is-not-a-speaker (volume). These are fairly traditional ‘controlling’ movements, comparable to adjusting the volume or radio frequency with a turn-knob (as on old radios).

I will leave aside the question of whether such tangibility constitutes a more valuable or enjoyable interaction with our machines. Some believe that it does, and who am I to disagree? Like it or not, take it or leave it; you choose for yourself.

What concerns me is whether such developments and other gesture recognition developments share certain characteristics. If so, then exchanging ideas between the areas may be a good idea. One of my bits of work is on discriminating fidgeting and gestures.

The question arises whether the Music Cube will allow people to pick it up and fidget with it without immediately launching commands. Can I just handle it without ‘touching the controls’? As with other gesture recognition applications, I want this Cube to tolerate my fidgeting. In that case, rules of human behaviour regarding the difference between behaviour that is intended to communicate (or control) and behaviour that is just fidgeting would be useful. And why don’t we carry the thought experiment of the Music Cube further? If it has motion sensing, it should be able to do the sort of things the Nintendo Wii can do. Why not trace gestures in the air to conjure up commands of all sorts? How about bowling with the Cube? Or better yet, playing a game of dice?
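One crude way a device like the Music Cube could tolerate fidgeting is to treat motion as a command only when it is sustained, so that isolated bumps are ignored. A minimal sketch; the accelerometer magnitudes, threshold and window length are all made up:

```python
def is_deliberate_shake(samples, threshold=1.5, min_run=5):
    """Crude fidget filter: classify motion as a command only if the
    acceleration magnitude stays above `threshold` for at least
    `min_run` consecutive samples; brief bumps do not qualify."""
    run = 0
    for a in samples:
        run = run + 1 if a > threshold else 0
        if run >= min_run:
            return True
    return False

fidget = [0.2, 1.8, 0.3, 1.7, 0.1, 0.4]        # isolated bumps: ignored
shake  = [0.2, 1.8, 1.9, 2.1, 1.7, 1.6, 0.3]   # sustained burst: a command
```

Real intention detection would need far richer cues than a single magnitude threshold, which is precisely why rules about communicative versus fidgeting behaviour would be useful.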

Gesture and Speech Recognition RSI

Gesture and speech recognition often promise the world a better, more natural way of interacting with computers. Speech recognition, for instance, is often sold as a solution for RSI-stricken computer users. And prof. Duane Varana, of the Australasian CRC for Interaction Design (ACID), believes his “gesture recognition device [unlike a mouse] will accommodate natural gestures by the user without risking RSI or strain injury”.

Gesturing: A more natural interaction style? (source)

So it is a fairly tragic side effect of these technologies that they create new risks of physical injury. Using speech recognition may give you voice strain, which some describe as a serious medical condition affecting the vocal cords, caused by improper or excessive use of the voice. Software coders who suffer from RSI and switch to speech recognition to code are mentioned as a risk group for voice strain.

Using gesture recognition, or specifically the Nintendo Wii, may cause aching backs, sore shoulders and even a ‘Wii elbow’. It comes from prolonged playing of virtual tennis or bowling, when gamers apparently use neglected muscles for extensive periods of time… In comparison, gamers have previously been known to develop a ‘Nintendo thumb’ from thumbing a controller’s buttons. I can only say: the Wii concept is working out. It is a workout for users, and it works out commercially as well. I even saw an ad on Dutch national TV just the other day. The Wii is going mainstream.

As far as injuries are concerned: if you bowl or play tennis in real life for 8 hours in a row, do you think you will stay free of injury? Just warm up, play sensibly and not the whole night. Nonsense advice for gamers, I know, but do not complain afterward. A collection of Wii injuries (some real, some imaginary):

www.wiihaveaproblem.com, devoted to Wii trouble.
What injuries do Wii risk?
Bloated Black Eye
Broken TVs, and a hand cut on broken lamp (YouTube, possibly faked).

For more background see also: The Boomer effect: accommodating both aging-related disabilities and computer-related injuries.

Gesture Control, Just a Gadget?

A while ago (April ’05) BBC News already ran a piece on project Audioclouds by researchers at the University of Glasgow. The aim is to control gadgets using movement and sound. Motion is sensed using accelerometers, not via cameras. The coverage by the BBC led to a discussion on Engadget about the desirability of such gesture-controlled gadgets. Most of the obvious pros and cons are given, as well as some more careful arguments. Since then the project has moved on, contributing mostly to HCI conferences.

How would you feel about dialing a number by tracing it in the air? (PC World)

Gesture-controlled or motion-controlled gadgets: DJammer, iPod, Nintendo Wii, Vodaphone’s Sharp V603SH handset, Pantech’s PH-S6500, LG Electronics’ SV360, Samsung Electronics’ SCH-S310, Antar.

Exergames blogs about physical gaming

Exergames is a nice-looking blog on exercise and physical gaming. They do not update much, though. I guess they are too busy playing with their new Nintendo Wiis. Or perhaps they were seriously injured by the intense physical interaction, preventing them from typing? Wanna work out with a game? (source) By the way: Wouter has a nice post on another quirk of the Nintendo Wii: the controllers of enthusiastic gamers are flying out of their sweaty hands and breaking their expensive TVs or windows. Every rose has its thorn.

Teaching an iPod New Tricks?

Instead of you moving to the beat of the music, some Apple genius also patented an idea to have the iPod change the beat to the pace of your activities. And they filed a couple more patents for good measure. With motion sensing and an accelerometer it could even respond to… well, you name the movement. With a little imagination you could have it do the same as a Nintendo Wii remote. Why on earth you would want to is beyond me, though.

Will your iPod bring your slippers one day? Or just waggle if it likes the tune?

It won’t be long before my iPod is my intelligent trainer instead of my humble servant:
Me: Playing tunes on the iPod while jogging
iPod: Senses that I am at a faster pace than the beat and turns it up
Me: Hearing the beat go faster I get excited and start a run
iPod: Also quickens, and then we run in perfect synchrony for a while
Me: Getting tired, slowing down
iPod: Tries to motivate me by keeping the pace high for just a bit longer and shouting “half a minute, Jeroen!”
Me: I go for it and then slow to a crawl
iPod: Shifts the tune to a slower one and we are in harmony for a while
Me: Heartbeat is getting good again
iPod: Notices the heartbeat and starts secretly pumping up the pace again
Me: I follow the iPod’s lead and put in an extra mile
iPod: Tells me “good job! now hit the shower”
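The imagined training session above is essentially a feedback loop: the runner's step cadence is the target, and the playback tempo gets nudged toward it on every update. A minimal sketch of that idea; the function name, smoothing rate and all the numbers are hypothetical:

```python
def next_tempo(current_bpm, cadence_spm, rate=0.3):
    """Nudge the playback tempo a fraction `rate` of the way toward
    the runner's step cadence (steps per minute) on each update."""
    return current_bpm + rate * (cadence_spm - current_bpm)

bpm = 120.0
for cadence in [140, 150, 150, 150]:   # the runner speeds up, the iPod follows
    bpm = next_tempo(bpm, cadence)
# after four updates the tempo has converged most of the way toward 150
```

Keeping the target slightly above the measured cadence for a short while would give the motivating “half a minute, Jeroen!” push from the scenario.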
