Highlights of the Sixense TrueMotion presentation at NVISION08. See the full length videos for more information.
Hmm, it looks quite good, but is it essentially different from the Nintendo Wii? However fine-grained the input or robust the sensor mechanisms, there will always remain a matching process between the gestures (the physical actions) and your virtual actions in the game. And that is something you need to learn for every game. In fact, this learning process is a large part of the gaming experience, in my opinion. So, I am not sure that this is actually better than the Wii. But if they can actually capitalise on their ‘far more accurate gesture-control system’ and create a good gaming and learning experience with it (improving your ‘golf gesture’ over time, for example), then I believe it will succeed.
Jeff Bellinghausen of Sixense shows a magnet-based gesture control system. It works with a personal computer and offers far more accurate gesture control in games than the Nintendo Wii.
Best Of Show Award & Best UI design at CEATEC 2008.
New remote controller concept from Panasonic R&D (San Jose Lab) featuring a dual click-pad, hand detection and on-screen user interface.
UI snapshots and award ceremony at CEATEC 2008.
This is again, like the Hitachi TV (here), a very good example of good gesture recognition combined with excellent interaction design and a good Graphical User Interface (GUI). The three elements need to be combined to get the right kind of gestural interaction, it would seem. On the iPhone it works that way as well: good touch gesture recognition, good interaction design (the way the gestures translate to computer actions) and a good GUI (which invites or ‘affords’ the right sort of gestures).
This looks like it is actually heading in the right direction. The gestures appear well implemented, as could be expected from the boys of GestureTek. And the use of the Canesta Vision chips (more here) appears to be very effective as well. There is a decent review of this Hitachi TV over here at Take a Plunge…
The TV uses single-chip 3-D sensors provided by Canesta and software created by GestureTek.
Canesta’s sensors collect a 3-D image of everything in the room. This depth information lets the TV distinguish your actual hand from a printed hand on your T-shirt, or from any other object in the room. It recognizes different people, and it recognizes your hand when you stick it out to control the TV.
The gestures are simple and culturally sensitive. GestureTek’s software makes it easy for users to control the TV with their movements. Alternative methods of controlling the TV are also available.
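As a toy illustration (not Canesta’s or GestureTek’s actual code), here is why depth data makes the printed-hand problem easy: a hand printed on your shirt sits at the same distance as your torso, while a hand stuck out toward the TV is markedly closer to the sensor. The depth values and threshold below are invented for the example.

```python
# Toy sketch: pick out "hand" pixels as anything markedly closer
# to the sensor than the body plane. A printed hand on a shirt
# shares the torso's depth and is therefore ignored.

def find_foreground_hand(depth_map, body_depth, margin=0.5):
    """Return (row, col) coordinates of pixels that are at least
    `margin` metres closer to the sensor than the body plane."""
    return [
        (r, c)
        for r, row in enumerate(depth_map)
        for c, d in enumerate(row)
        if d < body_depth - margin
    ]

# Depths in metres: torso (and any printed hand on it) at ~2.0 m,
# the real outstretched hand at ~1.2 m.
depth = [
    [2.0, 2.0, 2.0, 2.0],
    [2.0, 1.2, 1.2, 2.0],   # real hand pixels
    [2.0, 2.0, 2.0, 2.0],
]
print(find_foreground_hand(depth, body_depth=2.0))  # → [(1, 1), (1, 2)]
```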
A user of the new Hitachi TV set can get the control bar with just a wave of the hand.
Spin the wrist – activate scroll wheel
Swipe left or right – browse options
Two hands – switch to a different function
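In software terms, a mapping like the one above boils down to a small dispatch table. The gesture labels and command names below are my own shorthand, not Hitachi’s or GestureTek’s:

```python
# Hypothetical gesture-to-command mapping, mirroring the list above.
COMMANDS = {
    "wave":        "show_control_bar",
    "wrist_spin":  "activate_scroll_wheel",
    "swipe_left":  "browse_previous",
    "swipe_right": "browse_next",
    "two_hands":   "switch_function",
}

def dispatch(gesture):
    """Map a recognized gesture to a TV command; unknown input is ignored."""
    return COMMANDS.get(gesture, "ignore")

print(dispatch("wrist_spin"))  # → activate_scroll_wheel
print(dispatch("jump"))        # → ignore
```

The point of the table is that recognition and interaction design stay decoupled: the recognizer only has to emit one of a handful of labels, and the mapping can be tuned without touching the vision code.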
As you can see in this next video, they created a wonderful GUI, an interface to go with the gestures. You are not left alone gesturing in thin air, no, you get good feedback on the screen about your gestures. This greatly resembles the old Playstation EyeToy (see here), also made by GestureTek.
A computer-vision-based hand gesture recognition system that replaces the mouse with simple hand movements, developed at the School of Computing, Dublin City University, Ireland.
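For flavour, here is a minimal sketch of the core mapping such a ‘hand mouse’ needs once a hand has been detected: take the centroid of the hand pixels in the camera frame and scale it to screen coordinates. The detection step is stubbed out with made-up pixels; none of this is the DCU team’s actual code.

```python
# Toy "hand mouse": camera-frame hand pixels -> screen cursor position.

def centroid(pixels):
    """Average position of the detected hand pixels."""
    xs = [x for x, y in pixels]
    ys = [y for x, y in pixels]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def to_screen(cam_xy, cam_size, screen_size):
    """Scale a camera-frame coordinate to screen coordinates."""
    cx, cy = cam_xy
    cw, ch = cam_size
    sw, sh = screen_size
    return (round(cx / cw * sw), round(cy / ch * sh))

hand_pixels = [(320, 240), (322, 242), (318, 238)]  # fake detection output
cursor = to_screen(centroid(hand_pixels), (640, 480), (1920, 1080))
print(cursor)  # → (960, 540)
```

Even this trivial version hints at the usability problem: without smoothing and a clutch mechanism, every jitter of the hand becomes cursor jitter.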
Sometimes the future of gesture recognition can become clearer by examining an application that will definitely NOT hit the market running. Why on earth would anyone prefer to wave their hands in the air and click on empty space with their index finger instead of feeling a solid mouse under their hand? I just don’t get it. If it’s supposed to be a technology showcase, then okay, they managed to get something up and running, bravo!
I think that, generally speaking, people are enthusiastic about human-computer interaction if it feels good, because it’s usable (effective, efficient, economic), pleasing to the senses, or in some other way beneficial to their concerns. I imagine that this virtual ‘mousing’ is none of the above. Maybe if they changed it to a pistol gesture, where you shoot with your thumb, it would get slightly better. But I would have to be able to launch a quick barrage of shots, say 4 or 5 per second, for this to be of any use in a first-person shooter game. There’s a nice challenge for you, guys 🙂
I noticed a flurry of gesture patents that mentioned a ‘portable multifunction device’. That’s patent-speak for iPhone. The patents were all from Apple Inc. Well done, Apple. That’s how you manage a patent portfolio. Philips and IBM used to be the masters in this line of completely covering an area with a barrage of patents. It will give Apple something to negotiate with in future business deals with other vendors.
Who will be able to argue with this patent portfolio? Who will be able to claim that the things Apple has patented were already invented elsewhere? Who will be able to maintain that gestures are not technical inventions but natural human communicative actions? Who will pay the lawyers to fight these fights?
Here it all is in a fashion that is easier to digest than sifting through 22 patents.
I think Apple has won this fight before it could even get started.
Control a Beamed Powerpoint Presentation with Gestures
These students appear to have created a gesture-based application that we also considered about four years ago. I know IBM and Philips were interested in this sort of application. So, well done guys! And an excellent presentation too. I think they managed to make the best of it, given a difficult application.
Why is a presentation system a difficult application? Well, if someone is presenting, he will usually gesture while talking. These gestures are directed at the audience and not at the presentation software. So, the first task of such a system is to discriminate between those gestures: what is for me and what is for the audience. Furthermore, a presenter may also fidget during his talk, which shouldn’t be interpreted as a gesture. Unfortunately, it is unclear whether these students addressed these issues.
The things they did do seem to be designed well enough. I think I like the calibration they designed: it creates a connection between the user’s physical environment and the camera he must address. It grounds the interaction. The subsequent examples of the functionality they have built in are less impressive. The forward-back commands are okay, but the drawing and highlighting are not very valuable in my opinion. People in the audience can see that you are pointing at something, so there is perhaps little need to do more. But maybe these are first steps which need a bit more maturity in their interaction design to become useful.
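One conceivable way to separate gestures meant for the software from gestures meant for the audience is to require an explicit attention pose first, and only accept a command within a short window after it. The pose name and timeout below are my own assumptions, not anything from the students’ system:

```python
# Sketch of a "gesture gate": commands are only accepted shortly
# after an explicit attention pose; everything else is treated as
# ordinary talk-directed gesturing.

ATTENTION_WINDOW = 2.0  # seconds; an invented value

class GestureGate:
    def __init__(self):
        self.attention_at = None

    def observe(self, gesture, t):
        """Return the gesture if it counts as a command, else None."""
        if gesture == "attention_pose":
            self.attention_at = t
            return None
        if self.attention_at is not None and t - self.attention_at <= ATTENTION_WINDOW:
            self.attention_at = None
            return gesture          # accepted as a command
        return None                 # ignored: audience-directed or fidgeting

gate = GestureGate()
print(gate.observe("swipe_right", 0.0))   # → None (no attention pose yet)
gate.observe("attention_pose", 1.0)
print(gate.observe("swipe_right", 2.0))   # → swipe_right
print(gate.observe("swipe_right", 10.0))  # → None (window expired)
```

This also handles fidgeting for free: stray motions outside the window simply fall through.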
Ninja Strike, a killer application for gesture recognition?
This is certainly an interesting development. Previously we have seen mobile phones using motion and acceleration sensors for gesture control (see here and here). There have also been applications where the camera was used to simply capture optical flow: something in front of the camera is moving/turning in direction A therefore the phone is moving/turning in A + 180 degrees (here). In this case the gesture recognition appears to go a step further and at least the hand appears to be extracted from the image. Or does it simply assume all movement is the hand? And then perhaps the position of the motion is categorized into left-middle-right? Maybe the velocity is calculated but I don’t think so.
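The optical-flow trick described above can be sketched in a few lines: average the motion vectors the camera sees and invert them (A + 180 degrees) to estimate the phone’s own motion. The flow vectors here are made up, and real systems compute them from frame differences:

```python
# Invert the average observed scene motion to estimate device motion:
# if the scene drifts left across the frame, the phone moved right.

def phone_motion(flow_vectors):
    """Estimate (dx, dy) of the phone from per-pixel flow vectors."""
    n = len(flow_vectors)
    avg_dx = sum(dx for dx, dy in flow_vectors) / n
    avg_dy = sum(dy for dx, dy in flow_vectors) / n
    return (-avg_dx, -avg_dy)

# Fake flow: scene content drifting left and slightly down.
flow = [(-4.0, 1.0), (-6.0, 1.0), (-5.0, 1.0)]
print(phone_motion(flow))  # → (5.0, -1.0)
```

Note what this can and cannot do: it yields a single global motion estimate, which is exactly why the eyeSight approach, if it really segments the hand from the image, would be a step further than plain optical flow.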
Update: I do like the setup of how people can hold their phone with the camera in one hand, throw with the other and check their virtual throw on the display. The virtual throwing hand on the display is more or less in the same position as your physical hand, which I think is nice.
EyeSight is a 2004 techno start-up from the Kingdom of Heaven (Tel Aviv), aspiring to use nothing but Air and a Camera to achieve a divine interaction between true techno-believers and their mobile phones. They prophesy that their technology will ‘offer users, including those who are less technologically-adept, a natural and intuitive way to input data, play games and use their mobile phone for new applications’. Heaven on Earth. Mind you, nothing is carved in stone these days. Besides, human nature and intuition are all too often deified these days anyway. Human nature is what usually gets us into trouble (not least in the Middle East).
Anyway, one of their angels called Amnon came to me in the night bearing the following message:
First, allow me to introduce myself. I’m Amnon Shenfeld, R&D projects manager for eyeSight Mobile Technologies.
I’ve been following (and enjoying) your BLOG reports for a while, and I thought that the following news from my company, eyeSight Mobile Technologies, may make for an interesting post.
eyeSight has just launched “Ninja Strike”, an innovative mobile game featuring a unique touch-free user interface technology we call eyePlay™. Allow me to provide some background information about eyeSight, eyePlay and Ninja Strike: I’m sure you are aware of the popularity and attention innovative user interfaces have been getting since the introduction of Apple’s iPhone and Nintendo’s Wii… My company’s vision is to bring this technology into the mobile market, and our first products are focused on changing the way mobile gamers play. Our new game, “Ninja Strike”, does exactly this.
You play a ninja warrior with Ninja Stars as your primary weapon. Your stars are thrown by making a throwing motion in front of the phone’s camera. Much like training in real life, during the game you will learn how to throw your weapon correctly, and improve your aim. Your enemies, the evil Kurai ninjas, will also gain strength as the game advances…
Looking forward to hearing from you. I hope to see a new post in your blog soon, you’ve been quiet for a while… 🙂
Amnon, will you heed my calls? Have you answers to my burning questions above?
Here is another aspiring wannabe HCI star in the gesture firmament: the gesture watch.
Activate! The Gesture Watch has five infrared sensors, four of which sense any hand motion that occurs above the watch. If the user is wearing the watch on his left hand, he can move his right hand over the watch in an up or down, left or right, or circular motion. Different combinations of these movements communicate an action to the watch. (source)
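One plausible way to decode “combinations of these movements” is a lookup from sensor-activation sequences to gestures. The sensor labels and gesture vocabulary below are my own guesses, not Georgia Tech’s published scheme:

```python
# Hypothetical decoder: an ordered sequence of directional sensor
# activations maps to a named gesture; anything else is rejected.

GESTURES = {
    ("up",):                         "volume_up",
    ("down",):                       "volume_down",
    ("left", "right"):               "next_track",
    ("up", "right", "down", "left"): "circular_activate",
}

def decode(sensor_sequence):
    """Map a sequence of sensor hits to a gesture, or 'unknown'."""
    return GESTURES.get(tuple(sensor_sequence), "unknown")

print(decode(["up"]))                           # → volume_up
print(decode(["up", "right", "down", "left"]))  # → circular_activate
print(decode(["down", "down", "up"]))           # → unknown
```

The hard part, of course, is not this table but rejecting the endless incidental hand motion above a watch, which is precisely the unsolved problem the next paragraph complains about.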
Why do such applications receive so much credit in the various tech news sites and magazines? The only thing happening is that a couple of engineers have put together a neat device that can do a trick. It’s not commercially available, there are no real users yet, there is no positive market feedback. There is only a vague promise of solving a vague problem.
Discovery Channel: It won’t be long now before all electronic devices go “nano,” and shrink to the size of a frosted mini-wheat square. You won’t know whether to turn it on or eat it. But the real question is: How do you press those teeny buttons?
I know that writing an opening line can be hard, but this one has fallen straight from the sky on the willing imagination of Tracy Staedter (the reporter in question). Did she not notice the big display on the iPhone? People may not want tiny devices at all, because they need displays. And yes, they may also require decent buttons from their devices. In other words, the premises of the promises are promiscuous (sorry, couldn’t resist); reporters are trading in their objective reflection for a nice soundbite.
Just the thing we needed, really. I am going to throw my remote away as soon as I can get this little gem of technology: something that solves the giant problems we are having with TV remote controls (and replaces them with a whole new set of problems).
I think about twenty problem scenarios popped up simultaneously in my head fighting for priority. But I am just too lazy to type them all in. Instead I will just shrug this one off and save myself the calories.