Milford School pupils were inspired by ‘Strictly Come Dancing’ to design costumes for Femisapiens and then program dance routines for them using Go-Robo. Facilities supplied by eLC South Nottingham.
Will this be the future of girlie robots? Femisapien is definitely a cute robot from WowWee with its endearing kisses (here). And with a little software and some creativity you can use Femisapien as your Barbie dress-up doll 🙂
This video from the 2009 International Consumer Electronics Show shows how you can control your TV using hand gestures detected by the 3-D camera atop the TV. SoftKinetic supplies the software, a Swiss company supplies the depth camera, and Orange Vallée will deploy it in its interactive TV network.
Highlights of the Sixense TrueMotion presentation at NVISION08. See the full length videos for more information.
Hmm, it looks quite good, but is it essentially different from the Nintendo Wii? However fine-grained the input or robust the sensor mechanisms, there will always remain a matching process between the gestures (the physical actions) and your virtual actions in the game. And that is something you need to learn for every game. In fact, this learning process is a large part of the gaming experience, in my opinion. So, I am not sure that this is actually better than the Wii. But if they can actually capitalise on their ‘far more accurate gesture-control system’ and create a good gaming and learning experience with it (improving your ‘golf gesture’ over the course of time, for example), then I believe it will succeed.
Jeff Bellinghausen of Sixense shows a magnet-based gesture control system. It works with a personal computer and gives you far more accurate gesture control in a game compared to the Nintendo Wii.
Canesta showed off a demo at the International Consumer Electronics show in Hitachi’s booth. Basically, the 3-D depth camera can detect your movements. Hitachi used this system to create a gesture-controlled TV. You don’t need a remote control.
Best Of Show Award & Best UI design at CEATEC 2008.
New remote controller concept from Panasonic R&D (San Jose Lab) featuring a dual click-pad, hand detection and on-screen user interface.
UI snapshots and award ceremony at CEATEC 2008.
This is again, like the Hitachi TV (here), a very good example of good gesture recognition combined with excellent interaction design and a good Graphical User Interface (GUI). The three elements need to be combined to get the right kind of gestural interaction, it would seem. On the iPhone it works that way as well: good touch gesture recognition, good interaction design (the way the gestures translate to computer actions) and a good GUI (which invites or ‘affords’ the right sort of gestures).
This looks like it is actually heading in the right direction. The gestures appear well implemented, as could be expected from the boys of GestureTek. And the use of the Canesta Vision chips (more here) appears to be very effective as well. There is a decent review of this Hitachi TV over here at Take a Plunge…
The TV uses single-chip 3-D sensors provided by Canesta and software created by GestureTek.
Canesta’s sensors in the TV collect a 3-D image of everything in the room. This 3-D technology helps it distinguish your real hand from a hand printed on your T-shirt, or from any other object in the room. It recognizes different people and detects when you stick out your hand to control the TV.
The gestures are simple and culturally sensitive. GestureTek’s software maps the users’ movements to TV controls. You will also have alternate methods to control the TV.
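To see why the depth information matters here, consider a toy sketch (purely illustrative; the function and numbers below are my own invention and have nothing to do with Canesta’s actual chips or GestureTek’s software): a printed hand lies in the same depth plane as the torso, while a real outstretched hand is measurably closer to the camera.

```python
import numpy as np

# Toy illustration of why a depth camera can tell a real, outstretched
# hand from a hand printed on a T-shirt: the print sits in the same
# depth plane as the torso, while the real hand sticks out towards
# the camera. All numbers are made up for the example.

def find_outstretched_hand(depth_map: np.ndarray, body_depth: float,
                           margin: float = 0.3) -> np.ndarray:
    """Return a boolean mask of pixels at least `margin` metres
    closer to the camera than the body plane."""
    return depth_map < (body_depth - margin)

# A 1-D "scanline": torso at 2.0 m, printed hand also at 2.0 m,
# real hand stuck out at 1.5 m.
scanline = np.array([2.0, 2.0, 2.0, 1.5, 1.5, 2.0])
mask = find_outstretched_hand(scanline, body_depth=2.0)
print(mask)  # only the 1.5 m pixels are flagged as a candidate hand
```

An ordinary 2-D camera sees both hands as the same pattern of pixels; the extra depth axis is what makes the distinction trivial.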
A user of the new Hitachi TV set can bring up the control bar with just a wave of the hand:
Spin the wrist – activate scroll wheel
Swipe left or right – browse options
Two hands – switch to a different function
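The gesture-to-command list above could be modelled as a simple lookup table. As a purely illustrative sketch (all names here are hypothetical and do not reflect Hitachi’s or GestureTek’s actual software):

```python
# Illustrative sketch of a gesture-to-command mapping, as in the
# Hitachi TV demo. All identifiers are hypothetical; this is not
# Hitachi's or GestureTek's actual API.

GESTURE_ACTIONS = {
    "wave": "show_control_bar",
    "spin_wrist": "activate_scroll_wheel",
    "swipe_left": "browse_previous_option",
    "swipe_right": "browse_next_option",
    "two_hands": "switch_function",
}

def handle_gesture(gesture: str) -> str:
    """Look up the TV command for a recognized gesture;
    unknown movements are simply ignored."""
    return GESTURE_ACTIONS.get(gesture, "ignore")

print(handle_gesture("wave"))   # show_control_bar
print(handle_gesture("shrug"))  # ignore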
As you can see in this next video, they created a wonderful GUI, an interface to go with the gestures. You are not left alone gesturing in thin air, no, you get good feedback on the screen about your gestures. This greatly resembles the old PlayStation EyeToy (see here), also made by GestureTek.
Ron Jans raises his middle finger. This happened in the match against Heerenveen.
And we have another case of a football coach giving the finger to the referee (see here and here for similar cases). The KNVB (the Dutch football organisation) is investigating the case (here). Undoubtedly, there will be some sort of reprimand or fine. But Jans has a lot of credit with the right people, so it will all blow over very rapidly. He has, after all, already apologised (here).
The funny thing about this case of flipping the bird is the way in which Jans tries to camouflage his insult. He follows it with some ‘I need to get warm’ arm flapping. Sadly for him, people are excellent at spotting gestures in a continuous stream of movement. Nobody will have had any trouble seeing the finger. Movements that precede or follow a gesture do not hamper the perception of the gesture (see my own research on this, for example), nor do they provide an effective basis for denial. Jans did not try to deny it, and rightly so: denial would have been ridiculous.
Next time, Jans could take a hint from Jens Lehmann, who used a perfect finger camouflage and was able to deny it successfully, while achieving his target.
A Computer Vision based hand gesture recognition system that replaces the mouse with simple hand movements. It’s done at the School of Computing, Dublin City University, Ireland.
Sometimes the future of gesture recognition can become clearer by examining an application that will definitely NOT hit the market running. Why on earth would anyone prefer to wave their hands in the air and click on empty space with their index finger instead of feeling a solid mouse underneath their hand? I just don’t get it. If it’s supposed to be a technology showcase, then okay, they managed to get something up and running, bravo!
I think that, generally speaking, people are enthusiastic about human-computer interaction if it feels good, because it’s usable (effective, efficient, economic), pleasing to the senses, or in some other way beneficial to their concerns. I imagine that this virtual ‘mousing’ is none of the above. Maybe if they changed it to a pistol gesture, where you shoot with your thumb, it would get slightly better. But I would have to be able to launch a quick barrage of shots, say 4 or 5 per second, for this to be of any use in a first person shooter game. There’s a nice challenge for you, guys 🙂