A Nice Gesture by Jeroen Arendsen

Various personal interests and public info, gesture, signs, language, social robotics, healthcare, innovation, music, publications, etc.

Category: Robot Ethics

Segment on Zora and other care robots on CampusTV Utrecht

Following a large trial with the Zora robot (actually a NAO with some extra programming by a small Belgian company), CampusTV of the Hogeschool Utrecht ran a segment on robots in healthcare.

I was invited as an expert to comment on robots in healthcare.

See Campustalk 07 Winter 2015-2016 https://youtu.be/qd8txYpq9GM (the action starts at 3:30). The story of the collecting expert (at the end) is also fun, by the way.

Robot Man: Noel Sharkey

I read a news item about robots on the Dutch news site nu.nl (here) about the ethics of letting robots take care of people, especially kids and elderly people. The news item was based on this article in ScienceDaily. Basically it is a warning by ‘Top robotics expert Professor Noel Sharkey’. I looked him up and he appears to be a man to get in contact with. He has, for example, called for a code of conduct for the use of robots in warfare (here).

Noel Sharkey


According to his profile at the Guardian (for which he writes):

Noel Sharkey is a writer, broadcaster, and academic. He is professor of AI and Robotics and professor of public engagement at the University of Sheffield and currently holds a senior media fellowship from the Engineering and Physical Sciences Research Council. Currently his main interest is in ethical issues surrounding the application of emerging technologies.

I wholeheartedly agree with his views so far. He has a good grip on the current capabilities of machine vision and AI, neither of which I would trust when it comes to making important decisions about human life. At least in applications of speech and gesture recognition, with which I have had a lot of experience, they simply make too many errors, they make unpredictable errors, and they have lousy error recovery and error handling strategies. So far, I have only seen evidence that these observations generalize to just about any application of machine vision, at least where the important stuff is concerned.

It reminds me of an anecdote Arend Harteveld (may he rest in peace, see here) once told me: Some engineers once built a neural network to automatically spot tanks in pictures of various environments. As usual with such NNs, it was trained on a set of pictures containing negative examples (no tank in the picture) and positive examples (a tank in the picture). After the training, the NN was tested on a separate set of pictures to see how it would perform. And by golly, it did a perfect job. Even if nothing but the barrel of the tank’s gun stuck out of the bushes, it would spot it. And if there wasn’t a tank in the picture, the NN never made a mistake. I bet the generals were enthusiastic. A while later it occurred to someone else that there was a pattern to the pictures: the pictures with tanks had all been shot on a fairly sunny day (both in the training and the testing set), while the pictures without tanks had been taken on a fairly dreary day. The NN was not spotting tanks, it was just looking at the sky…
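The failure mode in the anecdote, a classifier latching onto a spurious cue that happens to correlate with the label, can be sketched in a few lines. All numbers and names below are illustrative, not from the original experiment: the "images" are just lists of pixel brightnesses, and the "classifier" is a brightness threshold standing in for whatever the network actually learned.

```python
import random

random.seed(0)

def make_image(sunny, size=64):
    # Illustrative stand-in for a photo: mean pixel brightness reflects
    # the weather, not whether a tank is present.
    base = 180 if sunny else 60
    return [min(255, max(0, base + random.randint(-30, 30))) for _ in range(size)]

def mean_brightness(img):
    return sum(img) / len(img)

# Biased training set: every tank photo is sunny, every no-tank photo is dreary.
train = [(make_image(sunny=True), 1) for _ in range(100)] + \
        [(make_image(sunny=False), 0) for _ in range(100)]

# "Training": place a threshold midway between the two class means.
tank_mean = sum(mean_brightness(i) for i, y in train if y == 1) / 100
no_tank_mean = sum(mean_brightness(i) for i, y in train if y == 0) / 100
threshold = (tank_mean + no_tank_mean) / 2

def classify(img):
    return 1 if mean_brightness(img) > threshold else 0

def accuracy(test_set):
    return sum(classify(i) == y for i, y in test_set) / len(test_set)

# Test set with the same weather bias: the classifier looks perfect.
biased_test = [(make_image(True), 1) for _ in range(50)] + \
              [(make_image(False), 0) for _ in range(50)]

# Debiased test set: tanks on dreary days, empty scenes on sunny days.
debiased_test = [(make_image(False), 1) for _ in range(50)] + \
                [(make_image(True), 0) for _ in range(50)]

print(accuracy(biased_test))    # perfect on the biased data
print(accuracy(debiased_test))  # collapses once the weather cue flips
```

The point is that a held-out test set drawn from the same biased distribution cannot reveal the problem: the classifier scores perfectly on it, and only a test set where the spurious cue is decorrelated from the label exposes that nothing about tanks was ever learned.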

University of Sheffield
