Karen Pine gave a Gesture lecture at the MPI Nijmegen, entitled ‘More than I can say… Why gestures are key to children’s development’.
Abstract: My interest in gesture came from testing children in a cognitive domain and realizing that they knew far more than they could say. I found I could get a better idea about what they knew from looking at their gestures, rather than listening to their speech. Children’s early, emerging or implicit knowledge emerges in gesture before it appears in speech and I will show how my research went on to try and capture this. I will also address the role that gestures play in children’s language – both in helping them to access the mental lexicon and to understand speech input that requires pragmatic comprehension. Finally our current work with infants, the first longitudinal study of its kind, is looking at how gestural input affects language development and I will present some preliminary findings from this study.
It was an interesting lecture with some nice results. Some of my personal observations:
- Pine’s work seems to be strongly connected to Susan Goldin-Meadow’s work with Church, Alibali, and Singer: gesture as a window into children’s minds. What do they know, what is their zone of proximal development (my interpretation), etc.
- Pine used the Noldus Observer to annotate the speech and gesture in the video (rather than ELAN).
- Pine also revisited Krauss’ lexical access hypothesis of why people gesture. She elicited tip-of-the-tongue states (ToTs) in a gesture-allowed and a gesture-prohibited condition. In the gesture-allowed condition, kids resolved more ToTs. Jan Peter de Ruiter (JP) mentioned it would be better to look at whether kids actually gestured when they resolved ToTs or not. He found that gestures occurred more often when a ToT was not resolved. He suggested, referring to his 2006 paper, that people gesture in ToTs because they wish to communicate that they are (still) working on it, not to aid their memory search.
- An idea to test JP’s suggestion: ToTs can be elicited with a picture-naming task. Pine found the iconic gestures to be enactments, which fits JP’s suggestion that people are communicating, since the enactments provide complementary information. So, if you were to elicit ToTs with a mime of an object’s action instead, you might expect the iconic gestures, if any, to be depictions rather than enactments.