A Neural Network Model for a View Independent Extraction of Reach-to-Grasp Action Features
The aim of this paper is to introduce a novel, biologically inspired approach to extracting visual features relevant for controlling and understanding reach-to-grasp actions. One of the most relevant such features is the grip size, defined as the distance between the index finger-tip and the thumb-tip; for this reason, this paper focuses on it. The human visual system naturally recognizes many hand configurations (e.g., gestures or different types of grasp) without being substantially affected by the observer's viewpoint. The proposed computational model preserves this ability.
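As an illustration (not taken from the paper), the grip-size feature can be computed directly from tracked fingertip positions; the function name and the marker coordinates below are hypothetical.

```python
import numpy as np

def grip_size(index_tip, thumb_tip):
    """Grip size (aperture): Euclidean distance between the
    index finger-tip and the thumb-tip positions."""
    return float(np.linalg.norm(np.asarray(index_tip, dtype=float)
                                - np.asarray(thumb_tip, dtype=float)))

# Hypothetical 3-D marker positions (metres) during a reach-to-grasp
aperture = grip_size([0.10, 0.02, 0.30], [0.06, 0.00, 0.30])
```

Tracking this scalar over the course of a movement yields the aperture profile commonly used to characterize reach-to-grasp actions.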
This ability is likely to play a crucial role in action understanding in primates, and thus in humans. More specifically, a family of neurons has been discovered in the macaque ventral premotor area F5 that is highly active in correlation with a series of grasp-like movements. These findings triggered a lively debate about imitation and learning, and inspired several computational models, the most detailed of which is the MNS model of Oztop and Arbib. As a variant of the MNS model, in a previous paper we proposed the MEP model, which relies on an expected-perception mechanism. Both models, however, assume the existence of a mechanism that extracts visual features in a viewpoint-independent way, and neither addresses how such a mechanism could be achieved in a biologically plausible manner. In this paper we propose a neural network model for the extraction of visual features in a viewpoint-independent manner, based on the work of Riesenhuber and Poggio.
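A minimal sketch (an assumption of ours, not the paper's implementation) of the two operations at the core of the Riesenhuber–Poggio (HMAX-style) scheme: an S-layer that matches stored templates at every image position, followed by a C-layer that pools with a MAX operation, yielding responses tolerant to where in the image the feature appears. Normalized cross-correlation is used here as a simple tuning function.

```python
import numpy as np

def s_layer(image, templates):
    # S-units: match each template at every valid position
    # (normalized cross-correlation as the tuning function).
    H, W = image.shape
    responses = []
    for t in templates:
        h, w = t.shape
        tn = (t - t.mean()) / (t.std() + 1e-8)
        r = np.zeros((H - h + 1, W - w + 1))
        for i in range(H - h + 1):
            for j in range(W - w + 1):
                patch = image[i:i + h, j:j + w]
                pn = (patch - patch.mean()) / (patch.std() + 1e-8)
                r[i, j] = (pn * tn).mean()
        responses.append(r)
    return responses

def c_layer(responses):
    # C-units: MAX over positions -> the output is (approximately)
    # invariant to where the matched feature appears in the image.
    return np.array([r.max() for r in responses])
```

Because the C-layer takes the maximum over all positions, the same feature embedded at two different image locations produces essentially the same C-unit output, which is the mechanism the hierarchy relies on for position and, with view-tuned templates, viewpoint tolerance.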
- 2. Gallese, V.: A neuroscientific grasp of concepts: from control to representation. Phil. Trans. Royal Soc. London (2003)
- 7. Prevete, R., Santoro, R., Mariotti, F.: Biologically inspired visuo-motor control model based on a deflationary interpretation of mirror neurons. In: CogSci2005 - XXVII Annual Conference of the Cognitive Science Society, pp. 1779–1784 (2005)
- 9. Santello, M., Soechting, J.: Gradual molding of the hand to object contours. J. Neurophysiol. 79, 1307–1320 (1998)
- 10. Jeannerod, M.: Intersegmental coordination during reaching at natural visual objects. In: Attention and Performance IX. Erlbaum, Hillsdale, pp. 153–168 (1981)
- 11. Santello, M., Flanders, M., Soechting, J.: Patterns of hand motion during grasping and the influence of sensory guidance. J. Neurosci. 22(4), 1426–1435 (2002)
- 12. Serre, T., Wolf, L., Poggio, T.: Object recognition with features inspired by visual cortex. In: CVPR (2), pp. 994–1000 (2005)
- 13. Bishop, C.: Neural Networks for Pattern Recognition. Oxford University Press (1996)