One-Shot Learning with Feedback for Multi-layered Convolutional Network
This paper proposes an improved add-if-silent rule, suited for training the intermediate layers of a multi-layered convolutional network such as the neocognitron. Under the add-if-silent rule, a new cell is generated whenever all postsynaptic cells are silent. The generated cell learns the activity of the presynaptic cells in one shot, and its input connections are never modified afterward. To apply this learning rule to a convolutional network, we must decide at which retinotopic location the rule is to be applied. In the conventional add-if-silent rule, we chose the location where the activity of the presynaptic cells is largest. In the proposed learning rule, negative feedback is introduced from postsynaptic to presynaptic cells, and a new cell is generated at the location where the presynaptic activity fails to be suppressed by this feedback. We apply the new rule to a neocognitron for handwritten digit recognition and demonstrate a decrease in the recognition error.
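The mechanism described above can be sketched in code. The following is a minimal illustrative implementation, not the authors' actual method: names, the silence threshold `theta`, and the use of cosine similarity for postsynaptic responses and of the unexplained residual as the "fails to be suppressed" criterion are all assumptions made for the sketch.

```python
import numpy as np

def add_if_silent_with_feedback(presyn, cells, theta=0.5):
    """One step of a hypothetical add-if-silent rule with negative feedback.

    presyn : (L, D) array of presynaptic activity at L retinotopic locations
    cells  : list of D-dim weight vectors, grown in place (one-shot, frozen)
    theta  : assumed silence threshold for postsynaptic responses
    """
    # Normalize presynaptic vectors so responses act like cosine similarities.
    norms = np.linalg.norm(presyn, axis=1, keepdims=True) + 1e-12
    x = presyn / norms

    if cells:
        W = np.stack(cells)               # (K, D) existing cell weights
        resp = x @ W.T                    # (L, K) postsynaptic responses
        best = resp.max(axis=1)           # strongest response per location
        # Negative feedback: activity already explained by existing cells is
        # suppressed; what remains is the unexplained residual per location.
        residual = norms.ravel() * (1.0 - best)
    else:
        best = np.zeros(len(x))
        residual = norms.ravel()          # no cells yet: nothing is suppressed

    loc = int(np.argmax(residual))        # location least suppressed by feedback
    if best[loc] < theta:                 # all postsynaptic cells silent there
        cells.append(x[loc].copy())       # one-shot copy of presynaptic activity
        return loc
    return None
```

With an empty cell list, the first call recruits a cell at the most active location; once a location's pattern is represented, feedback suppresses it and no duplicate cell is generated there.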
Keywords: add-if-silent, one-shot learning, negative feedback, neocognitron, convolutional network, pattern recognition