First Order Dynamic Instance Selection

  • Peter Géczy
  • Shiro Usui
  • Ján Chmúrny
Conference paper
Part of the Advances in Soft Computing book series (AINSC, volume 5)


Training of adaptable systems such as neural networks depends indispensably on the training exemplar set. The most promising training algorithms utilize dynamic instance selection, a technique capable of selecting instances dynamically at each iteration of the adaptation procedure. The adaptable system is thus presented at each iteration with an appropriately selected set of learning instances that can vary in size and content. Variability of the selected exemplar set contributes to the speed of learning and lowers its computational cost. A further benefit of dynamic instance selection lies in the improved properties of the trained adaptable systems.
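The idea of reselecting the training subset at every iteration can be illustrated with a minimal sketch. This is not the authors' algorithm: the error-threshold criterion, the `select_instances` helper, and the toy gradient-descent loop are all assumptions chosen only to show how a dynamically chosen subset can vary in size and content across iterations.

```python
import numpy as np

def select_instances(model_fn, X, y, threshold):
    """Return the subset of (X, y) whose current error exceeds `threshold`.

    Illustrative heuristic only: one common way to realize dynamic
    instance selection is to keep, at each iteration, the exemplars
    the model still predicts poorly.
    """
    errors = np.abs(model_fn(X) - y)   # per-instance absolute error
    mask = errors > threshold          # keep only the "hard" instances
    return X[mask], y[mask]

# Toy example: fit y = 2x with a single weight by gradient descent,
# reselecting the training subset at every iteration.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=50)
y = 2.0 * X
w = 0.0
for _ in range(100):
    Xs, ys = select_instances(lambda x: w * x, X, y, threshold=1e-3)
    if Xs.size == 0:                   # every instance is learned; stop early
        break
    grad = np.mean((w * Xs - ys) * Xs) # gradient of 0.5 * MSE on the subset
    w -= 0.5 * grad
```

As the weight approaches its target, fewer instances exceed the error threshold, so the selected set shrinks and the per-iteration cost drops, mirroring the computational benefit the abstract attributes to variability of the exemplar set.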


Keywords: Adaptable System · Instance Selection · Adaptation Procedure · Superlinear Convergence Rate · Instance Selection Algorithm





Copyright information

© Springer-Verlag Berlin Heidelberg 2000

Authors and Affiliations

  • Peter Géczy¹
  • Shiro Usui¹
  • Ján Chmúrny²
  1. Toyohashi University of Technology, Toyohashi, Japan
  2. Military Academy, Liptovský Mikuláš, Slovakia
