Novel Learning Tasks, Optimization, and Their Application

  • Guido Daniel
  • Jan Dienstuhl
  • Sebastian Engell
  • Sven Felske
  • Karl Goser
  • Ralf Klinkenberg
  • Katharina Morik
  • Oliver Ritthoff
  • Henner Schmidt-Traub
Part of the Natural Computing Series book series (NCS)

Summary

This chapter describes methods for learning and optimizing solutions to engineering problems. Where standard learning tasks do not fit the requirements of the applications, they are modified. In particular, the use of prior knowledge and of unlabeled examples may ease a learning task. In contrast, learning target concepts that change over time and constructing new features are challenging learning tasks. Together, the variations that ease learning and those that challenge it cover a broad range of application problems. Solutions to these new, nonstandard learning tasks are developed by integrating numerical optimization strategies, evolutionary algorithms, support vector machines, neural networks, and fuzzy controllers. A workbench for learning methods allows their systematic evaluation on the engineering problems.
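
To make the treatment of drifting target concepts more concrete, the following sketch shows one simple adaptation strategy: retraining on a sliding window of recent examples, so that examples from an outdated concept are eventually forgotten. This is a minimal illustration only, not the drift-handling method developed in the chapter (which builds on support vector machines); the nearest-centroid learner and the synthetic data stream are invented for the example.

    import numpy as np
    from collections import deque

    class WindowedNearestCentroid:
        """Toy drift-aware learner: it keeps only the most recent
        `window` labeled examples and classifies by the nearest class
        centroid. Because old examples fall out of the window, the
        model can track a target concept that changes over time."""

        def __init__(self, window=50):
            self.buffer = deque(maxlen=window)  # stores (x, y) pairs

        def update(self, x, y):
            self.buffer.append((np.asarray(x, dtype=float), y))

        def predict(self, x):
            x = np.asarray(x, dtype=float)
            by_class = {}
            for xi, yi in self.buffer:          # group window by label
                by_class.setdefault(yi, []).append(xi)
            # One centroid per class; return the label of the nearest one.
            centroids = {y: np.mean(xs, axis=0) for y, xs in by_class.items()}
            return min(centroids, key=lambda y: np.linalg.norm(x - centroids[y]))

    # Test-then-train evaluation on a stream whose concept flips at t = 200.
    rng = np.random.default_rng(0)
    clf, correct = WindowedNearestCentroid(), 0
    for t in range(400):
        x = rng.normal(size=2)
        y = int(x[0] > 0) if t < 200 else int(x[0] < 0)  # abrupt concept drift
        if clf.buffer:
            correct += int(clf.predict(x) == y)  # predict before learning
        clf.update(x, y)
    print("stream accuracy:", correct / 399)

Because the window is short, the accuracy drops only briefly after the drift point and then recovers; a learner trained on all 400 examples would keep mixing the two contradictory concepts.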

The chapter is structured as follows: First, extensions of standard learning tasks are defined, and methods for their solution are proposed and evaluated. Second, a problem from chemical engineering is presented in Section 8.2, namely the determination of suitable models for batch chromatography, a standard separation process in the pharmaceutical industry. The critical parameters of physical and grey-box models of the process are determined from measured concentration profiles, using mathematical parameter estimation and neural networks. Third, circuit design, device modeling, and quality assessment tasks from the field of electrical engineering are covered in Section 8.3. An evolutionary algorithm for the optimization of electronic circuit components is presented. Furthermore, the construction of simulation models for electronic device development and circuit control is investigated, and different algorithms and hardware implementations of neural networks for microelectronics and process control are examined.
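
As a rough illustration of the parameter-estimation step, the sketch below fits model parameters to a measured concentration profile by nonlinear least squares, i.e., by minimizing the deviation between simulated and measured profiles. The single Gaussian elution peak is a toy surrogate, not the physical or grey-box chromatography model used in the chapter, and the "measured" data are synthetic.

    import numpy as np
    from scipy.optimize import least_squares

    def peak_model(t, params):
        """Toy surrogate for a chromatographic elution profile: one
        Gaussian peak with amplitude a, retention time t0, and band
        width sigma. A real model would instead solve the column's
        transport equations."""
        a, t0, sigma = params
        return a * np.exp(-0.5 * ((t - t0) / sigma) ** 2)

    # Synthetic "measured" concentration profile with sensor noise.
    t = np.linspace(0.0, 10.0, 200)
    true_params = (1.5, 4.0, 0.8)
    rng = np.random.default_rng(1)
    c_measured = peak_model(t, true_params) + 0.02 * rng.normal(size=t.size)

    # Estimate the parameters by minimizing the residuals between the
    # simulated and the measured profile (nonlinear least squares).
    def residuals(params):
        return peak_model(t, params) - c_measured

    fit = least_squares(residuals, x0=(1.0, 3.0, 1.0))
    print("estimated (a, t0, sigma):", fit.x)

Starting from a deliberately wrong initial guess, the optimizer recovers parameter values close to the true ones; with a mechanistic column model in place of the Gaussian surrogate, the same scheme yields the model parameters from measured chromatograms.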

Keywords

Porosity, Catheter, Manifold, Covariance, Recombination

Copyright information

© Springer-Verlag Berlin Heidelberg 2003

Authors and Affiliations

  • Guido Daniel (1)
  • Jan Dienstuhl (2)
  • Sebastian Engell (1)
  • Sven Felske (4)
  • Karl Goser (2)
  • Ralf Klinkenberg (3)
  • Katharina Morik (3)
  • Oliver Ritthoff (3)
  • Henner Schmidt-Traub (4)

  1. Department of Chemical Engineering, Process Control, University of Dortmund, Dortmund, Germany
  2. Faculty of Electrical Engineering & Information Technology, Microelectronics, University of Dortmund, Dortmund, Germany
  3. Department of Computer Science, Informatik VIII, University of Dortmund, Dortmund, Germany
  4. Department of Chemical Engineering, Plant Design, University of Dortmund, Dortmund, Germany