Hierarchical Architectures for Reasoning

  • R. C. Lacher
  • K. D. Nguyen
Part of The Springer International Series in Engineering and Computer Science book series (SECS, volume 292)


Abstract

This chapter has a threefold purpose: (1) to introduce a general framework for parallel/distributed computation, the computational network; (2) to expose in detail a symbolic example of a computational network, related to expert systems, called an expert network; and (3) to describe and investigate how an expert network can be realized as a neural network possessing a hierarchical symbolic/sub-symbolic architectural organization.


Keywords: Expert System, Computational Network, Hierarchical Architecture, Certainty Factor, Biological Neural Network
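As a concrete illustration of the second point, the sketch below shows how a single conclusion node in an expert network might aggregate incoming rule firings using EMYCIN-style certainty factors (values in [-1, 1]). This is a minimal sketch under stated assumptions: the class and function names, the truncation of negative evidence, and the particular combination formula are illustrative, not the chapter's exact node semantics or learning rules.

```python
# Minimal sketch of an "expert network" conclusion node, assuming
# EMYCIN-style certainty factors in [-1, 1]. Names and details are
# illustrative assumptions, not the chapter's formulation.

def combine_cf(a: float, b: float) -> float:
    """Combine two certainty factors using the standard EMYCIN rule."""
    if a >= 0 and b >= 0:
        return a + b * (1 - a)
    if a <= 0 and b <= 0:
        return a + b * (1 + a)
    return (a + b) / (1 - min(abs(a), abs(b)))

class ExpertNode:
    """One conclusion node: each incoming rule contributes
    rule_cf * evidence_cf, and contributions are combined."""

    def __init__(self, rules):
        # rules: list of (evidence_name, rule_cf) pairs
        self.rules = rules

    def activation(self, evidence):
        # evidence: dict mapping evidence name -> certainty in [-1, 1]
        cf = 0.0
        for source, rule_cf in self.rules:
            # Negative evidence is truncated here for simplicity.
            contribution = rule_cf * max(0.0, evidence.get(source, 0.0))
            cf = combine_cf(cf, contribution)
        return cf

# Example: two rules support hypothesis H from evidence E1 and E2.
node_h = ExpertNode([("E1", 0.8), ("E2", 0.6)])
print(node_h.activation({"E1": 0.9, "E2": 0.5}))  # combined certainty for H
```

Viewed this way, each rule weight plays the role of a connection strength and each node's combination function plays the role of an activation function, which is what allows the symbolic rule base to be treated as a (trainable) network.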




Copyright information

© Kluwer Academic Publishers 1995

Authors and Affiliations

  • R. C. Lacher, Department of Computer Science, Florida State University, Tallahassee
  • K. D. Nguyen, Department of Computer Science, Florida State University, Tallahassee
