Parallel Processing in Artificial Intelligence

  • Scott E. Fahlman
Part of the Kluwer International Series in Engineering and Computer Science book series (SECS, volume 26)

Abstract

Intelligence, whether in a machine or in a living creature, is a mixture of many abilities. Our current artificial intelligence (AI) technology does a good job of emulating some aspects of human intelligence, generally those things that, when they are done by people, seem to be serial and conscious. AI is very far from being able to match other human abilities, generally those things that seem to happen “in a flash” and without any feeling of sustained mental effort. We are left with an unbalanced technology that is powerful enough to be of real commercial value, but that is very far from exhibiting intelligence in any broad, human-like sense of the word. It is ironic that AI’s successes have come in emulating the specialized performance of human experts, and yet we cannot begin to approach the common sense of a five-year-old child or the sensory abilities and physical coordination of a rat.

Keywords

Parallel Processing, Boltzmann Machine, Hopfield Network, Artificial Intelligence System, Instruction Stream

Copyright information

© Kluwer Academic Publishers 1988
