The First Law of Robotics

(A Call to Arms)
  • Daniel Weld
  • Oren Etzioni
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4324)


Even before the advent of Artificial Intelligence, science fiction writer Isaac Asimov recognized that an agent must place the protection of humans from harm at a higher priority than obeying human orders. Inspired by Asimov, we pose the following fundamental questions: (1) How should one formalize the rich, but informal, notion of “harm”? (2) How can an agent avoid performing harmful actions, and do so in a computationally tractable manner? (3) How should an agent resolve conflict between its goals and the need to avoid harm? (4) When should an agent prevent a human from harming herself? While we address some of these questions in technical detail, the primary goal of this paper is to focus attention on Asimov’s concern: society will reject autonomous agents unless we have some credible means of making them safe!
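Question (2) asks how an agent can avoid harmful actions tractably. As a purely illustrative sketch (not the authors' formalism), one can imagine an agent that screens each candidate action's effects against a declared set of harm conditions before acting; all names here (the actions, effects, and the softbot cleanup scenario) are hypothetical.

```python
# Illustrative sketch, not the paper's formalism: an agent screens candidate
# actions against declared harm conditions before acting. All action names,
# effects, and conditions below are hypothetical.

from dataclasses import dataclass


@dataclass(frozen=True)
class Action:
    name: str
    effects: frozenset  # facts made true by executing the action


def violates_safety(action, harm_conditions):
    """An action is unsafe if any of its effects matches a harm condition."""
    return any(effect in harm_conditions for effect in action.effects)


def choose_action(candidates, goal_test, harm_conditions):
    """Prefer a goal-achieving action, but never select an unsafe one.

    The safety screen is a set-membership test per effect, so the check
    stays tractable (linear in the number of effects per action).
    """
    safe = [a for a in candidates if not violates_safety(a, harm_conditions)]
    for a in safe:
        if goal_test(a):
            return a
    return safe[0] if safe else None  # refuse to act rather than do harm


# Hypothetical softbot scenario: deleting files would achieve the cleanup
# goal, but it violates a "do not destroy user data" condition.
harm = {"user-file-deleted"}
actions = [
    Action("rm-all", frozenset({"disk-clean", "user-file-deleted"})),
    Action("compress-files", frozenset({"disk-clean"})),
]
best = choose_action(actions, lambda a: "disk-clean" in a.effects, harm)
print(best.name)  # compress-files
```

The key design point the sketch illustrates is priority ordering: safety filtering happens before goal pursuit, so a goal-achieving but harmful action is never even considered.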






References

  1. Allen, J.: Planning as temporal reasoning. In: Proceedings of the Second International Conference on Principles of Knowledge Representation and Reasoning, pp. 3–14 (1991)
  2. Asimov, I.: Runaround. Astounding Science Fiction (1942)
  3. Barrett, A., Weld, D.: Characterizing subgoal interactions for planning. In: Proc. 13th Int. Joint Conf. on A.I., pp. 1388–1393 (1993)
  4. Chapman, D.: Planning for conjunctive goals. Artificial Intelligence 32(3), 333–377 (1987)
  5. Davis, E.: Representations of Commonsense Knowledge. Morgan Kaufmann Publishers, Inc., San Mateo (1990)
  6. Dean, T., Firby, J., Miller, D.: Hierarchical planning involving deadlines, travel times, and resources. Computational Intelligence 4(4), 381–398 (1988)
  7. Drummond, M.: Situated control rules. In: Proceedings of the First International Conference on Knowledge Representation and Reasoning (1989)
  8. Etzioni, O.: Embedding decision-analytic control in a learning architecture. Artificial Intelligence 49(1-3), 129–160 (1991)
  9. Etzioni, O.: Intelligence without robots (a reply to Brooks). AI Magazine 14(4) (1993). Available via anonymous FTP from pub/etzioni/softbots/
  10. Etzioni, O., Lesh, N., Segal, R.: Building softbots for UNIX (preliminary report). Technical Report 93-09-01, Department of Computer Science, University of Washington, Seattle, Washington (1993). Available via anonymous FTP from pub/etzioni/softbots/
  11. Fox, M., Smith, S.: ISIS: a knowledge-based system for factory scheduling. Expert Systems 1(1), 25–49 (1984)
  12. Haddawy, P., Hanks, S.: Representations for decision-theoretic planning: Utility functions for deadline goals. In: Proc. 3rd Int. Conf. on Principles of Knowledge Representation and Reasoning (1992)
  13. Hammond, K., Converse, T., Grass, J.: The stabilization of environments. Artificial Intelligence (to appear)
  14. Korf, R.: Planning as search: A quantitative approach. Artificial Intelligence 33(1), 65–88 (1987)
  15. Leveson, N.G.: Software safety: Why, what, and how. ACM Computing Surveys 18(2), 125–163 (1986)
  16. Levesque, H., Brachman, R.: A fundamental tradeoff in knowledge representation. In: Brachman, R., Levesque, H. (eds.) Readings in Knowledge Representation, pp. 42–70. Morgan Kaufmann, San Mateo (1985)
  17. Luria, M.: Knowledge Intensive Planning. PhD thesis, UC Berkeley (1988). Available as technical report UCB/CSD 88/433
  18. McAllester, D., Rosenblitt, D.: Systematic nonlinear planning. In: Proc. 9th Nat. Conf. on A.I., pp. 634–639 (1991)
  19. Pednault, E.: Synthesizing plans that contain actions with context-dependent effects. Computational Intelligence 4(4), 356–372 (1988)
  20. Pednault, E.: ADL: Exploring the middle ground between STRIPS and the situation calculus. In: Proc. 1st Int. Conf. on Principles of Knowledge Representation and Reasoning, pp. 324–332 (1989)
  21. Penberthy, J., Weld, D.: UCPOP: A sound, complete, partial order planner for ADL. In: Proc. 3rd Int. Conf. on Principles of Knowledge Representation and Reasoning, pp. 103–114 (1992). Available via FTP from pub/ai/
  22. Penberthy, J., Weld, D.: Temporal planning with continuous change. In: Proc. 12th Nat. Conf. on A.I. (1994)
  23. Pollack, M.: The uses of plans. Artificial Intelligence 57(1) (1992)
  24. Russell, S., Wefald, E.: Do the Right Thing. MIT Press, Cambridge (1991)
  25. Shoham, Y.: Reasoning about Change: Time and Causation from the Standpoint of Artificial Intelligence. MIT Press, Cambridge (1988)
  26. Tate, A.: Generating project networks. In: Proc. 5th Int. Joint Conf. on A.I., pp. 888–893 (1977)
  27. Wellman, M., Doyle, J.: Modular utility representation for decision theoretic planning. In: Proc. 1st Int. Conf. on A.I. Planning Systems, pp. 236–242 (1992)
  28. Wilkins, D.E.: Practical Planning. Morgan Kaufmann, San Mateo (1988)
  29. Williamson, M., Hanks, S.: Optimal planning with a goal-directed utility model. In: Proc. 2nd Int. Conf. on A.I. Planning Systems (1994)

Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Daniel Weld (1)
  • Oren Etzioni (1)
  1. Department of Computer Science and Engineering, University of Washington, Seattle
