
A Theoretical Comparison of Models


Part of the book series: SpringerBriefs in Computer Science ((BRIEFSCOMPUTER))

Abstract

We have seen in the previous chapter that Markov Decision Processes can be considered an "ideal" approach to the implementation of intelligent agents. Even though assigning utilities to states and probabilities to transitions between states might be regarded as a questionable way to solve the problem of preference, there are many situations in which this is acceptable. Once we have accepted that the problem is correctly formulated in terms of the probabilities of actions having particular effects, and of certain states having higher rewards than others, the MDP solution algorithms yield MEU-optimal (maximum expected utility) policies. By this we mean mappings of states into actions that tell the agent what to do in each state, based on the probable outcomes of every possible action.
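The policy view described in the abstract can be made concrete with a small sketch. The snippet below is not taken from the chapter: it runs standard value iteration on an invented two-state MDP and then extracts the greedy mapping from states to actions, i.e. the policy that maximises expected utility under that toy model. The states, actions, transition probabilities, rewards, discount factor, and convergence threshold are all assumptions chosen purely for illustration.

```python
# Minimal sketch (not from the chapter): value iteration on a toy MDP,
# followed by greedy policy extraction. All model parameters are invented.

GAMMA = 0.9      # discount factor (assumed)
THETA = 1e-6     # convergence threshold (assumed)

STATES = ["s0", "s1"]
ACTIONS = ["a", "b"]

# P[(s, a)] = list of (probability, next_state); R[(s, a)] = immediate reward.
P = {
    ("s0", "a"): [(0.8, "s0"), (0.2, "s1")],
    ("s0", "b"): [(1.0, "s1")],
    ("s1", "a"): [(1.0, "s0")],
    ("s1", "b"): [(0.5, "s0"), (0.5, "s1")],
}
R = {("s0", "a"): 0.0, ("s0", "b"): 1.0, ("s1", "a"): 2.0, ("s1", "b"): 0.0}

def q_value(s, a, V):
    """Expected utility of taking action a in state s under value estimate V."""
    return R[(s, a)] + GAMMA * sum(p * V[s2] for p, s2 in P[(s, a)])

# Value iteration: repeatedly back up the maximum expected utility in each state.
V = {s: 0.0 for s in STATES}
while True:
    delta = 0.0
    for s in STATES:
        best = max(q_value(s, a, V) for a in ACTIONS)
        delta = max(delta, abs(best - V[s]))
        V[s] = best
    if delta < THETA:
        break

# The resulting policy maps each state to the action of maximum expected utility.
policy = {s: max(ACTIONS, key=lambda a: q_value(s, a, V)) for s in STATES}
print(V, policy)
```

The printed dictionary is exactly the kind of object the abstract calls a policy: for every state it names one action, chosen by weighing the probable outcomes of every available action.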



Author information


Corresponding author

Correspondence to Gerardo I. Simari.


Copyright information

© 2011 Springer Science+Business Media, LLC

About this chapter

Cite this chapter

Simari, G.I., Parsons, S.D. (2011). A Theoretical Comparison of Models. In: Markov Decision Processes and the Belief-Desire-Intention Model. SpringerBriefs in Computer Science. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-1472-8_4


  • DOI: https://doi.org/10.1007/978-1-4614-1472-8_4

  • Publisher Name: Springer, New York, NY

  • Print ISBN: 978-1-4614-1471-1

  • Online ISBN: 978-1-4614-1472-8

  • eBook Packages: Computer Science, Computer Science (R0)
