
Markov Decision Processes: Discrete Stochastic Dynamic Programming (Paperback)

ISBN-13: 978-0471727828 ISBN-10: 0471727822 Edition: 1st

Formats and editions:
  • Hardcover — from $289.78
  • Paperback — $103.61 (new from $103.60, used from $103.61)




Frequently Bought Together

Markov Decision Processes: Discrete Stochastic Dynamic Programming + Dynamic Programming and Optimal Control (2 Vol Set)
Price for both: $231.39


Product Details

  • Paperback: 684 pages
  • Publisher: Wiley-Interscience; 1st edition (March 3, 2005)
  • Language: English
  • ISBN-10: 0471727822
  • ISBN-13: 978-0471727828
  • Product Dimensions: 9.2 x 6.3 x 1.2 inches
  • Shipping Weight: 1.9 pounds
  • Average Customer Review: 5.0 out of 5 stars (3 customer reviews)
  • Amazon Best Sellers Rank: #544,642 in Books

Editorial Reviews

From the Publisher

An up-to-date, unified, and rigorous treatment of theoretical, computational, and applied research on Markov decision process models. Concentrates on infinite-horizon discrete-time models. Discusses arbitrary state spaces, finite-horizon and continuous-time discrete-state models. Also covers modified policy iteration, multichain models with the average reward criterion, and sensitive optimality. Features a wealth of figures that illustrate examples and an extensive bibliography. --This text refers to an out-of-print or unavailable edition of this title.

About the Author

Martin L. Puterman, PhD, is Advisory Board Professor of Operations and Director of the Centre for Operations Excellence at The University of British Columbia in Vancouver, Canada.


Customer Reviews

5.0 out of 5 stars (3 customer reviews)
  • 5 star: 3
  • 4 star: 0
  • 3 star: 0
  • 2 star: 0
  • 1 star: 0

Most Helpful Customer Reviews

17 of 19 people found the following review helpful By Warren B. Powell on December 15, 2007
Format: Paperback
For anyone looking for an introduction to classic discrete-state, discrete-action Markov decision processes, this is the last in a long line of books on this theory, and the only book you will need. The presentation covers this elegant theory very thoroughly, including all the major problem classes (finite and infinite horizon, discounted reward, average reward). The presentation is rigorous, and while it will be best appreciated by doctoral students and the research community, most of it can be easily understood by a master's audience with a strong background in probability.

Discrete-state, discrete-action models have seen limited application because of the well-known "curse of dimensionality." This field has perhaps been best known for its ability to identify theoretical properties of models and algorithms (the book has a nice presentation of monotone policies, for example). Practical algorithms for dynamic programs typically require the approximation techniques that have evolved under names such as neuro-dynamic programming (Neuro-Dynamic Programming (Optimization and Neural Computation Series, 3)), reinforcement learning (Reinforcement Learning: An Introduction (Adaptive Computation and Machine Learning)), or approximate dynamic programming (Approximate Dynamic Programming: Solving the Curses of Dimensionality (Wiley Series in Probability and Statistics)).
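The "curse of dimensionality" the review refers to can be made concrete with a small sketch. Assuming a hypothetical MDP whose state is a tuple with one component per dimension, the flat state count is the product of the dimension sizes, so it grows exponentially as dimensions are added:

```python
# Illustration of the "curse of dimensionality": the flat state count
# of a discrete MDP whose state is a tuple is the product of the sizes
# of its dimensions, so it grows exponentially with dimensionality.
from math import prod

def num_states(dim_sizes):
    """Flat state count for a tuple-valued state, where each
    component takes the given number of discrete values."""
    return prod(dim_sizes)

# With 10 values per dimension, each extra dimension multiplies
# the state space by 10.
for d in range(1, 7):
    print(d, "dimensions ->", num_states([10] * d), "states")
```

Tabular methods such as value iteration must store and sweep one value per flat state, which is why the approximation techniques the reviewer names become necessary for high-dimensional problems.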

Warren B. Powell
Professor
Princeton University
4 of 4 people found the following review helpful By W. Ghost on June 4, 2007
Format: Paperback
Anyone working with Markov decision processes should have this book. It has detailed explanations of several algorithms for MDPs: linear programming, value iteration, and policy iteration for finite and infinite horizons; total-reward and average-reward criteria; and a final chapter on continuous-time MDPs (SMDPs).
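As a rough illustration of one of the algorithms the reviewer lists, here is a minimal value-iteration sketch for a finite, discounted MDP. The two-state, two-action MDP below is a made-up example for demonstration, not taken from the book:

```python
# Minimal value iteration for a finite, discounted MDP.
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """P[a][s][s'] = transition probability, R[a][s] = expected reward.
    Returns the optimal value function and a greedy policy."""
    n_actions, n_states = len(P), len(P[0])
    V = np.zeros(n_states)
    while True:
        # Q[a][s] = R[a][s] + gamma * sum_s' P[a][s][s'] * V[s']
        Q = np.array([R[a] + gamma * P[a] @ V for a in range(n_actions)])
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new

# Hypothetical 2-state, 2-action MDP.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],    # transitions under action 0
              [[0.1, 0.9], [0.6, 0.4]]])   # transitions under action 1
R = np.array([[1.0, 0.0],                  # rewards under action 0
              [0.0, 2.0]])                 # rewards under action 1
V, policy = value_iteration(P, R)
print("V* =", V, "policy =", policy)
```

For gamma < 1 the Bellman update is a contraction, so the loop converges from any starting values; policy iteration, also covered in the book, reaches the same fixed point in fewer (but more expensive) iterations.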

However, it does not cover some newer ideas, such as partitioning and some faster approximate algorithms. Still, it is a great book!

You may also be interested in Bertsekas' "Dynamic Programming and Optimal Control", which covers similar material, but with a somewhat different approach.
0 of 2 people found the following review helpful By Jessye Bemley on November 1, 2012
Format: Paperback Verified Purchase
I was happy to find this book for a good price. The book was sent in a timely manner and arrived in great condition. Now I have what I need to do my research.
