Markov Decision Processes: Discrete Stochastic Dynamic Programming 1st Edition

5.0 out of 5 stars (5 customer reviews)
ISBN-13: 978-0471727828
ISBN-10: 0471727822

Buy new: $109.52 (List Price: $151.00; Save: $41.48, 27%)
Only 12 left in stock (more on the way). Ships from and sold by Amazon.com. Gift-wrap available. FREE Shipping.
More Buying Choices: 35 New from $105.45, 29 Used from $104.19

Frequently Bought Together

  • Markov Decision Processes: Discrete Stochastic Dynamic Programming
  • Dynamic Programming and Optimal Control (2 Vol Set)
  • Approximate Dynamic Programming: Solving the Curses of Dimensionality, 2nd Edition (Wiley Series in Probability and Statistics)

Total price: $375.07

Editorial Reviews

From the Publisher

An up-to-date, unified and rigorous treatment of theoretical, computational and applied research on Markov decision process models. Concentrates on infinite-horizon discrete-time models. Discusses arbitrary state spaces, finite-horizon and continuous-time discrete-state models. Also covers modified policy iteration, multichain models with average reward criterion and sensitive optimality. Features a wealth of figures which illustrate examples and an extensive bibliography. --This text refers to an out of print or unavailable edition of this title.
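
The summary above names modified policy iteration for infinite-horizon discounted models. Purely as an illustration (the code below is not taken from the book), here is a minimal sketch of the idea in Python; the toy transition matrix, rewards, and parameter names are invented for this example.

    import numpy as np

    def modified_policy_iteration(P, r, gamma=0.95, m=5, tol=1e-8, max_iter=1000):
        """Sketch of modified policy iteration for a finite discounted MDP.

        P[a, s, s'] holds transition probabilities and r[s, a] expected rewards.
        Each pass does one greedy improvement step followed by m partial
        policy-evaluation sweeps; m = 0 reduces to value iteration and a
        large m approaches policy iteration.
        """
        n_actions, n_states, _ = P.shape
        v = np.zeros(n_states)
        for _ in range(max_iter):
            # Greedy improvement: q[s, a] = r[s, a] + gamma * sum_t P[a, s, t] * v[t]
            q = r + gamma * np.einsum("ast,t->sa", P, v)
            policy = q.argmax(axis=1)
            v_new = q.max(axis=1)
            if np.max(np.abs(v_new - v)) < tol:
                return policy, v_new
            v = v_new
            # Partial evaluation: m sweeps of the fixed-policy Bellman operator
            for _ in range(m):
                P_d = P[policy, np.arange(n_states)]   # row s is P[policy[s], s, :]
                r_d = r[np.arange(n_states), policy]
                v = r_d + gamma * P_d @ v
        return policy, v

    # Invented two-state, two-action example
    P = np.array([[[0.9, 0.1], [0.2, 0.8]],    # action 0
                  [[0.5, 0.5], [0.6, 0.4]]])   # action 1
    r = np.array([[1.0, 0.0],                  # r[s, a]
                  [0.0, 2.0]])
    print(modified_policy_iteration(P, r))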

From the Back Cover

The Wiley-Interscience Paperback Series consists of selected books that have been made more accessible to consumers in an effort to increase global appeal and general circulation. With these new unabridged softcover volumes, Wiley hopes to extend the lives of these works by making them available to future generations of statisticians, mathematicians, and scientists.

"This text is unique in bringing together so many results hitherto found only in part in other texts and papers. . . . The text is fairly self-contained, inclusive of some basic mathematical results needed, and provides a rich diet of examples, applications, and exercises. The bibliographical material at the end of each chapter is excellent, not only from a historical perspective, but because it is valuable for researchers in acquiring a good perspective of the MDP research potential."
--Zentralblatt fur Mathematik

." . . it is of great value to advanced-level students, researchers, and professional practitioners of this field to have now a complete volume (with more than 600 pages) devoted to this topic. . . . Markov Decision Processes: Discrete Stochastic Dynamic Programming represents an up-to-date, unified, and rigorous treatment of theoretical and computational aspects of discrete-time Markov decision processes."
--Journal of the American Statistical Association

Product Details

  • Paperback: 684 pages
  • Publisher: Wiley-Interscience; 1 edition (March 3, 2005)
  • Language: English
  • ISBN-10: 0471727822
  • ISBN-13: 978-0471727828
  • Product Dimensions: 6.1 x 1.4 x 9.2 inches
  • Shipping Weight: 1.9 pounds
  • Average Customer Review: 5.0 out of 5 stars (5 customer reviews)
  • Amazon Best Sellers Rank: #525,159 in Books

Customer Reviews

5 star: 100%
4 star: 0%
3 star: 0%
2 star: 0%
1 star: 0%

Top Customer Reviews

Format: Paperback
For anyone looking for an introduction to classic discrete-state, discrete-action Markov decision processes, this is the last in a long line of books on this theory, and the only book you will need. The presentation covers this elegant theory very thoroughly, including all the major problem classes (finite and infinite horizon, discounted reward, average reward). The presentation is rigorous, and while it will be best appreciated by doctoral students and the research community, most of it can be easily understood by a master's audience with a strong background in probability.

Discrete state, discrete action models have seen limited applications because of the well-known "curse of dimensionality." This field has perhaps been best known for its ability to identify theoretical properties of models and algorithms (the book has a nice presentation of monotone policies, for example). Practical algorithms for dynamic programs typically require the approximation techniques that have evolved under names such as neuro-dynamic programming (Neuro-Dynamic Programming (Optimization and Neural Computation Series, 3)), reinforcement learning (Reinforcement Learning: An Introduction (Adaptive Computation and Machine Learning)), or approximate dynamic programming (Approximate Dynamic Programming: Solving the Curses of Dimensionality (Wiley Series in Probability and Statistics)).

Warren B. Powell
Professor
Princeton University
25 of 27 people found this helpful.
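
The review above points to reinforcement learning and approximate dynamic programming as the practical route around the curse of dimensionality. Purely as an illustration (not material from this book), here is a minimal tabular Q-learning loop in Python; the toy model, the step function, and the hyperparameters are invented for the example, and the point is only that the update uses sampled transitions rather than the full transition matrix.

    import numpy as np

    rng = np.random.default_rng(0)

    # Invented two-state, two-action toy model, used only to sample transitions
    P = np.array([[[0.9, 0.1], [0.2, 0.8]],
                  [[0.5, 0.5], [0.6, 0.4]]])   # P[a, s, s']
    r = np.array([[1.0, 0.0],
                  [0.0, 2.0]])                 # r[s, a]

    def step(s, a):
        """Sample a next state and reward from the toy model."""
        return rng.choice(2, p=P[a, s]), r[s, a]

    def q_learning(steps=5000, gamma=0.95, alpha=0.1, eps=0.1):
        """Tabular Q-learning: learn Q(s, a) from sampled transitions only."""
        Q = np.zeros((2, 2))
        s = 0
        for _ in range(steps):
            # epsilon-greedy exploration
            a = int(rng.integers(2)) if rng.random() < eps else int(Q[s].argmax())
            s_next, reward = step(s, a)
            # temporal-difference update toward the sampled Bellman target
            Q[s, a] += alpha * (reward + gamma * Q[s_next].max() - Q[s, a])
            s = s_next
        return Q

    Q = q_learning()
    print("greedy policy:", Q.argmax(axis=1))
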
Format: Paperback
Anyone working with Markov decision processes should have this book. It has detailed explanations of several algorithms for MDPs: linear programming, value iteration, and policy iteration for finite and infinite horizons, under total-reward and average-reward criteria, and there is a final chapter on continuous-time MDPs (SMDPs).

However, it does not cover some newer ideas such as partitioning and some of the faster approximate algorithms. Still, it is a great book!

You may also be interested in Bertsekas' "Dynamic Programming and Optimal Control", which covers similar material using a somewhat different approach.
4 of 4 people found this helpful.
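
To make the algorithm list in the review above concrete, here is a rough sketch (again illustrative only, not code from the book) of policy iteration for a discounted finite MDP, with the evaluation step solved exactly as a linear system. The P[a, s, s'] and r[s, a] conventions and the toy numbers are the same invented ones used in the earlier sketches.

    import numpy as np

    def policy_iteration(P, r, gamma=0.95, max_iter=100):
        """Sketch of policy iteration for a finite discounted MDP.

        Evaluation solves (I - gamma * P_d) v = r_d exactly, then the policy
        is improved greedily; iteration stops once the policy is stable.
        """
        n_actions, n_states, _ = P.shape
        policy = np.zeros(n_states, dtype=int)
        for _ in range(max_iter):
            # Exact policy evaluation
            P_d = P[policy, np.arange(n_states)]      # (S, S), row s is P[policy[s], s, :]
            r_d = r[np.arange(n_states), policy]      # (S,)
            v = np.linalg.solve(np.eye(n_states) - gamma * P_d, r_d)
            # Greedy improvement
            q = r + gamma * np.einsum("ast,t->sa", P, v)
            new_policy = q.argmax(axis=1)
            if np.array_equal(new_policy, policy):
                break
            policy = new_policy
        return policy, v

    # Same invented two-state, two-action example as above
    P = np.array([[[0.9, 0.1], [0.2, 0.8]],
                  [[0.5, 0.5], [0.6, 0.4]]])
    r = np.array([[1.0, 0.0],
                  [0.0, 2.0]])
    print(policy_iteration(P, r))
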
Format: Hardcover Verified Purchase
The bible of MDPs. I got a pretty good deal on it as well...
0 of 3 people found this helpful.
By Bob on October 7, 2015
Format: Paperback Verified Purchase
very new
0 of 3 people found this helpful.
Format: Paperback Verified Purchase
I was happy to find this book for a good price. The book was sent in a timely manner and arrived in great condition. Now I have what I need to do my research.
0 of 7 people found this helpful.
