Algorithms to Live By: The Computer Science of Human Decisions Hardcover – April 19, 2016
Brian Christian (Author)
Tom Griffiths (Author)
| Format | Price | New from | Used from |
| --- | --- | --- | --- |
| Kindle | $12.99 to buy ($0.00 with Kindle Unlimited) | — | — |
| Audible Audiobook, Unabridged | $0.00 (free with Audible trial) | — | — |
| Hardcover | $21.49 | 33 from $16.84 | 58 from $7.97 (1 collectible from $42.27) |
| Paperback | $11.60 – $12.59 | 39 from $11.24 | 52 from $3.01 |
| MP3 CD, Audiobook, Unabridged | $9.34 – $10.03 | 9 from $10.03 | 5 from $9.34 |
An exploration of how computer algorithms can be applied to our everyday lives to solve common decision-making problems and illuminate the workings of the human mind.
What should we do, or leave undone, in a day or a lifetime? How much messiness should we accept? What balance of the new and familiar is the most fulfilling? These may seem like uniquely human quandaries, but they are not. Computers, like us, confront limited space and time, so computer scientists have been grappling with similar problems for decades. And the solutions they’ve found have much to teach us.
In a dazzlingly interdisciplinary work, Brian Christian and Tom Griffiths show how algorithms developed for computers also untangle very human questions. They explain how to have better hunches and when to leave things to chance, how to deal with overwhelming choices and how best to connect with others. From finding a spouse to finding a parking spot, from organizing one’s inbox to peering into the future, Algorithms to Live By transforms the wisdom of computer science into strategies for human living.
- Print length: 368 pages
- Language: English
- Publisher: Henry Holt and Co.
- Publication date: April 19, 2016
- Dimensions: 6.43 x 1.33 x 9.58 inches
- ISBN-10: 1627790365
- ISBN-13: 978-1627790369
Popular highlights:
- "Exploration in itself has value, since trying new things increases our chances of finding the best. So taking the future into account, rather than focusing just on the present, drives us toward novelty." (highlighted by 2,770 Kindle readers)
- "To try and fail is at least to learn; to fail to try is to suffer the inestimable loss of what might have been." (highlighted by 2,426 Kindle readers)
- "This is the first and most fundamental insight of sorting theory. Scale hurts." (highlighted by 1,777 Kindle readers)
Editorial Reviews
Review
“A remarkable book... A solid, research-based book that’s applicable to real life. The algorithms the authors discuss are, in fact, more applicable to real-life problems than I’d have ever predicted.... It’s well worth the time to find a copy of Algorithms to Live By and dig deeper.”
―Forbes
“By the end of the book, I was convinced. Not because I endorse the idea of living like some hyper-rational Vulcan, but because computing algorithms could be a surprisingly useful way to embrace the messy compromises of real, non-Vulcan life.”
―The Guardian (UK)
“I absolutely reveled in this book... It's the perfect antidote to the argument you often hear from young math students: ‘What's the point? I'll never use this in real life!’... The whole business, whether it's the relative simplicity of the 37% rule or the mind-twisting possibilities of game theory, is both potentially practical and highly enjoyable as presented here. Recommended.”
―Popular Science (UK)
“An entertaining, intelligently presented book... Craftily programmed to build from one good idea to the next... The value of being aware of algorithmic thinking―of the thornier details of ‘human algorithm design,’ as Christian and Griffiths put it―is not just better problem solving, but also greater insight into the human mind. And who doesn’t want to know how we tick?”
―Kirkus Reviews
“Compelling and entertaining, Algorithms to Live By is packed with practical advice about how to use time, space, and effort more efficiently. And it’s a fascinating exploration of the workings of computer science and the human mind. Whether you want to optimize your to-do list, organize your closet, or understand human memory, this is a great read.”
―Charles Duhigg, author of The Power of Habit
“In this remarkably lucid, fascinating, and compulsively readable book, Christian and Griffiths show how much we can learn from computers. We’ve all heard about the power of algorithms―but Algorithms to Live By actually explains, brilliantly, how they work, and how we can take advantage of them to make better decisions in our own lives.”
―Alison Gopnik, coauthor of The Scientist in the Crib
“I’ve been waiting for a book to come along that merges computational models with human psychology―and Christian and Griffiths have succeeded beyond all expectations. This is a wonderful book, written so that anyone can understand the computer science that runs our world―and more importantly, what it means to our lives.”
―David Eagleman, author of Incognito: The Secret Lives of the Brain
About the Author
Tom Griffiths is a professor of psychology and cognitive science at UC Berkeley, where he directs the Computational Cognitive Science Lab. He has received widespread recognition for his scientific work, including awards from the American Psychological Association and the Sloan Foundation.
Excerpt. © Reprinted by permission. All rights reserved.
Algorithms to Live By
The Computer Science of Human Decisions
By Brian Christian and Tom Griffiths
Henry Holt and Company
Copyright © 2016 Brian Christian and Tom Griffiths. All rights reserved.
ISBN: 978-1-62779-036-9
Contents
Title Page,
Copyright Notice,
Dedication,
Introduction,
Algorithms to Live By,
1 Optimal Stopping When to Stop Looking,
2 Explore/Exploit The Latest vs. the Greatest,
3 Sorting Making Order,
4 Caching Forget About It,
5 Scheduling First Things First,
6 Bayes's Rule Predicting the Future,
7 Overfitting When to Think Less,
8 Relaxation Let It Slide,
9 Randomness When to Leave It to Chance,
10 Networking How We Connect,
11 Game Theory The Minds of Others,
Conclusion,
Computational Kindness,
Notes,
Bibliography,
Index,
Acknowledgments,
Also by Brian Christian,
About the Authors,
Copyright,
CHAPTER 1
Optimal Stopping
When to Stop Looking
Though all Christians start a wedding invitation by solemnly declaring their marriage is due to special Divine arrangement, I, as a philosopher, would like to talk in greater detail about this ... — JOHANNES KEPLER
If you prefer Mr. Martin to every other person; if you think him the most agreeable man you have ever been in company with, why should you hesitate? — JANE AUSTEN, EMMA
It's such a common phenomenon that college guidance counselors even have a slang term for it: the "turkey drop." High-school sweethearts come home for Thanksgiving of their freshman year of college and, four days later, return to campus single.
An angst-ridden Brian went to his own college guidance counselor his freshman year. His high-school girlfriend had gone to a different college several states away, and they struggled with the distance. They also struggled with a stranger and more philosophical question: how good a relationship did they have? They had no real benchmark of other relationships by which to judge it. Brian's counselor recognized theirs as a classic freshman-year dilemma, and was surprisingly nonchalant in her advice: "Gather data."
The nature of serial monogamy, writ large, is that its practitioners are confronted with a fundamental, unavoidable problem. When have you met enough people to know who your best match is? And what if acquiring the data costs you that very match? It seems the ultimate Catch-22 of the heart.
As we have seen, this Catch-22, this angsty freshman cri de coeur, is what mathematicians call an "optimal stopping" problem, and it may actually have an answer: 37%.
Of course, it all depends on the assumptions you're willing to make about love.
The Secretary Problem
In any optimal stopping problem, the crucial dilemma is not which option to pick, but how many options to even consider. These problems turn out to have implications not only for lovers and renters, but also for drivers, homeowners, burglars, and beyond.
The 37% Rule derives from optimal stopping's most famous puzzle, which has come to be known as the "secretary problem." Its setup is much like the apartment hunter's dilemma that we considered earlier. Imagine you're interviewing a set of applicants for a position as a secretary, and your goal is to maximize the chance of hiring the single best applicant in the pool. While you have no idea how to assign scores to individual applicants, you can easily judge which one you prefer. (A mathematician might say you have access only to the ordinal numbers — the relative ranks of the applicants compared to each other — but not to the cardinal numbers, their ratings on some kind of general scale.) You interview the applicants in random order, one at a time. You can decide to offer the job to an applicant at any point and they are guaranteed to accept, terminating the search. But if you pass over an applicant, deciding not to hire them, they are gone forever.
The secretary problem is widely considered to have made its first appearance in print — sans explicit mention of secretaries — in the February 1960 issue of Scientific American, as one of several puzzles posed in Martin Gardner's beloved column on recreational mathematics. But the origins of the problem are surprisingly mysterious. Our own initial search yielded little but speculation, before turning into unexpectedly physical detective work: a road trip down to the archive of Gardner's papers at Stanford, to haul out boxes of his midcentury correspondence. Reading paper correspondence is a bit like eavesdropping on someone who's on the phone: you're only hearing one side of the exchange, and must infer the other. In our case, we only had the replies to what was apparently Gardner's own search for the problem's origins fiftysome years ago. The more we read, the more tangled and unclear the story became.
Harvard mathematician Frederick Mosteller recalled hearing about the problem in 1955 from his colleague Andrew Gleason, who had heard about it from somebody else. Leo Moser wrote from the University of Alberta to say that he read about the problem in "some notes" by R. E. Gaskell of Boeing, who himself credited a colleague. Roger Pinkham of Rutgers wrote that he first heard of the problem in 1955 from Duke University mathematician J. Shoenfield, "and I believe he said that he had heard the problem from someone at Michigan."
"Someone at Michigan" was almost certainly someone named Merrill Flood. Though he is largely unheard of outside mathematics, Flood's influence on computer science is almost impossible to avoid. He's credited with popularizing the traveling salesman problem (which we discuss in more detail in chapter 8), devising the prisoner's dilemma (which we discuss in chapter 11), and even with possibly coining the term "software." It's Flood who made the first known discovery of the 37% Rule, in 1958, and he claims to have been considering the problem since 1949 — but he himself points back to several other mathematicians.
Suffice it to say that wherever it came from, the secretary problem proved to be a near-perfect mathematical puzzle: simple to explain, devilish to solve, succinct in its answer, and intriguing in its implications. As a result, it moved like wildfire through the mathematical circles of the 1950s, spreading by word of mouth, and thanks to Gardner's column in 1960 came to grip the imagination of the public at large. By the 1980s the problem and its variations had produced so much analysis that it had come to be discussed in papers as a subfield unto itself.
As for secretaries — it's charming to watch each culture put its own anthropological spin on formal systems. We think of chess, for instance, as medieval European in its imagery, but in fact its origins are in eighth-century India; it was heavy-handedly "Europeanized" in the fifteenth century, as its shahs became kings, its viziers turned to queens, and its elephants became bishops. Likewise, optimal stopping problems have had a number of incarnations, each reflecting the predominating concerns of its time. In the nineteenth century such problems were typified by baroque lotteries and by women choosing male suitors; in the early twentieth century by holidaying motorists searching for hotels and by male suitors choosing women; and in the paper-pushing, male-dominated mid-twentieth century, by male bosses choosing female assistants. The first explicit mention of it by name as the "secretary problem" appears to be in a 1964 paper, and somewhere along the way the name stuck.
Whence 37%?
In your search for a secretary, there are two ways you can fail: stopping early and stopping late. When you stop too early, you leave the best applicant undiscovered. When you stop too late, you hold out for a better applicant who doesn't exist. The optimal strategy will clearly require finding the right balance between the two, walking the tightrope between looking too much and not enough.
If your aim is finding the very best applicant, settling for nothing less, it's clear that as you go through the interview process you shouldn't even consider hiring somebody who isn't the best you've seen so far. However, simply being the best yet isn't enough for an offer; the very first applicant, for example, will of course be the best yet by definition. More generally, it stands to reason that the rate at which we encounter "best yet" applicants will go down as we proceed in our interviews. For instance, the second applicant has a 50/50 chance of being the best we've yet seen, but the fifth applicant only has a 1-in-5 chance of being the best so far, the sixth has a 1-in-6 chance, and so on. As a result, best-yet applicants will become steadily more impressive as the search continues (by definition, again, they're better than all those who came before) — but they will also become more and more infrequent.
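The 1-in-k pattern is easy to check empirically. Here is a minimal Monte Carlo sketch (mine, not the authors') that estimates how often the k-th applicant in a random ordering is the best seen so far:

```python
import random

def best_yet_rate(k, trials=100_000):
    """Estimate the probability that the k-th applicant, in a random
    ordering, is the best of the first k seen."""
    hits = 0
    for _ in range(trials):
        scores = [random.random() for _ in range(k)]
        # The k-th applicant is "best yet" if the last draw is the maximum.
        if scores[-1] == max(scores):
            hits += 1
    return hits / trials

for k in (2, 5, 6):
    print(k, round(best_yet_rate(k), 2))  # ≈ 1/2, 1/5, 1/6
```

The estimates settle near 1/k, matching the rates quoted in the text for the second, fifth, and sixth applicants.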
Okay, so we know that taking the first best-yet applicant we encounter (a.k.a. the first applicant, period) is rash. If there are a hundred applicants, it also seems hasty to make an offer to the next one who's best-yet, just because she was better than the first. So how do we proceed?
Intuitively, there are a few potential strategies. For instance, making an offer the third time an applicant trumps everyone seen so far — or maybe the fourth time. Or perhaps taking the next best-yet applicant to come along after a long "drought" — a long streak of poor ones.
But as it happens, neither of these relatively sensible strategies comes out on top. Instead, the optimal solution takes the form of what we'll call the Look-Then-Leap Rule: You set a predetermined amount of time for "looking" — that is, exploring your options, gathering data — in which you categorically don't choose anyone, no matter how impressive. After that point, you enter the "leap" phase, prepared to instantly commit to anyone who outshines the best applicant you saw in the look phase.
We can see how the Look-Then-Leap Rule emerges by considering how the secretary problem plays out in the smallest applicant pools. With just one applicant the problem is easy to solve — hire her! With two applicants, you have a 50/50 chance of success no matter what you do. You can hire the first applicant (who'll turn out to be the best half the time), or dismiss the first and by default hire the second (who is also best half the time).
Add a third applicant, and all of a sudden things get interesting. The odds if we hire at random are one-third, or 33%. With two applicants we could do no better than chance; with three, can we? It turns out we can, and it all comes down to what we do with the second interviewee. When we see the first applicant, we have no information — she'll always appear to be the best yet. When we see the third applicant, we have no agency — we have to make an offer to the final applicant, since we've dismissed the others. But when we see the second applicant, we have a little bit of both: we know whether she's better or worse than the first, and we have the freedom to either hire or dismiss her. What happens when we just hire her if she's better than the first applicant, and dismiss her if she's not? This turns out to be the best possible strategy when facing three applicants; using this approach it's possible, surprisingly, to do just as well in the three-applicant problem as with two, choosing the best applicant exactly half the time.
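The three-applicant case is small enough to verify by brute force. A quick sketch (my own, not from the book) that enumerates all six orderings under the skip-one strategy described above:

```python
from itertools import permutations

def success_rate_three():
    """Enumerate all 3! orderings of ranks 1..3 (3 = best).
    Strategy: skip the first applicant; hire the second iff she beats
    the first; otherwise we are forced to take the third."""
    wins = 0
    orders = list(permutations([1, 2, 3]))
    for a, b, c in orders:
        hired = b if b > a else c
        wins += (hired == 3)
    return wins / len(orders)

print(success_rate_three())  # 0.5
```

Exactly three of the six orderings end with the best applicant hired, confirming the 50% figure.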
Enumerating these scenarios for four applicants tells us that we should still begin to leap as soon as the second applicant; with five applicants in the pool, we shouldn't leap before the third.
As the applicant pool grows, the exact place to draw the line between looking and leaping settles to 37% of the pool, yielding the 37% Rule: look at the first 37% of the applicants, choosing none, then be ready to leap for anyone better than all those you've seen so far.
As it turns out, following this optimal strategy ultimately gives us a 37% chance of hiring the best applicant; it's one of the problem's curious mathematical symmetries that the strategy itself and its chance of success work out to the very same number. Tabulating the optimal strategy for different numbers of applicants demonstrates how the chance of success — like the point to switch from looking to leaping — converges on 37% as the number of applicants increases.
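The convergence can be reproduced exactly. Under the standard analysis, looking through the first k of n applicants and then leaping succeeds with probability (k/n) · Σ from i = k+1 to n of 1/(i−1); the short script below (mine, not the book's) finds the optimal k for several pool sizes:

```python
def optimal_look(n):
    """Return (k, P): the look-phase length k maximizing the exact
    success probability P(k) = (k/n) * sum_{i=k+1}^{n} 1/(i-1),
    with P(0) = 1/n (hire the first applicant blindly)."""
    def p(k):
        if k == 0:
            return 1.0 / n
        return (k / n) * sum(1.0 / (i - 1) for i in range(k + 1, n + 1))
    best_k = max(range(n), key=p)
    return best_k, p(best_k)

for n in (3, 4, 5, 10, 100):
    k, prob = optimal_look(n)
    print(f"n={n:4d}  look at first {k}  P(success)={prob:.4f}")
```

For n = 3 the optimal look phase is 1 applicant with a 50% success rate, matching the enumeration in the text; by n = 100 the cutoff is 37 applicants and the success probability is about 0.371, both closing in on 1/e ≈ 37%.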
A 63% failure rate, when following the best possible strategy, is a sobering fact. Even when we act optimally in the secretary problem, we will still fail most of the time — that is, we won't end up with the single best applicant in the pool. This is bad news for those of us who would frame romance as a search for "the one." But here's the silver lining. Intuition would suggest that our chances of picking the single best applicant should steadily decrease as the applicant pool grows. If we were hiring at random, for instance, then in a pool of a hundred applicants we'd have a 1% chance of success, and in a pool of a million applicants we'd have a 0.0001% chance. Yet remarkably, the math of the secretary problem doesn't change. If you're stopping optimally, your chance of finding the single best applicant in a pool of a hundred is 37%. And in a pool of a million, believe it or not, your chance is still 37%. Thus the bigger the applicant pool gets, the more valuable knowing the optimal algorithm becomes. It's true that you're unlikely to find the needle the majority of the time, but optimal stopping is your best defense against the haystack, no matter how large.
Lover's Leap
The passion between the sexes has appeared in every age to be so nearly the same that it may always be considered, in algebraic language, as a given quantity. — THOMAS MALTHUS
I married the first man I ever kissed. When I tell this to my children they just about throw up. — BARBARA BUSH
Before he became a professor of operations research at Carnegie Mellon, Michael Trick was a graduate student, looking for love. "It hit me that the problem has been studied: it is the Secretary Problem! I had a position to fill [and] a series of applicants, and my goal was to pick the best applicant for the position." So he ran the numbers. He didn't know how many women he could expect to meet in his lifetime, but there's a certain flexibility in the 37% Rule: it can be applied to either the number of applicants or the time over which one is searching. Assuming that his search would run from ages eighteen to forty, the 37% Rule gave age 26.1 years as the point at which to switch from looking to leaping. A number that, as it happened, was exactly Trick's age at the time. So when he found a woman who was a better match than all those he had dated so far, he knew exactly what to do. He leapt. "I didn't know if she was Perfect (the assumptions of the model don't allow me to determine that), but there was no doubt that she met the qualifications for this step of the algorithm. So I proposed," he writes.
"And she turned me down."
Mathematicians have been having trouble with love since at least the seventeenth century. The legendary astronomer Johannes Kepler is today perhaps best remembered for discovering that planetary orbits are elliptical and for being a crucial part of the "Copernican Revolution" that included Galileo and Newton and upended humanity's sense of its place in the heavens. But Kepler had terrestrial concerns, too. After the death of his first wife in 1611, Kepler embarked on a long and arduous quest to remarry, ultimately courting a total of eleven women. Of the first four, Kepler liked the fourth the best ("because of her tall build and athletic body") but did not cease his search. "It would have been settled," Kepler wrote, "had not both love and reason forced a fifth woman on me. This one won me over with love, humble loyalty, economy of household, diligence, and the love she gave the stepchildren."
"However," he wrote, "I continued."
Kepler's friends and relations went on making introductions for him, and he kept on looking, but halfheartedly. His thoughts remained with number five. After eleven courtships in total, he decided he would search no further. "While preparing to travel to Regensburg, I returned to the fifth woman, declared myself, and was accepted." Kepler and Susanna Reuttinger were wed and had six children together, along with the children from Kepler's first marriage. Biographies describe the rest of Kepler's domestic life as a particularly peaceful and joyous time.
Both Kepler and Trick — in opposite ways — experienced firsthand some of the ways that the secretary problem oversimplifies the search for love. In the classical secretary problem, applicants always accept the position, preventing the rejection experienced by Trick. And they cannot be "recalled" once passed over, contrary to the strategy followed by Kepler.
In the decades since the secretary problem was first introduced, a wide range of variants on the scenario have been studied, with strategies for optimal stopping worked out under a number of different conditions. The possibility of rejection, for instance, has a straightforward mathematical solution: propose early and often. If you have, say, a 50/50 chance of being rejected, then the same kind of mathematical analysis that yielded the 37% Rule says you should start making offers after just a quarter of your search. If turned down, keep making offers to every best-yet person you see until somebody accepts. With such a strategy, your chance of overall success — that is, proposing and being accepted by the best applicant in the pool — will also be 25%. Not such terrible odds, perhaps, for a scenario that combines the obstacle of rejection with the general difficulty of establishing one's standards in the first place.
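The propose-early variant can also be checked by simulation. A hedged sketch, with assumptions of my own (100 applicants, each offer accepted with probability 0.5, look phase of 25%):

```python
import random

def propose_with_rejection(n=100, cutoff_frac=0.25, accept_p=0.5,
                           trials=50_000):
    """Secretary problem where each offer is accepted only with
    probability accept_p. Strategy: look through the first cutoff_frac
    of the pool, then propose to every best-yet candidate until
    somebody accepts. Success = the overall best accepted."""
    cutoff = int(n * cutoff_frac)
    wins = 0
    for _ in range(trials):
        ranks = list(range(n))
        random.shuffle(ranks)
        best_seen = max(ranks[:cutoff])
        hired = None
        for r in ranks[cutoff:]:
            if r > best_seen:
                best_seen = r
                if random.random() < accept_p:
                    hired = r
                    break
        wins += (hired == n - 1)
    return wins / trials

print(round(propose_with_rejection(), 2))  # ≈ 0.25
```

With a 50/50 acceptance chance and a one-quarter look phase, the simulated overall success rate comes out near 25%, consistent with the figure in the text.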
(Continues...) Excerpted from Algorithms to Live By by Brian Christian and Tom Griffiths. Copyright © 2016 Brian Christian and Tom Griffiths. Excerpted by permission of Henry Holt and Company.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
Excerpts are provided by Dial-A-Book Inc. solely for the personal use of visitors to this web site.
Product details
- Publisher : Henry Holt and Co.; 1st edition (April 19, 2016)
- Language : English
- Hardcover : 368 pages
- ISBN-10 : 1627790365
- ISBN-13 : 978-1627790369
- Item Weight : 1.2 pounds
- Dimensions : 6.43 x 1.33 x 9.58 inches
- Best Sellers Rank: #148,766 in Books
- #252 in Business Decision Making
- #408 in Decision-Making & Problem Solving
- #583 in Cognitive Psychology (Books)
About the authors

Brian Christian is the author of the acclaimed bestsellers "The Most Human Human," a New York Times editors' choice and a New Yorker favorite book of the year, and "Algorithms to Live By" (with Tom Griffiths), a #1 Audible bestseller, an Amazon best science book of the year, and an MIT Technology Review best book of the year.
Christian’s writing has appeared in The New Yorker, The Atlantic, Wired, and The Wall Street Journal, as well as peer-reviewed journals such as Cognitive Science. He has been featured on The Daily Show and Radiolab, and has lectured at Google, Facebook, Microsoft, the Santa Fe Institute, and the London School of Economics. His work has won several awards, including publication in Best American Science & Nature Writing, and has been translated into nineteen languages.
Christian holds degrees in computer science, philosophy, and poetry from Brown University and the University of Washington. A Visiting Scholar at the University of California, Berkeley, he lives in San Francisco.

Tom Griffiths is a professor of psychology and computer science at Princeton, where he directs the Computational Cognitive Science Lab. He has published scientific papers on topics ranging from cognitive psychology to cultural evolution, and has received awards from the National Academy of Sciences, the Sloan Foundation, the American Psychological Association, and the Psychonomic Society, among others. He lives in Princeton, New Jersey.
Customer reviews
Reviewed in the United States on January 27, 2019
Top reviews from the United States
Within the “Caching” chapter the authors make much of the fact that human memory and media headlines both fade away very rapidly as time goes by. They feel this forgetting pattern is part of an underlying universal principle. It may be, but when you look at their own graphs (pg. 101) on the subject, they omit to emphasize that the graphs have completely different scales on the x-axis. What human beings forget in a matter of hours, the media moves on from in a matter of days. This very different time scale diminishes the insight associated with the principle, beyond the obvious: yes indeed, individuals and even societies have a limited memory (each on its own respective scale).
The authors are often not in tune with the information age. For instance, the algorithm that dominates the first half of the book is the 37% rule: stop gathering data after researching 37% of the options you were considering exploring. They apply this to virtually everything. If you were planning to date 10 different people before getting married in order to “shop around,” you apparently have enough info after dating the first 4. If you are planning to rent an apartment, the same is true (you have enough info to make a good choice after seeing the first 4 apartments out of 10). If you are planning to sell a house, you can accept an offer after passing on the first few. If you are recruiting and hiring a secretary, the same principle holds.
However, in the online world we have far more information than in the world the authors describe. Regarding mating, with numerous online dating websites one has far more information and choices than the 37% rule assumes. The same is true if you are hiring a secretary: you can advertise on an online platform, receive a hundred résumés in a few days, filter those résumés, interview just a few candidates, select the best one, and be done with it. This kind of recruiting renders the 37% rule irrelevant (you don’t need to interview 37 candidates out of 100, since you already have a lot of info on all of them before interviewing them).
Also absent from the math the authors convey is the concept of supply and demand. When selling a house, the transaction is dominated by local supply and demand. For instance, anyone who sold a home during the housing crisis most probably did not have the luxury of waiting for better offers, as the 37% rule would suggest. In general, waiting for a better offer does not work well in real estate. A house’s number of days on the market is a measure of how stale a prospective sale has become. Waiting for better offers (per the 37% rule) typically does not work; that is why sellers remove their homes from the market, to give the listing a fresh reset.
Also absent from the authors’ calculations are moral considerations, as when they state: if you are a skilled burglar with a 90% chance of pulling off each robbery (and a 10% chance of losing it all by being caught), then retire after 90/10 = 9 robberies. Cool math, but not exactly “Algorithms to Live By,” as the title suggests.
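For what it’s worth, the burglar figure the book quotes can be reproduced under its toy model: each robbery nets one unit with probability p = 0.9, and getting caught forfeits everything. A sketch (my reading of the model, not the authors’ code):

```python
def expected_haul(k, p=0.9):
    """Expected final take if you plan exactly k robberies: you keep
    your k units only if all k succeed, so E = k * p**k."""
    return k * p ** k

# The expected haul peaks where one more robbery stops paying for
# itself, around k = p / (1 - p) = 9 for p = 0.9 (9 and 10 tie).
best_k = max(range(1, 50), key=expected_haul)
print(best_k)
```

Maximizing k · p^k over whole numbers lands on the 90/10 = 9 threshold the review cites (with k = 10 an exact tie).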
On other occasions, they do not support or explain the underlying math at all. Such is the case for the Gittins index, which they cover on pages 39 to 42 and which is associated with counterintuitive results that remain confounding.
Other algorithms appear flawed. This includes the Upper Confidence Bound algorithm, which supposedly guarantees minimal regret. I am unclear how that would be the case, because by selecting such an option you also take on the maximum risk. That’s what condo flippers did during the housing crisis. Leveraging gets you up to the Upper Confidence Bound… but also down to the Lower one.
The authors cover the most important subject, Bayesian statistics, within chapter 6. However, their treatment focuses far more on challenging technical considerations, like the probability distributions of the priors (normal, power-law, Erlang, etc.), than on explaining the basics of Bayes’ theorem. Without establishing a good foundation in Bayes’ theorem, any insights regarding prior distributions are rather obfuscating. For better coverage of Bayesian statistics, Nate Silver’s “The Signal and the Noise” is a lot more edifying.
Several of the chapters’ subjects and titles use confusing plays on words that make them sound relevant to your daily life when they really are not. The chapter on “Relaxation” has nothing to do with relaxation; it describes mathematicians removing technical constraints from very challenging problems in order to be able to solve them. The chapter on “Randomness” likewise has little to do with a layperson’s meaning of randomness. Instead, it deals with technical concepts regarding sampling, Monte Carlo simulation, and randomized algorithms. Those represent another set of math strategies for solving what would otherwise be unresolvable problems.
The book is not all bad.
The chapter on “Overfitting” is excellent, even though it is still aimed at the math-geek crowd and provides little in the way of “Algorithms to Live By” (the book is truly mistitled and misspecified in terms of target audience). In this chapter, the authors warn against fitting higher-degree polynomials to better match the curve of a given data set. The problem is not limited to higher-degree polynomials: any model with many variables may fit the history of the data very well, yet such complex models often do a worse job of predicting new data than much simpler models that fit the learning sample less closely. Their references to cross-validation to test for overfitting, regularization to preempt it, and stepwise methods to build streamlined models are interesting but arcane technicalities; none has much relevance to your daily-life decisions.
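A minimal illustration of the chapter’s warning (my own toy example, not the book’s): a polynomial that passes exactly through five noisy samples of a straight line predicts wildly outside the sample, while the simple underlying line does fine.

```python
def lagrange_interpolate(points, x):
    """Evaluate the polynomial that passes exactly through every
    point: a 'perfect fit' to the training data."""
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Noisy samples of the line y = x (small measurement errors):
pts = [(0, 0.0), (1, 1.1), (2, 1.9), (3, 3.2), (4, 3.8)]
simple_model = lambda x: x  # the underlying trend

print(simple_model(6))                           # 6, about right
print(round(lagrange_interpolate(pts, 6), 1))    # about -11.9, wildly off
```

The complex model scores perfectly on the data it has seen and fails badly on the data it has not, which is the whole point of the chapter.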
The chapter on “Game Theory” is also excellent. Their explanation of a specific game-theory situation, the information cascade, is truly fascinating and, for once, most relevant. It explains a great deal about group behavior, asset bubbles, and related financial crises. What others have often described as the “madness of crowds” may be better explained by information cascades. During the most recent financial crisis, each relevant party may have followed its own rational economic interest, yet the whole economic sector was plagued by negative equilibria that led to inevitable disasters. This is a characteristic of information cascades, as described in the book’s section “Information Cascades: The Tragic Rationality of Bubbles.”
My rating reflects that there are only two excellent chapters out of 11, and most of the math content is not really relevant to your daily life. If you have not heard of the 37% rule, there is a good reason for that; it is obsolete.
Optimal stopping - how many people out of 100 possible candidates should one interview for a given position (including that of spouse)? 37%. Why? Read the book.
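For the curious, the 37% rule is easy to check by simulation. This is a sketch under the standard “secretary problem” assumptions (candidates arrive in random order, no going back), not code from the book:

```python
import random

def secretary_trial(n=100, cutoff=37):
    """One round of the secretary problem: look at the first `cutoff`
    candidates without committing, then hire the first candidate
    better than everything seen so far."""
    ranks = random.sample(range(n), n)   # 0 = the very best candidate
    best_seen = min(ranks[:cutoff])
    for r in ranks[cutoff:]:
        if r < best_seen:
            return r == 0                # did we hire the best?
    return False                         # never hired anyone

random.seed(1)
wins = sum(secretary_trial() for _ in range(10_000))
print(wins / 10_000)   # close to the theoretical ~0.37
```

The surprising part is that the success rate matches the cutoff: looking at 37% of candidates wins about 37% of the time.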
The Explore/Exploit dichotomy - Should one ask the question "What's new" or "What's best"? Your answer may depend on your time horizon. As your time horizon shortens, "what's best" may be the better question. The book explains why. The book also looks at the multi-armed bandit as an example of the explore/exploit dichotomy. What's a multi-armed bandit? Think of the one-armed bandit in Vegas and multiply its arms. Mathematicians do so. Their conclusions may be useful. The trials of music critics also fit into the explore/exploit dichotomy. The authors explain why music critics find exploration a chore.
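As one concrete, deliberately simple bandit strategy (an illustration of the dichotomy, not one the book ranks above others), epsilon-greedy asks “what’s new?” a fixed fraction of the time and “what’s best?” otherwise:

```python
import random

def epsilon_greedy(values, epsilon=0.1):
    """With probability epsilon, explore a random arm ("what's new?");
    otherwise exploit the best observed mean ("what's best?")."""
    if random.random() < epsilon:
        return random.randrange(len(values))                      # explore
    return max(range(len(values)), key=lambda i: values[i])       # exploit

# Two slot machines: arm 0 pays off 30% of the time, arm 1 pays 60%.
random.seed(0)
probs = [0.3, 0.6]
counts, values = [0, 0], [0.0, 0.0]
for _ in range(5_000):
    arm = epsilon_greedy(values)
    reward = 1.0 if random.random() < probs[arm] else 0.0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # running mean
print(counts[1] > counts[0])   # True: play shifts to the better arm
```

A shortening time horizon corresponds to dialing epsilon down: less payoff remains from learning something new, so “what’s best” wins out.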
Sorting - libraries are the metaphor for computer sorting. Human memory also requires sorting. The decline in memory as humans age may be due to the amount of information through which it must sort, not to declining faculties: a five-year-old has a lot less information to go through than a seventy-five-year-old. The authors consider sorting techniques with email, Yelp, and other common uses. There is much useful information.
Caching - when is forgetting necessary? According to the authors, the first computer cache was developed for a supercomputer in 1962 in Manchester, England. I wonder how "super" that computer was? Caching allows frequently used information to be stored close at hand for repeated use, while uncached information is kept in the background.
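The “forgetting” policy most systems actually use is Least Recently Used eviction. A minimal sketch (a hypothetical class of my own, not from the book): keep only the most recently touched items, and forget whatever has gone unused the longest.

```python
from collections import OrderedDict

class LRUCache:
    """Keep the `capacity` most recently used items; evict the one
    untouched the longest (the machine's version of forgetting)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)         # mark as recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # forget the stalest item

cache = LRUCache(2)
cache.put("a", 1); cache.put("b", 2)
cache.get("a")                   # touch "a"
cache.put("c", 3)                # evicts "b", the least recently used
print(cache.get("b"), cache.get("a"))   # None 1
```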
Scheduling - many scheduling problems are "intractable." The authors suggest different solutions based on approaches such as precedence constraints, earliest due date (one I personally use frequently, coupled with a personal "likely to get me in the most trouble the quickest" test), and shortest processing time. The scheduling problem has received substantial effort from mathematicians.
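The two single-machine rules mentioned here are one-liners in code. A sketch with made-up jobs (the names and numbers are mine, purely for illustration): earliest due date minimizes the worst lateness, while shortest processing time minimizes the total time jobs spend waiting.

```python
def earliest_due_date(jobs):
    """Minimize maximum lateness: do jobs in due-date order."""
    return sorted(jobs, key=lambda j: j["due"])

def shortest_processing_time(jobs):
    """Minimize total waiting: do the quickest jobs first."""
    return sorted(jobs, key=lambda j: j["time"])

jobs = [{"name": "report", "time": 4, "due": 10},
        {"name": "email",  "time": 1, "due": 12},
        {"name": "taxes",  "time": 3, "due": 5}]

print([j["name"] for j in earliest_due_date(jobs)])         # taxes first
print([j["name"] for j in shortest_processing_time(jobs)])  # email first
```

Note that the two rules order the same to-do list differently, which is the book’s point: the "best" schedule depends on what you are minimizing.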
Bayes's Rule - how to use statistical inference to make useful predictions. Couple a well-defined problem with a range of prior outcomes and one can make accurate guesses. A .300 hitter comes to the plate against the same pitcher who has already struck the batter out twice and it may be a fair guess that the hitter is due for a hit.
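The rule itself fits in one line. A sketch with invented spam-filter numbers (the percentages are hypothetical, chosen only to illustrate the mechanics):

```python
def posterior(prior, likelihood, evidence):
    """Bayes's rule: P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / evidence

# Hypothetical numbers: 1% of email is spam, 90% of spam contains
# the word "winner", and 5% of all email contains it.
p = posterior(prior=0.01, likelihood=0.9, evidence=0.05)
print(round(p, 2))   # 0.18: the word alone makes spam 18% likely
```

The prior does the heavy lifting: because spam is rare to begin with, even strong evidence leaves the posterior well below certainty.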
Overfitting - don't overthink and overcomplicate a problem. The authors advise against practicing the idolatry of data. A more complex model may well lead to less accuracy rather than more. On the subject of incentive compensation, the authors cite Steve Jobs on being careful to include only those elements in your incentive package that matter; you will get what you measure.
Relaxation - the perfect is the enemy of the good. To get any useful answer from your mathematical model, it may be necessary to relax some of your constraints (insisting that your model never allow the traveling salesman to enter the same city twice may preclude getting any answer at all in less time than the remaining life of the universe).
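A classic illustration of relaxation (my example, not the book’s traveling-salesman one): the 0/1 knapsack problem is hard, but relax the “take an item whole or not at all” constraint and a simple greedy rule solves the relaxed problem exactly.

```python
def fractional_knapsack(items, capacity):
    """Relaxed knapsack: items are (value, weight) pairs and may be
    split. Greedily take the best value-per-weight items, splitting
    the last one to fill remaining capacity."""
    items = sorted(items, key=lambda it: it[0] / it[1], reverse=True)
    total = 0.0
    for value, weight in items:
        take = min(weight, capacity)
        total += value * take / weight
        capacity -= take
        if capacity == 0:
            break
    return total

print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))  # 240.0
```

The relaxed answer is not always achievable in the original problem, but it comes quickly and bounds how good any real answer could be.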
Randomness - mathematicians sometimes realize that the best answer comes from sampling and not from strict calculations. This may explain why I get so many survey requests. Algorithms for prime numbers use this technique. And, apparently, thousands of years ago the Greeks were already looking for prime numbers.
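The primality testing alluded to here works exactly by sampling. A sketch of the simple Fermat test (the randomized algorithms used in practice, such as Miller-Rabin, are refinements of this idea):

```python
import random

def probably_prime(n, trials=20):
    """Fermat test: if n is prime, pow(a, n-1, n) == 1 for every a
    coprime to n. A random a violating this proves n composite;
    passing many random trials makes primality overwhelmingly likely
    (rare Carmichael numbers can fool this simple version)."""
    if n < 4:
        return n in (2, 3)
    for _ in range(trials):
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:
            return False     # certainly composite
    return True              # almost certainly prime

random.seed(0)
print(probably_prime(104729), probably_prime(104730))  # True False
```

A strict calculation would have to rule out every possible divisor; twenty random samples settle the question to near certainty in microseconds.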
Networking - here the authors examine the "Byzantine generals" problem, which plays a part in explaining how computers communicate with each other.
Game Theory - Alan Turing investigated the "halting problem" in the 1930s. What if you give your computer a problem and it just keeps going? Rock, paper, scissors is a game with which most are familiar. It, too, is part of game theory. When a game seems to have no satisfactory answer, maybe it's time to change the game. What happens when you have an "information cascade"?
If any of this interests you, I believe that you will enjoy the book. I recommend it highly.
Top reviews from other countries
You are then given guidance to develop strategies for living in happiness by using a more LOGICAL approach: spot danger and take positive action to prevent jeopardy by considering whether what you are doing is meaningful and worthwhile and brings LONG-TERM happiness.
This ‘ME TOO’ book is not for everyone, because it asks you to examine and challenge traditional ‘taboos’ and what is euphemistically known as ‘conventional wisdom’ – and then to have the COURAGE to take the required actions to set your life in order and gain your liberty and FREEDOM.
Five stars
It covers approaches to searching, and when to stop looking for improvements over what you already have. It discusses sorting, and the tradeoffs between time spent keeping things in order and time spent finding them later. It covers scheduling, and how the best order in which to do things depends very much on what you are trying to optimise. It finishes with game theory, explaining why some situations lead to poor outcomes for all, and how understanding this can help you know how to change the situation to get better outcomes. And it does all this, and more, with a light touch that makes it very readable.