134 of 148 people found the following review helpful
on June 8, 2007
This is a glib, anecdotal book built around a basic, almost stereotypical Harvard Business Review five-level model, this one focusing on various levels of use of analytical methods, systems and processes. At the lowest level, there is almost nothing going on in terms of analytics and, at the highest level, analytics are systematic, widespread and strategic. You can figure out the middle three levels. In my experience, there would be some use in providing a zero-level, or even negative-level, use of analytics: those firms operating in the "data-free" zone. They would provide some humor and color, not just useful references.
As to the subtitle, "The new science of winning," to be clear, "competing" and "winning" are not synonymous or even necessarily linked. Competing is not necessarily about winning and winning isn't as important as remaining competitive in the long run. Winning isn't everything and it is not the only thing.
The anecdotes tend towards Harrah's, the Boston Red Sox and several less-than-mainstream firms, along with a few data-crazed firms, e.g., Google. More, and more detailed, examples of the first-rate use of analytics by top competitors in the corporate world would have been welcome. Personally, Harrah's use of analytics to maximize gambling revenues strikes me as exploiting people's addictions. As to the Red Sox, at least they finally won a Series. As to data, the authors seem to think that 'data' is a singular noun, which leaves me somewhat perplexed as to the analytics applied to editing the text.
The book is shorter than the listed 240 pages. The anecdotes tend to be repetitive, the analytics more descriptive than analytic, and the five-level model gets driven home right away and then driven in repeatedly. We can probably all agree that the information age provides the capacity to mine data, to analyze it thoroughly, to disseminate it appropriately and widely, to use it strategically, and to provide the essential leadership to hire the people, structure the organization, and put the entire system in place in the first place.
"Competing" was not as boring as I expected it to be and not as informative as I wanted it to be.
416 of 484 people found the following review helpful
on July 12, 2007
This book is, for the most part, a disappointing mix of fallacy, circularity, inconsistency, banality and utopian promises. If you've read books such as N. Taleb's "Fooled by Randomness", P. Rosenzweig's "The Halo Effect", or, for the classically educated, D. Fischer's comprehensive "Historians' Fallacies" (1970), you can easily while away a few lazy hours spotting the bad reasoning throughout this book. I'll give a few examples in a minute or two.
The effect is more disappointing than infuriating because, unlike many other business authors, the authors aren't claiming to have some unique insights or to have discovered some new principle of strategy; their aims are refreshingly modest. About the best I can say for it is (a) if you never read the January 23, 2006 Business Week cover story "Math Will Rock Your World" (which, as of this writing, was available for free online) you can learn that sophisticated mathematical tools are being used in business, and that the market value of math Ph.D.s is increasing, and (b) if you did read that article and don't know much else about these tools, you can learn a little bit of terminology/jargon from the text boxes scattered throughout the book, and maybe a little bit about the political problems of implementing them (@145-146). As other reviewers have pointed out, the book won't teach you how to use or implement such tools. (The authors are forthright about this, e.g. @22.) Unfortunately, the authors also don't give any concrete illustration, with formulas or pictures or even an extended analogy, of how any such tool is used; they merely assert the tools' efficacy.
Or rather, -- and this is where the trouble begins -- they don't merely assert, they *emphatically* assert, as in the book's rhapsodic concluding paragraph about what the future looks like for analytic competitors (@186): "They'll get the best customers and charge them exactly the price that the customer is willing to pay ... They'll have the most efficient and effective marketing campaigns and promotions. Their customer service will excel ... Their supply chains will be ultraefficient, and they'll have neither excess inventory nor stock-outs," etc., a prophetic vision of near-Biblical proportions (cf. Dvorim a/k/a Deuteronomy, Chapter 11). (However, I was stumped by one item in this catalogue of blessings for the faithful: "They'll have the best people or [sic] the best players in the industry" -- what's the difference?)
Having treated of utopian promises, here are a few examples of the other flaws I mentioned:
A. FALLACY (and related sins): The most obvious ones in the book are: (i) confusing causation with correlation, (ii) attempting to lead the reader into such confusion, and (iii) "post hoc, propter hoc" (if Y comes after X, Y must have been caused by X).
(i): At page 178, the authors discuss "direct discovery technologies" that mine data and would "let managers go directly to the cause of variances in results or performance. This would be a form of predictive analytics, since it would employ a model of how the business is supposed to perform, and would pinpoint factors that are out of range in the causal model of business performance."
First we need to deal with a textual ambiguity: the meaning of "supposed" in this context. If "supposed to" is normative -- i.e. meaning "is desired to" -- then to call technology "predictive" when it uses such a model is quite a stretch. So does "supposed to" have a more neutral meaning, like "is anticipated to"? I'll assume that this fits the context better.
Now let's get to the real problem: The model is looking at results and performance -- i.e., the past. As statistical programs are wont to do, the model can identify correlations; and let's assume that it will make predictions based on the observed correlations (there are some commercial software packages that promise this). That is quite different from divining causes, which nonetheless is what the authors have twice asserted in this passage. I leave aside the question of predictive value based on past results; read Taleb or your mutual fund prospectus ("Past results are no guarantee of future performance").
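The distinction is easy to demonstrate. Here is a toy simulation (my own contrivance, with made-up stand-ins like "ad spend" and "sales" -- nothing from the book) in which a hidden driver produces a strong correlation with zero causation between the correlated pair; a correlation-mining model would happily "pinpoint" the factor as a cause:

```python
import random

random.seed(0)

# A hidden driver (say, seasonal demand) moves both a tracked "factor"
# and the business result. The factor never causes the result, yet the
# two correlate strongly -- which is all a mining model can see.
n = 5_000
hidden = [random.gauss(0, 1) for _ in range(n)]          # e.g. seasonal demand
factor = [h + random.gauss(0, 0.5) for h in hidden]      # e.g. ad spend
result = [h + random.gauss(0, 0.5) for h in hidden]      # e.g. sales

def corr(xs, ys):
    """Pearson correlation of two equal-length lists."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

print(f"factor vs. result: {corr(factor, result):.2f}")  # high, with zero causation
```

(The theoretical value here is 0.8; the point is only that a large correlation is perfectly compatible with no causal link in either direction.)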
(ii) At pp. 46-47, the authors describe correlations between "low performance" in using analytics and financial underperformance, and "high performance" in using analytics and financial overperformance. The ratings of analytics and financial performance are based on self-evaluations, not objective measures. This is the "halo effect" in spades, as most recently described in Rosenzweig's book -- happy (profitable) companies are happy about everything, and unhappy (less profitable) companies blame themselves about everything. More to the point, though: the companies in these two groups make up an aggregate of only 29% of their sample. They say nothing about the middle 71%. For all we know, "high performance" in analytics also correlates well with mediocre financial performance.
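A toy simulation makes the halo problem concrete (again, my own made-up numbers, not the authors' survey data): suppose true analytical capability is statistically independent of financial results, while self-ratings of that capability are inflated by financial success. The self-ratings then correlate handsomely with performance anyway:

```python
import random

random.seed(1)

# True analytical capability is drawn independently of financial results,
# but self-reported ratings get a "halo" boost from financial success.
n = 10_000
true_ability, self_rating, performance = [], [], []
for _ in range(n):
    ability = random.gauss(0, 1)   # real capability (independent of results)
    perf = random.gauss(0, 1)      # financial over/underperformance
    halo = 0.8 * perf              # happy companies rate everything highly
    true_ability.append(ability)
    performance.append(perf)
    self_rating.append(0.2 * ability + halo + random.gauss(0, 0.3))

def corr(xs, ys):
    """Pearson correlation of two equal-length lists."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

print(f"true ability vs. results: {corr(true_ability, performance):+.2f}")  # near zero
print(f"self-rating  vs. results: {corr(self_rating, performance):+.2f}")   # strongly positive
```

A survey built on self-evaluations cannot distinguish the two columns above, which is exactly Rosenzweig's complaint.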
(iii) At pp. 18-19, the authors tell a cautionary tale about the Red Sox manager who defied the quants in the 2003 American League Championship Series against the Yankees: Red Sox analysts "had demonstrated conclusively" that pitcher Pedro Martinez became much easier to hit against after about 7 innings or 105 pitches, and warned the manager that "by no means should Martinez be left in the game after that point." However, "in the fifth [sic] and deciding game of the series," the manager allowed Martinez to continue pitching into the 8th inning. The result? "[T]he Yankees shelled Martinez. The Yanks won the ALCS, but [the manager] lost his job. It's a powerful story of what can happen if frontline managers and employees don't go along with the analytical program." Sounds like a sportscaster channeling the Borg.
Even if we take this story at face value, one has to wonder, was that all there was to it? Does the Red Sox' losing the series after Martinez pitched into the 8th inning mean that his pitching was the cause? Was there bad fielding involved, for example? Or did the Yankees' adrenalin have anything to do with it? And what was the score when Martinez was removed?
Thoughts like these moved me to look up the box score of the game. First of all, Martinez didn't pitch in the fifth game -- probably what the authors were referring to was the 7th game. In that game, it's true, Martinez gave up 3 runs in the 8th inning. But what was the result? The Yankees only TIED the game, 5-5, to that point. They didn't win until the bottom of the 11th inning, when they scored one more run (off the third Red Sox pitcher brought in after Martinez). By the way, the game was in New York, so do you think the home crowd's energy might have been a factor? "Post hoc, propter hoc": it don't come any better than this.
B. CIRCULARITY: E.g.: At pp. 48-49, one of the 5 characteristics of analytic capabilities possessed by companies "that compete successfully on analytics" is that such capabilities are "better than the competition [sic]." I guess that's why they "compete successfully." BTW, two others in the list of five are that such capabilities are "hard to duplicate" and "unique" (@48). The same cannot be said of the items in this list.
The discussion about the ideal characteristics of executives in "analytic competitors" (@135-136) hints at a more substantive circularity. One such characteristic is that the exec should be a "passionate believer in analytical and fact-based decision making". However, when describing how "analytical leadership emerge[s]" (@136-137), the authors can only adduce cases in which the leaders (i) found a company on the principle of using analytics from the get-go, (ii) come in as a new senior exec bringing with them the idea of using analytics, or (iii) are a younger generation in a family-owned business. The authors don't mention anyone who "saw the light" and became a convert. So companies whose leaders are passionate about analytics will use analytics.
C. INCONSISTENCY: E.g.: The "most analytically sophisticated and successful" companies use analytics, inter alia, to support "a distinctive strategic capability" (@23). "Having a distinctive capability means that *the organization* views this aspect of its business as what sets it apart from competitors" (@24; emphasis added). However, "not all businesses have a distinctive capability" -- e.g., Kmart, USAirways and GM don't, because "to *an outside observer* they don't do anything substantially better than their competitors" (id., next paragraph; emphasis added.)
D. BANALITY: Parts of the book (esp. Chapter 6, a five-step "road map to enhanced analytical capabilities") sound like Mad Libs that could just as easily have been filled in with strategic planning, Six Sigma, or dozens of other management fads through the decades. E.g., a "Stage 4" company is defined as "analytics are respected and widely practiced but are not driving the company's strategy" (@ 125); "It is important to specify the financial outcomes desired from an analytical initiative to help measure its success," @ 127; "Assuming that an organization already has sufficient management support and an understanding of its desired outcomes, analytical orientation, and decision-making processes, its next step is to begin defining priorities," @id.
Finally, the whole enterprise of "analytics" has a certain banality too, through no fault of the authors of this book: it's one more in a string of dreary revivals of Taylorism on steroids, albeit this time with 21st-Century pharmaceutical know-how -- and with far greater potential to invade personal privacy. Some of its practitioners think it would be a good idea to, say, deny jobs to people simply on the basis of low credit scores, since people with low credit scores can be assumed to have lots of other problems too (reported without any explicit endorsement or disapproval by the authors @ 26). That such an "analytical" criterion might compound those folks' problems and low credit scores is not worth a mention. Here is the point at which the authors' omissions and gaffes stop being silly, and where banality stops being benign. It is more than a disappointment that you won't find ethics discussed in this book.
7 of 7 people found the following review helpful
Mark Twain once said something to the effect that it isn't what you don't know that gets you into trouble, it's what you know for certain that isn't so that will get you. Too many businesses are run on assumptions, guesses, and inertia. What we are doing now worked in the past, so let's keep doing it. Shareholders lose a lot of money when their businesses are run with that kind of thinking.
This book is about fact-based decision making. It is really more of an introduction to the subject than a detailed text, but it is still quite useful for those wanting to learn the basics of the subject. The first five chapters discuss what analytics are, how you compete using them, and the growth path from wondering what an analytic competitor is through the five steps to becoming one. They also discuss what it means to compete using internal data that you completely control, and what it means to do it using a mix of your own data and supplier or customer data that you do not control.
The last four chapters take on the practical side of implementing a road map to becoming an analytic competitor. I particularly enjoyed the chapter emphasizing that all your plans will fail if you don't have the right people. Systems alone won't do it. The next chapter discusses the kinds of systems you need. The last chapter discusses the future of analytics.
For the right audience, this is a fascinating book. The stories about businesses succeeding by using analytics or getting themselves into serious trouble by ignoring them are all good and entertaining. Be careful, though. Some of the stories talk about instances (such as the Red Sox losing the 2003 ALCS by letting their pitcher go beyond his statistical maximum pitching range) rather than trends and large numbers of events. Statistics don't work on instances. That is, at any given moment a coin might come up heads or tails. Just because there have been ten heads in a row does not mean you should take less than 50-50 odds on the next flip. It is still 50-50. That pitcher might have won or lost that game, and it would have become part of the statistical information. However, for the stats to become powerful, you would have to be able to make a strong prediction over a series of games that he pitched. That is, if he goes beyond X pitches in 10 games he will lose about 8 of them. That means he still wins two (or one or three) and you don't know when in the series the wins will come.
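If you want to convince yourself about the coin, a few lines of simulation (my own, purely illustrative) show that even a run of ten straight heads tells you nothing about the next flip:

```python
import random

random.seed(42)

# Flip a fair coin two million times, and collect every flip that
# immediately follows a run of ten or more heads. Because flips are
# independent, those "post-streak" flips should still be ~50% heads.
flips = [random.random() < 0.5 for _ in range(2_000_000)]  # True = heads

streak = 0
after_streak = []
for f in flips:
    if streak >= 10:          # we just watched 10 heads in a row
        after_streak.append(f)
    streak = streak + 1 if f else 0

rate = sum(after_streak) / len(after_streak)
print(f"P(heads | 10 heads in a row) over {len(after_streak)} cases: {rate:.3f}")
```

The printed rate hovers around 0.5, streak or no streak, which is exactly why one game proves nothing about the pitcher.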
The idea that very small observations can be exploited for big advantage is very important in today's ever more competitive business climate. For example, Harrah's learned that moving the odds on slot machines one-tenth of one percent in their favor did not affect customer play at all, but netted them an extra $80 million (company-wide). Marriott's hotel management system improves hotel performance by a couple of percent. Remember that these improvements incur little cost, so most of the improvement flows quickly to the bottom line.
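A back-of-envelope check shows why such a tiny edge matters (the $80 million comes from the book's anecdote; the implied wagering volume is my own inference, not a reported figure):

```python
# An extra one-tenth of one percent of "hold" (the casino's cut) costs
# nothing to implement, so it flows almost entirely to profit. Working
# backwards from the cited gain gives the scale of play it implies.
extra_hold = 0.001             # one-tenth of one percent
extra_profit = 80_000_000      # company-wide gain cited above
implied_coin_in = extra_profit / extra_hold
print(f"implied total wagered: ${implied_coin_in:,.0f}")
```

In other words, a lever that small only pays off at enormous volume, which is precisely where analytics earns its keep.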
I thought that might get your attention. Read it so you can learn and profit from it.
Reviewed by Craig Matteson, Ann Arbor, MI