Reviews Written by Gaetan Lion

Customer Reviews: 559
Top Reviewer Ranking: 1,946
Helpful Votes: 17,245




Bird Dream: Adventures at the Extremes of Human Flight
by Matt Higgins
Edition: Hardcover
Price: $20.17
69 used & new from $4.77

5.0 out of 5 stars Excellent reportage on the subject, September 8, 2014
Vine Customer Review of Free Product
This is an excellent reportage on the history of the developments that led to wingsuit flying. Interestingly, the first development of something close to a modern wingsuit took place in 1935, when Clem Sohn in the U.S. flew an early wingsuit very successfully. But just a few jumps later, in 1937, he was killed when his parachute lines tangled and he crashed to the ground. Unfortunately, such tragic deaths would become a most common occurrence in this sport's history. More than half a century later, people would track the spread of BASE jumping and wingsuit flying simply by maintaining a database of all the related deaths worldwide. This is how they found out that the sport had spread to Australia, New Zealand, South Africa, and pretty much all over the world.

The incremental skill set that leads to wingsuit flying starts with skydiving, progresses to BASE jumping, and ends with wingsuit flying itself. Each step gets increasingly challenging and dangerous. And at each step, practitioners have to push the limits, often beyond the breaking point (the number of casualties in these sports is staggering). The skysurfers, BASE jumpers, and wingsuit flyers not only fly through the air but perform acrobatics they have learned from gymnastics, trampoline, and diving.

Higgins does a very good job of depicting some of the main characters associated with the radical development of the wingsuit.

It goes without saying that the practitioners of such dangerous sports are different from the rest of us. Within Chapter 4, Higgins educates us on the underlying neuroscience, which discloses that the neurotransmitter and hormonal wiring of these daredevils really is different. A specific gene, the dopamine D4 receptor gene, plays a key role in how much dopamine your neurological system generates in response to different experiences. The daredevils' systems release low levels of dopamine (the hormone associated with pleasure, satisfaction, and other positive sensations). Thus, in everyday life they often feel lethargic, bored, or frustrated. They need heightened levels of excitement and stimulation, which they experience through extreme risk taking. The latter becomes addictive. Related risk-taking disciplines often come to dominate their careers, and their lives. As stated, it often takes away their lives. These people are not interested in a long lifespan. They don't buy long-term care insurance. And they don't qualify for life insurance. Jeb Corliss, the leading character throughout the book, the most famous BASE jumper, and a leading wingsuiter, states: "My biggest fear is dying of old age... I am okay with dying... I know it's going to happen."

The next frontier is to land in a wingsuit without a parachute. That would be replicating the real freedom of a bird. The physics are extremely challenging. To fly a wingsuit and maintain its forward momentum, you need to fly at least 70 mph. Wingsuits actually drop quite rapidly. Thus, landing a wingsuit without a parachute to slow it down is extremely challenging and dangerous.

There are now three competing strategies to land a wingsuit for the first time. The first, proposed by Gary Connery, a stuntman, is to use thousands of cardboard boxes to absorb the shock of the fall. As a stuntman, he has done that jumping off buildings 150 feet high, reaching falling speeds of 60 to 70 mph, close to the minimum wingsuit speed. Jeb Corliss, the most famous wingsuiter, proposes to build the equivalent of the landing portion of an alpine ski jump: a huge man-made hill with the same slope as a regular wingsuit approaching the ground (between a 33- and 45-degree angle). Such a project would cost several million dollars, so the fundraising is another challenge. Another wingsuiter is looking into replicating this feat of engineering by simply scouting snow-covered mountains for a slope with the right angle. He indicates that he would need a team of people to dig him out of the snow after he lands.

Jeb Corliss, the one going after the alpine ski jump strategy, nearly died in a wingsuit crash, so he was out of the race to land a wingsuit without a parachute. At the same time, Gary Connery (cardboard strategy), with the assistance of the best wingsuit designer, developed a new wingsuit that flies slower than other ones. It can fly as slowly as 60 mph, corresponding to a descent speed of 22 mph. Gary felt that this slower wingsuit should allow him to land in cardboard boxes. After some amazing organizational setbacks, including nearly two months of rain that prevented him and a large team of volunteers from setting up his 18,600 boxes in a field, he eventually succeeded. He ultimately landed at a forward speed of 69 mph with a downward speed of less than 10 mph. For a wingsuit, this combination of slow speed and a 7-to-1 glide ratio (7 feet forward for only one foot downward) was probably a record. He came out of his jump without any injury whatsoever.


Smartcuts: How Hackers, Innovators, and Icons Accelerate Success
by Shane Snow
Edition: Hardcover
Price: $16.71
44 used & new from $9.74

54 of 55 people found the following review helpful
5.0 out of 5 stars Smartcuts is very smart & fun, August 28, 2014
Vine Customer Review of Free Product
This book serves as an original career and entrepreneurship guide for the 21st century (which was not the intent of the author). The main thesis of Shane Snow is that luck does not just happen. Using surfing metaphors, Snow indicates that the ones who catch the wave (luck) are the ones who were ready all along, looking for it. These remarkable individuals were bound to catch a good wave sooner or later; it was just a matter of time. And they did not waste much time doing it. They did not pay their dues for decades. They spent no time in stagnant situations. They kept moving forward, and often laterally (typically a lot faster than the rest of us). This does not mean they did not work very hard. They did. It does not mean they cheated and cut corners. To the contrary, they maintained superior ethical standards. Snow defines smartcuts as shortcuts with integrity. These individuals worked smarter, more creatively, and better understood how to take their next step. Most often, they were guided by a life passion, an interest, a focus that kept them homing in on their target like a heat-seeking missile.

Snow outlines nine foundational smartcut principles that can accelerate anyone’s career or company’s growth. They all make perfect sense, are intuitive, not controversial, and not far-fetched. Snow does not make anything up. Every single one of his smartcut principles is well supported by research and documented with many examples.

The smartcut thinking is an offshoot of “lateral thinking” as defined and developed by Edward de Bono. And Snow gives de Bono due credit for the concept. However, while I have read most of de Bono’s books and did find them interesting, I find Snow’s book far more insightful.

Each chapter thoroughly describes one smartcut strategy on a stand-alone basis. Of course, the strategies overlap a bit and work well simultaneously. But it is amazing how powerful each one of them is on its own.

There are numerous passages within the book that are pretty fascinating. The contrast between the careers of US Presidents and US senators is amazing. The Presidents are often outstanding smartcutters with a surprisingly short career in Federal office before acceding to the Presidency (Eisenhower, Carter, Reagan, Clinton, Bush Jr., and Obama among others). Meanwhile, the senators are for the most part stagnant plodders, and very few of them ever make it to President. Snow even makes the case that some of the Presidents who paid their dues with a lifelong career in politics were some of the worst Presidents (example: Andrew Johnson). Good Presidents mainly acquired leadership credentials outside the field of (national) politics. Meaning, paying your dues over a long career is no guarantee of mastery once you get there.

Another interesting fact is that companies that switch fields are often very successful. Moving laterally often causes one to accelerate. The iPhone was developed not by a telecommunications company, but by Apple, a PC company. Start-ups that “pivot” once or twice raise 2.5 times more money, have 3.6 times faster user growth, and are 52% less likely to plateau prematurely.

In another section, you learn about a team of hospital surgeons who learned to synchronize their surgeries and patient treatments, inspired by the exactitude, speed, and efficiency of a Formula 1 racing team of outstanding mechanics working at the races. Quoting the author: “Before long, the hospital had reduced its worst … errors by 66%.” As an extra, the Formula 1 racers, mechanics, and hospital doctors became very good friends and participated together in fundraisers for various charities.

The whole “Rapid Feedback” strategy (chapter 3) is really interesting. It details the comedians' learning processes at The Second City in Chicago. It also shares research on how we learn from mistakes and feedback. Much research shows that we actually learn more from the mistakes of others than from our own. This is because we readily attribute the mistakes of others to the people themselves, while we attribute our own mistakes to external circumstances beyond our control, so as to protect our egos. Apparently, what differentiates the masters of any discipline from others is their ability to withstand, and even eagerness to solicit, negative criticism. They find negative criticism far more actionable in facilitating their progress.

“Waves” (chapter 5) is at the essence of the book. That’s where Snow goes all out with surfing metaphors that he effortlessly transfers into a multitude of real-life and career-related examples. He quotes a professional surfer stating: “Being able to pick and read good waves is almost more important than surfing well.” You can see how you could plug this concept effectively into many situations. There are a couple of specific gems in this chapter that will stay with you. One of them is the amazing power of pattern recognition. Amateurs who deliberately analyze trends, use criteria, and observe the facts will invariably outsmart experts’ intuition in just about any field. Snow mentions a few weird examples, such as the ability to recognize the difficulty level of professional basketball shots, or the ability to pick out fake Louis Vuitton bags vs. authentic ones. Thus, “you can be right the first time” without years of apprenticeship. This will be music to the ears of all the data guys out there (not just the Big Data ones). “Deliberate pattern spotting can compensate for experience,” as stated by the author. Another gem is that you don’t need to be the first to do something to be successful. Research showed that 47% of first (company) movers failed. By contrast, early leaders -- companies that took control of a product’s market share after the first movers pioneered it -- had only an 8% failure rate. Fast followers benefit from the free-rider effect. Examples: Google beat out Overture in search engines. Facebook beat out Myspace in social networks.

“10x Thinking” (chapter 9) will turn you into an Elon Musk fan if you are not one already. This chapter outlines the genius, perseverance, and sheer bravado Musk demonstrated in pursuing his most daring venture: SpaceX. The concept here is that to revolutionize a field you can’t go for just marginal improvements (10% better, etc.). You have to go for the big swing: 10x better, or 10x cheaper. Hence, it is called 10x thinking. And Musk, after many failures, did just that with SpaceX. His company is literally 10 times more cost effective and 10 times faster in terms of project turnaround time than the former best in the aerospace business: NASA. As a result, SpaceX is now a very viable commercial entity, swamped with contracts from all over the world to launch satellites, transport resources back and forth to the Space Station, etc. A counterintuitive thought is that sometimes the 10x improvements are easier than the +10% ones. This is because the former are challenging, high-hanging fruit no one dares to go for, while the latter are low-hanging fruit crowded with competitors. And this runs into the N-Effect: the more competitors in a given field, the weaker the individual performance. Researchers found that test takers (SAT, ACT, etc.) perform much better in a smaller room with fewer test takers than in a much larger room with many test takers.

There is a lot more to the book than what I covered. But my review should give you a good idea of whether this book is for you. If you got this far in reading my review, it most probably is.


A Farewell to Alms: A Brief Economic History of the World (Princeton Economic History of the Western World)
by Gregory Clark
Edition: Paperback
Price: $17.46
99 used & new from $5.88

1 of 1 people found the following review helpful
3.0 out of 5 stars Fascinating, but maybe not the whole story., August 7, 2014
Verified Purchase
Gregory Clark's book is really successful on numerous counts. It engages and sustains the reader's interest thanks to a very lively style that turns a dry academic subject into a page turner. Clark has gathered an immense amount of sociodemographic data going back up to 3,000 years. His main theory is interestingly controversial: the main cause of the Industrial Revolution was that the rich in England had a much higher fertility rate than the lower classes, and they literally propagated throughout society a new set of values supporting the onset of modern capitalism. Those values included discipline, work ethic, education (literacy), thrift, and patience (deferred gratification). They passed on those traits both genetically and behaviorally (by example). Clark has been much criticized for throwing a genetic component into it. But he has defended his thesis extensively, referring to numerous contemporary twin studies supporting the view that many behavioral outcomes (education and career) have a strong inherited component.

Clark also addresses in passing the consequences of the Industrial Revolution on inequality. The latter has become the hot topic in economic debates several years after Clark wrote this book. Clark's data-supported findings entirely contradict Thomas Piketty's premise (as expressed in Capital in the Twenty-First Century) that capital grows faster than income and leads to rising inequality. Clark instead demonstrates (within Chapter 14) that capital has not grown any faster than income over a very long period of time. His Figure 14-4 on pg. 280 shows that capital's share of income actually steadily declined from 1750 to 2000 in England. Over the same period, he shows that labor's share of income increased rapidly, leading to a reduction in inequality over the reviewed period.

Clark presents his theory as the exclusive cause of the Industrial Revolution. At times, he may have dismissed other theoretical causes too quickly. For instance, he advances that modern institutions did not play much of a role in fostering the Industrial Revolution because he feels the institutional environment was already well established in Medieval England compared with Modern England. He supports this statement by observing that the income tax rate and the Public Debt/GDP ratio were both a lot lower in Medieval England than they are currently. Thus, he deduces the institutional environment was actually superior in Medieval England. However, higher tax rates and public debt levels in modern times are the difference between a complex, fully developed government and a far more limited or nearly absent one (no government = no taxes = no public debt). Using this rationale to derive anything about the quality of current social institutions is just not accurate.

Clark's excluding all other theories entirely is the main weakness of this book, and that is a frequent occurrence in the social sciences. One social scientist will come up with an explanatory theory and will in turn make great efforts to demonstrate that his own causal explanation is the only possible one. I find this approach less than optimal. I wish social scientists would more readily adopt a Factor Analysis approach, where they could assess the relative influence of numerous causal factors instead of a single one. Such a study could, for instance, find that the Industrial Revolution was in part due to Clark's theory, but also to the emergence of supporting institutions, England's access to new energy sources, etc. This would make for a more nuanced, encompassing, and defensible multi-faceted theory. However, this approach is rarely taken within the social sciences (I can't think of a single example).
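For illustration, here is a minimal sketch in Python of the kind of multi-factor attribution I have in mind. The factor names and weights are invented placeholders on synthetic data, not estimates about the actual Industrial Revolution:

```python
import numpy as np

# Synthetic data: three hypothetical causal factors and an outcome.
rng = np.random.default_rng(0)
n = 500
values_spread = rng.normal(size=n)  # Clark's values-propagation factor
coal_access   = rng.normal(size=n)  # Pomeranz's energy/resources factor
institutions  = rng.normal(size=n)  # institutional-quality factor
outcome = (0.3 * values_spread + 0.5 * coal_access + 0.2 * institutions
           + rng.normal(scale=0.5, size=n))

# Fit a linear model and attribute explained variance across factors.
X = np.column_stack([values_spread, coal_access, institutions])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
# With uncorrelated, standardized factors, squared coefficients
# approximate each factor's share of the explained variance.
shares = beta**2 / (beta**2).sum()
for name, share in zip(["values", "coal", "institutions"], shares):
    print(f"{name}: {share:.0%} of explained variance")
```

The point of such an exercise is that each candidate cause gets a weight, instead of one theory claiming the whole explanation.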

To his credit, Clark does cover and rebut numerous competing theories of the Industrial Revolution. And he does it well (except for the mentioned example regarding the quality of institutions). Some of the protagonists have criticized him back in turn, and Clark has responded to those attacks in a short paper titled "In Defense of the Malthusian Interpretation of History." The latter is strongly recommended reading, as it makes for an excellent supplement to the book.

There is one specific theory that Clark may have shortchanged, and that is the one from Ken Pomeranz as expressed in The Great Divergence: China, Europe, and the Making of the Modern World Economy. According to Pomeranz, the Industrial Revolution occurred in England because of its early access to an abundant source of industrial energy (coal) and its access to massive food and other resources from the U.S. This allowed England to successfully shift its economic focus from agriculture to industry (manufacturing, railroads, etc.) and forge ahead, leading the Industrial Revolution. This seems to make much sense. Maybe the Industrial Revolution can be well explained by: 10% Clark's theory + 10% Pomeranz + 80% numerous other theories and unknown factors.


The Son Also Rises: Surnames and the History of Social Mobility (The Princeton Economic History of the Western World)
by Gregory Clark
Edition: Hardcover
Price: $20.92
75 used & new from $15.95

3 of 4 people found the following review helpful
2.0 out of 5 stars No math, no Law of Social Mobility, July 30, 2014
Verified Purchase
Based on his own extensive data gathering covering centuries, the author derives a law of social mobility associated with an intergenerational correlation or persistence rate (the same thing, per his own definition) of 0.75 between the status of a son and that of his father: Social Status of Son = 0.75(Social Status of Father). This denotes a much lower level of social mobility (or a much higher persistence rate) than other economists had derived for OECD countries. It also entails that social mobility is nearly fixed regardless of era or society. The author advances that his findings contrast with those of other economists because the latter studied only one narrow aspect of status at a time, such as income or wealth, while he studied a much broader measure of social status. Also, the author focused on the propagation of social status through surnames, while other economists studied the general population. For the time being, not questioning Clark's rationale but simply looking at his own calculations of social mobility or persistence rates, I found numerous issues.

Clark states that although Sweden, the U.K., and the U.S. have very different conventional social mobility measures, they have nearly identical, and much lower, social mobility (much higher persistence rates) by his own broader measures.

Before investigating the math, let's clarify what we should look at. We are interested in observing how social status for a specific surname reverts to the Average (or Mean) for the total population. So, the dependent variable is: Social Status of Son - Average Social Status. And the independent variable is: Social Status of Father - Average Social Status. Clark's narrative and calculations appear to state: Social Status of Son = 0.75(Social Status of Father). But such a function would eventually have a son from an elite surnamed clan inevitably fall to the lowest status in society. Over just the next 6 generations, the last heir in that surnamed group would have a social status equal to only 0.18 times that of the original ancestor, since 0.75^6 = 0.18. That would put the most recent generation in a destitute state with little in common with its earlier predecessors.

It makes a lot more sense to look at Social Status above the Average: (Son's Status - Average) = 0.75(Father's Status - Average). The calculation's meaning now is that the most recent generation is not nearly as privileged as the original one, but is still above the Average (instead of destitute). Additionally, Clark's formula structure does not work for surnames that start with a Social Status much below the Average. Those would never revert back up toward the Average, but would instead drop quickly and asymptotically toward a Social Status near zero, at the absolute bottom. My formula structure works in both cases (for a starting Social Status above or below the Average), as the sketch below illustrates.
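A minimal numerical check in Python of the two formula structures (the 0.75 rate is Clark's; the 4x and 0.25x starting statuses are hypothetical illustrations, with the population mean normalized to 1):

```python
b = 0.75  # Clark's persistence rate

# Clark's raw form: S_son = b * S_father. After 6 generations any heir
# retains only 0.75^6 = 0.18 of the ancestor's status.
print(f"0.75^6 = {b**6:.2f}")

# Raw form vs. mean-deviation form: (S_son - 1) = b * (S_father - 1).
for start in (4.0, 0.25):  # one elite line, one poor line
    s_raw, s_dev = start, start
    for _ in range(6):
        s_raw = b * s_raw            # raw form: heads toward zero
        s_dev = 1 + b * (s_dev - 1)  # deviation form: reverts to the mean
    print(f"start {start}: raw -> {s_raw:.2f}, mean-deviation -> {s_dev:.2f}")
```

The raw form drives both the elite line and the poor line toward zero, while the mean-deviation form has both revert toward the population mean of 1.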

Given the above framework, I revisited the calculations of social mobility for Sweden, the US, Medieval England, and Modern England. His average persistence rates are: Sweden 0.77, US 0.75, Medieval England 0.90, and Modern England 0.78. So, based on his own calculations, his Law of Social Mobility at 0.75 holds up very well in three cases out of four (except for Medieval England, which is much higher at 0.90). My own calculations using his own data generated the following estimates: Sweden 0.60, US 0.82, Medieval England 0.85, and Modern England 0.59. Those figures are very different. While Clark could advance that Europe's more abundant government support for public education at all levels, health care, and the overall safety net had no impact on social mobility, as the latter was no different in Sweden vs. the UK and the US, my calculations indicate just the opposite: with much less government support, social mobility in the US is much lower (a higher persistence rate) than in Sweden and the UK. Actually, the US persistence rate is not far off Medieval England's. That's a pretty different finding using the same data set.

Just to illustrate how our calculations diverge, let's look at a precise example. On pages 94-96, he shows that the wealth of the Rich surnames in one generation, in 1860, was 187 times greater than the average wealth. Four generations later, it was still 4 times greater than average. He associates this regression to the mean with a persistence rate of 0.71. Instead, I calculate it as follows: the starting point above the Mean is 187 - 1 = 186. After 4 generations, the end point above the Mean is 4 - 1 = 3. And the persistence rate over those 4 generations is: (3/186)^(1/4) = 0.356. Indeed, 186(0.356)^4 = 3. Meanwhile, using Clark's persistence rate you get: 186(0.71)^4 = 47. I have since learned from B. Foley that Clark's calculation actually works out if you log the mentioned variables, because he uses log(wealth) in this case. So, I should take back this criticism. However, it opens up another: when you log such variables, it greatly and artificially boosts the persistence rate coefficient. In this case, as demonstrated, it doubles it. So, if you want to prove that social mobility is lower than anyone else thought (and persistence rates are higher), just log the variables and that will do the trick. But this is just a mathematical artifact; it is not robust social science. What is also obfuscating is to associate and compare such high coefficients on a log basis with many other coefficients on other social status dimensions where the variables were not logged. This is an explicit apples-and-oranges situation that just leads to much noise and no signal. Logged variables have a different meaning than nominal ones: they represent the % change in a variable. In this case, log(wealth) of Son = 0.75 log(wealth) of Father means that the Son's wealth change in % represents 0.75 of the Father's wealth change in %. This is a very different concept from the overall intergenerational correlation that he uses on all the other variables (education, occupation, probate, etc.).
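Both calculations can be verified in a few lines of Python; the 187x and 4x figures come from the book's pages 94-96, and the code is just the arithmetic:

```python
import math

start, end, n = 187.0, 4.0, 4  # wealth multiples of the average, 4 generations

# My nominal calculation, on wealth above the mean:
nominal = ((end - 1) / (start - 1)) ** (1 / n)
print(f"nominal persistence: {nominal:.3f}")  # ~0.356

# Clark's calculation, which works on log(wealth):
logged = (math.log(end) / math.log(start)) ** (1 / n)
print(f"log-wealth persistence: {logged:.3f}")  # ~0.72, roughly double
```

Same data, and the logged version comes out at roughly twice the nominal one, matching Clark's 0.71.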

Chapter 12, 'The Law of Social Mobility and Family Dynamics', is also associated with dissonant calculations. On page 214, he shows the hypothetical paths of the social status of above-average families vs. below-average ones over the past 10 generations and the next 10 prospective ones. The paths for the two are symmetrical, and they show increases in social status as well as decreases. However, his function as described, Son's Social Status = 0.75(Father's Social Status), could never accommodate such directional changes. This function would instead have the social status of both families inevitably fall toward the very bottom; their respective social status could never increase. To increase, his 0.75 coefficient would need to be greater than 1. Even my more flexible equation would have both families regress to the Mean: the above-average one downward, the below-average one upward. Yet the respective paths could not change direction; they could not all of a sudden regress away from the Mean. For my function to cause social status to regress away from the Mean, I too would need the 0.75 coefficient to be greater than 1.

Within the same chapter 12, his historical data shows that knowing the social status of a given group in one generation gives you no information regarding the subsequent generations. You can see that in the graphs on pages 218 and 219. The graph on page 218 shows an above-average group steadily increasing its social status over the next 5 generations (moving from 3 to 8 times the average social status). Meanwhile, a very similar group experiences a steady decline in social status, from 4 times down to 2 times the average, over the next 4 generations. And the time periods very much overlap. So, if you know a group had a social status much above the average around the early 1700s, you have no way of knowing whether this group's social status will increase or decrease over the next several generations. Thus, Clark advances that his model is highly predictive (the past is highly predictive of future social status), while his own data set suggests the opposite: the past does not tell you whether prospective social status will increase or decrease in the next generations.

The above raises the issue of when an above-average group experiences an inflection point from an increasing trend to a decreasing trend in social status. On page 222, two graphs show that for various upper-class groups, that inflection point can vary greatly, from 16 to 64 times the average social status. So, if a group is at 16 times, is it only early on its path to amassing more wealth and status? Or is it at its apex, and will it inevitably regress to the Mean in future generations? You actually have no way to tell. That's what the data conveys. On pages 226 and 227, Clark looks at similar trends for China. Now, for some reason, China has a much lower and constant inflection point at 8 times. In this case, if the future is like the past, one could say that a Chinese group at 8 times the average social status is quite likely to revert downward to the Mean going forward. But if a Chinese group is at 4 times, which way is it going, up or down? There is no way to tell.

Now, getting away from calculations and graphs, let's revisit some of his rationale. Clark states that his derived social mobility is so much lower because he looks at a broader measure of social status, unlike other economists, who just focused on one single dimension at a time, like wealth, income, or education. But Clark does not do what he preaches. He, like the other economists he criticizes, focuses on a single aspect of social status at a time. He never combines two or more dimensions to create a broader measure of social status. This would have entailed creating principal components within a Principal Component Analysis framework, or factors within a Factor Analysis one. But he goes nowhere near those methodologies, which would have facilitated the creation of a broader social status measure.

However, on pages 110 and 111, Clark comes up with a second argument for why his social mobility measure results in lower social mobility: simply that he looked at subgroups (surnames). And he indicates the resulting lower social mobility would have been directionally similar had he used different subgroups, such as race, religion, or nationality of origin, as long as those categorical dimensions do not correlate with the error term of the original regression. But to reduce the error term of the original regression, those dimensions (race, religion, etc.) should correlate with the error term (in other words, explain it). Otherwise, I don't know how they would reduce it. Also, his explanation entails that by using subgroups that do reduce the error term of the original regression, the regression coefficient of his function would automatically increase. And that's how he gets 0.75 while other economists typically got much lower coefficients. I am not sure that is correct. Let's take the simple example of stock returns. A stock index has a given return and volatility. Its return is the aggregate of the returns of the stocks in the index, and its volatility results from the interaction of each stock's volatility and their respective correlations with each other. Clark's rationale would suggest that by looking at a single sector, you could develop a model with a lower standard error than if you modeled the index; and, having reduced the standard error, you would automatically measure a more accurate and higher stock return for this specific sector. But we know that to be false. Some sectors will have higher or lower returns than the index, but their weighted average return will be exactly the index's. However, in most cases a sector will have a much higher volatility of return than the index, because it is so much less diversified. This analogy contradicts Clark's second argument for why his social mobility is much lower than other economists'.
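A small simulation illustrates the analogy; all the return figures are synthetic placeholders, not market data:

```python
import numpy as np

# Ten synthetic sectors with identical return distributions, combined
# into an equal-weighted index.
rng = np.random.default_rng(0)
n_periods, n_sectors = 1000, 10
sector_returns = rng.normal(0.05, 0.15, size=(n_periods, n_sectors))
index_returns = sector_returns.mean(axis=1)

# Narrowing to a subgroup leaves the expected return unchanged...
print(f"index mean return: {index_returns.mean():.4f}")
print(f"one sector's mean: {sector_returns[:, 0].mean():.4f}")
# ...but raises the volatility, because diversification is lost.
print(f"index volatility:  {index_returns.std():.4f}")
print(f"one sector's volatility: {sector_returns[:, 0].std():.4f}")
```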

Another trap Clark may have fallen into: autoregressive models (Son = 0.75(Father)) can work very well at predicting over a single period, and whenever a trend does not change sign (no inflection point). But they can't handle inflection points. Even when they predict very well, they fall into statistical fallacies (unit root and stationarity issues) entailing that the model is no better than a simple trend (counting periods 1, 2, 3, ...). In summary, his model is no better than observing that during some time periods the trends in social status went in a certain direction. But it provides no information as to why a trend shifted direction in the past or present, or whether it will in the future.
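A toy illustration of that limitation, on invented status paths rather than Clark's data: fit a one-step linear rule on a rising segment, then extrapolate through the peak:

```python
import numpy as np

rise = np.linspace(1, 8, 6)      # 6 generations of rising status
fall = np.linspace(8, 2, 5)[1:]  # then 4 generations of decline
# Fit S_next = phi * S_now + c on the rising segment only.
phi, c = np.polyfit(rise[:-1], rise[1:], 1)

forecast = [rise[-1]]
for _ in range(4):  # extrapolate past the inflection point
    forecast.append(phi * forecast[-1] + c)
print("actual after the peak: ", np.round(fall, 1))
print("AR-style extrapolation:", np.round(forecast[1:], 1))
# The extrapolation keeps climbing; it cannot anticipate the reversal.
```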


Affordable Care Act For Dummies (For Dummies (Health & Fitness))
by Lisa Yagoda
Edition: Paperback
Price: $13.59
35 used & new from $9.10

2 of 4 people found the following review helpful
3.0 out of 5 stars Mediocre; there are better and faster ways to get info on the ACA, July 5, 2014
Vine Customer Review of Free Product
Note this is not a regular For Dummies book: at only 130 pages, it is about a third of the usual length. This is a relief, as you may not want a 300-plus-page reference book on the ACA. Also, this book is sponsored by AARP. This is ironic, since senior citizens (over 65) are covered by Medicare and are the least affected by the ACA, except for a couple of favorable developments, including the phased-in increase in drug coverage referred to as the closing of the doughnut hole.

The book does cover the ACA reasonably extensively. However, one could advance that the information is not presented most efficiently. A simple table outlining the different coverage percentages of the Bronze, Silver, Gold, and Platinum plans would have imparted more information, and faster, than the author's long-winded narratives on this topic. The same is true for the explanation of the mentioned doughnut hole.

You can most probably gather the vast majority of the information imparted by this book from a good fact sheet on the ACA of 5 pages or less that includes the suggested tables.


R For Dummies
by Andrie De Vries
Edition: Paperback
Price: $20.15
55 used & new from $16.15

3.0 out of 5 stars "Quick, easy way to master R?!", June 23, 2014
Verified Purchase
It is probably the nature of R as much as the structure of the book, but there is nothing quick and easy about mastering R by studying this book. I have spent over 30 hours studying it, including taking 25 pages of notes, and I have little sense of, or confidence in, how R works. I have studied other software programs in far less time and gotten a pretty good handle on how they function (Excel, Crystal Ball, @Risk, Access, XLStat, etc.). But, as mentioned, my experience with R and R For Dummies has been very different.

As stated, my slow learning curve is certainly due to the nature of R. R is not a software application (like Excel). It is a full-fledged programming language, like C++ or Python. (A recent conversation with a colleague suggests that R is actually relatively simple and a lot more like Python than like C++, which is much more code intensive.) R is also comparable to what we could call a hybrid (halfway between a software application and a programming language), like SAS. In other words, any of those take a lot longer to learn. They don't have user-friendly, menu-driven windows and commands. They instead rely on code, syntax, etc. You have to graduate from being a software end user to becoming a coder. It is not an easy transition.

Additionally, I intuitively feel that the structure of the book, its emphasis, and the way it presents the material are also not optimized to speed up your learning curve. My sentiment is reflected in the title of this review and the ambivalent rating I give this book. From this standpoint, I don't feel this book lives up to the "quick and easy way to learn R" advertised on its back cover.

Nevertheless, this book is certainly not all bad. It may prove an excellent resource, an excellent reference. It undoubtedly covers a huge amount of ground on many key topics, including data manipulation, statistics, regression, and graphics. It also provides a very good appendix on R packages: where to find them, how to add them to your R capabilities, etc.

For my part, based on my firsthand experience, I have to question whether I went about this the right way. At this stage, my 30-hour investment does not seem fruitful. As others have mentioned, there is a ton of information on R available for free. Maybe a better way to learn R is not, as I attempted, to get a somewhat in-depth view of all the wonderful things R can do, but instead to focus on one specific thing you really want to do with R that you can't do in Excel (using the abundant free information on whatever specific topic you are looking for), and then to develop proficiency in that one thing. Once you have expertise in that one thing, add on by learning concepts adjacent to your first objective. Before you know it, you may have developed expertise in half a dozen things that you can't readily do in Excel.

Another strategy is to try another book. The following book seems intriguing: Learn R in a Day.


Applied Predictive Analytics: Principles and Techniques for the Professional Data Analyst
by Dean Abbott
Edition: Paperback
Price: $42.05
37 used & new from $32.14

4 of 18 people found the following review helpful
2.0 out of 5 stars An odd book on the subject, May 22, 2014
Vine Customer Review of Free Product
This is a pretty thick book with a lot of material. It includes over a hundred pages on understanding the data set. Granted, this is a critical aspect of analytics, but the extent of this coverage seems like overkill. On the other hand, when it comes to covering complex quantitative methods such as PCA or clustering, the coverage is so sparse that the only readers who would grasp what the author is talking about, and how to implement such techniques, are those who already have deep expertise in the subject.

If you want to understand, study, and implement those techniques, you will have to gather much additional reference material elsewhere. In my mind, the author could have spent a lot less time on data understanding (20 pages on the subject is plenty) and devoted that print to explaining the quantitative methods more thoroughly.
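To give a sense of what hands-on coverage might look like, here is a minimal PCA sketch of my own in plain Python/numpy on synthetic data (this is not material from the book):

```python
import numpy as np

# Four observed variables that share one underlying latent factor.
rng = np.random.default_rng(0)
latent = rng.normal(size=(100, 1))
X = np.hstack([latent + 0.1 * rng.normal(size=(100, 1)) for _ in range(4)])

Xc = X - X.mean(axis=0)                 # center each column
cov = np.cov(Xc, rowvar=False)          # 4x4 covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
explained = eigvals[::-1] / eigvals.sum()
print(f"variance explained per component: {np.round(explained, 3)}")
# One dominant component emerges, since the four columns share one factor.
scores = Xc @ eigvecs[:, ::-1][:, :1]   # project data onto that component
```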

A far superior book on this subject is Conrad Carlberg's Predictive Analytics: Microsoft Excel. Unlike Abbott, Carlberg is very hands-on. He explains the quantitative methods thoroughly and actually shows how to implement them in Excel and other software.


The Curriculum: Everything You Need to Know to Be a Master of Business Arts
by Stanley Bing
Edition: Hardcover
Price: $25.82
86 used & new from $11.37

3.0 out of 5 stars A lot more fun than an MBA, but this is no MBA, May 6, 2014
Vine Customer Review of Free Product
Bing’s “The Curriculum” has nearly nothing to do with a real MBA curriculum. In an MBA program, there is a lot of technical knowledge, not that much fun to acquire, that Bing purposefully avoids. This technical knowledge includes many courses in accounting, finance, economics, and capital markets. You will find none of it in Bing’s book.

Bing’s curriculum, as stated, is a lot more fun than a regular MBA. But his main topic is the satirical aspect of business more than business itself. Satire does not preclude knowledge or wisdom. Occasionally, Bing imparts worthy advice in between all the smirks. For instance, his ten commandments of Presentation are excellent. You would do well to abide by them. Across the book, in all the “courses,” there is most often an element of wisdom or counterintuitive, insightful advice.

Bing discloses a lot of graphs supposedly supported by rigorous proprietary studies by The National Association of Serious Studies. I understand the latter is not so serious and was probably founded by him. So, whenever you see a graph denoting interesting (if not hysterical) social trends, there is not a single reference supporting its veracity. But this is not Bing’s point. His purpose is to make you laugh, think, and wonder whether some of his advice should occasionally be taken seriously.


Are We All Scientific Experts Now
by Harry Collins
Edition: Paperback
Price: $11.03
28 used & new from $7.03

1 of 4 people found the following review helpful
5.0 out of 5 stars Very interesting treatise on the social interaction of expertise, May 6, 2014
Vine Customer Review of Free Product
The author describes several different types of expertise that socially interact with each other.

Ubiquitous expertise is the kind we acquire through the experience of daily living. From it, we develop common sense, survival instinct, experience, and social wisdom. But none of that gives us the tools for discriminating scientific judgment. So, the answer to the book's title is very clearly No, we are not all scientific experts. Collins advances that this is the case even when we read scientific papers, because we don't have the knowledge and the social interaction with insider scientists needed to evaluate the quality of the scientific papers we are reading.

Special interactional expertise puts its holders much closer to being able to evaluate the science. These are typically science writers and high-level science journalists. They have many scientists among their contacts within the relevant domain. They can bounce ideas around and get both sides of an issue from scientists with different opinions. And they can evaluate whether a scientific paper has been rightfully marginalized by the scientific community or not (because the paper draws the wrong conclusion).

Special expertise is what we have when we are insiders to an activity; it is what we gain from our profession. Everyone who has worked at something is a special expert in something. This overlaps with and covers scientific expertise. However, the latter is truly differentiating: a scientific expert is obviously in a better position to evaluate the relevant science of his domain than anyone else outside this community. Remember the title of the book; its main question is: are we able to evaluate the science?

Collins covers several issues where he demonstrates the public did not have the judgment to evaluate the science. Those include the anti-vaccination campaigns claiming that vaccines cause autism in children. The analyzed data did not support this position at all. There was just a random confounding factor: children develop autism around the age when they first get vaccinated. So, if you vaccinate millions of children, randomness will result in numerous children developing autism near the time they got vaccinated. The only problem is that the rate of autism at such an age is not statistically different for the ones who did not get vaccinated.

Collins also covers the credibility of the scientific community. Right after WWII and until the 60s, this credibility was very high. The scientific community was rarely questioned. The media and the public deferred to scientific pronouncements they deemed somewhat incomprehensible. Collins calls this period Wave I. Then, in the 60s, authority of all kinds was questioned and rebelled against, and this included scientific expertise. This was also fed by numerous public disclosures of scientific failings, including the failure of nuclear power to generate nearly free electricity, the failure of economic theory to predict the near future (or sometimes even to grasp the present), the spurious results of numerous clinical trials, Climategate, etc. Thus, numerous communities of experts in health, economics, energy, and other technical endeavors lost much credibility. Collins calls it the "growing crisis of expertise," and he refers to this post-60s era as Wave II. He indicates it is time we reached a more nuanced position vis-a-vis the scientific community, in between the extremes of Wave I and Wave II, which he would call Wave III.


Upgrading Leadership's Crystal Ball: Five Reasons Why Forecasting Must Replace Predicting and How to Make the Strategic Change in Business and Public Policy
by Jeffrey C. Bauer
Edition: Paperback
Price: $23.63
34 used & new from $19.37

2 of 4 people found the following review helpful
3.0 out of 5 stars A very incomplete introduction to Monte Carlo simulation, May 3, 2014
Vine Customer Review of Free Product
Bauer's main point is that predictions of a single outcome are worthless, while probabilistic forecasts are a lot more useful and better allow you to anticipate various potential outcomes. As a concrete example, a consulting firm predicting that the economy will grow by 2.0% next year is not useful, because the track record of such economic predictions is no better than random. Instead, a more informative forecast would be that there is a 20% probability of a recession, a 40% probability of near-average economic growth, and a 40% probability of above-average growth.

I would add that an even more informative forecast would combine the two methods and state that the mean expected outcome is for the economy to grow by 2.0%, with a 95% Confidence Interval of -1.0% to +5.0%, which would translate into his own 20%/40%/40% forecast. That's the way a quantitative type would present this information using Monte Carlo simulation. But Bauer never goes that far.

As indicated, the way to generate such a forecast is with Monte Carlo simulation, which in practice means using proprietary Monte Carlo software such as Crystal Ball or @Risk. However, Bauer does not tell you much, if anything, about that. He just mentions "Monte Carlo Analysis," in parentheses, one single time on pg. 77. His two chapters on forecasting tell you much about data and specifying independent variables. But they tell you nothing about structuring a Monte Carlo simulation, which entails fitting each of the independent variables to a specified statistical distribution (Normal, Lognormal, Poisson, etc.) so as to turn the independent variables into random or stochastic ones. You also need to extract information regarding the dependent variable: its actual full distribution of outcomes over the thousands of iterations generated by your simulation. That is how you can then generate a Confidence Interval around the mean expected outcome, as the sketch below illustrates.
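Here is a minimal sketch of such a simulation in Python rather than Crystal Ball or @Risk; the Normal distribution, its parameters (mean 2.0%, standard deviation 1.5%), and the scenario cutoffs are my own hypothetical choices, not Bauer's:

```python
import numpy as np

# Draw 100,000 iterations of next year's GDP growth from an assumed
# Normal(2.0%, 1.5%) distribution.
rng = np.random.default_rng(42)
draws = rng.normal(loc=2.0, scale=1.5, size=100_000)

# Mean expected outcome and 95% Confidence Interval.
lo, hi = np.percentile(draws, [2.5, 97.5])
print(f"mean: {draws.mean():.1f}%, 95% CI: [{lo:.1f}%, {hi:.1f}%]")

# Bucket the same draws into scenario probabilities (cutoffs at 0% and
# 3% growth are illustrative).
print(f"P(recession)     = {(draws < 0).mean():.0%}")
print(f"P(near-average)  = {((draws >= 0) & (draws < 3)).mean():.0%}")
print(f"P(above-average) = {(draws >= 3).mean():.0%}")
```

In a real model, each independent variable would get its own fitted distribution, and the dependent variable's full distribution would come out of the iterations.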

Thus, Bauer tells you partly what a good forecast is, but does not tell you enough about how to construct one. In his defense, Bauer states that this book is purposefully very short, meant as a brief introduction to what can be done within this forecasting discipline. He tells readers who want to learn more about various aspects of the subject that he has recommended readings at the end of each chapter. But after the two chapters on forecasting, the only reading recommendations were very specialized to weather forecasting. I am not sure any of them would serve as a quick and effective introduction to Monte Carlo simulation that would let one learn how to develop such forecasting models. You may be better off simply reading a couple of Wikipedia articles on the subject.

Bauer makes a forceful case that single-point predictions (i.e., the economy will grow by 3.0% next year) are worthless because of what he calls discontinuity. In other words, patterns always change; the relationships between variables are unstable. Given that, history is not representative of what may occur in the future. He also calls such circumstances "dynamic systems." That's an excellent point, and it truly does discredit single-point predictions. However, discontinuity can discredit everything, including the type of forecasting he promotes. If a system is so dynamic as to be truly discontinuous, even a probabilistic forecast will miss the boat. That's because the statistical distributions with which you captured the independent variables, based on their data history, will be obsolete over the forecasting period. And that is because of discontinuity.

Many authors, ranging from Keynes to Taleb, have referred to the shortcomings of such models in the face of true discontinuity. Keynes referred to it simply as uncertainty. And both Keynes and Taleb advance that the past does not provide reliable enough information to fit specified statistical distributions to the independent variables within such models. In other words, models do not work; you should avoid them entirely. Obviously, many others feel otherwise, and they make equally convincing arguments that models, even though not expected to be accurate, can still be very useful. However, within the pro-model crowd, Bauer has not made a convincing case that Monte Carlo simulation can overcome discontinuity.

On another count, the author's terminology is potentially inaccurate. On pages 27-29, he criticizes univariate models. Those are typically interpreted as regression models with one single dependent variable and several independent variables (the most common type of model). But Bauer means something completely different: he means time series analysis models, where you make projections of a variable based on its own prior values; it is just a form of extrapolating the future from history. He then lauds multivariate models. Those typically refer to a type of regression analysis where you regress or estimate more than one dependent variable simultaneously (actually a far rarer type of model). But what he really means is multiple regression analysis (several independent variables used to estimate one single dependent variable). Chris Lloyd, Professor of Business Statistics at the University of Melbourne and author of Data Driven Business Decisions, taught me this difference. If this paragraph is a bit confusing, here is a shortcut translation from the author's semantics to the most common ones: univariate = time series analysis; multivariate = multiple regression. The sketch below makes the distinction concrete.
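A minimal illustration in Python on synthetic data (my own sketch, not from the book): first a time-series fit of one variable on its own past (Bauer's "univariate"), then a multiple regression of one dependent variable on several independent ones (Bauer's "multivariate"):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Time series analysis: an AR(1) process, y_t = 0.8 * y_{t-1} + noise.
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.8 * y[t - 1] + rng.normal()
phi = np.polyfit(y[:-1], y[1:], 1)[0]  # regress y_t on y_{t-1}
print(f"estimated AR(1) coefficient: {phi:.2f}")  # ~0.8

# Multiple regression: one dependent variable, several independent ones.
x1, x2 = rng.normal(size=n), rng.normal(size=n)
z = 1.0 + 2.0 * x1 - 0.5 * x2 + rng.normal(scale=0.1, size=n)
X = np.column_stack([np.ones(n), x1, x2])
beta, *_ = np.linalg.lstsq(X, z, rcond=None)
print(f"estimated coefficients: {np.round(beta, 2)}")  # ~[1.0, 2.0, -0.5]
```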

On page 100, he refers to Nate Silver's "The Signal and the Noise" and mentions Silver's support of Bayesian statistics. Bauer goes on to state that forecasting goes much beyond Bayesian statistics, but he does not support that statement with any explanation. Bayesian statistics and Monte Carlo simulation are very different techniques that I think could potentially supplement each other. By then, you would have a pretty complex model that few are equipped to develop (a Bayesian Monte Carlo simulation). Given that Bauer has not bothered to outline what Monte Carlo simulation is, he was not ready to explain the conjunction of those two quantitative techniques (Bayesian & Monte Carlo).

This book is not without its merits. As mentioned at the beginning of this review, its main message is really right on the money (single-point predictions are worthless, and probabilistic forecasting is a far superior methodology). The two chapters on forecasting are excellent from a narrative standpoint. OK, they don't tell you about the Monte Carlo simulation that is at the essence of such forecasting. But they tell you a lot about everything else that is also really important, such as the validity and quality of the data, how to select your independent variables, etc. Also, the postscript on Big Data is excellent; here the author dares to rebut the hype. He feels that if Big Data ignores the underlying causal relationships between variables, it is to its detriment. In other words, the adage that correlation now supersedes causation (the fighting words of Big Data) sets Big Data up for a fall the minute a system is a bit more dynamic than in the recent past and exhibits more discontinuity than expected. The last chapter, on strategic responses to forecasts and related prospective outcomes, is also excellent. If you want to study such strategic responses in greater detail, one of the best books on the subject is The Predictioneer's Game: Using the Logic of Brazen Self-Interest to See and Shape the Future.

The book also has a lot of interesting specific insights. He states that a good model typically has no more than five explanatory variables; with more than that, a model is likely to be overspecified. And such overspecified models typically break down when forecasting, because they may have extracted spurious regression coefficients on the extra variables. A mathematical fit does not entail a logical fit supported by economic theory, or just plain logic. Along those lines, he warns the reader against stepwise regression, which by nature typically adds too many variables to a model. Stepwise regression is a good exploratory technique, but you have to judiciously eliminate the superfluous variables to arrive at a parsimonious model. In another section, he indicates that weather forecasting, at this stage anyway, is not effective beyond five days. That's something I had also read in "The Signal and the Noise." His section on assigning weights to variables is also pretty interesting.

If you find this book interesting, you will probably enjoy these other books, which are more complete than this one: Nate Silver's The Signal and the Noise: Why So Many Predictions Fail -- but Some Don't, and Sam Savage's The Flaw of Averages: Why We Underestimate Risk in the Face of Uncertainty.

