Profile for Gaetan Lion > Reviews


Customer Reviews: 565
Top Reviewer Ranking: 1,844
Helpful Votes: 17,480





Reviews Written by Gaetan Lion

Old Masters and Young Geniuses: The Two Life Cycles of Artistic Creativity
by David W. Galenson
Edition: Paperback
Price: $22.04
44 used & new from $3.05

5.0 out of 5 stars Galenson is the Nate Silver of art history, November 20, 2014
Verified Purchase
Nate Silver revolutionized the field of political predictions. He aggregated polls, weighted them by their historical accuracy, and leveraged their information using statistical tools. With those methods, he has accumulated a track record in political predictions that is far superior to that of political pundits.

David Galenson has done much the same thing for art history and the benchmarking of creativity over the life cycle of artists. By first focusing on modern painters, Galenson uncovered that artists' careers were shaped by their own temperament and approach to painting. He observed two very different archetypes: conceptual vs. experimental artists.

Conceptual artists think deductively and first develop in their minds a completely finished product. Next, they plan out the product in exhaustive detail with many preliminary sketches and drawings. The actual execution is then almost a formality, done in a decisive and quick manner, and the painting is finished with much confidence. Such artists are deemed young geniuses because very early in their careers they come up with radical innovations representing a departure from the past. Picasso and his Cubist innovation is such an example. However, their careers often peak early, with less to show in the remainder of their working lives.

Experimental artists think inductively. They have no clear direction where their art will take them. They go through numerous random iterations. They don't plan ahead. They are plagued with uncertainty. They rarely have a sense of completion or closure. Contrary to conceptuals, experimentals take much longer to mature and succeed. But their art often improves continuously throughout their entire lives, and they often secure their artistic legacy late in life.

Galenson states that conceptuals are sprinters and experimentals are marathoners. This metaphor sums it up pretty well.

Galenson was not the first to notice these two archetypal artistic temperaments. Many artists had observed them long ago, including Honore de Balzac in the early 1800s and William Faulkner in the early 20th century. But such artists were not in the business of conducting quantitative studies on the age-related curve of creative performance.

Similarly, Galenson was not the first to investigate the age-related curve of creative performance. Harvey Lehman, a psychologist, was a pioneer in that field, as demonstrated in his book "Age and Achievement," published in 1953. Lehman studied the relationship between age and outstanding performance in 80 different fields, including arts, politics, and the sciences. He focused on the narrow age-range category that represented the maximum average performance. By doing so, he found that for oil painting it was between 32 and 36 years old; for poetry, 26 to 31; and for novels, 40 to 44.

Galenson criticizes Lehman's study because he felt it focused on the wrong thing and the wrong number. For Lehman, the most important causal factor in the age vs. creativity relationship is the field of activity: in certain activities, peak achievements come at a younger age than in others. And Lehman's most important number is the narrow age-range category showing a peak of superior production. But this human-performance data is not normally distributed with a single central tendency (a mean, mode, or median). It is most often bimodal (or trimodal), looking like a camel's back with two humps: some artists achieve great works early in their careers, while others achieve equally great works in their later years. When Galenson looked at the entire distribution of such achievements, the narrow peak range Lehman focused on was not so meaningful. At times, it could easily have been overwhelmed by the aggregation of two or more other age-range categories in later years.
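To make this concrete, here is a small sketch of my own (the ages are illustrative, not Galenson's data) showing how a single central-tendency number misleads when the underlying distribution is bimodal:

```python
import statistics

# Hypothetical ages at which 20 artists produced their best-ranked work:
# one "young genius" cluster around 30 and one "old master" cluster
# around 65 (illustrative numbers only).
ages = [26, 28, 29, 30, 31, 32, 33, 34, 35, 36,
        58, 60, 62, 64, 65, 66, 68, 70, 72, 75]

mean_age = statistics.mean(ages)     # a single central tendency
early = [a for a in ages if a < 45]  # first hump
late = [a for a in ages if a >= 45]  # second hump

print(f"mean: {mean_age:.1f}")       # 48.7 -- lands in the empty valley
print(f"early hump: {len(early)} artists, mean {statistics.mean(early):.1f}")
print(f"late hump: {len(late)} artists, mean {statistics.mean(late):.1f}")
```

The mean (48.7) falls in an age range where not a single artist in the sample peaked, which is exactly why Galenson insists on looking at the whole distribution rather than one summary number.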

Galenson found there was far more age variation among the artists within a given field than there was between different fields (an ANOVA concept). Thus, Lehman's focus on the field was not so insightful. Galenson's focus on the temperament and artistic approach of individual artists (conceptual vs. experimental) illuminated the relationship between artistic creativity and age far better.

Galenson finds that, in general, conceptual artists make their greatest contributions by their early thirties, while experimental artists make theirs in middle age and well into old age. And that holds across all the disciplines he explored. This does not contradict Lehman's findings, which are driven by field of activity alone; the two simply focused on different metrics.

Just as Nate Silver combined modern statistics with traditional polling to develop a superior political prediction method, Galenson combined a well-known artist categorization (Balzac already knew about conceptual vs. experimental temperaments) with modern data analysis to derive a far superior model of the age vs. creativity relationship relative to Lehman's.

The strength of Galenson's model is due to its robust and innovative measurements. For modern painters he used several metrics, including: 1) the price of an artwork over time; 2) the number of illustrations in French and English textbooks; 3) the presence of artworks in retrospective exhibitions; and 4) the number of selected works in leading museums. Invariably, those very different metrics are surprisingly consistent. For instance, Cezanne's work shows a peak price for paintings he executed at age 67. For Picasso, this peak value comes at age 26. The respective age vs. price curves for Cezanne and Picasso (pg. 23 & 24) are spectacularly different. Cezanne's looks like a fairly steady escalator, with his art prices rising ever higher as he ages. Picasso's curve instead looks like a mountain that peaks at 26 and drops abruptly thereafter. The frequency of illustrations in both French and English textbooks shows the exact same peaks at 67 and 26 years old for Cezanne and Picasso, respectively. Cezanne and Picasso are the iconic experimental and conceptual artists (and Galenson spends much time comparing their two age vs. creativity curves). The other two metrics (retrospective exhibitions and museum holdings) are also very closely related to the first two. One can easily argue that all those measures should be highly correlated, as they are all self-reinforcing in a positive feedback loop. Nevertheless, the level of consistency between those metrics is stunning.

Galenson is aware that artistic temperament and approach is not strictly binary (conceptual vs. experimental) but lies on a continuum. He does much research and provides several examples using more granular categorizations, adding Moderate vs. Extreme as qualifiers to the original Experimental vs. Conceptual categories. He notes that within such a categorization it gets tricky to differentiate the Moderate-Conceptual from the Moderate-Experimental, while the Extreme versions of both types are readily differentiable. After much discussion, he concludes that the Moderate vs. Extreme nuance does not add much explanatory power to his binary model.

Can artists change from one temperament to another? Apparently, it is very challenging to do so. Galenson uncovers that Edouard Manet changed from a moderate conceptual in his early years to a moderate experimental artist in his later years. He was successful in this shift in part because it was a "moderate" shift. On the other hand, Camille Pissarro shifted from an experimental Impressionist to a conceptual Neoimpressionist, and failed. He had to quickly reverse course and return to his experimental temperament to regain his creative bearings.

From his research, Galenson advances something that may come close to a universal cognitive principle: a conceptual artist may possibly morph into an experimental one with age, but the reverse is nearly impossible. This is because the uncertainty and complexity of an experimental artist who thinks inductively can't readily give way to the certainty and simplicity of a conceptual, deductive thinker. Late in the book, Galenson concludes: "I believe it is likely that the distinction between experimental and conceptual innovators exists in virtually all intellectual activities."

One of the most intriguing parts of the book is where Galenson presents his findings on "masterpieces without masters." What Galenson means is that many second-tier artists have produced first-class artworks. For instance, Serusier's "The Talisman" is the most celebrated painting of his era (most frequent illustrations in textbooks, etc.). Meanwhile, Serusier is a very distant second to his 19th-century contemporaries such as Cezanne, van Gogh, and Renoir. The latter three have bodies of work that far surpass Serusier's in both quantity and quality. But that does not preclude Serusier from having painted the number one painting of his era. Galenson explains this conundrum by Serusier being an archetypal conceptual, coming up at an early age with a new style of painting that made an immediate splash on the art scene, yet not doing much afterwards. This is unlike the mentioned masters, who were experimental and thrived throughout their long careers. By the same token, a conceptual body of work is often sparse, so a stand-alone masterpiece more readily stands out. The body of work of experimentals is often vast. It also often includes coordinated series (Claude Monet's paintings of haystacks being an example). In such a context, an experimental painting does not so easily stand out as a unique masterpiece.

During eras of rapid artistic innovation, conceptual artists have an advantage over experimental ones. Galenson recounts how modern art developed through a rapid succession of movements: Impressionism, then Neoimpressionism, Symbolism, Fauvism, Cubism, Surrealism, and finally Abstract Expressionism. Except for Impressionism, all these movements were created and dominated by conceptual artists. The rapid cycle of artistic innovation sometimes did not allow experimental artists enough time to flourish at their slower pace.

In the second half of the book, Galenson applies his model across different eras and disciplines. He uses it to analyze the Italian Renaissance: Leonardo and Michelangelo were experimental artists; Raphael was a conceptual one. Interestingly, Leonardo and Michelangelo would execute their entire works themselves, while Raphael would develop the concept and then delegate various sections of his paintings to specialized painters and artisans. This is a common differentiation between the two temperaments (experimentals doing it all themselves, conceptuals often delegating extensively). In modern times, Andy Warhol (a conceptual) was famous for extensively delegating the actual execution of his paintings. That may be why he called his art studio "The Factory."

Galenson uses his model to analyze modern sculptors, poets, novelists, and movie directors. And he invariably uncovers the same patterns, including: i) conceptuals achieving their most noteworthy works at a much younger age than their experimental counterparts; and ii) the "masterpieces without masters" phenomenon replicating itself across all the different fields. Conceptual poets (including T.S. Eliot and Ezra Pound) produce the vast majority of their best work before age 40, while experimental poets (including Frost and Stevens) do so after age 50. Conceptual novelists (including F. Scott Fitzgerald and Hemingway) wrote their best novels at 31 on average vs. 43 for their experimental counterparts (including Charles Dickens, Mark Twain, and Virginia Woolf). Conceptual directors (including Fellini and Welles) made their best films at 29 vs. 61 for the experimental ones (including Ford and Hitchcock). Galenson also refers to a study of Nobel Prize-winning economists and finds the same distinction between conceptual early bloomers and experimental late bloomers: the conceptuals did their best work (most frequently cited) at an average age of 43 vs. 61 for the experimental economists.

Overall, this is an outstanding book describing a successful model in social science (a rare achievement in itself). I only hope Galenson eventually extends his analysis to scientists (in both the social and the hard sciences). This model is too interesting to stop now.


Everything Is Obvious: How Common Sense Fails Us
by Duncan J. Watts
Edition: Paperback
Price: $10.95
64 used & new from $4.64

1 of 1 people found the following review helpful
5.0 out of 5 stars We cognitively live in a Post-hoc fallacy world, November 5, 2014
Duncan Watts is a very interesting author with degrees in physics and theoretical and applied mechanics. Yet, he applies his quantitative knowledge to the field of sociology. He is a research scientist at Yahoo Research.

Duncan Watts leans heavily on the works of behavioral economists, scientists, and quants, including Dan Ariely (Predictably Irrational, Revised and Expanded Edition: The Hidden Forces That Shape Our Decisions), Steven Levitt (Freakonomics: A Rogue Economist Explores the Hidden Side of Everything (P.S.)), Daniel Kahneman (Thinking, Fast and Slow), Nate Silver (The Signal and the Noise: Why So Many Predictions Fail -- but Some Don't), and Nassim Taleb (Fooled by Randomness: The Hidden Role of Chance in Life and in the Markets (Incerto)), among many others. But this book is far more than a rehash of compiled behavioral principles.

Watts brings many original insights to the unpredictability of human behavior. You can summarize the book in just a few sentences. We really can't learn from the past (sample bias: history runs a single iteration). We can't predict the future (social phenomena belong to complex systems that invariably break from their own historical trends). But we can learn from the present. The latter is actually pretty difficult and entails running live experiments.

The "not learning from the past" concept is really interesting. Watts indicates that learning from the past is very difficult, if not downright impossible. You would need the ability to rewind the tape of history, rerun it a thousand times, and observe the statistical distribution of outcomes; only then could you learn from it. Otherwise, you just observe one unique iteration (sample bias) that is not truly comparable to any other segment of history. Every historical context is different, no matter how similar we may think they are. We may compare the war in Iraq with the war in Afghanistan and deduce that certain strategies work well here and not there. But if you look at the details, you realize those two wars can't serve as valid controls for each other (the Muslim factions, regime structures, geographical terrains, etc. are very different). Given that, historians, sociologists, and anthropologists focus on explaining the past. But they confuse observing and narrating the past with true causal explanation (the Post-hoc fallacy). As a result, whatever models they develop invariably have no predictive power. As a side note, Watts mentions that the Post-hoc fallacy is omnipresent within Malcolm Gladwell's books for the mentioned reasons (including specifically in The Tipping Point: How Little Things Can Make a Big Difference). Watts states on pg. 122: "Common sense and history conspire to generate the illusion of cause and effect where none exists."

This Post-hoc fallacy affects everything. It obfuscates the evaluation of investment managers, employee performance, CEO performance, and much else. Extricating luck from talent is very challenging, and the results will be affected by the selection of your time series, over which you sometimes have no control.

Within the "not predicting the future" concept, Watts refers extensively to the seminal work of Philip Tetlock, who showed that experts in various fields (politics, economics) were incapable of predicting the future. Watts also mentions Steven Schnaars, who studied experts' predictions of technological innovations and found them plain wrong 80% of the time. One of the greatest prediction failures is Larry Page and Sergey Brin attempting to sell their Google search business in the 1990s for only $1.6 million. If two of the smartest fellows on the planet can't even come close to predicting the value of their own work, what chance does an economist have of accurately predicting the future path of an entire economy?

Watts reviews in detail a phenomenon that renders predictions in sociology and economics virtually impossible: emergence, which he also calls the Micro-Macro problem. As an example, you can understand the behavior of individual neurons, but this will give you no understanding of the behavior of the brain or of human consciousness. When you move from one scale to another, the emergent plane is governed by different laws. Similarly, you can have a pretty good understanding of a single individual's behavior (even though that alone can prove difficult) with no understanding whatsoever of the behavior of a large group of individuals, such as an economy.

Another phenomenon that renders the prediction of group behavior nearly impossible is captured by Granovetter's riot model. This model has been tested mathematically (as defined, it would be challenging to test empirically). The model posits that people influence each other and generate chain reactions of influence: person A influences person B, who influences person C. Thus, you can have two large crowds that seem nearly identical in every respect; but if just one link in that chain of influence differs (person L influences person M in one crowd but not the other), the two resulting crowd behaviors end up completely different.
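Granovetter's threshold version of this model is easy to sketch in code. The following is my own minimal simulation (the threshold values are the standard textbook illustration, not data from Watts's book): each person joins the riot once the number already rioting meets their personal threshold, and changing a single threshold flips the outcome from a full riot to a lone instigator.

```python
def riot_size(thresholds):
    """Granovetter's threshold model: a person joins the riot once the
    number of people already rioting reaches their personal threshold.
    Iterate until the crowd stops growing."""
    rioting = 0
    while True:
        joiners = sum(1 for t in thresholds if t <= rioting)
        if joiners == rioting:
            return rioting
        rioting = joiners

# Crowd A: thresholds 0, 1, 2, ..., 99 -- a full cascade: the instigator
# (threshold 0) starts, which triggers threshold 1, and so on.
crowd_a = list(range(100))

# Crowd B: identical except one person's threshold moves from 1 to 2,
# breaking the chain right after the instigator.
crowd_b = [0, 2] + list(range(2, 100))

print(riot_size(crowd_a))  # 100 -- everyone riots
print(riot_size(crowd_b))  # 1 -- only the instigator
```

Two crowds that differ by a single person's disposition produce completely different collective outcomes, which is why aggregate behavior is so hard to predict from individual behavior.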

Watts even questions the whole concept of the Black Swan as a Post-hoc fallacy. Black Swans are often defined not as a single isolated event (a plane crash, an earthquake) but as a chained series of events, such as the ascent of the Internet or the organizational fiasco associated with Hurricane Katrina. Those clusters of associated meaningful events were only aggregated to define a Black Swan situation after the fact. Thus, you can't predict something (or a group of things) that is not even defined beforehand.

Despite his claim that we can't predict the future, Watts studies some of the best prediction practices and benchmarks them. Specifically, he tests the accuracy of several prediction markets against two very simple statistical models for forecasting who will win an NFL game. The statistical models are rather primitive: the first simply predicts that the home team wins, and that's it; the second adds the recent win-loss track record. To his surprise, the best prediction markets performed only marginally better than the basic statistical models, with 61% prediction accuracy. The home-team-only model came in at 58%, and the model also considering the win-loss record came in at 60.9%, almost dead even with the best prediction market! This benchmarking of prediction markets against models has been replicated in other sports and fields, often with very similar results: a basic model can do about as well as a prediction market. Watts explains this by noting that none of the methods did very well (60% accuracy is not that different from flipping a coin). This would confirm that the NFL is a fairly complex system in terms of outcomes; maybe it even has emergent qualities. You could know all the respective players well without fully understanding the spontaneous plays that will lead one team to win. Thus, Watts suggests that coming up with a basic model that captures the basic tilt of certain predictions (scoring a bit above randomness) is possible. However, adding incremental predictive power with more sophisticated models is very challenging and sometimes futile.
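The home-team baseline Watts describes is trivial to implement, which is part of his point. Here is a sketch of my own with made-up game records (the matchups and results are hypothetical, not from the book):

```python
# Hypothetical game records: (home_team, away_team, home_won).
games = [
    ("NE",  "NYJ", True),
    ("GB",  "CHI", True),
    ("DAL", "PHI", False),
    ("SEA", "SF",  True),
    ("DEN", "KC",  False),
]

def home_team_baseline(games):
    """Accuracy of always predicting a home-team win: the fraction of
    games the home team actually won."""
    correct = sum(1 for _, _, home_won in games if home_won)
    return correct / len(games)

print(home_team_baseline(games))  # 0.6 on this toy sample
```

A rule this crude carrying most of the achievable accuracy is exactly what makes the marginal value of sophisticated models (and prediction markets) look so small.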

Along the same lines, Watts argues that the whole field of scenario planning, as pioneered by Royal Dutch Shell in the 20th century, is also rather futile. It is very challenging for humans to correctly select beforehand the emerging trends that will matter and to model them with parameters that will have any relevance or accuracy once history unfolds. In the book (pg. 184), he outlines the 1981 scenario planning of a large oil-rig equipment manufacturer predicting the number of active rigs in the US over the next decade. This corporation's worst-case scenario ended up with figures that were two to three times too high versus what actually occurred. Watts argues that such scenario-planning failures are common.

So, if we can't predict the future and can't learn from the past, what can we do? The answer, states Watts, is to "predict the present." This entails running several small experiments in parallel and differentiating what works from what does not. A few retail clothing companies have done this successfully (by trying out various designs and seeing what sells). But this is very challenging: it entails incredible operational achievement in speed of design, distribution, sales tracking, and inventory management that is out of reach for most companies. In the online world, however, such a strategy is a lot easier to implement and as a result is a lot more common. Internet companies readily "predict the present" by testing which ads, which web pages, and which news storylines attract more eyeballs or generate more clicks and sales.

If you found this review interesting, you will enjoy the book as I have only scratched the surface of its content.


Creating Business Agility: How Convergence of Cloud, Social, Mobile, Video, and Big Data Enables Competitive Advantage
by Ankit Kumar Verma
Edition: Hardcover
Price: $41.10
48 used & new from $30.00

5.0 out of 5 stars This is the textbook on this specialized subject, October 25, 2014
Vine Customer Review of Free Product
The business world has become increasingly complex. It is omnipresent across different channels (physical, virtual, digital, Internet, video streaming, mobile apps, etc.). It has also become increasingly quantifiable, analyzed with cutting-edge Big Data capabilities. How can a business get started, sustain itself, and thrive in this hypercompetitive marketplace?

The two authors have the experience and the professional and academic qualifications to provide a very insightful guide to succeeding amid this business complexity. Their book can serve as a textbook on the technical, multidisciplinary subject of "Business Agility."

The authors cover the subject exhaustively over 300 pages, but their writing style is user-friendly. The book is stuffed with excellent graphs and visuals that reinforce key points and facilitate learning and retention of complex material. At the end of each chapter, the authors provide extensive references that both support their arguments by revealing their sources and encourage further study of certain aspects of the subject.

The book is aimed at a specific audience of managers, senior executives, and entrepreneurs. If you are part of this audience, this book may be really worthwhile for you, both as a learning tome and as a reference where you can quickly recall the main points relevant to various topics.


The Sports Strategist: Developing Leaders for a High-Performance Industry
by Irving Rein
Edition: Hardcover
Price: $20.72
49 used & new from $16.00

5.0 out of 5 stars Very interesting book on a specialized subject, October 9, 2014
Vine Customer Review of Free Product
The sports industry has become one of the largest and most complex sectors of the overall entertainment industry. It is increasingly fragmented and competitive, with numerous delivery channels (stadiums, live events, TV, pay-per-view, cable, Internet, mobile, print media). The three authors do a really good job of providing an insightful compass through this chaotic sports-industry jungle.

The writing is lively and full of real-life examples. The authors use many tables, diagrams, and visuals to render complex material easily understandable. For instance, the table on pg. 49 does an excellent job of defining the different types of audiences. The diagram on pg. 100 describes well the overall revenue-generating framework of sports organizations. The diagram on pg. 104 clarifies the value-generating framework for such organizations. The revenue analysis on monetizing the value of different audiences is excellent (pg. 107). The diagram on decision-tree modeling (pg. 109) is very helpful. And the visuals on how sponsorship delivers ROI to corporate partners are insightful (pg. 117).

The nine chapters cover a wide range of relevant topics, including marketing, finance, budgeting, crisis management, and identity and brand building. Overall, this is a great book for anyone interested in learning about this specialized subject.


The Solution Revolution: How Business, Government, and Social Enterprises Are Teaming Up to Solve Society's Toughest Problems
by William D. Eggers
Edition: Hardcover
Price: $18.39
72 used & new from $9.39

5.0 out of 5 stars Very good book on the subject., October 4, 2014
Vine Customer Review of Free Product
This is an interesting book describing the trend toward increasingly tight cooperation among government, business, nonprofits, nongovernmental organizations, and the public to solve problems in the public domain. The cooperation between these five different players is further enhanced by modern technology, including the Internet, social media, and Big Data. Those technologies have enabled powerful tools such as crowdfunding, crowdsourcing, prize contests for the best solution, and data sharing, among many others.

Through this multitude of forms of cooperation, government is able to harness far more resources across society, and to solve many more problems much faster and more efficiently than it otherwise could. In particular, governments at the local, city, and state levels are able to tackle issues and solve problems, compensating for a Federal government that is most often paralyzed by political polarization. This phenomenon is not unique to the US. Many countries face the same paradox, whereby the national government is stuck in neutral due to various cultural and political constraints, so the burden of problem solving is transferred to the local level.

The two authors have extensive relevant knowledge and experience. They write in a lively, user-friendly way, illustrating their points with many diagrams and pictures that render complex frameworks easy to understand. This is the case with their interesting Venn diagram of "The Solution Economy" on pg. 9, the "Expanding education ecosystem" on pg. 181, and the "Affordable housing at the base of the pyramid" on pg. 186.

If you are interested in this subject and want to know more about it, this book is worth your time.


Rookie Smarts: Why Learning Beats Knowing in the New Game of Work
by Liz Wiseman
Edition: Hardcover
Price: $21.14
82 used & new from $10.87

4 of 8 people found the following review helpful
2.0 out of 5 stars The author way overstates her case, September 26, 2014
Vine Customer Review of Free Product
This book sends a good message: in your endeavors, you should maintain the openness of a rookie by continuing to learn, revising assumptions, and seeking the advice of others. That's all good.

However, the author way overstates her case. She does say at one point that rookie smarts is not about age or experience but about a state of mind. Yet outside of one very short chapter (chapter 6, "The Perpetual Rookie," 18 pages), the vast majority of the examples in this 200-plus-page book contradict that statement by invariably showing rookies jumping over mountains while experts remain as dogmatic as flat-landers. This is both tedious and unrealistic. Examples that represent outlying scenarios are interpreted as the most likely outcome in most situations.

There are a few contradictions in the book. For example, on pg. 27 she refers to Malcolm Gladwell stating that planes are safer when the least experienced pilot is flying. However, just 10 pages later she mentions a rookie pilot crashing a commercial jet (Asiana flight 214). Next time I fly, I'd rather have someone with adequate experience at the controls, thank you very much. The same goes for my surgeon, dentist, lawyer, restaurant chef, etc.

Some concepts are misapplied. As an example of rookies almost always being better than experts, she refers to the very interesting The Wisdom of Crowds by James Surowiecki. However, the wisdom of crowds does not derive its "intelligence" from any individual being a rookie. It derives it from the number of players or traders participating in a prediction market or similar venue that aggregates information instantly. The intelligence lies in the number of traders with independent judgments, whose errors are random in nature and therefore cancel each other out, yielding a better prediction than any single expert's. This is why prediction markets and betting sites beat static polls, why the market typically beats active managers and hedge funds, and why such markets invariably beat the majority of experts. That's not rookie smarts; it is the Efficient Market Hypothesis.

The author emphasizes creating networks of experts (in the physical world) to solicit information. But there is a more efficient way to be rookie smart: go directly to the information and the data, and analyze the trends firsthand. The author seems to focus mainly on talking to people to get information. In this information age, it is often far more efficient and effective to get the information directly instead of spending much time talking to "experts." Shane Snow, in his far superior book on a very similar subject, Smartcuts: How Hackers, Innovators, and Icons Accelerate Success, indicated that when rookies deliberately researched a topic, their judgments were better than experts' intuition. This makes perfect sense. Additionally, in terms of soliciting experts, the Internet can get you in touch with hundreds if not thousands of people with very deep, specialized expertise. By reaching out to them, you may get more and better information than by soliciting knowledge only from the experts within your own organization.

This book relates to a foundational concept that the author ignored. The concept is The Hedgehog and the Fox: An Essay on Tolstoy's View of History (Second Edition). That's a philosophical essay first written by Isaiah Berlin in 1953, where he advanced that there were two types of thinkers. The hedgehog believes in one great truth that pretty much rules the universe. Hedgehog-thinkers are very confident in their judgment. They are most often dogmatic. And, in a modern context, they often dominate the Media because they are definite in their pronouncements and most assertive. The Fox-thinkers are much the opposite. They are not so confident. Their main answer to anything is often "it depends." They are very nuanced in their thinking. They constantly solicit new information and change their assumptions and forecasts accordingly. And, they are far better forecasters than the hedgehogs. Unfortunately, because they state "it depends" so often, they are often ignored by the Media and the Public. This concept has been very influential. Philip Tetlock, a political psychologist, has done extensive studies on the failings of expert forecasts in numerous domains due to experts thinking too much like hedgehogs. Nate Silver, a statistician, has referred to this concept extensively in his excellent book The Signal and the Noise: Why So Many Predictions Fail -- but Some Don't. You could substitute within Wiseman's book the two words Rookie and Veteran with Fox and Hedgehog, respectively, and pretty much get the idea. It is just that the latter is a far better metaphor than the former, because it does not imply that experience and knowledge are toxic to learning and performance, or that inexperience and ignorance are magic elixirs for both.


Bird Dream: Adventures at the Extremes of Human Flight
by Matt Higgins
Edition: Hardcover
Price: $18.78
66 used & new from $7.89

1 of 1 people found the following review helpful
5.0 out of 5 stars Excellent reportage on the subject, September 8, 2014
Vine Customer Review of Free Product (What's this?)
This is an excellent reportage on the history of the developments that have led to wingsuit flying. Interestingly, the first development of something close to a modern wingsuit took place in 1935, when Clem Sohn in the U.S. flew an early wingsuit very successfully. But, just a few jumps later in 1937, he was killed when the lines of his parachute got tangled up and he crashed to the ground. Unfortunately, such tragic casualties would become a most common occurrence in this sport's history. More than half a century later, people would track the spread of BASE jumping and wingsuit flying by simply maintaining a database of all the related deaths worldwide in those sports. This way, they found out that the sport had spread to Australia, New Zealand, South Africa, and pretty much all over the world.

The incremental skill set that leads to wingsuit flying starts with skydiving, moves on to BASE jumping, and ends with wingsuit flying itself. Each incremental step gets increasingly challenging and dangerous. And, at each step, practitioners always have to push the limits, often beyond the breaking point (the number of casualties in those sports is staggering). The skysurfers, BASE jumpers, and wingsuit flyers not only fly through the air but do acrobatics they have learned from gymnastics, trampoline, and diving.

Higgins does a very good job at depicting some of the main characters associated with the radical development of the wingsuit.

It goes without saying that the practitioners of such dangerous sports are different from the rest of us. Within Chapter 4, Higgins educates us on the underlying neuroscience, which discloses that the neurotransmitter and hormonal wiring of these daredevils really is different. A specific gene, the dopamine D4 receptor gene, plays a key role in how much dopamine your neurological system generates in response to different experiences. The daredevils' systems release low levels of dopamine (the neurotransmitter associated with pleasure, satisfaction, and other positive sensations). Thus, in everyday life they often feel lethargic, bored, or frustrated. They need the heightened levels of excitement and stimulation that they experience through extreme risk taking. The latter becomes addictive. And, the related risk-taking disciplines will often dominate their careers, and their lives. As stated, those disciplines often take their lives away, too. These people are not interested in a long lifespan. They don't buy long-term care insurance. And, they don't qualify for life insurance. Jeb Corliss, the leading character throughout the book, the most famous BASE jumper, and a leading wingsuiter, states: "My biggest fear is dying of old age... I am okay with dying... I know it's going to happen."

The next frontier is to land in a wingsuit without a parachute. That would be replicating the real freedom of a bird. The physics are extremely challenging. To fly a wingsuit and maintain its forward momentum, you need to fly at least 70 mph. Wingsuits actually drop quite rapidly. Thus, landing a wingsuit without a parachute to slow it down is extremely challenging and dangerous.

There are now three competing strategies for the first wingsuit landing. The first one, proposed by Gary Connery, a stuntman, is to use thousands of cardboard boxes to absorb the shock of the fall. As a stuntman, he has done that jumping off buildings 150 feet high and reaching falling speeds of 60 to 70 mph, close to the minimum wingsuit speed. Jeb Corliss, the most famous wingsuiter, proposes to build the equivalent of the landing portion of an alpine ski jump, which consists of creating a huge man-made hill with the same slope as a regular wingsuit approaching the ground (between a 33- and 45-degree angle). Such a project would cost several million dollars. So, the fundraising is another challenge. Another wingsuiter is looking into replicating this feat of engineering by simply scouting snow-covered mountains for a slope with the right angle. He indicates that he would need a team of people to dig him out of the snow after he lands.

Jeb Corliss, the one pursuing the alpine ski jump strategy, nearly dies in a wingsuit crash. So, he is out of the race to land a wingsuit without a parachute. At the same time, Gary Connery (cardboard strategy), with the assistance of the best wingsuit designer, develops a new wingsuit that flies slower than the other ones. It can fly as slowly as 60 mph, corresponding to a descent speed of 22 mph. Gary feels that this slower wingsuit should allow him to land in cardboard boxes. After some amazing organizational setbacks, including nearly two months of rain that prevented him from setting up his 18,600 boxes in a field with a large team of volunteers, he eventually succeeds. He ultimately lands at a speed of 69 mph with a downward speed of less than 10 mph. For a wingsuit, this combination of slow speed and a 7-to-1 glide ratio (7 feet forward for every one foot downward) was probably a record. He comes out of his jump without any injury whatsoever.


Smartcuts: How Hackers, Innovators, and Icons Accelerate Success
by Shane Snow
Edition: Hardcover
Price: $15.69
68 used & new from $10.90

74 of 75 people found the following review helpful
5.0 out of 5 stars Smartcuts is very smart & fun, August 28, 2014
Vine Customer Review of Free Product (What's this?)
This book serves as an original career and entrepreneurship guide for the 21st century (which was not the intent of the author). The main thesis of Shane Snow is that luck does not just happen. Using surfing metaphors, Snow indicates that the ones who catch the wave (luck) are the ones who were ready all along, looking for it. These remarkable individuals were bound to catch a good wave sooner or later. It was just a matter of time. And, they did not waste much time going about it. They did not pay their dues for decades. They spent no time in stagnant situations. They kept moving forward, and often laterally (typically a lot faster than the rest of us). This does not mean they did not work very hard. They did. It does not mean they cheated and cut corners. To the contrary, they maintained superior ethical standards. Snow defines smartcuts as shortcuts with integrity. These individuals worked smarter, more creatively, and better understood how to take their next step. Most often, they were guided by a life-passion, an interest, a focus that kept them homed in on their target like a heat-seeking missile.

Snow outlines nine foundational smartcut principles that can accelerate anyone's career or one's company's growth. They all make perfect sense, are intuitive, not controversial, and not far-fetched. Snow does not make anything up. Every single one of his smartcut principles is well supported by research and documented by many examples.

The smartcut thinking is an offshoot of "lateral thinking" as defined and developed by Edward de Bono. And, Snow gives de Bono his due credit for the concept. However, while I have read most of de Bono's books and did find them interesting, I find Snow's book far more insightful.

Each chapter thoroughly describes one of the smartcut strategies on a stand-alone basis. Of course, the strategies overlap a bit and work well simultaneously. But, it is amazing how powerful each one of them is on its own.

There are numerous passages within the book that are pretty fascinating. The contrast between the careers of US Presidents and US Senators is amazing. The Presidents are often outstanding smartcutters with surprisingly short careers in Federal office before acceding to the Presidency (Eisenhower, Carter, Reagan, Clinton, Bush Jr., and Obama, among others). Meanwhile, the Senators are for the most part stagnant plotters, and very few of them ever make it to President. Snow even makes the case that some of the Presidents who paid their dues with a lifelong career in politics were some of the worst Presidents (example: Andrew Johnson). Good Presidents mainly acquired leadership credentials outside the field of (national) politics. Meaning, paying your dues career-long is no guarantee of mastery once you get there.

Another interesting fact is that companies that switch fields are often very successful. Moving laterally often causes one to accelerate. The iPhone was developed not by a telecommunications company, but by Apple, a PC company. Start-ups that "pivot" once or twice raise 2.5 times more money, have 3.6 times faster user growth, and are 52% less likely to plateau prematurely.

In another section, you learn about a team of hospital surgeons who learned to synchronize their surgeries and patient treatments, inspired by the exactitude, speed, and efficiency of a Formula 1 racing team's outstanding mechanics working at the races. Quoting the author: "Before long, the hospital had reduced its worst … errors by 66%." As an extra, the Formula 1 racers, mechanics, and hospital doctors became very good friends and participated together in fundraisers for various charities.

The whole "Rapid Feedback" strategy (chapter 3) is really interesting. It details the comedians' learning processes at The Second City in Chicago. It also shares research on how we learn from mistakes and feedback. Much research shows that we actually learn more from the mistakes of others than from our own. This is because we readily attribute the mistakes of others to the people themselves. Meanwhile, we attribute our own mistakes to external circumstances beyond our control, so as to protect our own egos. Apparently, what differentiates the masters of any discipline from the rest is their ability to withstand, or even their eagerness to solicit, negative criticism. They find negative criticism far more actionable in facilitating their progress.

"Waves" (chapter 5) is at the essence of the book. That's where Snow goes all out with surfing metaphors that he effortlessly transfers into a multitude of real-life and career-related examples. He quotes a professional surfer stating: "Being able to pick and read good waves is almost more important than surfing well." You can see how you could plug this concept into many situations effectively. There are a couple of specific gems in this chapter that will stay with you. One of them is the amazing power of pattern recognition. If you deliberately analyze trends, use criteria, observe the facts, etc..., you will find that amateurs undertaking this kind of trend analysis invariably outsmart experts' intuition in just about any field. Snow mentions a few odd examples, such as the ability to recognize the difficulty level of professional basketball shots, or the ability to pick out fake Louis Vuitton bags vs. authentic ones. Thus, "you can be right the first time" without years of apprenticeship. This will be music to the ears of all the data guys out there (not just the Big Data ones). "Deliberate pattern spotting can compensate for experience," as stated by the author. Another gem is that you don't need to be the first to do something to be successful. Research showed that 47% of first (company) movers failed. By contrast, early leaders (companies that took control of a product's market share after the first movers pioneered it) had only an 8% failure rate. Fast followers benefit from the free-rider effect. Examples: Google beat out Overture in search engines. Facebook beat out Myspace in social networks.

"10x Thinking" (chapter 9) will turn you into an Elon Musk fan if you are not one already. This chapter outlines the genius, perseverance, and sheer bravado Musk demonstrated in pursuing his most daring venture: SpaceX. The concept here is that to revolutionize a field you can't go for just marginal improvements (10% better, etc...). You have to go for the big swing: 10x better, or 10x cheaper, etc... Hence the name, 10x thinking. And, Musk, after many failures, did just that with SpaceX. His company is literally 10 times more cost effective and 10 times faster in terms of project turnaround time than the former best in the aerospace business: NASA. As a result, SpaceX is now a very viable commercial entity, swamped with contracts from all over the world to launch satellites, transport resources back and forth to the Space Station, etc... A counterintuitive thought is that sometimes the 10x improvements are easier than the +10% ones. This is because the former are challenging high-hanging fruit no one dares to go for, while the latter are low-hanging fruit crowded with competitors. And, this runs into the N-Effect: the more competitors in a given field, the weaker the individual performance. Researchers found that test takers (SAT, ACT, etc...) perform much better in a smaller room with fewer test takers than in a much larger room with many test takers.

There is a lot more to the book than what I covered. But, my review should give you a good idea of whether this book is for you. If you got this far in reading my review, it most probably is.


A Farewell to Alms: A Brief Economic History of the World (Princeton Economic History of the Western World)
by Gregory Clark
Edition: Paperback
Price: $17.10
83 used & new from $5.50

1 of 1 people found the following review helpful
3.0 out of 5 stars Fascinating, but maybe not the whole story., August 7, 2014
Verified Purchase(What's this?)
Gregory Clark's book is really successful on numerous counts. It engages and sustains the reader's interest thanks to a very lively style that turns a dry academic subject into a page turner. Clark has gathered an immense amount of sociodemographic data going back up to 3,000 years. His main theory is interestingly controversial: the main cause of the Industrial Revolution was that the rich in England had a much higher fertility rate than the lower classes. And, they literally propagated throughout society a new set of values supporting the onset of modern capitalism. Those values included discipline, work ethic, education (literacy), thrift, and patience (deferred gratification). They passed on those traits both genetically and behaviorally (by example). Clark has been much criticized for including a genetic component. But, he has defended his thesis extensively, referring to numerous contemporary twin studies supporting the claim that many behavioral outcomes (education and career) have a strong inherited component.

Clark also addresses in passing the consequences of the Industrial Revolution on inequality. The latter has become the hot topic in economic debates several years after Clark wrote this book. Clark's data-supported findings entirely contradict Thomas Piketty's premise (as expressed in Capital in the Twenty-First Century) that capital grows faster than income and leads to rising inequality. Clark instead (within Chapter 14) demonstrates that capital has not grown any faster than income over a very long period of time. His Figure 14-4 on pg. 280 shows that capital's share of income actually steadily declined from 1750 to 2000 in England. Over the same period, he shows that labor's share of income increased rapidly, leading to a reduction in inequality over the reviewed period.

Clark takes his theory as the exclusive cause of the Industrial Revolution. At times, he may have dismissed other theoretical causes too quickly. For instance, he advances that modern institutions did not play much of a role in fostering the Industrial Revolution because he felt that the institutional environment was already well established in Medieval England relative to Modern England. He supports this statement by observing that the income tax rate and the Public Debt/GDP ratio were both a lot lower in Medieval England than they are currently. Thus, he deduces that the institutional environment was actually superior in Medieval England. However, higher tax rates and Public Debt levels in modern times mark the difference between a complex and fully developed Government and a far more limited or nearly absent one (no Government = no taxes = no public debt). Deriving anything about the quality of current social institutions from this rationale is just not accurate.

Clark's excluding all other theories entirely is the main weakness of this book, and that is a frequent occurrence in the social sciences. One social scientist will come up with an explanatory theory and will in turn make great efforts to demonstrate that his own causal explanation is the only possible one. I find this approach less than optimal. And, I wish social scientists would more readily adopt a Factor Analysis approach, where they could assess the relative influence of numerous causal factors instead of a single one. Such a study could, for instance, find that the Industrial Revolution was in part due to Clark's theory, but also to the emergence of supporting institutions, England's access to new energy, etc... This would make for a more nuanced, encompassing, and defensible multi-faceted theory. However, this approach is rarely taken within the social sciences (I can't think of a single example).

To his credit, Clark does cover, and rebuts, numerous competing theories of the Industrial Revolution. And, he does it well (except for the mentioned example regarding the quality of institutions). Some of the protagonists have in turn criticized him back. And, Clark has responded to those attacks in a short paper titled "In Defense of the Malthusian Interpretation of History." The latter is strongly recommended reading, as it makes for an excellent supplement to the book.

There is one specific theory that Clark may have shortchanged, and that is the one from Ken Pomeranz as expressed in The Great Divergence: China, Europe, and the Making of the Modern World Economy. According to Pomeranz, the Industrial Revolution occurred in England because of its early access to an abundant source of industrial energy (coal) and its access to massive food and other resources from the U.S. This allowed England to successfully shift its economic focus from agriculture to industry (manufacturing, railroads, etc...) and forge ahead, leading the Industrial Revolution. This seems to make much sense. Maybe the Industrial Revolution can be well explained by: 10% Clark's theory + 10% Pomeranz + 80% numerous other theories and unknown factors.
Comment Comments (2) | Permalink | Most recent comment: Aug 23, 2014 9:41 AM PDT


The Son Also Rises: Surnames and the History of Social Mobility (The Princeton Economic History of the Western World)
by Gregory Clark
Edition: Hardcover
Price: $20.67
72 used & new from $12.06

9 of 10 people found the following review helpful
2.0 out of 5 stars No math no Law of Social Mobility, July 30, 2014
Verified Purchase(What's this?)
Based on his own extensive data gathering covering centuries, the author derives a law of social mobility associated with an intergenerational correlation or persistence rate (the same thing, per his own definition) of 0.75 between the status of a son and that of his father: Social Status of Son = 0.75(Social Status of Father). This denotes a much lower level of social mobility (or a much higher persistence rate) than any other economists had derived for OECD countries. It also entails that social mobility is nearly fixed regardless of eras or societies. The author advances that his findings contrast with other economists' because the latter had studied only one narrow aspect of status at a time, such as income or wealth, whereas he studied a much broader measure of social status. Also, the author focused on the propagation of social status through surnames, while other economists studied the general population. For the time being, not questioning Clark's rationale but simply looking at his own calculations of social mobility or persistence rates, I found numerous issues.

Clark states that although Sweden, the U.K., and the U.S. have very different conventional social mobility measures, they have nearly identical, much lower social mobility (i.e., much higher persistence rates) by his own broader measures.

Before investigating the math, let's clarify what we should look at. We are interested in observing how social status for a specific surname reverts to the Average (or the Mean) for the total population. So, the dependent variable is: Social Status of Son - Average Social Status. And, the independent variable is: Social Status of Father - Average Social Status. Clark's narrative and calculations appear to state: Social Status of Son = 0.75(Social Status of Father). But, such a function would eventually have a son from an elite surnamed clan inevitably fall to the lowest status in society. Indeed, over just the next 6 generations, the last heir in that surnamed group would have a social status equal to only 0.18 times that of the original ancestor, since 0.75^6 = 0.18. That would put the most recent generation in a destitute state with little in common with its earlier predecessors.

It makes a lot more sense to look at Social Status above the Average, so that the calculation now means that the most recent generation is not nearly as privileged as the original one, but is still above the Average (instead of destitute). Additionally, Clark's formula structure does not work for surnames that start with a Social Status much below the Average. Those would never revert back to the Average, but would instead drop quickly and asymptotically towards a Social Status near zero, at the absolute bottom. My formula structure works in both cases (for starting Social Status above or below the Average).
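The contrast between the two formula structures can be sketched numerically. In this sketch, 0.75 is Clark's persistence rate and statuses are expressed as multiples of the population average (so the average is 1.0); the starting statuses of 10x and 0.2x are made-up illustrations:

```python
MEAN = 1.0  # average social status, normalized to 1
b = 0.75    # Clark's persistence rate

# Clark-style function: status decays toward zero from any start.
def clark(s, generations):
    for _ in range(generations):
        s = b * s
    return s

# Mean-deviation form: the deviation from the mean decays, so status
# regresses toward the mean from above or from below.
def mean_reverting(s, generations):
    for _ in range(generations):
        s = MEAN + b * (s - MEAN)
    return s

# An elite line starting at 10x the average, 6 generations later:
print(round(clark(10.0, 6), 2))           # 1.78 -> still falling toward 0
print(round(mean_reverting(10.0, 6), 2))  # 2.6  -> settling toward the mean
# A line starting below the average (0.2x):
print(round(mean_reverting(0.2, 6), 2))   # 0.86 -> rising back toward the mean
```

The 1.78 figure is just 10 x 0.75^6, matching the 0.18 factor cited above; the mean-deviation form instead leaves both lines converging on the average, which is the behavior the review argues for.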

Given the above framework, I revisited the calculations of social mobility for Sweden, the US, Medieval England, and Modern England. His average persistence rates are: Sweden 0.77, US 0.75, Medieval England 0.90, and Modern England 0.78. So, based on his own calculations, we can see that his Law of Social Mobility at 0.75 holds up very well in three cases out of four (the exception being Medieval England, which is much higher at 0.90). My own calculations using his own data generated the following estimates: Sweden 0.60, US 0.82, Medieval England 0.85, and Modern England 0.59. Those figures are very different. While Clark could advance that Europe's more abundant government support for public education at all levels, health care, and the overall safety net had no impact on social mobility, as the latter was no different in Sweden vs. the UK and the US, my calculations indicate just the opposite: social mobility with much less Government support is much lower (a higher persistence rate) in the US vs. Sweden and the UK. Actually, the US persistence rate is not far off Medieval England's. That's a pretty different finding using the same data set.

Just to illustrate how our calculations diverge, let's look at a precise example. On pages 94-96, he shows that the wealth of the Rich surnames in one generation in 1860 was 187 times greater than the average wealth. And, four generations later, it was still 4 times greater than average. He associates this regression-to-the-mean with a persistence rate of 0.71. Instead, I calculate it as follows: the starting point above the Mean is 187 - 1 = 186. After 4 generations, the end point above the Mean is 4 - 1 = 3. And, the persistence rate over those 4 generations is: (3/186)^(1/4) = 0.356. Indeed, 186(0.356)^4 = 3. Meanwhile, using Clark's persistence rate, you get: 186(0.71)^4 = 47. I have since learned from B. Foley that Clark's calculation actually works out if you log the mentioned variables, because he uses log(wealth) in this case. So, I should take back this criticism. However, it opens up another. When you log such variables, it greatly and artificially boosts the persistence rate coefficient. In this case, as demonstrated, it doubles it. So, if you want to prove that social mobility is lower than anyone else thought (and persistence rates are higher), just log the variables, and that will do the trick. But, this is just a mathematical artifact. This is not robust social science. What is also obfuscating is to associate and compare such high coefficients on a log basis with many other coefficients on other social status dimensions where the variables were not logged. This is an explicit apples-and-oranges situation that just leads to much noise and no signal. Log variables have a different meaning than nominal ones. They represent the % change in a variable. In this case, log(wealth) of the Son = 0.75 log(wealth) of the Father means that the Son's wealth change in % represents 0.75 of the Father's wealth change in %. This is a very different concept than the overall intergenerational correlation that he uses on all the other variables (education, occupation, probate, etc...).
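The arithmetic in that example can be checked directly. The numbers (187x starting wealth, 4x after 4 generations) are the ones quoted from pages 94-96; the variable names and the back-solving approach are mine:

```python
import math

start = 187.0  # Rich surnames' wealth vs. average wealth, 1860
end = 4.0      # still 4x average, four generations later
n = 4          # number of generations

# Excess-over-mean persistence (the review's approach): solve b in
# (start - 1) * b**n = (end - 1)
nominal_rate = ((end - 1) / (start - 1)) ** (1 / n)
print(round(nominal_rate, 3))  # 0.356, the review's figure

# Log-wealth persistence (Clark's approach, per B. Foley): solve b in
# log(start) * b**n = log(end)
log_rate = (math.log(end) / math.log(start)) ** (1 / n)
print(round(log_rate, 2))  # ~0.72, close to Clark's reported 0.71
```

The same data thus yield roughly double the persistence rate once the variables are logged, which is exactly the doubling the review describes.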

Chapter 12, `The Law of Social Mobility and Family Dynamics', is also associated with dissonant calculations. On page 214, he shows the hypothetical paths of the social status of above-average families vs. below-average ones over the past 10 generations and the next 10 prospective ones. The paths for the two are symmetrical. And, they show increases in social status as well as decreases. However, his function as described, Son Social Status = 0.75(Father Social Status), could never accommodate such directional changes. This function would instead have the social status of both families inevitably fall towards the very bottom. Their respective social statuses could never increase. For an increase, his "0.75" coefficient would need to be greater than 1. Even my more flexible equation would have both families regress to the Mean: the above-average one downward, the below-average one upward. Yet, the respective paths would not change direction. They could not all of a sudden regress away from the Mean. For my function to cause social status to regress away from the Mean, I too would need the "0.75" coefficient to be greater than 1.

Within the same Chapter 12, looking at his historical data, knowing the social status of a given group in one generation gives you no information regarding the subsequent generations. You can see that in the graphs on pages 218 and 219. The graph on page 218 shows an above-average group steadily increasing its social status over the next 5 generations (moving from 3 to 8 times the average social status). Meanwhile, a very similar group experiences a steady decline in social status, from 4 down to 2 times the average, over the next 4 generations. And, the time periods very much overlap. So, if you know a group had a social status much above the average around the early 1700s, you have no way of knowing whether this group's social status will increase or decrease over the next several generations. Thus, Clark advances that his model is highly predictive (the past is highly predictive of the future of social status), while his own data set suggests the opposite. The past gives you no information on whether prospective social status will increase or decrease in the next generations.

The above raises the issue of when an above-average group experiences an inflection point, from an increasing trend to a decreasing trend in social status. On page 222, two graphs show that for various upper-class groups, that inflection point can vary greatly, from 16 to 64 times the average social status. So, if a group is at 16 times, is it only early on its path to amassing more wealth and status? Or is it at its apex, set to inevitably regress to the Mean in future generations? You actually have no way to tell. That's what the data conveys. On pages 226 and 227, Clark looks at similar trends for China. Now, for some reason, China has a much lower and constant inflection point at 8 times. In this case, if the future is like the past, one could say that a Chinese group at 8 times the average social status is quite likely to revert downward to the Mean going forward. But, if a Chinese group is at 4 times, which way is it going, up or down? There is no way to tell.

Now, getting away from calculations and graphs, let's revisit some of his rationale. Clark states that his derived social mobility is so much lower because he looks at a broader measure of social status vs. other economists, who just focused on one single dimension at a time, like wealth, or income, or education. But, Clark does not practice what he preaches. He, like the other economists he criticizes, focuses on a single aspect of social status at a time. He never combines two or more dimensions to create a broader measure of social status. This would have entailed creating principal components within a Principal Component Analysis framework, or factors within a Factor Analysis one. But, he does not go anywhere near those methodologies, which would have facilitated the creation of a broader social status measure.

However, on pages 110 and 111, Clark comes up with a second argument for why his social mobility measure results in lower social mobility. It is simply because he looked at subgroups (surnames). And, he indicates the resulting lower social mobility would have been directionally similar if he had used different subgroups, such as race, religion, nationality of origin, etc..., as long as those categorical dimensions do not correlate with the error term of the original regression. But, to reduce the error term of the original regression, those dimensions (race, religion, etc...) should correlate with the error term (in other words, explain the error term). Otherwise, I don't know how they would reduce the error term. Also, his explanation entails that by using subgroups that do reduce the error term of the original regression, one would automatically increase the regression coefficient of his function. And, that's how he gets a 0.75 while other economists typically got much lower coefficients. I am not sure that is correct. Let's take the simple example of stock returns. A stock index has a given return and volatility. It is the aggregate of the returns of the stocks in the index, with a resulting volatility driven by the volatility of each stock's return and their respective correlations with each other. Clark's rationale would suggest that by looking at a single sector, you could develop a model with a lower standard error than if you modeled the index. And, given that you have reduced the standard error, this would automatically have resulted in measuring a more accurate and higher stock return for this specific sector. But, we know that to be false. Some sectors will have higher or lower returns than the index. But, their weighted average return will be exactly the index's. Moreover, in most cases a sector will have a much higher volatility of return than the index, because it is so much less diversified. This analogy contradicts Clark's second argument for why his social mobility is much lower than other economists'.

There is another trap Clark may have fallen into. Autoregressive models (Son = 0.75(Father)) can work very well at predicting over a single period. They can work very well whenever a trend does not change sign (no inflection point). But, they can't handle inflection points. Even when they predict very well, they fall into statistical fallacies (unit root and stationarity issues) entailing that the model is no better than a simple time trend (counting periods 1, 2, 3, ...). In summary, his model is no better than observing that during some time periods the trends in social status went in a certain direction. But, it does not provide any information regarding why a trend shifted direction in the past or present, or what it will do in the future.
Comment Comments (6) | Permalink | Most recent comment: Oct 20, 2014 5:07 AM PDT

