Don't you have to understand the Bell Curve before you can appreciate this book?


Showing 1-9 of 9 posts in this discussion
Initial post: Apr 24, 2007 5:36:54 AM PDT
Last edited by the author on Apr 24, 2007 5:37:40 AM PDT
I must acknowledge that I have not read this book yet. From the descriptions I've read above, my first reaction was that many people in business don't understand variance and bell curves to start with. The idea of using variance and bell curves is great in theory, but surprisingly rarely applied in practice (outside of some specific industries, notably finance). I've found that many processes would benefit if people considered bell curves, probability, variance, etc. Isn't that what Deming's statistical process control, Six Sigma, and actuarial tables are all about?

Before you can appreciate something like The Black Swan (which I have no doubt is worthwhile), don't you have to acknowledge and appreciate the underlying patterns of the world? I'm concerned that a book like this will cause the ignorant masses of business to discount bell curves altogether and continue to resort to conventional wisdom.

I strongly recommend Peter Bernstein's excellent book Against the Gods: The Remarkable Story of Risk. He describes a world in which people believed all occurrences were purely random. The Romans never conceived of the 1-in-6 probability associated with rolling a six-sided die. It wasn't until gamblers pondered how to gain an advantage in games of chance that statistical concepts emerged.

It's true that many industries, including insurance and the financial markets, already rely on variance, bell curves, etc. However, many industries remain as ignorant as the Romans and their dice. These industries would benefit more from embracing the concept of the bell curve than from being thrown totally off track by ideas like black swans. It's possible that some of the very companies in the funds Taleb manages might realize better returns if they used the bell curve to their advantage rather than denying its existence (or remaining ignorant of it altogether).

In reply to an earlier post on Apr 24, 2007 9:53:29 AM PDT
Last edited by the author on Apr 24, 2007 10:10:54 AM PDT
The real question that needs to be answered is whether or not the actual time series data in a particular field satisfies some kind of goodness-of-fit test. The chi-square test for goodness of fit is the simplest to use.
Shewhart and Deming demonstrated, in their work on quality control, that the kinds of data generated by industrial manufacturing processes satisfied goodness-of-fit tests over time, as well as the Law of Large Numbers and the Central Limit Theorem. This is not the case in the areas of economics and finance. No goodness-of-fit tests are run on the time series data before the assumption of normality is made. Instead, economists simply "assume" that the normal distribution is applicable.
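As a small illustration of my own (not from any of the books discussed here), the chi-square goodness-of-fit test mentioned above can be run in a few lines of Python against simulated die rolls:

```python
import random
from collections import Counter

random.seed(42)

# Simulate 600 rolls of a fair die and test the fit against the uniform model.
rolls = [random.randint(1, 6) for _ in range(600)]
observed = Counter(rolls)
expected = 600 / 6  # 100 per face under the hypothesis of a fair die

# Pearson's chi-square statistic: sum of (observed - expected)^2 / expected
chi2 = sum((observed[face] - expected) ** 2 / expected for face in range(1, 7))

# For 5 degrees of freedom, the 5% critical value is about 11.07; a fair die
# should produce a statistic below that most of the time.
print(f"chi-square statistic = {chi2:.2f} (5% critical value ~ 11.07)")
```

The same test run on real financial returns, with normal-distribution expected counts, is exactly the check the poster says economists skip.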
Benoit Mandelbrot has demonstrated, over well more than 50 years, that the time series data in financial markets DOES NOT come close to fitting a normal, bell-shaped distribution. The distribution that fits the data best, by far, is the Cauchy distribution. The Cauchy distribution has the fat to very fat tails and extreme kurtosis that lead to this conclusion: the supposedly highly improbable (in fact, practically impossible if you specify a normal distribution) "outlier" events that economists and financial economists, who rely exclusively on some version of the normal distribution, use to categorize events like those of October 1929, 1987, and 1989, for instance, are not outlier events at all. They only appear to be outliers because the economics profession is using the WRONG distribution to characterize financial markets and macroeconomics (the same stochastic model, based on the normal distribution, is used both in financial economics and in modern macroeconomics, such as real business cycle theory, rational expectations theory, and monetarism, excluding the work of Brunner and Meltzer).
Taleb's book is superb because he has picked up and expanded upon, directly or indirectly, the very similar warnings issued to the early econometricians by John Maynard Keynes in his exchange over the logical, inductive foundations of econometrics with its primary founder, Jan Tinbergen, in the Economic Journal of 1939-40. Keynes warned Tinbergen that his assumption of normality (Tinbergen was using a version of least squares, ordinary least squares, as the foundation of his multiple correlation and regression analysis of the business cycle) was doubtful at best. The time series data was not continuous, independent, uniform, homogeneous, or stable over time due to, for instance, changing expectations of the future, as well as continuous technological change, innovation, and advances in physical capital goods and financial instruments. Joseph A. Schumpeter warned nearly 100 years ago that the business cycle could not be analyzed on the assumption of normality. Taleb brings this conclusion home for the 21st century.

The problem is that Taleb's analysis of what is wrong with the current technical apparatus of economics and finance will be ignored, as was the analysis of Keynes and Mandelbrot before him. It will be ignored because the economics profession believes, on a priori grounds, that free markets must be stable, convergent, and continuous through the operation of the Invisible Hand over time. On that view, the only risk is the mild risk of the normal distribution's standard deviation, sigma. Economists can't bear to face the fact that, instead, one is confronted by the wild, destabilizing risk of the Cauchy distribution.
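To make the fat-tail point concrete, here is a short Python sketch (my own illustration, not Mandelbrot's) comparing tail probabilities of the standard normal and standard Cauchy distributions, both of which have closed-form expressions:

```python
import math

def normal_tail(x):
    # P(X > x) for a standard normal variable, via the complementary error function
    return 0.5 * math.erfc(x / math.sqrt(2))

def cauchy_tail(x):
    # P(X > x) for a standard Cauchy variable; its CDF is 1/2 + arctan(x)/pi
    return 0.5 - math.atan(x) / math.pi

for x in (2, 4, 10):
    n, c = normal_tail(x), cauchy_tail(x)
    print(f"x = {x:2d}: normal tail = {n:.2e}, Cauchy tail = {c:.2e}, ratio = {c / n:.1e}")
```

Under the normal model a 10-sigma event is essentially impossible, while the Cauchy model still assigns it a few percent of probability; that gap is the whole argument.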

In reply to an earlier post on Apr 26, 2007 7:19:12 AM PDT
Last edited by the author on Apr 26, 2007 7:20:30 AM PDT
G. MCGHEE says:
But, Patrick, the Romans were right -- weren't they?

If I throw a die in one location, then throw the same die half-way around the world,
and continue to do this n times, what is there to link these events as a sample space?
Nothing. They are separate events.

To make the claim that separate events are somehow linked is to postulate a kind of
metaphysical causality. And the Romans had a word for it: Fate, to which even Zeus was subject!

In reply to an earlier post on Apr 27, 2007 4:35:36 AM PDT
Last edited by the author on May 14, 2007 2:14:21 PM PDT
McGhee,

I challenge you to roll a die 100 times and tabulate the results. At the very same time, find someone somewhere else to do the same thing (if you insist). Whether you add the two results together or look at them independently, you will find that there is roughly a 16.7% chance of rolling any given number. If you don't, please report back to this board, because you would have indeed encountered a "black swan."

Are you going to know exactly which number will be rolled before you roll it? No! That's not the point. No one is claiming that separate events are linked; we're talking about probability. I'm not quite sure I understand the connection you are making. The law of large numbers will drive the observed frequencies toward the probability given above.
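The law-of-large-numbers claim is easy to check by simulation; this short Python sketch (mine, purely illustrative) shows each face's relative frequency closing in on 1/6 as the number of rolls grows:

```python
import random
from collections import Counter

random.seed(0)

# As the number of rolls grows, each face's relative frequency approaches 1/6.
for n in (100, 10_000, 1_000_000):
    counts = Counter(random.randint(1, 6) for _ in range(n))
    worst = max(abs(counts[face] / n - 1 / 6) for face in range(1, 7))
    print(f"{n:>9} rolls: largest deviation from 1/6 = {worst:.4f}")
```

At 100 rolls the deviations are still visible; by a million rolls they are tiny, which is the convergence being described.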

By the way, if you roll two dice and tabulate the summarized results, you will get a bell curve with 7 being the center of the distribution. Here are the expected results, and the more often you roll, the closer you will arrive at these results (again, the law of large numbers).

Total Roll Probability
2 2.8%
3 5.6%
4 8.3%
5 11.1%
6 13.9%
7 16.7%
8 13.9%
9 11.1%
10 8.3%
11 5.6%
12 2.8%
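The table above can be reproduced exactly by enumerating all 36 equally likely ordered pairs; a quick Python check (my own, for illustration):

```python
from collections import Counter
from itertools import product

# Enumerate the 36 equally likely ordered pairs of faces and tally each total.
counts = Counter(a + b for a, b in product(range(1, 7), repeat=2))

for total in range(2, 13):
    print(f"{total:2d}  {counts[total] / 36:.1%}")
```

This prints the same 2.8% ... 16.7% ... 2.8% pattern as the table, since e.g. 7 can be made in 6 of the 36 ways while 2 can be made in only 1.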

In reply to an earlier post on Aug 16, 2007 5:54:54 PM PDT
[Deleted by the author on Aug 16, 2007 5:55:25 PM PDT]

In reply to an earlier post on Aug 16, 2007 6:02:04 PM PDT
Old Granny says:
Tossing dice works okay in Mediocristan. Well-made dice exist in Mediocristan. It works not at all in Extremistan, where you need to be prepared to deal with the impact of: tsunamis in Indonesia, terrorists flying into buildings, meeting the person (or persons) you will eventually marry, the success of Hula Hoops or Madonna, and other unforeseen events which have a significant impact either on your personal life or on our society. You did read the book, didn't you?

In reply to an earlier post on Oct 5, 2007 4:24:50 PM PDT
conservative says:
I will cut you some slack over your overlooking the fact that, since 100 is not evenly divisible by twelve, the occurrences of the twelve cannot match your all-too-neat table exactly in 100 throws. But I have to inform you that 100, or 104 or whatever, is not a large number for the purposes of your assertion. And if in fact you had some target number of throws after which you expect the distribution to match the table, then you are assuming that when you have tossed the dice one fewer time than that specified number, all but one of your twelve numbers will have come up exactly the "correct" number of times, and that when the dice are tossed the last time they will add up to exactly the number which was short the toss before. IOW, you are saying that the last toss of the dice is not random but predetermined. And indeed the penultimate dice throw must, according to your theory, have been restricted to one of only two possibilities in order for there to be the possibility that the last throw will make things come out even. It follows that coming out "even" at any predetermined number of tosses is quite improbable.

And that is aside from the fact that the distribution you delineate, which I haven't checked but think is probably right mathematically, is in fact not a normal distribution but a symmetric triangular distribution, with 3 being twice as likely as 2, 4 being three times as likely as 2, 5 being four times as likely as 2, and 7 being six times as likely as 2. And with 12 being as likely as 2, 11 as likely as 3, and 8 as likely as 6.

The Central Limit Theorem asserts that a random variable which is the sum of a large number of independent random variables - e.g., a large number of dice - will have an approximately bell-shaped distribution. Adding eight dice together would pretty much do it, but the sum of two dice does not come particularly close.
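The two-dice-versus-eight-dice claim can be tested numerically. This Python sketch (my own illustration, using a maximum-gap error measure of my choosing) builds the exact distribution of a sum of dice by convolution and compares it to a normal curve with matching mean and variance:

```python
import math
from collections import Counter

def dice_sum_distribution(n):
    # Exact probabilities for the sum of n fair dice, built by repeated convolution.
    dist = {0: 1.0}
    for _ in range(n):
        nxt = Counter()
        for total, p in dist.items():
            for face in range(1, 7):
                nxt[total + face] += p / 6
        dist = dict(nxt)
    return dist

def worst_gap(n):
    # Largest absolute gap between the exact pmf and the matched normal density.
    dist = dice_sum_distribution(n)
    mean, var = 3.5 * n, n * 35 / 12  # a single die has mean 3.5, variance 35/12
    return max(
        abs(p - math.exp(-(t - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var))
        for t, p in dist.items()
    )

for n in (2, 8):
    print(f"n = {n}: worst gap vs. matched normal curve = {worst_gap(n):.4f}")
```

The gap for eight dice comes out several times smaller than for two, consistent with the poster's point that two dice give a triangle, not a bell.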

In reply to an earlier post on Oct 8, 2007 7:38:29 AM PDT
Luis Sisamon says:
To start with, the distribution of the results of rolling two dice cannot be a normal distribution, as the values it can take are exclusively integers, so you are using an approximation of a normal curve...
I really do not get your point about "predetermination"... he is at no point talking about the outcome of individual rolls but about the distribution of a relatively large number of outcomes.
BTW, if you consider 100 not a large number... well, it all depends; for a system like rolling dice I think it is more than enough. If you want to consider an even simpler example, think of tossing a coin 100 times.

I am happy to bet with you that you will have an outcome very close to 50 heads and 50 tails, and that does not involve any predetermination in the last toss. Nope. Actually, you may get any number of heads... but most of the time you will be very close to 50. Rolling dice and tossing coins is very safe Mediocristan territory. These are well-known mechanical systems, and this book certainly does not apply to them.

And if you think about it, the number of potential outcome sequences with just 100 tosses (2^100) is a little bit larger than the age of the universe (even expressed in microseconds) according to our accepted cosmological models. You will never be able to predict which sequence you will get... but very likely it will have a number of heads very close to 50.
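The arithmetic behind this paragraph checks out, as a quick Python calculation (mine, illustrative) shows: the count of distinct sequences dwarfs the universe's age in microseconds, yet the head count still concentrates near 50:

```python
import math

# Number of distinct head/tail sequences in 100 tosses
sequences = 2 ** 100

# Age of the universe (~13.8 billion years) expressed in microseconds
age_microseconds = 13.8e9 * 365.25 * 24 * 3600 * 1e6

print(f"sequences:            {sequences:.2e}")
print(f"universe age (us):    {age_microseconds:.2e}")

# Exact binomial probability that 100 fair tosses yield between 40 and 60 heads
p = sum(math.comb(100, k) for k in range(40, 61)) / 2 ** 100
print(f"P(40 <= heads <= 60): {p:.3f}")
```

So although any particular sequence is astronomically unlikely, the aggregate count of heads is tightly predictable, which is exactly the Mediocristan behavior being described.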

This discussion

Participants:  6
Total posts:  9
Initial post:  Apr 24, 2007
Latest post:  Oct 8, 2007

This discussion is about
The Black Swan: The Impact of the Highly Improbable by Nassim Nicholas Taleb (Paperback - January 1, 2008)