August 2010
Monthly Archive
August 29, 2010
Posted by safeisrisky under
Books,
Statistics
The last few weeks have been good, for most of the trades have gone right. When something works, I generally attribute it to 90% luck + 10% logic. When something doesn't work, I usually reverse the proportions in my mind, as that helps in creating better logic/algos. It really doesn't matter what one believes when something has already worked, but one's attitude/belief system DOES matter when things don't work. On this "feeling lucky" note, I thought I should write something about Multivariate Stats.
Knowingly or unknowingly, any analyst deals with multivariate stuff if his/her study contains an analysis of more than one variable OR a variable with more than one dimension. Averages, quartiles, medians and modes are known to most of us one way or the other. But things become interesting as well as complicated when we move to the multivariate world.
This book is probably the most easily understandable text out there. Classic texts like Anderson's are laden with complicated math that would overwhelm a novice to this area. In contrast, this book can be termed a more MBAish book, with less emphasis on theorems/proofs/lemmas and more emphasis on applications. The latest edition is the sixth, which obviously means that this book is a hit with some part of the reader community. Ok, let me get on with summarizing the chapters of the book.
To start with, this book is organized in such a way that the first 4 chapters give all the math that is needed to understand multivariate analysis. One thing about working in the multivariate area is that a knowledge of matrix algebra is vital to doing even the most basic analysis in the MV world.
Chapter 1 : Applications of Multivariate Techniques
Data reduction, data sorting, investigation of dependence among variables, prediction and hypothesis testing are some of the useful applications of MV techniques. One of the first things anyone can do without going through the math is a GRAPHIC display. It is often said that a good graphic display, in all its variations, is half the analysis done. Tools available to any analyst are the usual scatterplot, marginal dot plot, scatterplot plus boxplots on the same graphic, 3D scatter plots, star plots, distance plots and Chernoff faces. I have used all these graphics at some point in time or the other, except the last one, Chernoff faces. When I first came across this type of graphic, I thought it was a pretty cool technique, though I am yet to use it in real life. The funda behind it is simple: humans find it easy to recognize faces; change a feature of a face a little and we instantly notice it. This aspect of the human brain is used to display multidimensional data in the form of human faces so that patterns can easily be detected.
As a side note, this chapter uses the Mahalanobis distance to show contours of equal density. This made me think about the concept of distance itself. Probably any high school kid who learns coordinate geometry knows the distance formula between 2 points. As he progresses, he learns more and more complicated formulae, theorems, etc. Alas! Uncertainty is never discussed. I don't recollect any teacher, till date, posing a question like the following:
If there is uncertainty in the measurement of the x and y coordinates, and let's say you know by what amounts the measurements along the x axis and the y axis are off, can you come up with an alternative to the usual distance formula?
If you think about it, this is what we find in reality. Take any real-world application: uncertainty is unavoidable. So, a question as simple as the one above is good enough to motivate a young mind to explore the problem and come up with alternative measures of distance. Well, one measure which has appeared in various applications is from an Indian scientist, Prasanta Chandra Mahalanobis. Anyways, I have digressed from the intent of the post.
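In that spirit, here is a minimal sketch (Python with NumPy; the data and the query point are made up) of how the Mahalanobis distance rescales the plain Euclidean distance by the covariance of the data:

```python
import numpy as np

# Made-up 2-D data with correlated, unequal-variance coordinates
rng = np.random.default_rng(0)
data = rng.multivariate_normal([0.0, 0.0], [[4.0, 1.5], [1.5, 1.0]], size=500)

mean = data.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(data, rowvar=False))

def mahalanobis(x, mu, vi):
    # Rescales the usual distance by the (inverse) covariance of the data
    d = np.asarray(x) - mu
    return float(np.sqrt(d @ vi @ d))

point = np.array([2.0, 1.0])
d_euclid = float(np.linalg.norm(point - mean))
d_mahal = mahalanobis(point, mean, cov_inv)
```

With an identity covariance the two distances coincide; the more correlated or unequal the axis variances are, the more the two disagree.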
Chapter 2 : Matrix Algebra & Random Vectors
Matrices are your best friends in dealing with multidimensional data. For any number cruncher, a thorough understanding of them is imperative. In that sense, this book merely scratches the surface, though it gives all the important results needed to get your hands dirty doing MV stats. Personally, I found its proof of the extended Cauchy-Schwarz inequality much more intuitive than other books'. I have always been fascinated by math inequalities. Inequalities become very powerful when used in the right application. Touch any math-fin-stat area and you are bound to see innumerable inequalities applied to real-life problems like valuation, hedging, forecasting, etc. I had my crush 🙂 on inequalities after reading the book "The Cauchy-Schwarz Master Class" by Prof. Steele. If you want to know the kinds of applications where inequalities can be used, that book is a fascinating account. Will blog about Prof. Steele's book some other day. Anyways, coming back: one of the applications of the extended Cauchy-Schwarz inequality is the optimization of quadratic forms, where the inequality connects an optimization problem to the eigenvalues of the matrix involved. A truly beautiful linkage between optimization and matrix algebra through an inequality.
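A quick numerical illustration of that linkage (NumPy; the symmetric matrix is made up): the maximum of x'Ax over unit vectors x equals the largest eigenvalue of A, attained at the corresponding eigenvector.

```python
import numpy as np

# A made-up symmetric matrix for the quadratic form x' A x
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])

# eigh returns eigenvalues in ascending order for a symmetric matrix
eigvals, eigvecs = np.linalg.eigh(A)
lam_max = eigvals[-1]          # largest eigenvalue
v_max = eigvecs[:, -1]         # its unit-length eigenvector

# No random unit vector should beat the top eigenvector
rng = np.random.default_rng(1)
xs = rng.normal(size=(1000, 2))
xs /= np.linalg.norm(xs, axis=1, keepdims=True)
forms = np.einsum("ij,jk,ik->i", xs, A, xs)   # x' A x for each row x
```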
Chapter 3 : Sample Geometry and Random Sampling
Basic properties of a p-dimensional data matrix, such as the sample mean and sample covariance, are given a geometric interpretation. Basic stuff like the mean being a projection of each column of the data onto a unit vector, the connection between the determinant of the covariance matrix and the generalized sample variance and the significance of the same, etc. is provided. If you already know how to compute the mean and covariance of a dataset, or of a linear combination of its columns, you can safely skip this chapter.
Chapter 4 : The Multivariate Normal Distribution
Well, basically this chapter is about data generated from a multivariate normal distribution. The framework of this chapter is again intuitive and nothing fancy. I took a brief pause before going through the chapter and asked myself, "What would I teach somebody about the multivariate normal distribution, if I were asked to?" Well, the following is the basic stuff I would cover in relation to X (a p-dim normal random variable):

- Basic density form of X
- Properties which would help in checking whether subsets of X follow the same kind of distribution
- How to identify the independent components of X?
- Sample mean and sample covariance of the p-dimensional random variable (RV)
- Relevance of the Mahalanobis distance and constant probability contours for a p-dim RV
- How to connect the Chi-square distribution and the ellipsoids arising out of a p-dim RV?
- How do you simulate a p-dim RV? Can you simulate given any customized estimate of the mean and covariance?
- What are the ways of estimating the covariance from the sample? What are the robust estimators? Which one to choose and why?
- Sampling distributions of the sample mean and the sample covariance matrix: the former is again a p-dim normal RV while the latter is a Wishart random variable
- Where is the Wishart distribution used? How do you simulate an RV from it?
- What are the characteristics of the Wishart distribution?
- The Law of Large Numbers and the CLT in the context of X and the sample mean
- How can you test whether the data actually comes from a p-dim normal RV?
- How can you test whether the data has no tail dependency?
- How do you transform the data so that the marginals and the joint distribution are normal?
Out of this laundry list, the book covers most of the aspects. Again, the treatment is MBAish, so you might only get an intuitive feel of things; crunching data is the only way to really understand the above stuff.
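On the simulation point, here is a minimal sketch (NumPy; the mean and covariance are made-up numbers) of the standard trick: factor the target covariance with a Cholesky decomposition and push iid standard normals through it.

```python
import numpy as np

# Made-up target mean and covariance for a 3-dim normal RV
mu = np.array([1.0, 0.0, -1.0])
sigma = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.3],
                  [0.0, 0.3, 1.5]])

rng = np.random.default_rng(42)
L = np.linalg.cholesky(sigma)            # sigma = L @ L.T
z = rng.standard_normal((100_000, 3))    # iid standard normals
x = mu + z @ L.T                         # draws with the target mean/covariance

sample_mean = x.mean(axis=0)
sample_cov = np.cov(x, rowvar=False)
```

The sample mean and sample covariance of the draws should sit close to the targets, which is an easy sanity check for any such simulator.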
Now, why the hell should the real-life data we see be a realization of a multivariate normal distribution? In 99% of the cases, especially with financial data, it will not be true. So, what's the point of going through the above stuff? All I can say for now is that it will make you skeptical and enthuse you to figure out something in the non-parametric world. Subsequently you can marry stuff from the parametric and non-parametric worlds. Also, it will make you extremely skeptical of the off-the-shelf solutions that sell-side vendors provide in the name of quant models.
Chapter 5 : Inferences about the Mean Vector
The t test is a classic test covered in any Stats 101 class for testing sample means. By squaring the t statistic, one can use an equivalent F statistic. This t^2 statistic in the multivariate case becomes Hotelling's T^2, named in honor of Harold Hotelling, a pioneer in multivariate analysis. Thankfully there is a way to relate Hotelling's T^2 to the F distribution, which makes it easy to check the null and create confidence intervals for the component means. The importance of this chapter lies in the formulation of control charts for multidimensional data: separate univariate control charts at a specific sigma level are not going to be useful, so a chart based on Hotelling's T^2 is used instead.
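A minimal sketch of the statistic (NumPy; the sample and the null mean are made up): T^2 = n (x̄ - μ0)' S⁻¹ (x̄ - μ0), and under the null the rescaled quantity (n - p)/(p(n - 1)) · T^2 follows an F(p, n - p) distribution.

```python
import numpy as np

rng = np.random.default_rng(7)
n, p = 50, 3
X = rng.multivariate_normal(np.zeros(p), np.eye(p), size=n)   # made-up sample
mu0 = np.zeros(p)                                             # hypothesised mean

xbar = X.mean(axis=0)
S = np.cov(X, rowvar=False)                                   # sample covariance
T2 = n * (xbar - mu0) @ np.linalg.inv(S) @ (xbar - mu0)

# Under the null, (n - p) / (p * (n - 1)) * T2 follows an F(p, n - p) distribution,
# so it can be compared against F tables to accept/reject the null
F_stat = (n - p) / (p * (n - 1)) * T2
```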
Chapter 6 : Comparison of Several Multivariate means
This chapter is basically an extension of Chapter 5 to more than one mean. Well, the statistic remains the same, Hotelling's T^2, except that it is valid under a specific assumption about the covariance matrices. This chapter is pretty useful as it covers testing covariance matrices across populations, something that is pretty useful in finance. Imagine you have n assets and a sample covariance matrix over a time period t1 to t2. One of the basic questions to ask is, "Has the covariance matrix changed?" Well, this chapter clearly shows you the way to test the invariance of the covariance structure. There is also a mention of Path Analysis, MANOVA and the stats behind them. I am going to refer to this chapter very often, for it has a lot of stuff relevant to finance.
Chapter 7 : Multivariate Linear Regression Models
Multivariate linear regression models are the most basic models, which any econometrics text would cover extensively. Starting from the data matrix and the formulation of a linear regression equation, the entire regression structure is built ground up. Thankfully matrix notation is used throughout, as it makes the transition from a single predictor to multiple predictors easier. MLE estimates, their distributions, inferences about the regression model and likelihood ratio tests for the parameters are all discussed thoroughly. If you are well versed with the regression model, this chapter will serve as a quick recap of all the concepts, including outlier detection, residual analysis, etc.
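In matrix notation the least-squares estimate is B̂ = (X'X)⁻¹X'Y, and the same formula works unchanged when Y has several response columns. A small sketch (NumPy; the design and coefficients are made up):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
# Design matrix: intercept plus two made-up predictors
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
B_true = np.array([[1.0, -1.0],
                   [2.0,  0.5],
                   [0.0,  3.0]])   # 3 columns of X, 2 response variables
Y = X @ B_true + 0.1 * rng.normal(size=(n, 2))

# Least-squares estimate B_hat = (X'X)^{-1} X'Y, via lstsq for numerical stability
B_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
residuals = Y - X @ B_hat      # the raw material for residual/outlier analysis
```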
Chapter 8 : Principal Components
PCA is basically used for data reduction and interpretation. Unlike its counterpart factor analysis, which is extremely popular in finance, PCA works on the covariance matrix, and there is no need for the underlying data to be a realization of a multivariate normal distribution. PCA lines up linear combinations of the p-dimensional vectors in such a way that the principal components are ordered by the variation captured. So, the resulting principal components are nothing but an appropriate basis for the data matrix such that maximum variation is captured along each of the basis vectors. Spectral decomposition is used to calculate the principal components, and the eigenvalues associated with the components play a very important role in the analysis. Very low eigenvalues typically mean that there is a dependency in the data structure and a subset of variables can be removed for better interpretation. A few high eigenvalues could mean that there are a few major modes which give rise to the variation in the data, meaning most of the variation seen is common-cause variation. Some graphical tools are mentioned in the chapter, like the scree plot, the T^2 control chart and constant elliptical density charts, that can be used in the context of multivariate data. Which graphic to use obviously depends on the context and the nature of the data used for the analysis.
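A minimal sketch of PCA via the spectral decomposition of the sample covariance (NumPy; the correlated toy data is made up so that one direction dominates):

```python
import numpy as np

rng = np.random.default_rng(11)
# Made-up data: the first two columns share one dominant direction of variation
z = rng.normal(size=1000)
X = np.column_stack([z + 0.1 * rng.normal(size=1000),
                     2 * z + 0.1 * rng.normal(size=1000),
                     rng.normal(size=1000)])

S = np.cov(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(S)            # spectral decomposition of S
order = np.argsort(eigvals)[::-1]               # sort descending by variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

scores = (X - X.mean(axis=0)) @ eigvecs         # principal component scores
explained = eigvals / eigvals.sum()             # share of variation per component
```

A scree plot is just `explained` drawn against the component index; here the first component should capture the bulk of the variation.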
Chapter 9 : Factor Analysis
Factor analysis is often used synonymously with PCA, but there is a fundamental difference between the two. PCA works on the covariance matrix and has no model assumptions, while factor analysis by definition hypothesizes a model and works on correlation matrices. So, when one hypothesizes a model, one obviously needs to estimate the parameters and test the assumptions. This is where factor analysis gets tricky. There are a ton of assumptions and at least 3-4 estimation methods. Also the solutions are not unique, so the topic has spawned a ton of literature on factor rotations, etc. I read somewhere that if your initially hypothesized model does not work properly out of sample, don't rotate factors and all that crap. Just ditch your model and start from scratch.
Will summarize the remaining three chapters of the book at a later date. After a typing marathon of 2 posts today, my fingers and my mind are crying for a break 🙂
August 29, 2010
Posted by safeisrisky under
Books,
Philosophy
Anne Morrow Lindbergh decides to take a brief retreat from her life to spend some time on Florida's Captiva Island and ponder several aspects of life. This book is a compilation of those thoughts. First published in 1955, the book was a big hit and was translated into 45-odd languages. More recently, there was a 50th-anniversary edition with a preface written by Lindbergh's daughter, Reeve Lindbergh.
The book is about 5 types of shells that the author finds on the island. She relates those shells to life situations and talks about ways to deal with them.
Channelled Whelk:
The author looks at the bare beauty of this shell and realizes that her own shell is ugly. The way in which accumulations of things, thoughts and people have cluttered her life and made her shell ugly is the lesson she draws from this shell. Simplification of life is one of the ways to build a beautiful shell of your own. "To ask how little, not how much, can I get along with?" is the essence of a minimalist life, and this shell reminds her of it each day.
Moon Shell :
The author describes this shell as an island, set in ever-widening circles of waves, alone, self-contained and serene. We are all, in the last analysis, alone. And this basic state of solitude is not something we have any choice about. Instead of accepting that we are solitary, we attract endless distractions into our lives, and finally the distractions consume us completely. She suggests a way to carry a moon shell inside each one of us: if you can take an hour off, or a day off, for solitude at regular intervals, you will see that your life becomes that much richer.
Double-Sunrise:
The author describes this shell as an image of pure relationship: smooth, whole and unblemished. We experience only moments of such feelings, but sometimes we cling to the thought that such moments should last forever. A couple would want the relationship to always be as it was during courtship, a startup founder would love to keep the culture of the company the same forever, a mother might want her son to always be kind, tender and loving, a son might want his parents to always be nice to him, and so on. There are umpteen situations where we experience a few moments or days of the Double-Sunrise shell and want that situation to remain forever. This is a false notion, says the author, for things cannot remain the same as situations change. But the Double-Sunrise shell reminds each one of us that we must take time out and recreate such moments from time to time. This helps us see relationships with fresh vigour and in fact helps us come to terms with the tides that ebb and flow in life.
Oyster Bed :
In the author's words, "Oyster Shell is humble and awkward and ugly. It is slate-colored and unsymmetrical. Its form is not primarily beautiful but functional." She compares the Oyster Bed to the middle years of a marriage. It could as well be the years just after a startup has grown big enough that it can no longer be called a startup, though my startup analogy is probably not as apt as that of a maturing marital relationship. A spreading family and growing children might be the situation the Oyster Bed best symbolises. This shell symbolises the time when you are tied up with so many aspects of life that the beauty lies in the functional aspect of the role.
Argonauta :
"There are in the beach-world certain rare creatures, the Argonauta, who are not fastened to their shell at all. It is actually a cradle for the young, held in the arms of the mother Argonauta who floats with it to the surface, where the eggs hatch and the young swim away. Then the mother Argonauta leaves her shell and starts another life. Can we middle-aged Argonauts, when we outgrow our Oyster Bed, look forward to a similar freedom of the nautilus who has left its shell for the open seas?" questions the author. She then describes a day spent with her sister on the beach, tries to pin down what made it a perfect day, and writes these beautiful words:
What has made the day so perfect? To begin with, it is a pattern of freedom. Its setting has not been cramped in space or time. An island, curiously enough, gives a limitless feeling of both. Nor has the day been limited in kinds of activity. It has a natural balance of physical, intellectual and social life. It has an easy unforced rhythm. Work is not deformed by pressure. Relationship is not strangled by claims. Intimacy is tempered by lightness of touch. We have moved through our day like dancers, not needing to touch more than lightly because we were instinctively moving to the same rhythm.
A good relationship has a pattern like a dance and is built on some of the same rules. The partners do not need to hold on tightly, because they move confidently in the same pattern, intricate but swift and free, like a country dance of Mozart's. To touch heavily would be to arrest the pattern and freeze the movement, to check the endlessly changing beauty of its unfolding. There is no place here for the possessive clutch, the clinging arm, the heavy hand; only the barest touch in passing. The joy of such a pattern is not only the joy of creation or the joy of participation, it is also the joy of living in the moment.
She ends the book by saying that she would keep these 5 shells on her table at her home back in the city to constantly remind herself of what they stand for in her mind.
Takeaway :
This book can be read by anyone, at any stage of their life. One will always find some situation in the book that one can relate to. The author's powerful words will definitely make you reflect on your own life. If you read this book in silence, with the music off, the TV switched off, cell phones off, cutting out all the usual distractions, a few pages are enough to leave you at peace with yourself and the life around you.
One can treasure this book for life.
August 28, 2010
Posted by safeisrisky under
Books,
Math
This book is like "The Lady Tasting Tea" for math. "The Lady Tasting Tea" is one of my favourite books on the history of statistics, and the tone of this book, the examples used and the historical perspective given make me think of it the same way. What does the book contain? It contains 12 chapters on various aspects of math, with history and examples blended in such a way that a reader will love the pace of the book.
0. Head for Numbers
The book starts off by questioning the concept of NUMBER itself. Is it a cultural artefact, or is it something we have an innate ability to comprehend? Alex cites an interesting piece of research which reaffirms our tendency to think on a logarithmic scale. Innately, we compare numbers by their ratios and not by the distance between them. So, the distance between 10 and 100 appears the same as that between 100 and 1000, though our schooling in math says they are different. The point made by the author is that we are innately wired for the log scale BUT we encounter the linear scale in our math education. So, the suggestion is to marry both scales somehow and lessen the math phobia among kids at the school level. If understanding quantities approximately, in terms of estimating ratios, is a universal human intuition, then the question remains: why should kids learn times tables instead of sharpening their estimation skills?
I have a similar feeling about the distance formula taught at the school level. Every high school kid, when given two Cartesian coordinates (x1, y1) and (x2, y2), can write the Euclidean distance in a jiffy. But ask the same kid, "What happens if points on the x axis have a systematic error in measurement with average e1 and points on the y axis have a systematic error with average e2?" He would search for a formula in his textbook and give up. Basically, he is not taught that Euclidean distance is just one of MANY measures of distance between two points, and that in the real world nobody hands him perfect coordinates of anything. Real-world data is, by default, uncertain. Shouldn't estimation skills be taught ground up, starting from as simple a concept as Euclidean distance? What should be done to get kids to explore beyond the rudimentary knowledge? Maybe better teachers!
1. The Counter Culture:
The author then moves on to talk about the counting systems that have evolved over the centuries. He traces the evolution of various bases like 20, 33, 8, 12, 60 and finally the binary system. The chapter also talks about how the "way" in which we say numbers affects learning speed. There is also a mention of the amazing Japanese kids who do mind-blowing arithmetic calculations using the soroban. In fact, years ago I saw a YouTube video (below) about it and was truly fascinated by what I saw. BTW, there is an elderly couple in Goregaon, Mumbai who have been teaching the soroban to kids and teachers in India. According to them, they have seen kids doing extremely fast mental calculations in a flash of a second, though none of them have reached the state of manipulating the beads in their mind without the actual device. I guess it will take some time, but soon some talented Indian kid will amaze us all like the kid in the video below.
2. Behold!
In this chapter, the author talks about geometry, origami, Euclid's contribution, etc. You will never look at origami the same way again after reading the math mentioned in this chapter.
3. Something about Nothing:
There is an account of the author travelling to India to meet a Sankaracharya, a person who is given demigod status in India. The Sankaracharya lineage is also responsible for propagating the Vedic mathematics sutras, which have supposedly been culled out of the Vedas, the sacred texts of the Hindus. The first person to have inferred the sutras/principles from the Vedas was Bharati Krishna Tirthaji, who in the role of Sankaracharya toured the country giving lectures on Vedic math. There is also a mention of shunya/zero in the Indian math context, which led to quicker calculations and, more importantly, to the evolution of the place-value system in arithmetic. The concept of zero made a lot of calculations easier and thus led to the Arabic numerals, which stamped their supremacy over other number systems such as the Greek and Roman ones. By their very nature, the other systems were cumbersome for large-number calculations; without a place-value system, long calculations were painful. An offshoot of those systems is that people did not think about very large numbers. In contrast, Buddha is supposed to have thought of very, very small numbers and very, very large numbers, to the order of 10^421. The Arabic numerals with zephyr (0) were a breakthrough. The author ends the chapter reflecting on the concept of nothingness, which is an underlying principle of Indian spirituality, and muses that zero was a manifestation of India's spirituality into the math world.
4. Life of Pi
Pi is one of the most fascinating numbers in the math world, and it is no surprise that the author spends some time talking about it. Pi has such a colorful history that entire books have been devoted to the number. The author condenses all that into a couple of pages to convey the vibrant nature of a number whose digits have enthralled mathletes, scientists, random-number-generating algos, criminals, psychopaths, etc. Why anyone should try to find more decimals of pi is a mystery. It has captivated a lot of people, and out of all this craze there has been one good practical application: testing and comparing the performance of computing devices. How fast a formula or a machine converges to the digits of Pi is a good measure for such comparisons. Pi is known up to 2.7 trillion places after the decimal, but that is not stopping people from exploring. The race to find out whether Pi is a "normal number" has been on for ages.
5. The xfactor
Mathematical notation plays an important role in the formulation of any problem; in fact, the right notation is equivalent to a problem half solved. The author explores the prevalence of x in most algebraic equations and gives a quick historical narrative of quadratics, cubics and quartics. The evolution of notation also gave rise to new concepts like logarithms. A history of slide rules, the Curta and the HP-35 calculator is given, for the reader to get a sense of the importance of calculating tools in the advancement of science and technology. The chapter ends with the narration of a breakthrough in math: connecting algebra and geometry using Cartesian coordinates. Descartes was the man behind it, and thus started a love affair between algebra and geometry. In today's world we hardly notice the dichotomy when we solve a system of equations using vector spaces. However, one must pause once in a while to appreciate the fact that someone took the effort to make the connection, because of which we are able to use concepts from geometry and algebra together to solve problems. Imagine statistics without any geometry. Statistics is one of those fields where the marriage between algebra (equations) and geometry is so powerful an alliance that it helps in understanding the data-driven world a lot better.
6. Playtime
An interesting note on the history of popular puzzles can be found here. Magic squares, Latin squares, Sudoku, the 15 puzzle, tangrams, Rubik's cube, Tetris, etc. are all mentioned, and the chapter leaves the reader with the feeling that while puzzles might seem practically useless at the outset, when looked at closely, some of the most interesting developments in math have come from people pondering over puzzles. Graph theory, combinatorics and developments in probability theory are all offshoots of solutions to puzzles.
7. Secrets of Succession
This chapter explores sequences and, more importantly, the role of primes in number theory. Like pi, prime numbers have fascinated people for ages and continue to do so. Application-wise, primes are useful in encryption, random number generation algos, etc.
8. Gold finger
A book on math without a mention of the golden ratio is unthinkable, as it is a ratio evident in a wide variety of natural phenomena. One piece of trivia I learnt from this book is that the iPod incorporates the golden ratio in its design. Nature evokes beauty, and it is no wonder that the Apple guys learnt things from nature.
9. Chance is a fine thing
Girolamo Cardano, a 16th-century gambler, was the first to conceive of the idea of a frequentist world, where probabilities are proportional to the frequency of occurrence of events. The important work on probability was kick-started by interesting questions from the Chevalier de Mere. A question on the probability of obtaining double sixes on throwing 2 dice and the "problem of points" were posed to Blaise Pascal, who corresponded with Fermat to give mathematical form to chance events. Gambling was an area where people could easily see probability in play along with the payoff from the bets. The book then goes into various gambling machines and games, explaining that the law of large numbers makes sure that casinos never go bankrupt. The story of the deadly Shannon-Thorp duo, who discovered a way to beat the casino, is given. Thorp went further to master blackjack and other games where the gambler might have a > 100% payback percentage. His book "Beat the Dealer" spawned a ton of math related to gambling. Thorp went on to create a hedge fund which to this date is revered in the finance world for its awesome CAGR returns. For interested readers, a journalistic account of the math used by Thorp is given in a book titled "Fortune's Formula", which also mentions the Kelly criterion (a betting strategy) that has become an indispensable risk management criterion in various types of activities in the financial domain. A few interesting pieces of trivia from the book:

- Did you know that John Venn, popularly known for his Venn diagram, had developed a powerful graphic to show randomness?
- Persi Diaconis and Frederick Mosteller's law of truly large numbers: with a large enough sample, any outrageous thing can happen.
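As an aside, the Kelly criterion mentioned above has a one-line form for a simple binary bet. A sketch (pure Python; the 55% edge is a made-up example): for win probability p and net odds b, the optimal fraction of bankroll to stake is f* = p - (1 - p)/b.

```python
def kelly_fraction(p, b):
    """Optimal bankroll fraction for a bet won with probability p, paying
    net odds b (win b units per unit staked): f* = p - (1 - p) / b.
    A negative value means there is no edge, so don't bet."""
    return p - (1.0 - p) / b

# Made-up example: a 55% coin-flip bet at even odds (b = 1)
f = kelly_fraction(0.55, 1.0)   # stake about 10% of the bankroll
```

Betting more than this fraction raises the risk of ruin without raising the long-run growth rate, which is why it shows up in risk management.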
10. Situation Normal
The Gaussian distribution is present in many phenomena in the world. The author introduces this fascinating discovery and the people behind it. Carl Friedrich Gauss, Poincare, Adolphe Quetelet and Francis Galton are the characters that make up a great story, and the story leads up to concepts such as correlation, regression towards the mean, etc. William Sealy Gosset is also mentioned, in the context of his superb discovery of the t distribution, which makes the underlying distribution of the random variables largely irrelevant as the statistic is based on averages, a fact which has led to major offshoots in estimation procedures and hypothesis testing.
11. The end of the line
The last chapter contains a narrative about non-Euclidean geometry, one of whose main postulates is that on a hyperbolic surface there are an infinite number of parallel lines through a point. The chapter ends with Hilbert spaces and the types of infinities. The book closes with a nice thought: math has come a long way, from a time when there were too many things and not enough numbers to count them, TO Georg Cantor, who provided so many different kinds of infinities that there are no longer enough things to count 🙂
Takeaway :
It gives you an entertaining and lively account of the math developments that have happened over the centuries. Definitely worth reading.
August 15, 2010
Posted by safeisrisky under
Books,
Statistics
The book's intro caught my attention: "This book contains stories that provide real life lessons on understanding and managing uncertainty." That statement was a good enough motivator to read through the 21 short stories about uncertainty.
The book starts off by describing the standard error equation as the most dangerous equation, dangerous because of its profound misuse. Well, any elementary stats book tells you that the standard deviation of the sample mean is inversely proportional to the square root of the sample size. The author talks about various situations where a misunderstanding of this simple rule caused tremendous losses.

Trial of Pyx where the mint owners did a fraud on the gold coin manufacturing for over 600 years.

Gates foundation giving millions of dollars to support the cause of small schools , assuming that small schools yield better performance.

Lawrence Summers innocous comment costing him the job at Harvard

Studies where the underlying populations are heterogeneous in size but the conclusions drawn on the metric of study do not mention this aspect, whereby a lot of the studies one sees in the newspapers are flawed.
All these illustrate one single point, i.e., smaller sample sizes cause higher variation in the tails. As a side note, here is a visual that summarizes the Trial of the Pyx and the 600 years of blunder.
The desirable variation in the weight of a single coin is shown as the green bar in the illustration on the left. However, the trials were made on batches of 100 coins, and the false notion that variation scales proportionately to the number of coins was held for 600 years. Instead of setting restriction limits based on the green area (covered in the illustration towards the right), the restrictions were placed based on the red area! This was a big blunder, because the variation of a batch scales as the square root of the number of coins tested. For 600 years, mint owners could create high variation in the gold coins and never get caught in the trial runs. Heaven for the mint people, and hell for the barons who supplied gold to the mint owners! No wonder the author terms this the “Most Dangerous Equation”.
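The square-root scaling is easy to verify in a few lines. The sketch below is purely illustrative (the target weight, sigma, and batch size are made-up numbers, not historical values from the Trial of the Pyx): it simulates many batches of 100 coins and shows that the standard deviation of the batch total is about sigma × √100 = 10×sigma, nowhere near the naive linear guess of 100×sigma.

```python
import random
import statistics

# Illustrative parameters (assumed, not historical): each coin's weight
# varies around a target with standard deviation sigma. The trial weighed
# batches of coins together, so what matters is the spread of the batch total.
random.seed(42)
target, sigma, n_coins, n_trials = 128.0, 1.0, 100, 20_000

batch_totals = [
    sum(random.gauss(target, sigma) for _ in range(n_coins))
    for _ in range(n_trials)
]

sd_total = statistics.stdev(batch_totals)
print(f"SD of batch total : {sd_total:.2f}")          # close to sigma*sqrt(100) = 10
print(f"Naive linear guess: {sigma * n_coins:.2f}")   # 100, off by a factor of 10
```

Setting tolerance limits by the linear guess gives a band ten times wider than the true spread of the batch total, which is exactly the slack the mint masters exploited.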
Part II of the book covers political issues from a graphics and stats perspective. The takeaway from the bunch of stories mentioned is that articles cited in the media, when looked at in detail, tell a different story than their captions. One nice learning from this part of the book is “Never trust a graph which has two y-axes, one on either side”. It is extremely easy to manipulate the scales of the y-axes and tell any story!
Part III of the book takes a dig at the testing machinery in the educational system, where uncertainty and its shades have not been appreciated properly. The first story talks about ways to standardize test measures across normal students and disabled students. How can one mix the scores of a person who is given unlimited time to solve a test with those of a student who must answer in a stipulated time? The second story talks about a published report on white vs. black performance on SAT scores and uses a simple simulation to show the flaw in the argument. This mode of thinking is useful in many places, even more so in the field of finance, where any price evolution is one realization of some stochastic process; it would be foolhardy to build a model or estimate without taking the alternate worlds into consideration. The third story is very interesting, as it shows that the standard error is not really useful in the context of an academic entrance exam, since such exams are more of a contest. The standard error is a measure of the reliability of observations, and its use as a supporting argument in the case of an exam is flawed. Whenever one uses the standard error of an estimate, it is important to distinguish whether the situation is a diagnostic test or a contest; for a contest, the statistical machinery is not useful.
Part IV of the book explores graphical aspects of depicting uncertainty. Three principles of effective display are mentioned, which can be called the “THREE COMMANDMENTS of effective data display”:

Remind us that the data being displayed do contain some uncertainty, and then

Characterize the size of that uncertainty as it pertains to the inferences we have in mind, and in so doing

Help keep us from drawing incorrect conclusions through the lack of a full appreciation of the precision of our knowledge.
I guess it makes a LOT of sense to keep the above things in mind whenever a data display for ANY purpose is produced. It goes without saying that these apply even more to the financial world than to other disciplines.
This part of the book talks about graphic displays such as the Catalogtree, the confidence aperture plot, basic stem-and-leaf plots, error plots, etc. One takeaway for me was a subtle point about displaying time series data: the author shows examples where doing away with the legend and labelling each series at both ends gives a far clearer display than legends, colors, and all the associated jazz.
The Mendel effect is a good story about the importance of choosing the correct binning criterion. Basically, if you have two uncorrelated variables, you can cut the data to suit your need! If you want to show that the means of the groups are increasing, decreasing, or constant, it is very easy to cut the groups accordingly and create a FRAUD graph. The good old scatterplot is the solution to these fraud graphs.
After reading this part of the book, one would definitely get motivated to refer to the works of William Playfair, Tukey’s Exploratory Data Analysis, and Edward Tufte’s books.
Part V of the book is a historical narrative of the people who introduced innovative graphics and maps to display 4/5/6-dimensional data. Some of the stalwarts covered in this part are William Playfair, Charles Joseph Minard, and Jacques Bertin. Various graphics from each of these individuals are displayed and explained to give a sense of the richness a well-thought-out graphic can bring.
Takeaway
A key tool to understand uncertainty is a graphic display. A display that shows the data in all of their variability acts as a control, preventing us from drawing inferences based on a single summary number (e.g., the mean) that can only feebly characterize a complex situation.
As they say, stories have immense power to communicate. This book is primarily in a story format, and hence a reader is likely to remember the morals of these stories forever.