In this post, I will attempt to briefly summarize the main points of the book.

An optimistic skeptic

The chapter starts off by noting that there are indeed people in the world who become extremely popular, make tons of money, and get all the press coverage by providing perfect explanations after the fact. The author gives one such example of a public figure who rose to fame explaining events post-fact: Tom Friedman. At the same time, there are many people who are lesser known in the public space but have an extraordinary skill at forecasting. The purpose of the book is to explain how these ordinary people come up with reliable forecasts and beat the experts hands down.

Philip Tetlock, one of the authors of the book, is responsible for a landmark study spanning 20 years (1984-2004) that compared expert predictions with random guessing. The conclusion was that the average expert had done little better than guessing on many of the political and economic questions. Even though the right comparison would have been a coin toss, the popular financial media used the "dart-throwing chimpanzee pitted against experts" analogy. In a sense, the analogy was more "sticky" than the mundane word "random". The point was well taken by all: experts are not all that good at predicting outcomes. However, the author is disappointed that his study has been used to dish out extreme opinions about experts and forecasting abilities, such as "all experts are useless". Tetlock believes that it is possible to see into the future, at least in some situations and to some extent, and that any intelligent, open-minded, and hardworking person can cultivate the requisite skills. Hence one needs an "optimistic" mindset about predictions. It is foolhardy to hold the notion that all predictions are useless.

The word "Skeptic" in the chapter’s title reflects the mindset on must possess in this increasingly nonlinear world. The chapter mentions an example of Tunisian man committing suicide that leads to a massive revolution in the Arab world. Could anyone have predicted such catastrophic ripple effects of a seemingly common event ? It is easy to look backward and sketch a narrative arc, but difficult to actually peer in to the future and forecast? To make effective predictions, the mindset should be that of an "optimistic skeptic".

So is reality clock-like or cloud-like? Is the future predictable or not? These are false dichotomies, the first of many we will encounter. We live in a world of clocks and clouds and a vast jumble of other metaphors. Unpredictability and predictability coexist uneasily in the intricately interlocking systems that make up our bodies, our societies, and the cosmos. How predictable something is depends on what we are trying to predict, how far into the future, and under what circumstances.

In fields where forecasts have been reliable and good, one sees that the people who make them follow a "forecast, measure, revise, repeat" procedure. It’s a never-ending process of incremental improvement that explains why weather forecasts are good and slowly getting better. Why is this process non-existent in many stock market and macroeconomic predictions? The author says that it is a demand-side problem: the consumers of forecasting don’t demand evidence of accuracy, and hence there is no measurement.

Most readers and the general public might be aware of the research done by Tetlock that produced the dart-throwing-chimpanzee article. However, this chapter talks about another research study, the Good Judgment Project, that Tetlock and his research partner (and wife) started in 2011. The couple invited volunteers to sign up and answer well-designed questions about the future. In total, there were 20,000 people who volunteered to predict event outcomes. The author collated the predictions from this entire crew and entered a tournament conducted by IARPA, an intelligence research agency. The game comprised predicting events spanning a month to a year into the future. It was held between five teams, one of which was GJP. Each team would effectively be its own research project, free to improvise whatever methods it thought would work, but required to submit forecasts at 9 a.m. eastern standard time every day from September 2011 to June 2015. By requiring teams to forecast the same questions at the same time, the tournament created a level playing field-and a rich trove of data about what works, how well, and when. Over four years, IARPA posed nearly five hundred questions about world affairs, yielding one million individual judgments about the future. Year after year, the motley crowd of forecasters of the Good Judgment Project beat the experts hands down. The author says that there are two major takeaways from the performance of the GJP team:

  1. Foresight is real : They aren’t gurus or oracles with the power to peer decades into the future, but they do have a real, measurable skill at judging how high-stakes events are likely to unfold three months, six months, a year, or a year and a half in advance.
  2. Forecasting is not some mysterious gift : It is the product of particular ways of thinking, of gathering information, of updating beliefs. These habits of thought can be learned and cultivated by any intelligent, thoughtful, determined person.

The final section of the first chapter contains the author’s forecast about the entire field of forecasting. With machines doing most of the cognitive work, there is a threat that forecasting done by humans will be no match for supercomputers. However, the author feels that humans are underrated (a book-length treatment has been given by Geoff Colvin). In the times to come, the best forecasts will result from human-machine teams rather than from humans-only or machines-only forecasts.

The book is mainly about a specific type of people, the roughly 2% of the volunteer forecasters who did phenomenally well. The author calls them "superforecasters". Have they done well because of luck or skill? If it is skill, what can one learn from them? These are some of the teasers posed in the first chapter of the book.

Illusions of Knowledge

Given the number of books and articles that have been churned out about our cognitive biases, there is nothing really new in this chapter. The author reiterates the System-1 and System-2 thinking from Daniel Kahneman’s book. He also talks about the perils of being over-confident of our own abilities, and about various medical practices that were prevalent before the advent of clinical trials, when many practitioners advocated medicine based on their "tip-of-your-nose" perspective without vetting their intuitions.

The tip-of-your-nose perspective can work wonders but it can also go terribly awry, so if you have the time to think before making a big decision, do so-and be prepared to accept that what seems obviously true now may turn out to be false later.

The takeaway from this chapter is obvious from its title. One needs to weigh both System-1 and System-2 thinking in most decisions. An example of Magnus Carlsen illustrates this kind of mixed thinking: in an interview, the grandmaster disclosed that his intuition tells him the possible moves almost immediately (within 10 seconds), and he spends most of his time double-checking that intuition; only then does he make the next move in a chess tournament. It’s an excellent practice to mix System-1 and System-2 thinking, but it requires conscious effort.

Keeping Score

The chapter starts with the infamous statement of Steve Ballmer, who predicted that the iPhone was not going to gain significant market share. To evaluate Ballmer’s forecast in a scientific manner, the author looks at the entire content of Ballmer’s speech and finds so many vague terms in the statement that it is difficult to give a verdict on the forecast. Another example is the "open letter" sent to Bernanke by many economists, asking him to stop QE in order to restore stability. QE did not stop, and the US has not seen any of the dire consequences the economists had predicted. So, is the forecast wrong? Again, the forecast made by the economists is not worded precisely enough in numerical terms for one to evaluate it. The basic message the author tries to put across is that judging forecasts is difficult.

The author’s basic motivation to conduct a study on forecasting came while sitting on a panel of experts who were asked to predict the future of Russia. Many of the forecasts were a complete disaster. However, that did not make the experts humble. No matter what had happened, they would have been just as adept at downplaying their predictive failures and sketching an arc of history that made it appear they saw it coming all along. In such a scenario, how does one go about testing forecasts? Some forecasts have no timelines. Some are worded in vague terms. Some are not expressed in numbers. Even if there are numbers, the event cannot be rerun, so how does one decide whether it was luck or skill? We cannot rerun history, so we cannot judge one probabilistic forecast-but everything changes when we have many probabilistic forecasts. Having many forecasts lets one pin down two essential features of any forecast analysis: calibration and resolution. Calibration tests whether the forecast probabilities and the actual frequencies are in sync. Resolution tests whether the forecasts are decisive probabilistic estimates rather than hedged ones hovering around 40%-60%. The author takes all of the above into consideration and starts a 20-year project (1984-2004) that goes like this:

  • assemble experts in various fields
  • ask a large number of questions with precise time frames and unambiguous language
  • require that forecast be expressed using numerical probability scales
  • measure the calibration of the forecasters
  • measure the resolution of the forecasters
  • use the Brier score to evaluate the distance between the forecast and the actual outcome

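The Brier score mentioned in the last bullet can be illustrated with a small sketch. The numbers below are hypothetical, and this is the binary simplification (0 is a perfect score, 0.25 is what you get by always saying 50%); Tetlock’s study uses the original Brier formulation, which sums over all answer categories.

```python
def brier_score(forecasts, outcomes):
    """Mean squared difference between probability forecasts (0-1)
    and outcomes (1 = event happened, 0 = it did not)."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Confident and correct scores near 0; hedging at 50% scores 0.25.
print(brier_score([0.9, 0.2, 0.8], [1, 0, 1]))  # (0.01 + 0.04 + 0.04) / 3 = 0.03
```

The score rewards both calibration and resolution at once: a forecaster who is confidently wrong is punished far more than one who hedged.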
The author patiently conducts the study for 20 years to see the results of all the forecasts. The following are the findings/insights from the project :

  • Making an analogy, the author says big-idea thinkers are akin to "hedgehogs" and many-idea thinkers are akin to "foxes"
  • Foxes were better forecasters than Hedgehogs
  • Foxes don’t fare well with the media. The media prefers authoritative statements to probabilistic ones.
  • Aggregating across a diverse set of opinions beats hedgehogs. That’s why averaging several polls gives a better result than any single poll. This doesn’t mean the "wisdom of any sort of crowd" works; it means the "wisdom of a certain type of crowd" works.
  • The best metaphor for developing various perspective is to have a dragonfly eye. Dragonflies have two eyes, but theirs are constructed very differently. Each eye is an enormous, bulging sphere, the surface of which is covered with tiny lenses. Depending on the species, there may be as many as thirty thousand of these lenses on a single eye, each one occupying a physical space slightly different from those of the adjacent lenses, giving it a unique perspective. Information from these thousands of unique perspectives flows into the dragonfly’s brain where it is synthesized into vision so superb that the dragonfly can see in almost every direction simultaneously, with the clarity and precision it needs to pick off flying insects at high speed. A fox with the bulging eyes of a dragonfly is an ugly mixed metaphor but it captures a key reason why the foresight of foxes was superior to that of hedgehogs with their green-tinted glasses. Foxes aggregate perspectives.
  • Simple statistical models of the AR(1) and EWMA kind performed better than both hedgehogs and foxes
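The benefit of aggregation can be sketched with hypothetical numbers. Because squared error is convex, the crowd average can never score worse on the Brier metric than the average individual forecaster does (Jensen’s inequality); with diverse opinions, it usually scores strictly better.

```python
def brier(p, outcome):
    """Squared error of a single probability forecast."""
    return (p - outcome) ** 2

forecasts = [0.9, 0.5, 0.7]   # three forecasters' probabilities for one event
outcome = 1                   # the event happened

crowd = sum(forecasts) / len(forecasts)          # 0.7
crowd_error = brier(crowd, outcome)              # (0.7 - 1)^2 = 0.09
mean_individual_error = sum(brier(p, outcome) for p in forecasts) / len(forecasts)
# mean_individual_error = (0.01 + 0.25 + 0.09) / 3 ≈ 0.117, worse than 0.09
```

This is the arithmetic behind "averaging several polls beats any single poll"; the caveat from the text still applies, since the gain comes from genuinely diverse perspectives, not from averaging copies of the same view.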


The chapter starts off recounting the massive forecasting failure of the National Security Agency, the Defense Intelligence Agency, and the thirteen other agencies that constitute the intelligence community of the US government. These agencies held a consensus view that Iraq had weapons of mass destruction, a view that made everyone support Bush’s policy of waging the Iraq war. After the invasion in 2003, no WMDs were found. How come agencies that employ close to twenty thousand intelligence analysts were so wrong? Robert Jervis, who has critically analyzed the performance of these agencies over several decades, says that the judgment was a reasonable one, but wrong. This statement does require some explanation, and the author provides the necessary details. The takeaway from the story is that the agencies made certain errors; avoiding them would have scaled back the probability attached to the consensus view. Who knows, it might have changed the course of Iraq’s history.

After this failure, IARPA (Intelligence Advanced Research Projects Activity) was created in 2006. Its mission was to fund cutting-edge research with the potential to make the intelligence community smarter and more effective. It approached the author with a specific type of game in mind. IARPA’s plan was to create tournament-style incentives for top researchers to generate accurate probability estimates for Goldilocks-zone questions. The research teams would compete against one another and against an independent control group. Teams had to beat the combined forecast-the "wisdom of the crowd"-of the control group. In the first year, IARPA wanted teams to beat that standard by 20%-and it wanted that margin of victory to grow to 50% by the fourth year. But that was only part of IARPA’s plan. Within each team, researchers could run experiments against internal control groups to assess what really works. Tetlock’s team beat the control group hands down. Was it luck? Did the team simply have a slower reversion to the mean? Read the chapter to judge for yourself. Out of the several volunteers involved in GJP, the author finds that certain forecasters were extremely good. The next five chapters are all about the way these superforecasters seem to go about forecasting. The author argues that there are two things to note from GJP’s superior performance:

  1. We should not treat the superstars of any given year as infallible. Luck plays a role and it is only to be expected that the superstars will occasionally have a bad year and produce ordinary results
  2. Superforecasters were not just lucky. Mostly, their results reflected skill.


The set of people whom the author calls superforecasters do not represent a random sample of people, so the team’s outcome is not the same thing as collating predictions from a large set of random people. These people are different, is what the author says. But they cannot be readily classified by IQ or education. The author reports that in general the volunteers had higher IQs than average, but there was no marked distinction between forecasters and superforecasters. So it seems intelligence and knowledge help, but they add little beyond a certain threshold-superforecasting does not require a Harvard PhD and the ability to speak five languages.

The author finds that superforecasters follow a certain way of thinking that seems to mark out the better forecasters:

  • Good back of the envelope calculations
  • Starting with the outside view, which reduces anchoring bias
  • Subsequent to the outside view, getting a grip on the inside view
  • Look out for various perspectives about the problem
  • Think again and again, and think deeply, to root out confirmation bias
  • It’s not the raw crunching power you have that matters most. It’s what you do with it.

Most of the above findings are not groundbreaking. But what they emphasize is that good forecasting skills do not belong to some specific kind of people; they can be learnt and consciously cultivated.

For superforecasters, beliefs are hypotheses to be tested, not treasures to be guarded. It would be facile to reduce superforecasting to a bumper-sticker slogan, but if I had to, that would be it.


Almost all the superforecasters were numerate, but that is not what makes their forecasts better. The author gives a few examples which illustrate the mindset most of us carry: the mindset of Yes, No and Maybe, where Yes means near certainty, No means almost impossible, and Maybe means a 50% chance. Probabilistic thinking with only three dials does not help us become good forecasters. Based on the GJP analysis, the author says that superforecasters have a more fine-grained sense of probability than the rest of the forecasters. These fine-grained probability estimates are not the result of some complex math model, but of careful thought and nuanced judgment.


The chapter starts with the author giving a broad description of the way a superforecaster works:

Unpack the question into components. Distinguish as sharply as you can between the known and unknown and leave no assumptions unscrutinized. Adopt the outside view and put the problem into a comparative perspective that downplays its uniqueness and treats it as a special case of a wider class of phenomena. Then adopt the inside view that plays up the uniqueness of the problem. Also explore the similarities and differences between your views and those of others-and pay special attention to prediction markets and other methods of extracting wisdom from crowds. Synthesize all these different views into a single vision as acute as that of a dragonfly. Finally, express your judgment as precisely as you can, using a finely grained scale of probability.

One of the things that the author notices about superforecasters is their tendency to change their forecasts frequently. As facts change around them, they revise their forecasts. This raises the question: does the initial forecast matter? What if one starts with a vague prior and keeps updating it as the world changes? The GJP analysis shows that the superforecasters’ initial estimates were 50% more accurate than those of the regular forecasters. The real takeaway is that updating matters: frequent updating is demanding work, and it is a huge mistake to belittle belief updating. Both underreaction and overreaction to events can diminish accuracy; in extreme cases, both can destroy a perfectly good forecast. Superforecasters have little ego invested in their initial or subsequent judgments, which makes them update their forecasts far quicker than other forecasters. They update frequently and in small increments, thus treading the middle path between overreacting and underreacting. The author mentions one superforecaster who uses Bayes’ theorem to revise his estimates. Does that mean Bayes is the answer to getting forecasts right? No, says the author. He found that even though all the superforecasters were numerate enough to apply Bayes, nobody actually crunched the numbers that explicitly. The message is that all the superforecasters appreciate the Bayesian spirit, though none explicitly used the formula to update their forecasts. And small updates do not always work either. The key idea the author wants to put across is that there is no "magic" way to go about forecasting; instead there are many broad principles with lots of caveats.
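The Bayesian spirit of updating can be sketched with hypothetical numbers. Suppose a forecaster believes an event is 30% likely, then observes a piece of news that is twice as likely to appear if the event is on its way (60% vs 30%):

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of a hypothesis after seeing one piece of evidence."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

posterior = bayes_update(0.30, 0.60, 0.30)
print(round(posterior, 2))  # 0.46 - a measured step up from 0.30, not a jump to certainty
```

Note how the update is incremental: moderately diagnostic evidence moves the estimate from 30% to about 46%, which mirrors the "frequent, small updates" habit described above, even though, as the author stresses, superforecasters rarely run the formula explicitly.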

Perpetual Beta

The author starts off by talking about Carol Dweck’s "growth mindset" principle and says that this is one of the important traits of a superforecaster.

We learn new skills by doing. We improve those skills by doing more. These fundamental facts are true of even the most demanding skills. Modern fighter jets are enormously complex flying computers but classroom instruction isn’t enough to produce a qualified pilot. Not even time in advanced flight simulators will do. Pilots need hours in the air, the more the better. The same is true of surgeons, bankers, and business executives.

It goes without saying that practice is key to becoming good. However, it is actually "informed practice" that is the key. Unless there is clear and timely feedback about how you are doing, the quantity of practice can be an erroneous indicator of your progress. This idea has been repeated in many books in the past few years. A police officer’s ability to spot a liar is generally poor because feedback on his judgment takes a long time to reach him. On the other hand, people like meteorologists and seasoned bridge players learn from failure very quickly and improve their estimates. I think this is the mindset of a day trader in financial markets: he makes a trade, gets quick feedback, and learns from his mistakes. If you compare a typical mutual fund manager with a day trader, the cumulative feedback the day trader receives is far more than what the MF manager receives. Read any indexing book and you will find arguments debating whether Mr. XYZ was a good fund manager or not-you can fill in any name for XYZ. Some say luck, some say skill. It is hard to tease out which is which when the data points are coarse grained. However, if you come across a day trader who consistently makes money for a decent number of years, it is hard to attribute his performance to luck, for the simple reason that he has cumulatively made far more trades than an MF manager. The basic takeaway, at least for a forecaster, is that he or she must know when a forecast fails. This is easier said than done: forecasts can be worded in ambiguous language, and the feedback might arrive with a time lag of years, by which time our flawed memories can no longer recall the original estimate. The author gives a nice analogy for forecasters who do not get timely feedback: he compares them to basketball players shooting free throws in the dark.

They are like basketball players doing free throws in the dark. The only feedback they get are sounds-the clang of the ball hitting metal, the thunk of the ball hitting the backboard, the swish of the ball brushing against the net. A veteran who has taken thousands of free throws with the lights on can learn to connect sounds to baskets or misses. But not the novice. A "swish!" may mean a nothing-but-net basket or a badly underthrown ball. A loud "clunk!" means the throw hit the rim but did the ball roll in or out? They can’t be sure. Of course they may convince themselves they know how they are doing, but they don’t really, and if they throw balls for weeks they may become more confident-I’ve practiced so much I must be excellent!-but they won’t get better at taking free throws. Only if the lights are turned on can they get clear feedback. Only then can they learn and get better.

Towards the end of this chapter, the author manages to give a rough composite portrait of a superforecaster :

Philosophical outlook
  • Cautious: Nothing is certain
  • Humble: Reality is infinitely complex
  • Nondeterministic: What happens is not meant to be and does not have to happen
Abilities and thinking styles
  • Actively open-minded: Beliefs are hypotheses to be tested, not treasures to be protected
  • Intelligent and knowledgeable, with a need for cognition: Intellectually curious, enjoy puzzles and mental challenges
  • Reflective: Introspective and self-critical
  • Numerate: Comfortable with numbers
Methods of forecasting
  • Pragmatic: Not wedded to any idea or agenda
  • Analytical: Capable of stepping back from the tip-of-your-nose perspective and considering other views
  • Dragonfly-eyed: Value diverse views and synthesize them into their own
  • Probabilistic: Judge using many grades of maybe
  • Thoughtful updaters: When facts change, they change their minds
  • Good intuitive psychologists: Aware of the value of checking thinking for cognitive and emotional biases
Work ethic
  • Growth mindset: Believe it’s possible to get better
  • Grit: Determined to keep at it however long it takes

The author says that the single best predictor of rising to the ranks of superforecasters is being in a state of "perpetual beta".


The author uses GJP as a fertile ground to ask many interesting questions such as :

  • When does "wisdom of crowd" thinking help ?
  • Given a set of individuals, do team forecasts work better than individual forecasts?
  • Given two groups, forecasters and superforecasters, does identifying the superforecasters after their year-1 performance improve or degrade their performance in subsequent years?
  • How do forecasters perform against prediction markets ?
  • How do superforecasters perform against prediction markets ?
  • How do you counter "groupthink" amongst a team of superforecasters?
  • Does face-to-face interaction amongst superforecasters help/worsen the forecast performance ?
  • If aggregation of different perspectives gives better performance, should the aggregation be based on ability or diversity ?

These and many other related questions are taken up in this chapter. I found this chapter very interesting as the arguments made by the author are based on data rather than some vague statements and opinions.

Leader’s dilemma

I found it difficult to keep my attention while reading this chapter. It tries to address some of the issues that typical management books talk about. I have read enough management BS books that my mind has become extremely resistant to any sort of general management wisdom. Maybe there is some valuable content in this chapter, and maybe certain types of readers will find it appealing.

Are they really Super?

The chapter critically looks at the team of superforecasters and analyzes the viewpoints of various people who don’t believe the superforecasters have done something significant. The first skeptic is Daniel Kahneman, who seems to be of the opinion that there is scope insensitivity in forecasting. Like a true scientist, the author puts his superforecasting team in a controlled experiment, which gives some empirical evidence that superforecasters are less prone to scope insensitivity. The second skeptic the author tries to answer is Nassim Taleb. It is not so much an answer to Taleb as an acknowledgement that superforecasters are different. Taleb is dismissive of many forecasters because he believes that history jumps, and these jumps are black swans (highly improbable events with a lot of impact). The author defends his position by saying:

If forecasters make hundreds of forecasts that look out only a few months, we will soon have enough data to judge how well calibrated they are. But by definition, "highly improbable" events almost never happen. If we take "highly improbable" to mean a 1% or 0.1% or 0.0001% chance of an event, it may take decades or centuries or millennia to pile up enough data. And if these events have to be not only highly improbable but also impactful, the difficulty multiplies. So the first-generation IARPA tournament tells us nothing about how good superforecasters are at spotting gray or black swans. They may be as clueless as anyone else-or astonishingly adept. We don’t know, and shouldn’t fool ourselves that we do.

Now if you believe that only black swans matter in the long run, the Good Judgment Project should only interest short-term thinkers. But history is not just about black swans. Look at the inch-worm advance in life expectancy. Or consider that an average of 1% annual global economic growth in the nineteenth century and 2% in the twentieth turned the squalor of the eighteenth century and all the centuries that preceded it into the unprecedented wealth of the twenty-first. History does sometimes jump. But it also crawls, and slow, incremental change can be profoundly important.

So, there are people who trade or invest based on black swan thinking. Vinod Khosla invests in many startups so that one of them can be the next Google. Taleb himself played with OTM options till one day he cracked it big time. However, this is not the only philosophy one can adopt. A very different way is to beat competitors by forecasting more accurately-for example, correctly deciding that there is a 68% chance of something happening when others foresee only a 60% chance. This is the approach of the best poker players. It pays off more often, but the returns are more modest, and fortunes are amassed slowly. It is neither superior nor inferior to black swan investing. It is different.

What Next ?

The chapter starts off with a few results of the opinion polls conducted before Scotland’s independence referendum. The numbers showed no clear sign of which way the referendum would go; in the end, the vote was NO. It was hard to predict the outcome. One expert, Daniel Drezner, came out in the open and admitted that it is extremely easy to give an explanation after the fact, but making a forecast before the fact is a different ball game. Drezner also noted that he himself had stuck with NO for some time before switching to YES, making an error while revising his prior opinion. As a learning, he says that in the future he will attach a confidence interval to his forecasts rather than make a binary call. The author wishes that many more experts and forecasters would adopt this mindset; the shift from point estimates to interval estimates might do a world of good. What will this 500-page book do for the general reader and society? The author says there are two scenarios.

  • Scenario 1: Forecasting is mainly used to advance a tribe’s interests. In such situations, the accuracy of the forecast is brushed aside; whoever makes the forecast that suits the popular tribe gets advertised, and sadly actions are taken based on these possibly inaccurate forecasts. The book will then be just another good read on forecasting, with nothing actionable coming out of it.
  • Scenario 2: Evidence-based forecasting takes off. Many people will demand accuracy and calibration results from experts.

Being an optimistic skeptic, the author feels that evidence-based forecasting will be adopted in the times to come. Some quantification is always better than no quantification (which is what we see currently). The method and system used in the forecasting tournament are a work in progress, admits the author. However, that doesn’t mean they will not improve our forecasting performance.

Towards the end of the book, the author does acknowledge the importance of the Tom Friedmans of the world, though not because of their forecasting ability: their vague forecasts are actually superquestions for the forecasters. Whenever pundits give their forecasts in an imprecise manner, that serves as fodder for the forecasters to get to work. The assumption the author makes is that superforecasters are not superquestioners. Superquestioners are typically hedgehogs who have one big idea, think deeply, and see the world through that one big idea. Superforecasters, i.e. foxes, are not that good at churning out big questions, is what the author opines. In conclusion, he says an ideal person would be a combination of superforecaster and superquestioner.



This book is not "ONE BIG IDEA" book. Clearly the author is on the side of foxes and not hedgehogs. The book is mainly about analyzing the performance a specific set of people from a forecasting team that participated in IARPA sponsored tournament. The book looks at these superforecasters and spells out a number of small but powerful ideas/principles that can be cultivated by anyone, who aspires to become a better forecaster.


The book starts by explaining an example project that one can download from the author’s GitHub account. The project files serve as an introduction to reproducible research. I guess it might make sense to download this project, follow the instructions, and create the relevant files. By compiling the example project, one gets a sense of what one can accomplish by reading through the book.

Introducing Reproducible Research
The highlight of an RR document is that data, analysis and results are all in one document. There is no separation between announcing the results and doing the number crunching. The author gives a list of benefits that accrue to any researcher generating RR documents:

  • better work habits
  • better team work
  • changes are easier
  • high research impact

The author uses knitr/rmarkdown in the book to discuss reproducibility. The primary difference between the two is that the former demands that the document be written in the markup language associated with the desired output, whereas the latter is more straightforward in the sense that a single markup can be used to produce a variety of outputs.
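As a rough sketch of the latter point (the title, chunk name and numbers here are purely illustrative), a minimal rmarkdown file mixes prose with R code chunks, and the very same source can be rendered to PDF, HTML or Word by changing only the output field:

````markdown
---
title: "A minimal reproducible document"
output: pdf_document   # or html_document, word_document
---

The mean of 100 simulated standard-normal draws:

```{r simulate-mean}
set.seed(1)
mean(rnorm(100))
```
````

With knitr alone, by contrast, a .Rnw source is tied to LaTeX/PDF output and an equivalent HTML document would need its own markup.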

Getting Started with Reproducible Research
The thing to keep in mind is that reproducibility is not an afterthought – it is something you build into the project from the beginning. The author discusses some general aspects of RR. If you are not yet convinced of the benefits of RR, this chapter is worth a careful read, as it spells out those benefits and gives some RR tips to a newbie. The chapter also gives the reader a road map of what to expect from the book. Any research project has a data gathering stage, a data analysis stage and a presentation stage, and the book contains a set of chapters addressing each stage. More importantly, it shows ways to tie the stages together so as to produce a single compendium for your entire project.

Getting started with R, RStudio and knitr/rmarkdown
This chapter gives a basic introduction to R and subsequently dives into knitr and rmarkdown commands. It shows how one can create a .Rnw or .Rtex document and convert it into a PDF, either through RStudio or the command line. rmarkdown documents, on the other hand, are more convenient for reproducing simple projects where there are not many interdependencies between tasks. Obviously the content in this chapter gives only a general idea; one has to dig through the documentation to make things work. One learning for me from this chapter is the option of creating .Rtex documents, in which the syntax can be less baroque.

Getting started with File Management
This chapter gives a basic directory structure that one can follow for organizing project files, which can serve as a guideline for one’s own projects. The example project uses a GNU make file for data munging. The chapter also gives a crash course in bash.

Storing, Collaborating, Accessing Files, and Versioning
The four activities mentioned in the chapter title can be done in many ways. The chapter focuses on Dropbox and Github. It is fairly easy to learn to use the limited functionality one gets from Dropbox. On the other hand, Github demands some learning from a newbie. One needs to get to know the basic terminology of git. The author does a commendable job of highlighting the main aspects of git version control and its tight integration with RStudio.

Gathering Data with R

This chapter talks about the way in which one can use the GNU make utility to create a systematic way of gathering data. A make file makes it easy for others to reproduce the data preparation stage of a project. If you have written a make file for C++ or in some other context, it is pretty easy to follow the basic steps mentioned in the chapter; otherwise it might involve some learning curve. My guess is that once you start writing make files for specific tasks, you will realize their tremendous value in any data analysis project. A nice starting point for learning about make files is robjhyndman’s site.
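A minimal sketch of such a make file (the file and script names are hypothetical) encodes the dependency chain between the raw data, the munging script and the cleaned output, so that `make` re-runs only the steps that are out of date:

```make
# Rebuild the cleaned data whenever the raw file or the munging script changes
all: data/cleaned.csv

data/cleaned.csv: data/raw.csv munge.R
	Rscript munge.R

# Remove generated files so the whole pipeline can be rebuilt from scratch
clean:
	rm -f data/cleaned.csv
```

Running `make` after editing `munge.R` regenerates `data/cleaned.csv`; running it again does nothing, which is exactly the reproducibility guarantee the chapter is after.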

Preparing Data for Analysis
This chapter gives a whirlwind tour of data munging operations and data analysis in R.

Statistical Modeling and knitr
The chapter gives a brief description of the chunk options that are frequently used in an RR document. Of all the options, cache.extra and dependson are ones I have never used in the past, and they were a learning for me. One of the reasons I like knitr is its ability to cache objects. In the Sweave era, I had to load separate packages and do all sorts of things to run a time-intensive RR document; it was very painful, to say the least. Thanks to knitr, it is extremely easy now. Even though the cache option is described at the end, I think it is one of the most useful features of the package. Another good thing is that you can combine various languages in an RR document. Currently knitr supports the following language engines:

  • Awk
  • Bash shell
  • CoffeeScript
  • Gawk
  • Haskell
  • Highlight
  • Python
  • R (default)
  • Ruby
  • SAS
  • Bourne shell
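As an illustration of the caching options and the language engines together (the chunk names, file name and code are hypothetical), a .Rnw document might cache a slow model fit, invalidate that cache whenever an upstream chunk changes, and hand a third chunk to the Python engine:

```
<<load-data>>=
df <- read.csv("data/cleaned.csv")
@

<<slow-model, cache=TRUE, dependson='load-data'>>=
fit <- lm(y ~ x, data = df)
@

<<line-count, engine='python'>>=
print(sum(1 for _ in open("data/cleaned.csv")))
@
```

Here `slow-model` is recomputed only when its own code or the cached result of `load-data` changes; otherwise the stored object is reused.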

Showing results with tables
In whatever analysis you do using R, there are always situations where your output is a data.frame, a matrix or some sort of list structure that is formatted to display on the console as a table. One can use kable to show data.frame and matrix structures; it is simple and effective, but limited in scope. The xtable package, on the other hand, is extremely powerful: one can pass various fitted statistical model objects to the xtable function to obtain the table and tabular environments encoding the results. The chapter also mentions texreg, which is far more powerful than the previously mentioned packages; with texreg, you can show the output of more than one statistical model as a table in your RR document. There are times when an output class is not supported by xtable. In such cases, one has to manually hunt down the relevant results, create a data frame or matrix from them, and then use the xtable function.

Showing results with figures
It is often better to know basic LaTeX syntax for embedding graphics before using knitr. One problem I have always faced with knitr-embedded graphics is that all the chunk options must be specified on one single line; you cannot spread chunk options over two lines. I learnt a nice hack from this chapter, where some of the environment-level code can be written as markup rather than as chunk options. This chapter touches upon the main chunk options relating to graphics and does it well, without overwhelming the reader.

Presentation with knitr/LaTeX
The author says that much of the LaTeX in the book has been written using the Sublime Text editor. I think this is the case with most people who intend to create an RR document; even though RStudio has a good environment for creating a LaTeX file, I usually go back to my old editor to write LaTeX markup. How to cite a bibliography in your document, and how to cite R packages, are questions that every researcher has to think about when producing RR documents, and the author does a good job of highlighting the main aspects of this thought process. The chapter ends with a brief discussion of Beamer, giving a 10,000 ft. view of it. I stumbled on to a nice link in this chapter that explains the reason for using fragile in Beamer.

Large knitr/LaTeX Documents: Theses, Books, and Batch Reports
This chapter is extremely useful for creating long RR documents. In fact, if your RR document is large, it makes sense to logically subdivide it into separate child documents. knitr has chunk options to specify parent and child relationships, which are useful for knitting child documents independently of the other documents embedded in the parent document. You do not have to specify the preamble code again in each of the child documents, as they inherit it from the parent document. The author also shows a way to use Pandoc to convert an rmarkdown document to tex, which can then be included in the RR document.
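For instance (file and chunk names hypothetical), the parent .Rnw pulls in a child document via the child chunk option, while the child declares its parent so that it can be knitted standalone and still inherit the parent's preamble:

```
% In the parent document:
<<results-chapter, child='chapters/results.Rnw'>>=
@

% In chapters/results.Rnw, so it can be knitted on its own:
<<set-parent, include=FALSE>>=
knitr::set_parent('../parent.Rnw')
@
```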

The penultimate chapter is on rmarkdown. The concluding chapter of the book discusses some general issues of reproducible research.


This book gives a nice overview of the main tools that one can use in creating an RR document. Even though the title of the book has the term “RStudio” in it, the tools and hacks mentioned are IDE agnostic. One could read a book-length treatment of each of the tools mentioned here and easily get lost in the details. Books such as this give a nice overview of all the tools and hence motivate the reader to dive into specifics as and when required.


This book can be used as a companion to a more pedagogical text on survival analysis. For someone looking for the appropriate R command for fitting a certain kind of survival model, this book is apt. It gives neither the intuition nor the math behind the various models; it reads like an elaborate help manual for all the R packages related to event history analysis.

I guess one of the reasons the author wrote this book is to highlight his package eha on CRAN. The package is basically a layer on top of the survival package that adds some advanced techniques which, I guess, only a serious researcher in this field can appreciate. The book takes the reader through the entire gamut of models in a pretty dry format: it gives the basic form of a model, the R commands to fit it, and some commentary on how to interpret the output. The difficulty level is not a linear function from start to end; I found some very intricate material interspersed among some very elementary estimators. An abrupt discussion of Poisson regression breaks the flow in understanding the Cox model and its extensions, and the chapter on Cox regression contains a detailed and unnecessary discussion of some elementary aspects of any regression framework. These cribs aside, the book is useful as a quick reference to functions from the survival, coxme, cmprsk and eha packages.



As the title suggests, this book is truly a self-learning text. There is minimal math in the book, even though the subject is essentially about estimating functions (survival, hazard, cumulative hazard). I think the highlight of the book is its unique layout: each page is divided into two parts, where the left hand side runs like a pitch and the right hand side runs like a commentary on the pitch. Every aspect of estimation and inference is explained in plain, simple English. Obviously one cannot expect to learn the math behind the subject this way. In any case, I guess the target audience comprises those who would like to understand survival analysis, run the models using some software package and interpret the output; in that sense, the book is spot on. The book is 700 pages long, so all said and done, this is not a book that can be read in one or two sittings. Even though the content is easily understood, I think it takes a while to get used to the various terms and assumptions for the whole gamut of models one comes across in survival analysis. Needless to say, this is a beginner’s book. For someone who eventually wants to understand the actual math behind the estimation and inference of the various functions, this book will still equip a curious reader with a 10,000 ft. view of the subject, which in turn can be very helpful in motivating oneself to slog through the math.

Here is a document that gives a brief summary of the main chapters of the book.


This book is vastly different from the books that try to warn us against incorrect statistical arguments in the media and other mundane places. Instead of targeting newspaper articles, politicians and journalists who make errors in their reasoning, the author investigates research papers, where one assumes that scientists and researchers make flawless arguments, at least from a stats point of view. The author even points out a few statistical errors in the pop science book, “How to Lie with Statistics”. This book takes the reader through the kind of statistics that one comes across in research papers and shows various types of flawed arguments. The flaws can arise for several reasons: eagerness to publish a new finding without thoroughly vetting it, insufficient sample size, insufficient statistical power in the test, inference from multiple comparisons, etc. The tone of the author isn’t deprecatory; instead he explains the errors in simple words. There is minimal math in the book, and the writing makes the concepts abundantly clear even to a statistics novice. That in itself should serve as a good motivation for a wider audience to go through this 130-page book.

In the first chapter, the author introduces the basic concept of statistical significance. Frequentist hypothesis testing depends on the p value, which measures Probability(data|Hypothesis). In a way, the p value measures the amount of surprise you find in the data, given that you have a specific null hypothesis in mind. If the p value turns out to be too small, then you start doubting your null and reject it. The procedure at the outset looks perfectly logical. However, one needs to keep in mind the things that are not captured by the p value:

  • It does not per se measure the size of the effect.
  • Two experiments with identical data can give different p values. This is disturbing, as it implies that the p value somehow depends on the intentions of the person doing the experiment.
  • It does not say anything about the false positive rate.

By the end of the first chapter, the author convincingly rips apart the p value and makes a case for using confidence intervals. He also says that many people do not report confidence intervals because they are often embarrassingly wide and might make their effort look like a fruitless exercise.
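To make the chapter's point concrete, here is a small sketch in Python (the counts are made up): an exact binomial test for 36 heads in 50 coin tosses, alongside the confidence interval the author would rather we report. Note how wide the interval is even though the p value looks decisive.

```python
from math import comb, sqrt

# 36 heads in 50 tosses; null hypothesis: a fair coin (p = 0.5)
n, k, p0 = 50, 36, 0.5

# Exact upper-tail probability P(X >= k) under the null, doubled
# for a two-sided test
tail = sum(comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(k, n + 1))
p_value = min(1.0, 2 * tail)

# Normal-approximation 95% confidence interval for the true proportion --
# arguably more informative than the p value alone
p_hat = k / n
se = sqrt(p_hat * (1 - p_hat) / n)
ci = (p_hat - 1.96 * se, p_hat + 1.96 * se)

print(f"p-value = {p_value:.4f}")               # tiny: surprising under the null
print(f"95% CI  = ({ci[0]:.2f}, {ci[1]:.2f})")  # but note how wide the interval is
```

The p value alone says "not a fair coin"; the interval additionally says the true bias could plausibly be anywhere from mild to extreme.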

The second chapter talks about statistical power, a concept that many introductory stats courses do not delve into appropriately. The statistical power of a study is the probability that it will distinguish an effect of a certain size from pure luck. The power depends on three factors:

  • size of the bias you are looking for
  • sample size
  • measurement error

If an experiment is trying to detect a subtle bias, then far more data is needed to even detect it. Usually the accepted power for an experiment is 80%, meaning the probability of detecting the bias is close to 80%. In many of the tests that have negative results, i.e. the alternative is rejected, it is likely that the power of the test is compromised. Why do researchers fail to take care of power in their calculations? The author guesses that it could be because a researcher’s intuitive feeling about samples is quite different from the results of power calculations. He also ascribes it to the not-so-straightforward math required to compute the power of a study.
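The dependence of power on sample size can be sketched in Python using a two-sided one-sample z-test (the effect size and noise level are made-up numbers): a subtle effect that is nearly invisible at n = 10 only reaches the conventional 80% power around n = 200.

```python
from statistics import NormalDist

norm = NormalDist()

def power(effect, sigma, n, alpha=0.05):
    """Power of a two-sided one-sample z-test: the probability of
    detecting a true shift of `effect`, given noise `sigma` and
    sample size `n`."""
    z_crit = norm.inv_cdf(1 - alpha / 2)   # rejection threshold
    shift = effect / (sigma / n ** 0.5)    # true effect in standard-error units
    # Probability the test statistic lands in either rejection region
    return (1 - norm.cdf(z_crit - shift)) + norm.cdf(-z_crit - shift)

# Same subtle effect, growing sample size
for n in (10, 50, 200):
    print(f"n = {n:4d}: power = {power(effect=0.2, sigma=1, n=n):.2f}")
```

At n = 10 the experiment will miss the effect roughly nine times out of ten, which is exactly the kind of underpowered negative result the chapter warns about.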

The problems with power also plague the other side of experimental results: instead of merely failing to detect the true bias, underpowered studies often inflate the true result, producing M errors, where M stands for magnitude. One of the suggestions given by the author is that instead of computing the power of a study for a certain bias detection and a certain statistical significance, researchers should look for the power that gives narrower confidence intervals. Since there is no readily available term for this statistic, the author calls it assurance: how often the confidence intervals must beat a specific target width. The takeaway from this chapter is that whenever you see a report of a significant effect, your reaction should not be “Wow, they found something remarkable”, but rather, “Is the test underpowered?”. Also, just because the alternative was rejected doesn’t mean the alternative is worthless.

The third chapter talks about pseudoreplication, a practice where the researcher uses the same set of patients/animals/whatever to create repeated measurements. Instead of recruiting a bigger sample, the researcher inflates the sample size through repeated measurements. Naturally the data is not going to be independent, as the original experiment might warrant. Knowing that there is pseudoreplication in the data, one must be careful while drawing inferences. The author gives some broad suggestions to address this issue.

The fourth chapter is about the famous base rate fallacy, where one interprets the p value as the probability of the alternative being true. Frequentist procedures that give p values merely talk about the surprise element; in no way do they talk about the probability of the alternative in a treatment-control experiment. The best way to get a good estimate of the probability that a result is a false positive is by considering prior estimates. The author also talks about the Benjamini-Hochberg procedure, a simple yet effective procedure to control the false positive rate. I remember reading about this procedure in an article by Brad Efron titled “The future of indirect evidence”, in which Efron highlights some of the issues related to hypothesis testing in high dimensional data.
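A minimal Python sketch of the Benjamini-Hochberg procedure (the p values are made up): sort the p values, find the largest rank k such that the k-th smallest p value is at most (k/m)·q, and reject the k smallest.

```python
def benjamini_hochberg(p_values, q=0.05):
    """Return indices of hypotheses rejected at false discovery rate q."""
    m = len(p_values)
    # Sort p values ascending, remembering their original positions
    order = sorted(range(m), key=lambda i: p_values[i])
    # Largest rank k with p_(k) <= (k/m) * q
    k = 0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= rank / m * q:
            k = rank
    # Reject the k hypotheses with the smallest p values
    return sorted(order[:k])

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.216]
print(benjamini_hochberg(pvals))  # indices of the discoveries at FDR 5%
```

Note that a plain per-test cutoff of 0.05 would have "discovered" five of these ten hypotheses; controlling the false discovery rate keeps only the strongest ones.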

The fifth chapter talks about the often-found procedure of testing two drugs against a placebo and using the results to compare the efficacy of the two drugs; the various statistical errors that can creep in are thoroughly discussed. The sixth chapter talks about double dipping, i.e. using the same data for both exploratory analysis and hypothesis testing, the classic case of using in-sample statistics to extrapolate out-of-sample statistics. The author talks about the arbitrary stopping rules many researchers employ to cut short an elaborate experiment when they find statistically significant results at an initial stage. Instead of having a mindset which says, “I might have been lucky in the initial stage”, the researcher overenthusiastically stops the experiment and reports a truth-inflated result. The seventh chapter talks about the dangers of dichotomizing continuous data. In many research papers, there is a tendency to divide the data into two groups and run significance tests or ANOVA-based tests, thus reducing the information available from the dataset. The author gives a few examples where dichotomization can lead to grave statistical errors.

The eighth chapter talks about basic errors that one makes in doing regression analysis. The errors highlighted are:

  • over reliance on stepwise regression methods like forward selection or backward elimination methods
  • confusing correlation and causation
  • confounding variables and Simpson’s paradox
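Simpson's paradox is easy to demonstrate with the classic kidney-stone treatment numbers (Charig et al.): a treatment that wins within every subgroup can lose once the subgroups are pooled, because the confounder (stone size) is unevenly distributed across the treatments.

```python
# Classic kidney-stone data: (successes, patients) per treatment and subgroup
groups = {
    "small stones": {"A": (81, 87),   "B": (234, 270)},
    "large stones": {"A": (192, 263), "B": (55, 80)},
}

rate = lambda successes, n: successes / n

# Within each subgroup, treatment A has the higher success rate
for name, g in groups.items():
    a, b = rate(*g["A"]), rate(*g["B"])
    print(f"{name}: A = {a:.0%}, B = {b:.0%}  -> A better: {a > b}")

# Pool the subgroups: the confounder flips the conclusion
tot = {t: tuple(map(sum, zip(*(g[t] for g in groups.values())))) for t in "AB"}
a, b = rate(*tot["A"]), rate(*tot["B"])
print(f"overall: A = {a:.0%}, B = {b:.0%} -> A better: {a > b}")
```

The flip happens because treatment A was given mostly to the hard (large-stone) cases, which drags its pooled rate down; ignoring the confounder leads straight to the wrong conclusion.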

The last few chapters give general guidelines for improving research efforts, one of them being “reproducible research”.


Even though this book is a compilation of various statistical errors committed by researchers across scientific fields, it can be read by anyone whose day job is data analysis and model building. In our age of data explosion, where far more people are employed in analyzing data than in publishing papers, this book is useful to a wider audience. If one wants to go beyond the simple conceptual descriptions in the book, one might have to seriously think about all the errors mentioned and understand the math behind them.


The book serves as a nice intro to Bayes for an absolute newbie. There is minimal math in the book, and whatever little math is mentioned is accompanied by figures and text so that a newbie to the subject “gets” the basic philosophy of Bayesian inference. The book is a short one, spanning 150-odd pages, and can be read in a couple of hours. The introductory chapter comprises a few examples that repeat the key idea of Bayes. The author says he has deliberately chosen this approach so that the reader does not miss the core idea of Bayesian inference, which is:

Bayesian inference is not guaranteed to provide the correct answer. Instead, it provides the probability that each of a number of alternative answers is true, and these can then be used to find the answer that is most probably true. In other words, it provides an informed guess.

In all the examples cited in the first chapter, there are two competing models, and the likelihood of observing the data given each model is almost identical. So how does one choose one of the two models? Well, even without applying Bayes, it is abundantly obvious which of the two competing models one should go with. Bayes helps in formalizing this intuition and thus creates a framework that can be applied to situations where human intuition is misleading or vague. If you are coming from a frequentist world where “likelihood based inference” is the mantra, then Bayes appears to be merely a tweak where weighted likelihoods, instead of plain vanilla likelihoods, are used for inference.
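A small Python sketch of this "informed guess" idea (the coin models and data are made up, not from the book): two competing models explain 8 heads in 10 tosses, and Bayes' rule weighs each model's likelihood by its prior to give the probability that each model is true.

```python
from math import comb

# Two candidate models for a coin: fair (p = 0.5) vs biased (p = 0.9)
def likelihood(p, heads=8, n=10):
    """Probability of observing `heads` heads in `n` tosses under bias p."""
    return comb(n, heads) * p**heads * (1 - p)**(n - heads)

models = {"fair": 0.5, "biased": 0.9}
prior = {"fair": 0.5, "biased": 0.5}   # no initial preference

# Posterior = prior * likelihood, normalized by the total evidence
unnorm = {m: prior[m] * likelihood(p) for m, p in models.items()}
evidence = sum(unnorm.values())
posterior = {m: v / evidence for m, v in unnorm.items()}

for m, pr in posterior.items():
    print(f"P({m} | data) = {pr:.3f}")
```

Neither model is certain; Bayes simply tells us which answer is most probably true given the data, i.e. it provides an informed guess.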

The second chapter of the book gives a geometric intuition for a discrete joint distribution table. A discrete joint distribution table between observed data and the candidate models is the ideal place to begin understanding the importance of Bayes, so in that sense the author provides the reader with a pictorial introduction before going ahead with the numbers.

The third chapter starts off with a joint distribution table of 200 patients, tabulated according to the number of symptoms and the type of disease. This table is then used to introduce the likelihood function, the marginal probability distribution, the prior probability distribution, the posterior probability distribution and the maximum a posteriori (MAP) estimate. All these terms are explained in plain English, and the chapter thus serves as a perfect intro for a beginner. The other point this chapter makes clear is that it is easy to obtain the probability of data given a model; the inverse problem, i.e. the probability of a model given the data, is the difficult one, and it is inference in that direction that makes Bayesian inference powerful.

The fourth chapter moves on to continuous distributions. The didactic method is similar to the previous chapter: a simple coin toss example is used to introduce concepts such as the continuous likelihood function, the maximum likelihood estimate, sequential inference, uniform priors, reference priors, bootstrapping and various loss functions.

The fifth chapter illustrates inference in a Gaussian setting and establishes a connection with the well known regression framework. The sixth chapter talks about joint distributions in a continuous setting. Somehow I felt this chapter could have been removed from the book, but keeping with the author’s belief that “spaced repetition is good”, the content can be justified. The last chapter talks about the frequentist vs. Bayesian wars, i.e. the statisticians who believe that only one of the two is THE right approach. Which side one takes depends on how one views “probability”: is probability a property of the physical world, or is it a measure of how much information an observer has about that world? Bayesians, and increasingly many practitioners in a wide variety of fields, have found the latter view to be a useful guide in doing statistical inference. More so, with the availability of software and computing power to do Bayesian inference, statisticians are latching on to Bayes like never before.

The author deserves praise for bringing out some of the main principles of Bayesian inference using just visuals and plain English. Certainly a nice intro book that can be read by any newbie to Bayes.


Takeaway:

“Write your code as though you are releasing it as a package” – this kind of thinking forces one to standardize the directory structure, abandon ad hoc scripts in favor of well-thought-out functions, and leverage the devtools functionality to write efficient, extensible and shareable code.




Takeaway:

This is a beautiful book that describes the math behind queueing systems. One learns a ton of math tools from this book that can be used to analyze any system with a queueing structure within it. The author presents the material in a highly enthusiastic tone with superb clarity. I thoroughly enjoyed going through the book.


Takeaway:

The book is definitely overpriced for what it delivers. Given that 50% of it explains R basics, the title is not at all appropriate, and even the quant content covered in the remaining 50% is laughably inadequate.


The central theme of this book is that major governments and central banks around the world have been waging a war on “gold” to keep its price low. The reason being, they want everyone to believe in their fiat currency, so that they can print away money to solve their temporary problems.

The author strongly believes that by 2020, no amount of “managing the gold price” tactics will work: the dollar will lose its supremacy and gold prices will skyrocket to reflect gold's true value.

The book is organized as a set of 86 questions/topics spanning six sections. The author answers each question in no more than a page of explanation that can be quickly grasped. In this post, I will briefly summarize the answers to each of the 86 questions.


  • What is the origin of money ?
    • A simple closed community never needed money. However when societies grew, the demand arose for a complex trading system. Desired products like cattle and dried meat were used more and more frequently as a method of payment. Bartering is still the most elementary system of trade and shows up whenever crisis situations arise.
  • How did gold become money ?
    • Typical characteristics of anything that acts as money are: easily divisible, portable, imperishable and scarce. Gold and silver fit the bill and, moreover, were enormously desirable all over the world. Out of the entire periodic table of elements, gold and silver are the most suitable as a means of payment, as they are impossible to copy. Gold became equivalent to money when people realized that its purchasing power remained the same across time.
  • When did coins come into existence ?
    • The first western coins originated in Turkey around 650 B.C. Many great kings built their empires around a monetary system based on gold. Because gold was rarer, silver was used for coins with a low nominal value.
  • A short history of monetary gold
    • Gold coins circulated from 491 A.D. to 1453 A.D. and were accepted as money from England to China. Gold coins called florins appeared in 1252; the Dutch guilder came next, and ducats were another form of gold coin, introduced in Italy.
  • What are the advantages of a gold standard ?
    • The most important advantage is that it forces governments to be disciplined in their fiscal policy, because they cannot turn to printing money. Gold also acts as an inflation hedge. Due to mounting silver shortages, the United Kingdom and many countries in the British Empire adopted a gold standard in 1816. They were soon followed by Canada (1853), the US (1873) and Germany, where the new gold mark was introduced in 1872. In the course of the 19th century, the gold standard became more and more popular.
  • Why was the gold standard abandoned ?
    • Abandoning the gold standard frees the government to print fiat money. This was the main reason the US abandoned the gold standard in 1971; many European countries had ditched it in 1914 to finance the First World War.
  • What is fiat money ?
    • Money that is not backed by anything substantial; its value rests on the confidence that goods or services can be paid for with it.
  • What is meant by fractional banking ?
    • The initial credit from the central bank to commercial banks is in turn used by the commercial banks to generate a far higher credit level in society.
  • Where was fiat money invented ?
    • China (Emperor Khan found a way of creating paper money that was pitched as being as valuable as gold and silver).
  • Other examples of fiat money throughout history
    • Louis XIV, king of France in 1716 set up a bank and issued bank notes. However it ended as a disaster
  • Other misfortunes with fiat money
    • After the French Revolution, “assignats” were issued that could later be used as money. This exercise in Quantitative Easing 101 failed miserably: in 1796, hyperinflation hit France and the paper money lost all its value.
  • What is Quantitative Easing ?
    • One can easily understand this complicated, fancy term if one thinks of it as equivalent to “Operation Fire Up the Printing Press”. Japan’s BOJ is seen as the inventor of QE. The original purpose of QE was to lower interest rates, but from 2008 onwards, the Fed and other central banks started aggressively expanding their balance sheets by buying up assets such as Treasuries (US government bonds) and mortgage-backed bonds, in order to support the housing market and to finance the large fiscal deficits that arose from the economic fallout of the credit crisis.
  • Do all central bankers agree on QE ?
    • No. The Dallas Fed president holds a sizable portion of his portfolio in gold, and the Bank of England's Executive Director of Financial Stability is also against QE.
  • When did hyperinflation occur ?
    • Hyperinflation arises when money loses its value and purchasing power. Some examples are Germany in 1923, Hungary in 1946, China in 1949, Yugoslavia in 1994, and Zimbabwe in 2008.
  • Can we trust official inflation figures ?
    • In many countries, the manipulation of key figures and economic indicators has been elevated to a work of art; CPI publication has become a con job.
  • How is inflation calculated ?
    • Statisticians, pressured by politicians, use many tricks to lower the CPI: replacement with cheaper alternatives, geometric means, hedonic adjustments, etc.
  • Examples of the distortion of inflation figures
    • WSJ articles illustrating cost price of a TV model
  • Do central banks combat or cause inflation ?
    • Even though they advertise that their main job is to control inflation, they are the ones causing it by printing more and more money. Through excessive growth of debt, most Western currencies have lost over 95% of their spending power in the last century alone.
  • Does anybody really understand this financial system ?
    • In most countries, governments and banks have worked together to monopolize the creation of money. The fact that our money is backed by nothing but hope and trust must be kept hidden from ordinary people. Even most economists do not fully understand money; only those who have studied monetary economics know the inner workings of our financial system, and most of them end up working for their government or central bank, so they are bound to keep their mouths shut.



  • When did the first form of banking emerge ?
    • The first recorded debt systems were in the Sumer civilization around 3500 B.C. European banks appeared in the early Middle Ages. The word ‘bank’ comes from the Italian word ‘banca’, the name used for the marble tabletop upon which Italian goldsmiths dropped foreign coins; from the sound of the coins being dropped, they could assess whether a coin contained a lot of copper or nickel. Banks as we know them today were first set up during the Renaissance in the Italian cities of Florence, Venice and Genoa, the most famous among them being the Medici Bank.
  • How did central banking start ?
    • European royalty needed money to fight wars. This financing was provided by so-called moneychangers, who understood pretty quickly that lending to powerful entities such as kings and churches carried less risk because of their continual stream of income. The German Rothschild family established an international banking business and dynasty, becoming one of the most powerful families of the 19th century.
  • The first central bank
    • The Amsterdam Wisselbank, founded in 1609. Most central banks in the past 400 years were initiated by rich businessmen who understood quite well that (central) banks, which owned the monopoly on creating money and were backed by government tax revenue, had a wonderful business model.
  • Who created the first government bonds ?
    • Scotsman William Paterson. Paterson was backed by a group of rich traders from the City of London who would generate the starting capital, and was also supported by Charles Montagu, one of the most important officials within the Ministry of Finance. Together, they persuaded the government to create a bill so that the Bank of England could be established; the Royal Charter was granted on 27 July 1694. The first loan by the Bank of England financed the Royal Navy through the issue of Navy Bills. The start of the Bank of England is often seen as the start of a new era: fiscal deficits could now be financed by selling (perpetual) bonds.
  • How large has the bond bubble become?
    • As of 2012: US – $17T, EU total – $16T, UK – $10T, Japan – $2.7T, Australia – $1.7T, Switzerland – $1.3T
  • Who supervises central banks?
    • Over the course of the 20th century, many governments took over central banks from private shareholders.
  • Where are the most important decisions about the banking industry made?
    • At the Bank for International Settlements (BIS) in Basel. The BIS can be seen as the mother of central banks and was founded at the International Bankers Conferences at Baden-Baden (1929) and The Hague (1930). In April 1945, a decision to liquidate the BIS was made, but it was reversed by the US in 1948; the BIS had survived but was badly wounded. The BIS still operates as a counterparty, asset manager and lender for central banks and international financial institutions. Switzerland agreed to act as the headquarter state, with the headquarters situated in Basel. Today 60 central banks are members. The Fed did not join until 1994, since the US initially saw the BIS as a rival to its IMF, but it eventually joined to gain European support for its war on gold.


  • How did central banking get started in the US?
    • Robert Morris, a former government official, founded the first central bank in the US in 1781. He is seen as the father of the system of credit in the United States. His Bank of North America was based on the model of the Bank of England and could create as much money as needed through fractional reserve banking.
  • When was the Federal Reserve created?
    • December 1913. It was the most beautiful Christmas present Wall Street could have wished for. For the third time in US history, the monopoly on the printing of dollars was transferred from the government to private banks. Not many politicians realized the far-reaching consequences this decision would have. Immediately after the introduction of the law, all US banks became compulsory shareholders of the Fed.
  • Is the Fed really independent?
    • The New York Fed is far more important in the Fed system than all the other 11 regional Reserve Banks combined. Even though the Fed presents itself as a normal central bank with 12 districts, the New York Fed runs the show.
  • When was the dollar system born?
    • 1944, at the Bretton Woods conference, named after the forest surrounding the hotel where the conference took place.
  • What was decided at the Bretton Woods conference?
    • The dollar became the new world currency.
  • Why did Europe accept the dollar system?
    • The US proposed the Marshall Plan, which was designed to help finance Europe after the devastation of the war.
  • For how long did the Bretton Woods system work?
    • Following the Bretton Woods conference, all national currencies became pegged to the dollar, which was linked to gold at a rate of $35 per ounce. The dollar was the official world reserve currency and the anchor of the monetary system. The world now operated under a pseudo gold standard which economists call the ‘gold exchange standard’. By the end of the 1960s, however, the system was falling apart.
  • When did the US close its ‘gold window’?
    • August 1971
  • How did the world react to Nixon’s decision in 1971?
    • Technically speaking, America defaulted in August 1971, since the country could no longer fulfill the obligations agreed upon at Bretton Woods. But surprisingly, the Nixon shock created only a relatively short dollar panic in the world’s financial markets. At first, the inflation caused by the printing of extra dollars was moderate, but later in the 1970s inflation began to take off, leading to a severe recession in 1979 and 1980. It would take years of strong leadership by Fed Chairman Paul Volcker to tame inflation and make the dollar a ‘strong’ currency again.
  • How important is the worldwide oil trade for the survival of the dollar?
    • Very important. By trading only in dollars, the oil-trading countries keep the demand for dollars alive.
  • What is the role of the IMF and World Bank in this dollar system?
    • To support the dollar as the world reserve currency. The US insisted that countries could join the IMF only after decoupling their currencies from gold.
  • How transparent is the Fed?
    • Not very. There is a strong culture of secrecy within the Fed organization.
  • Have any Wall Street bankers gone to jail?
    • Very few. Most get away by paying fines. A study of hundreds of media reports shows that the total amount of fines and settlements paid by Wall Street banks between 2000 and 2013 to avoid prosecution adds up to $100 billion.



  • When did the music stop?
    • After over thirty years of falling interest rates, the period of unrestrained private build-up of debt came to an end with the start of the credit crisis. Between 2008 and 2013, central banks worldwide created over $10 trillion of new money to take over bad loans from the private sector, to monetize debts and to stimulate the economy. The Fed balance sheet grew from $800 billion to almost $4,000 billion in just five years.
  • What has happened to the US national debt since the start of the credit crisis?
    • The US national debt grew by $8 trillion in a five-year period to reach $17 trillion at the end of 2013. To put this into perspective, it took 169 years (from 1836 to 2005) for the first $8 trillion of national debt to accumulate.
    • To resuscitate interbank lending, central banks allowed commercial banks to borrow money at interest rates close to 0.
    • The budget deficits in 2009, 2010, 2011, 2012 and 2013 were –10%, –9%, –8%, –7% and –6% of GDP respectively.
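Taken together, the debt figures above imply a striking acceleration. A minimal back-of-the-envelope sketch (assuming the $9 trillion end-2008 starting level implied by the text; Python is used purely for illustration):

```python
# Illustrative check of the debt figures quoted above; all numbers
# come from the summary, not from official data.

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate."""
    return (end / start) ** (1 / years) - 1

# US national debt: roughly $9T at end-2008 -> $17T at end-2013.
growth = cagr(9.0, 17.0, 5)
print(f"Implied annual debt growth, end-2008 to end-2013: {growth:.1%}")

# The first $8T took 169 years (1836-2005); the next $8T took 5 years.
print(f"Speed-up versus the first $8 trillion: {169 / 5:.0f}x faster")
```

The implied compound growth rate works out to roughly 13–14% per year, which makes the "169 years versus 5 years" comparison concrete.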
  • When does the size of fiscal deficits become dangerous?
    • Historical analysis of hyperinflations suggests that the tipping point is reached when the government’s deficit exceeds 40% of its expenditures. Japan, where half of government revenues go into debt servicing, is at risk of hyperinflation. The savings rate in Japan has dropped to 2% because of the aging population.
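The 40%-of-expenditures tipping-point rule quoted above is easy to express as a simple check. The figures below are hypothetical, chosen purely to illustrate the rule:

```python
# Minimal sketch of the hyperinflation tipping-point rule from the text:
# danger rises once the government's deficit exceeds 40% of expenditures.
# Revenue and expenditure figures here are hypothetical.

def deficit_share(revenue: float, expenditure: float) -> float:
    """Deficit expressed as a fraction of total expenditure."""
    return (expenditure - revenue) / expenditure

def past_tipping_point(revenue: float, expenditure: float,
                       threshold: float = 0.40) -> bool:
    return deficit_share(revenue, expenditure) > threshold

# Hypothetical government: revenue 55, spending 100 (same currency units).
share = deficit_share(55, 100)
verdict = "past" if past_tipping_point(55, 100) else "below"
print(f"Deficit is {share:.0%} of spending -> {verdict} the 40% threshold")
```

A government spending 100 while collecting 55 is running a deficit equal to 45% of its spending, i.e. already past the historical danger zone under this rule.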
  • Didn’t the credit crisis start much earlier in Japan?
    • After Japan’s crash in the 1990s, it decided to turn on its central bank’s printing press. Debt service costs 25% of tax revenues, public debt is 240% of GDP, and most banks are barely surviving. The BoJ has maintained short-term interest rates at close to 0 since 1999.
  • Who is most aggressive in their QE policies, Japan or the US?
    • Japan – relative to GDP, the amount set aside for QE is twice that of the US allocation.
  • Is China still financing the US?
    • After QE1 in 2008, QE2 in 2010 and ‘QE infinity’ in 2012, there is widespread concern that the dollar might get massively devalued. Hence China has been investing hundreds of billions of dollars per year in hard assets such as gold and other commodities.
    • QE3 is now called ‘QE infinity’ by many because of the open-ended nature of the program, under which the Fed launched an $85 billion-per-month bond-purchasing program.
    • The market jumped by 12% after the QE1 announcement, 3.1% after QE2 and barely 2% after QE3; the law of diminishing returns can be seen in QE announcements too.
    • Many suggest that China and other developing countries should invest in the Australian and Canadian dollars, as these countries have a large base of commodity assets.
  • How large is China’s credit growth?
    • China is more addicted to the printing press than the US or Japan.
    • Despite national financial reserves of almost $4,000 billion, China has been confronted with its own debt crisis after the banking system’s assets grew by $14 trillion between 2008 and 2013, which is the same amount as the entire US banking system. China’s credit-to-GDP ratio surged to more than 200% by 2013 from just over 110% in 2008.
    • Short-term trust loans amount to 50% of GDP.
    • Shadow banking is rampant in China and is estimated at 45% of GDP.
    • Ironically, GDP itself is suspected to be a cooked number.
  • Is the renminbi ready to replace the dollar?
    • Since the renminbi is not fully convertible, it will take many years for this scenario to pan out.
  • So China is fearful of making too-sudden monetary changes?
    • Yes. History gives enough evidence for this.
  • How big is Europe’s debt problem?
    • Around 1 trillion euros.
    • The total amount of money created during the UK’s QE program from 2010 to 2014 was around $598 billion.
    • Compared with Japan, where the size of QE (relative to GDP) is double that of the American program, money printing slowed down in Europe between 2012 and 2014.
  • Is Switzerland still a monetary safe haven?
    • As a result of all the monetary madness after the outbreak of the financial crisis in 2008, more and more money started to flee to Switzerland. This caused the Swiss franc to gain in value, which had a substantial negative impact on Swiss exports and tourism. To avoid further harm, the Swiss National Bank (SNB) pegged the Swiss franc to the euro at a rate of 1.20 francs per euro.
    • At the end of 2013, the SNB had the largest holdings relative to GDP (85%) of any major industrialized country.
  • What is happening in the so-called currency wars?
    • Most of the currencies involved have stayed on a par with each other. To the general public, the dollar, the British pound, the euro and the Swiss franc all seem to have kept their value, but only with respect to each other. Because of this ‘debasement of currencies’, the smart money has started to flee towards commodities and other hard assets.
  • Can we grow our way out of this debt?
    • In the eighteen most important countries belonging to the OECD, the total amount of public and private debt (relative to GDP) grew from 160% in 1980 to 321% in 2011. This amassing of debt has not yet caused major problems, since the interest rate over the same period fell from over 20% in 1980 to almost 0% after the credit crisis. National debts increased by 425% on average and have risen in many countries to almost 100% of GDP.
    • Growing out of debt works only in the context of strong economic growth. Is there any sign of that in most countries today?
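The question above can be made precise with the standard debt-dynamics identity (a textbook macroeconomic relation, not something from the book): with a balanced primary budget, the debt-to-GDP ratio falls only when nominal growth outpaces the average interest rate paid on the debt. A minimal sketch:

```python
# Textbook debt-dynamics sketch (not from the book): with a balanced
# primary budget, debt/GDP evolves as d' = d * (1 + r) / (1 + g),
# where r is the average interest rate on the debt and g is nominal
# GDP growth. All parameter values below are hypothetical.

def next_debt_ratio(d: float, r: float, g: float,
                    primary_balance: float = 0.0) -> float:
    """One-year update of the debt-to-GDP ratio."""
    return d * (1 + r) / (1 + g) - primary_balance

d = 1.00  # starting debt at 100% of GDP
print(next_debt_ratio(d, r=0.03, g=0.01))  # 3% interest, 1% growth: ratio rises
print(next_debt_ratio(d, r=0.03, g=0.05))  # 3% interest, 5% growth: ratio falls
```

This is exactly why "growing out of debt" needs strong nominal growth: at today's low growth rates, even modest interest rates keep the ratio drifting upward unless governments run primary surpluses.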
  • How can we get rid of our debts?
    • Default, print money and induce inflation, or raise taxes.
  • How have debt cancellations worked before?
    • If you go by growth rates, they seem to have worked. But one must always be wary of lies, damned lies and statistics: whenever a country in crisis undertakes a debt cancellation, it is already at its nadir in terms of growth, so any improvement afterwards looks like a stellar performance.
  • Possible debt cancellation scenarios
    • The Fed cancels its $2 trillion of Treasury holdings out of a total of $17 trillion of Treasury debt.
    • Central banks cancelling government debt more generally.
  • When do things go wrong?
    • When the national debt rises to over 90% of GDP, this tends to slow future economic growth.
    • The cumulative increase in public debt in the three years following a banking crisis is on average 186%. This explains why public debt in many advanced countries (the US, Japan, the UK) has increased strongly in recent years and reached or even crossed the 90% level.
    • The Minsky moment, named after the American economist Hyman Minsky, is the point in time at which, after decades of prosperity, a wave of selling takes place by parties who had made investments with too much debt. In order to reduce these debts, they even have to sell good investments at increasingly lower prices. Such a disastrous sell-off of government bonds is one of the major risks we are now facing. At some point, central banks could end up buying almost all their domestic government bonds; investor money would then flee towards equities and hard assets.
    • The risk of things going wrong has increased since 2008, as central banks have resorted to unorthodox interventions that would have been considered unthinkable before the credit crisis.


  • The essence of the war on gold
    • It is an endeavor to support the dollar. Also, the gold price and the general public’s inflation expectations are highly correlated. The survival of the current financial system depends on people preferring fiat money over gold.
  • Do central banks fear a flight to gold?
    • Yes. The war is being fought not only by central banks but also by commercial financial institutions.
    • In 2013, both ABN AMRO and RBS cancelled gold accounts that allowed investors to redeem their value in physical gold. In a letter to clients, ABN AMRO explained that it had changed its precious metals custodian rules and would ‘no longer allow physical delivery’, only paper settlement. And US banks are only allowed to advise investors to buy gold shares when they have a gold analyst on their payroll.
  • Was private ownership of gold ever prohibited?
    • After the crash of 1929, President Roosevelt presented an economic recovery plan called ‘the New Deal’. The plan included a ‘Gold Reserve Act’, passed by Congress at the end of January 1934, which empowered the federal government to confiscate all of the Fed’s gold and bring it under the US Department of the Treasury. Roosevelt also made use of his special presidential authority to issue Executive Order 6102, which prohibited civilians from possessing gold, gold coins or gold certificates. Anyone caught ‘hoarding’ gold was to be fined $10,000. In Europe there has never been a ban on possessing gold.
  • When did the war on gold start?
    • The war on gold took off in the 1960s, when trust in the dollar started to fray.
  • How was the gold price managed?
    • Through the creation of the London Gold Pool in 1961, which aimed to keep the gold price low.
  • The IMF’s role in the war on gold
    • The IMF created international reserve assets called special drawing rights (SDRs). Since 1975, the Americans have worked with the IMF time and again to try to control the gold market by unloading tons of gold onto the market.
  • How did the IMF amass its gold reserves?
    • The IMF received most of its gold from member countries, which had to pay 25% of their funding quotas to the IMF in physical bullion. This was because gold played a central role in the international monetary system until the collapse of the Bretton Woods agreements in 1971. Seven years later, the IMF fundamentally changed the role of gold in the international monetary system by eliminating its use as the common denominator of the post-World War II exchange rate system and ending its obligatory use in transactions between the IMF and its member countries.
  • Are there more cases of double counting in the US?
    • Possibly yes.
  • How often have US gold reserves in Fort Knox been audited?
    • Never.
  • Did the game plan change after 1980?
    • The new trick in town is ‘expectation management’. Time and again, it has been communicated through press communiqués that the Fed or the IMF was considering selling gold, and time and again the gold price has fallen as a result.
  • Didn’t the British help by unloading gold in 1999?
    • Yes. Between 1999 and 2002, the UK embarked on an aggressive sale of its gold reserves, when gold prices were at their lowest in 20 years. The UK sold almost 400 tons of gold over 17 auctions in just three years. Britain was hand in glove with the US in the war on gold.
  • Further evidence of systematic gold price suppression
    • The central bank of Australia confirmed in 2003 that its gold reserves are mainly used to control the price of gold.
  • Recent methods to manipulate the gold price
    • Futures market trading via electronic means has given rise to sharp volatility in gold. Especially since the start of the credit crisis, market participants have now and again bombarded precious metal futures markets with a tsunami of sell orders. The price of gold was forced down by $200 during a two-day raid in April 2013, and silver was sent 35% lower in three days in September 2011.
    • Inducing volatility is another method that has been used in the fight against gold: speculator-driven markets will only drive away long-term investors.
  • More evidence of manipulation of precious metal markets
    • The author provides email transcripts to show that silver markets are rigged big time
  • Investigations into manipulation in precious metals markets
    • The CFTC seems to be turning a blind eye to the rampant manipulation in the gold and silver markets.
  • Do regulators now want Wall Street to stop trading commodities?
    • Yes.
  • Why has this gold manipulation not been reported on before?
    • It has been reported before, but the mainstream financial media have so far neglected to pick up the story.



  • Why do you expect a Big Reset of the global financial system?
    • Two major problems in the world’s financial system have to be addressed: 1) the demise of the US dollar as the world’s reserve currency, and 2) the almost uncontrollable growth in debts and in central banks’ balance sheets. On both of these issues, central banks have only been buying time since the start of the credit crisis in 2007. Insiders predict that much more radical action will be needed before 2020.
  • How can the international monetary system be changed?
    • No easy answers.
  • Since when have people started planning a new international monetary system?
    • Soon after the worldwide crash of financial markets in 2008, the IMF and others began brainstorming about a possible next phase of our international financial system. In 2010, the IMF published a report that looked into the possibility of a financial system without a dollar anchor. As if to underline its intention to reform the international monetary system, in 2012 the IMF added the Australian and Canadian dollars to its list of official reserve currencies.
  • Will gold be part of a reset?
    • While most experts believe there will be no return to a full gold standard, gold will probably play a much greater role in the next phase of the financial system.
  • Will SDRs become the new world currency?
    • Likely. According to some experts, the IMF needs at least five more years to prepare the international monetary system for the introduction of SDRs.
  • Some other reset scenarios
    • A gold-backed dollar could be introduced in the US.
    • The US still values gold at the historical price of just $42 per ounce. This is unusual because the ECB and many other central banks value their gold reserves at market prices. The US government hopes to spread the message that gold is a metal with little value, while the dollar is the store of value of choice.
  • What is China’s master plan?
    • China has been hoarding gold in order to safeguard the country’s economic stability and to strengthen its defense against ‘external risks’, which could be translated as a collapse of the dollar, the euro or even the global financial system.
  • How large are China’s gold holdings compared with the West?
    • The Chinese want to increase their gold reserves ‘in the shortest time’ possible to at least 6,000 tonnes. That amount would put the Chinese on a par with the US and Europe on a gold-to-GDP ratio.
  • Does China understand the US war on gold?
    • Yes, judging from articles by Chinese officials. Since China accumulated over $1 trillion of US Treasuries between 2000 and 2010, a dollar devaluation would be very negative for China.
  • Why is a monetary reset desired by China?
    • They want to de-Americanize the world.
  • The Russian point of view
    • Similar to China
  • Could the US confiscate foreign gold reserves stored in New York to introduce a new gold standard?
    • It is a possibility
  • Do we need to fear more financial repression?
    • History has shown that the closer we come to a major reset, the more likely it is that forms of financial repression will be activated. The reset of the Cyprus banking system demonstrated that very few of those affected were prepared in advance.


The author concludes by saying that a reset is imminent:

We Westerners concluded that our capitalist system, based on free markets, was a superior one because communist countries ‘switched over’ to our side. Well, at the end of 2008 our system also ran aground. However, like the communist leaders in the early 90s, we pretend all is still fine. Authorities are now turning to precisely those measures which we so despised in the communist system. Economic figures are increasingly being manipulated and colored to paint a rosier picture. Good news is often the result of propaganda and the work of spin doctors. The economy and its financial markets are increasingly centrally controlled. Free markets are disappearing more and more. Interest rates are manipulated, gold wars are fought, the ‘plunge protection team’ intervenes almost openly in American stock markets, and there is QE as far as the eye can see. We have entered an era of virtual global state capitalism. China is a perfect example, as are Russia, the US, the EU, the Arab world, the UK and Japan. The economies of West and East are now intertwined in a way never seen before.

Since the fall of Lehman, central bankers have been desperately trying to avoid a collapse of the financial system. Governments and central bankers know the whole economic system will fall apart once they stop printing money. This leads to the only logical conclusion: we are stuck with infinite QE. As more and more paper assets are printed, more and more ‘smart money’ will flee towards asset classes that cannot be printed.

For the very first time in history, a financial and monetary crisis has emerged which is so severe that it has the capacity to end in an all-encompassing distrust of paper assets. This could even lead to an unprecedented wave of hyperinflation in which prices explode, debts melt away, the economy collapses and banks will close.

Central bankers are therefore very much aware that it is essential to come up with a reset plan before this occurs. Authorities will do everything possible to modify the financial system in order to avoid another 2008-style collapse. In my opinion it is not a matter of if, but only when, they will introduce their reset plans.