December 2013


image

 

The book is a take on how we look at the world and brand something as an advantage and something else as a limitation. The things that we regard as advantages sometimes turn out to be limitations and vice versa. There are three parts to the book and each part has three stories.

Part I: The advantages of disadvantages (and the disadvantage of advantages).

The three stories in this part of the book illustrate that we are often misled about the nature of advantage. We think of things as helpful that actually aren’t and think of other things as unhelpful that in reality leave us stronger and wiser.

  • Vivek Ranadive : A computer nerd who trains a basketball team (David) to play against strong teams that play conventional basketball (Goliath). By combining unusual strategies that sometimes only suit the disadvantaged, his team goes on to win many matches against supposedly strong teams.
  • Shepaug Valley School : A story that illustrates the inverted-U principle: something that we think of as an advantage becomes, beyond a point, a disadvantage. I think this is applicable in quite a number of situations. More money is good for success, but beyond a point it is actually detrimental to success. In a similar way, shrinking large class sizes is good, but beyond a point it is in fact negatively correlated with academic achievement. Being born into a middle-class family can actually be better than being born with a silver spoon. A similar sentiment is echoed in Geet Sethi’s book, “Success Vs. Joy”, where he says that his middle-class status was actually one of the reasons that motivated him to practice more than the usual hours. Had he been born into a super-rich family, he says, he would not have become a world-class billiards player.
  • Caroline Sacks: This is the story of a girl interested in science who ends up choosing a prestigious institute over a second-rung institute, thus choosing to be a small fish in an ocean rather than a big fish in a pond. What was the consequence? She drops out of science and decides to graduate in a different field. The book gives the reason as “relative deprivation”. Gladwell cites a raft of research that says it’s how smart you feel relative to others in your classroom that matters. The bottom quartile/decile of top universities has been found to be less productive than the top quartile/decile of a fairly average university. Going by pure academic achievement and IQ levels, this should not be the case. But it is. I think this small fish vs. big fish idea is valuable to keep in mind while making many other choices in our lives – Where do you go to school? Where do you choose to work? What do you choose to work on? While taking these decisions we inevitably weigh the advantages and disadvantages of various options, and in that process what we think of as an advantage might actually be a disadvantage and vice versa. Let’s say a good reason to work in a well-known company is the money, the infrastructure, the prestige, etc. that come with it, and one might think it a great advantage to keep working in that company as a small fish. It might be beneficial for certain types of people, but for some, a smaller firm with constraints on money and infrastructure and loads of uncertainty can actually spur them into doing creative stuff.

Part II: The theory of desirable difficulty

The three stories in this part of the book illustrate that we are often misled by what we perceive as disadvantages.

  • David Boies (successful trial lawyer), Ingvar Kamprad (IKEA), Gary Cohn (Goldman Sachs) : All these people have one common thread: they were dyslexic. Gladwell talks about “compensation learning”, a technique adopted by those who cannot pursue “capitalization learning” (working at what one is good at and keeping at it). All the stories illustrate one message: inherent disadvantages in the traditional education/career setting can actually force people to develop skills that might otherwise have lain dormant.
  • Emil Jay Freireich : A story of a man who had a troubled childhood, never grew up in an environment of empathy, and turned out to be the inventor of a successful cure for a certain type of cancer. When we see an orphan or a child who has lost a parent, we might think they are disadvantaged. Gladwell argues that such people are what he calls “remote misses”. The fact that a person has lost a parent and yet survived brings a sense of accomplishment, which in turn feeds into a growing cycle of courage and self-confidence. There are a number of research findings that say many successful people lost one or both parents in childhood. Something that is an obvious disadvantage for a happy childhood can actually be beneficial.
  • Wyatt Walker : A story that illustrates that the disadvantaged sometimes have nothing to lose, go all out, and pull off extremely improbable outcomes.

Part III – The limits of power

The three stories in this part of the book illustrate the inherent weaknesses of power. The same inverted-U principle applies to power too. The British army assumed that enough force would make the insurgency go away; however, power, beyond a certain point, goes from being effective to ineffective. There’s a story of Joanne Jaffe, a police officer who transforms Brownsville using not more power but less power. The story of Mike Reynolds shows that an aggrieved parent can marshal forces to have an entire law enacted, in this case the “Three Strikes” law, which looks good on paper but fails completely over the long run. As a parallel story, Gladwell cites the example of another aggrieved parent, Wilma Derksen, who chooses a completely different path, with a far happier outcome.

image Takeaway :

If you are a Gladwell fan and like his way of writing, this book is a nice treat peppered with 9 stories that revolve around the theme – “There are things that we might think are helpful but actually aren’t and there are things that we might think are unhelpful but in reality leave us stronger and wiser”.

image

The author is a CS professor at SUNY, Stony Brook. This book recounts his experience of building a mathematical system to bet on the outcomes of what is considered the fastest ball game in the world, “Jai alai”. In the English vernacular this is sometimes spelled as it sounds, that is, “hi-li”. The book recounts the history of the game and how it made its way to the US from Spain and France. However, the focus of the book is on using mathematical modeling and computers to analyze the game and design a betting system. The game itself is designed in such a way that it is a textbook case for mathematical analysis. The players enter the competition based on a FIFO queue and the first player to score 7 points is the winner. It takes hardly a few minutes to understand the game from this wiki.
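
To get a feel for the kind of modeling the book does, here is a rough Monte Carlo sketch of the queue-based scoring, assuming equally skilled players; the point-doubling rule after the first round of matchups is my reading of the "Spectacular Seven" format, and the book's own model is considerably richer (it accounts for player skill, for instance).

```python
import random
from collections import Counter, deque

def simulate_game(n_players=8, target=7, double_after=7, rng=random):
    """One game between equally skilled players: the winner stays on court,
    the loser goes to the back of the queue. Each win is worth 1 point for
    the first `double_after` matchups and 2 points afterwards (my reading
    of the 'Spectacular Seven' format)."""
    queue = deque(range(1, n_players + 1))        # post positions 1..8
    points = Counter()
    on_court = [queue.popleft(), queue.popleft()]
    matchup = 0
    while True:
        matchup += 1
        value = 1 if matchup <= double_after else 2
        winner = rng.choice(on_court)             # equal-skill assumption
        loser = on_court[0] if winner == on_court[1] else on_court[1]
        points[winner] += value
        if points[winner] >= target:
            return winner
        queue.append(loser)
        on_court = [winner, queue.popleft()]

def win_probabilities(n_games=100_000, seed=42):
    rng = random.Random(seed)
    wins = Counter(simulate_game(rng=rng) for _ in range(n_games))
    return {pos: wins[pos] / n_games for pos in sorted(wins)}

if __name__ == "__main__":
    for pos, p in win_probabilities().items():
        print(f"post position {pos}: P(win) ~ {p:.3f}")
```

Even this toy version shows that the win probabilities are far from uniform across post positions, which is precisely the bias the author's betting questions exploit.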

With the help of some of his grad students, the author works on the following questions :

  • Given that a player starts in a specific position, what is the probability that he ends up with a Win/Place/Show ?
  • What is the best combination of numbers that has the highest probability of winning a Trifecta ?
  • How does one build a statistical model to evaluate the relative skills of the players ?
  • Given that two players A and B have probabilities of winning pa and pb, how does one construct a model that evaluates the probability of A winning over B ?
  • How does one create a payoff model for the various bets that are allowed in the game ?
  • How do you deal with missing / corrupt data ?
  • Given 1) the payoffs of various bets, 2) the probabilities of a player winning from a specific position, and 3) the relative skillsets, how does one combine all of these elements to create a betting strategy ?

I have just outlined a few of the questions from the entire book. There are numerous side discussions that make the book a very interesting read. Here is one of the many examples from the book that I found interesting :

Almost every person who learns to do simulation comes across the Linear congruential generator (LCG), one of the basic number theory techniques to generate pseudo random numbers. It has the following recursion form :

X(k+1) = (a * X(k) + c) mod n

By choosing appropriate values for a, c and n, one can generate pseudo random numbers.

The book connects the above recursive form to a roulette wheel :

Why do casinos and their patrons trust that roulette wheels generate random numbers? Why can’t the fellow in charge of rolling the ball learn to throw it so it always lands in the double-zero slot? The reason is that the ball always travels a very long path around the edge of the wheel before falling, but the final slot depends upon the exact length of the entire path. Even a very slight difference in initial ball speed means the ball will land in a completely different slot.

So how can we exploit this idea to generate pseudorandom numbers? A big number (corresponding to the circumference of the wheel) times a big number (the number of trips made around the wheel before the ball comes to rest) yields a very big number (the total distance that the ball travels). Adding this distance to the starting point (the release point of the ball) determines exactly where the ball will end up. Taking the remainder of this total with respect to the wheel circumference determines the final position of the ball by subtracting all the loops made around the wheel by the ball.

The above analogy makes the appearance of the mod operator in the LCG equation obvious.
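
To make the recursion concrete, here is a minimal LCG in Python; the constants are the well-known ones used by glibc's rand() and are only illustrative, since the book does not prescribe particular values.

```python
def lcg(seed, a=1103515245, c=12345, n=2**31):
    """Linear congruential generator: X(k+1) = (a * X(k) + c) mod n.
    The constants are the ones glibc's rand() uses; treat them as
    placeholders rather than anything the book recommends."""
    x = seed
    while True:
        x = (a * x + c) % n
        yield x / n                  # scale to [0, 1)

gen = lcg(seed=2013)
print([round(next(gen), 4) for _ in range(5)])
```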

One does not need to know much about Jai-alai to appreciate the modeling aspects of the game and statistical techniques mentioned in the book. In fact this book is a classic story of how one goes about modeling a real life scenario and profiting from it.

image

 

The book broadly deals with two strategies, “mean reversion” and “momentum”. These strategies cover six chapters of the book, four on mean reversion and two on momentum. Besides these six chapters, there is one chapter on backtesting strategies and another on risk management.

Given the importance of backtesting in any strategy, the first chapter starts off with some of the pitfalls of backtesting. It also gives three general methods for backtesting any strategy. The first method is the usual frequentist test of the null hypothesis that the returns from the strategy are zero. The second method is to simulate various return paths and check the number of times the strategy beats the returns based on historical data. The third method involves randomizing the longs and shorts and seeing whether the strategy still makes sense. One can also think of a fourth method where you resample the returns series to generate price paths and then check your strategy on them. The author also cautions about the relevance of backtesting and says that any regime shift will make all the backtesting irrelevant, and hence the strategy, despite looking good on paper, is going to fall flat. The author ends with a brief account of the various software available to an algo trader. Even though some of the software mentioned could help a non-programming trader, I fail to see what value such software can add. A basic requirement for anybody thinking of algo trading is a working knowledge of at least one or two programming languages. UI-driven interfaces can only supplement the code, not replace it.
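
As a rough illustration of the first and third methods (the frequentist test of zero mean returns and the long/short randomization test), here is a sketch in Python; the function names and data are mine, not the book's.

```python
import numpy as np
from scipy import stats

def ttest_zero_mean(strategy_returns):
    """Method 1: frequentist test of the null 'mean strategy return is zero'."""
    t, p = stats.ttest_1samp(strategy_returns, popmean=0.0)
    return t, p

def randomization_pvalue(positions, asset_returns, n_shuffles=10_000, seed=0):
    """Method 3 (sketch): shuffle the long/short positions and count how often
    a random positioning does at least as well as the actual one."""
    rng = np.random.default_rng(seed)
    actual = np.mean(positions * asset_returns)
    better = sum(
        np.mean(rng.permutation(positions) * asset_returns) >= actual
        for _ in range(n_shuffles)
    )
    return better / n_shuffles

# toy usage with made-up data
rng = np.random.default_rng(1)
rets = rng.normal(0.0005, 0.01, 500)     # hypothetical daily asset returns
pos = np.sign(rng.normal(size=500))      # hypothetical daily +1/-1 positions
print(ttest_zero_mean(pos * rets))
print(randomization_pvalue(pos, rets))
```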

The second chapter goes into some basic stat/math skills needed to test the stationarity of a series, check for cointegration, etc. Four methods are used to check for the stationarity of a time series: the Dickey-Fuller test, the Hurst exponent, the variance ratio test, and the half-life of an AR(1) process. Well, one can ask, what’s the point of all these tests – unit root tests, variance ratio tests, etc.? Why not just backtest and decide whether to go ahead with the strategy or not? One point to consider is that these tests are far more statistically significant than what your strategy results show. The strategy by its very nature will have limited data points, whereas the above tests use almost all the data points. Besides that, if a series, or a set of series, shows stationarity, then one can at least put in the effort of coming up with a strategy. What’s the point in testing strategy after strategy when the series itself is close to random ?
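
Two of the four checks are easy to sketch with statsmodels: the (augmented) Dickey-Fuller test and the half-life from an AR(1)-style fit. The code below is a minimal version of that idea, not the book's MATLAB code.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

def adf_report(series):
    """Augmented Dickey-Fuller test: a small p-value suggests the series is
    stationary (mean reverting) rather than a random walk."""
    stat, pvalue, *_ = adfuller(series)
    return stat, pvalue

def half_life(series):
    """Half-life of mean reversion from an AR(1)-style fit:
    dy_t = c + lam * y_{t-1} + e_t, with half-life = -ln(2) / lam."""
    y = np.asarray(series, dtype=float)
    dy = np.diff(y)
    X = sm.add_constant(y[:-1])
    lam = sm.OLS(dy, X).fit().params[1]
    return -np.log(2) / lam

# toy usage: a simulated mean-reverting AR(1) series
rng = np.random.default_rng(0)
x = np.zeros(1000)
for t in range(1, 1000):
    x[t] = 0.95 * x[t - 1] + rng.normal()
print(adf_report(x))
print(half_life(x))      # roughly -ln(2)/ln(0.95), i.e. about 13-14 periods
```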

For finding cointegrated assets, the chapter discusses the two-step Engle-Granger method and the Johansen procedure. The former is easy to understand and implement but is plagued by some problems that are explained in this chapter. To understand the Johansen procedure, one needs to sweat it out by reading the math from some other book. This chapter is more of “here is how the Johansen function can be used to find cointegrating relations”. If your concern is merely to use the method, then you don’t need any further reference. But if you are like me and want to understand what the test does, I guess there is no option but to spend time understanding the math behind it.
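
A bare-bones version of the two-step Engle-Granger procedure looks something like the following; statsmodels also ships a ready-made coint() function, but spelling out the two steps makes its dependence on the choice of dependent variable visible.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

def engle_granger(y, x):
    """Two-step Engle-Granger sketch: (1) regress y on x to estimate the hedge
    ratio, (2) ADF-test the residual spread for stationarity. Note that the
    answer depends on which series is treated as dependent, one of the
    drawbacks the chapter mentions."""
    X = sm.add_constant(np.asarray(x, dtype=float))
    fit = sm.OLS(np.asarray(y, dtype=float), X).fit()
    hedge_ratio = fit.params[1]
    adf_stat, pvalue, *_ = adfuller(fit.resid)
    return hedge_ratio, adf_stat, pvalue

# toy usage: x is a random walk, y is cointegrated with it by construction
rng = np.random.default_rng(7)
x = np.cumsum(rng.normal(size=2000))
y = 1.5 * x + rng.normal(scale=2.0, size=2000)
print(engle_granger(y, x))
```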

The third chapter begins with a very interesting take on the input to a cointegration exercise. What can the input be? Raw prices, log prices, or the ratio of two stocks? The first variant of pair trading that one typically sees is the ratio trade: compute the ratio between two stocks and trade the spread. The stocks might not be cointegrating, but the spread might be mean reverting over a shorter time frame. In fact this was a recurrent theme in almost all the pair-trading reports that I used to see from brokerage houses; these were pushed under the title of cointegrated pairs. I always used to wonder why the ratio of two stocks should be stationary. Equivalently, why should the hedge ratio between two stocks be 1? This chapter more or less nails that question down by saying that ratio-based trading is convenient and works if the spread is stationary over a short time frame. So, given a choice between cointegrating-vector-based trading and ratio-based trading, which one should be selected? The author says he has no clear answer, except that his backtesting shows it is better to take a cointegration-based hedge than to rely on ratio-based trades. The chapter introduces Bollinger band and Kalman filter based strategies for getting a higher return and Sharpe ratio compared to naive strategies.
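
Here is a minimal sketch of a Bollinger-band style rule on a spread (enter at roughly two rolling standard deviations, exit when the z-score reverts through zero); the lookback and thresholds are placeholders of my own, not the book's choices.

```python
import numpy as np
import pandas as pd

def bollinger_positions(spread, lookback=20, entry_z=2.0, exit_z=0.0):
    """Bollinger-band style mean reversion on a spread: short when the rolling
    z-score rises above +entry_z, long when it falls below -entry_z, flat when
    it reverts back through exit_z."""
    s = pd.Series(np.asarray(spread, dtype=float))
    z = (s - s.rolling(lookback).mean()) / s.rolling(lookback).std()
    pos = np.zeros(len(s))
    for t in range(1, len(s)):
        if z.iloc[t] > entry_z:
            pos[t] = -1
        elif z.iloc[t] < -entry_z:
            pos[t] = 1
        elif (pos[t - 1] == -1 and z.iloc[t] < exit_z) or \
             (pos[t - 1] == 1 and z.iloc[t] > -exit_z):
            pos[t] = 0
        else:
            pos[t] = pos[t - 1]      # otherwise hold the previous position
    return pd.Series(pos)

# usage idea: spread = price_A - hedge_ratio * price_B from the cointegration step
```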

The fourth chapter starts off with the author discussing the difficulties of pursuing mean reverting strategies that involve stocks. Subsequently, mean reverting strategies for ETFs are explored. One of the first strategies explored in the book is the Buy-on-Gap strategy. The author says he had used this strategy for his fund as well as his personal account and it made money till 2009. The chapter explores arbitrage strategies between an ETF and its constituent stocks. MATLAB results for the strategy are given and the author provides his interesting commentary on the results. The chapter ends with an exploration of cross-sectional mean reversion, where stocks are bought or shorted based on their movement relative to a sector index or a market index.
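
A common way to implement the cross-sectional idea, and roughly what I take the chapter to mean, is to weight each stock by the negative of its deviation from the cross-sectional mean return; treat the exact weighting below as an assumption rather than the book's formula.

```python
import numpy as np

def cross_sectional_weights(daily_returns):
    """Cross-sectional mean reversion sketch: short the relative winners and
    buy the relative losers, with weights proportional to each stock's
    deviation from the cross-sectional mean return (dollar-neutral,
    gross exposure scaled to 1)."""
    r = np.asarray(daily_returns, dtype=float)
    dev = r - r.mean()
    return -dev / np.abs(dev).sum()

# toy usage: five stocks' returns on one day
print(cross_sectional_weights([0.02, -0.01, 0.005, -0.03, 0.015]))
```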

The fifth chapter deals with some specific aspects of pairs trading in currencies and futures. After reading this section I had a feeling that it is most important to identify the correct instruments for setting up a cointegrating equation; it is easy to make a mistake while backtesting the strategy. The strategies explored are trading currency cross rates and trading futures calendar spreads. The basic message that the author tries to convey is the decomposition of returns into spot returns and roll returns. Often a strategy performs well over a long backtesting period because of roll returns, so one must be careful in attributing the performance of a strategy to the respective return types. The author tests out various futures intermarket spreads and shows that none of the spreads form a stationary series. The chapter has a section where VIX futures and E-mini S&P 500 futures contracts are used to create a cointegrating pair. At least on paper, the strategy looks promising.

The sixth and seventh chapters deal with interday and intraday momentum strategies. The author says that interday momentum strategies have been performing badly since the crisis and that the entire action nowadays is in the intraday game. The last chapter deals with the Kelly criterion and other risk management techniques. In fact the author covers the Kelly criterion quite extensively in his previous book. If you are new to the Kelly criterion, it is worthwhile to read the paper by Thorp and understand its various aspects. The paper is so well organized that you will have many aha moments along the way. After going through the paper, I could understand most of the material in this chapter. The book ends by discussing stop losses and CPPI techniques for dealing with the problems of applying the Kelly criterion in practice.
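
For reference, the two Kelly formulas that come up in Thorp's paper and in this chapter are easy to state in code: the classic fraction for a binary bet and the mean-over-variance leverage for roughly Gaussian returns. The sample numbers below are made up.

```python
import numpy as np

def kelly_binary(p_win, b):
    """Classic Kelly fraction for a bet paying b-to-1 with win probability p:
    f* = (b*p - (1 - p)) / b."""
    return (b * p_win - (1 - p_win)) / b

def kelly_continuous(excess_returns):
    """Kelly leverage under the Gaussian approximation used in Thorp's paper:
    f* = mean / variance. In practice people often trade a fraction of this
    ('half Kelly') to soften the drawdowns."""
    r = np.asarray(excess_returns, dtype=float)
    return r.mean() / r.var()

print(kelly_binary(p_win=0.55, b=1.0))                   # 0.10 of the bankroll
rng = np.random.default_rng(3)
print(kelly_continuous(rng.normal(0.0004, 0.01, 2500)))  # leverage, toy data
```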

All the code is written in MATLAB. Thanks to the author’s site, I could find most of the datasets that are used in the book. If you are a non-MATLAB user like me, you can easily go through the MATLAB code and translate it into whatever language you are comfortable with, to verify the strategy results mentioned in the book.

 

image Takeaway

The book discusses “mean reverting strategies” and “momentum strategies” at length. This book helped me tie up a couple of loose ends in my thought process relating to mean reversion strategies. The practical insights into the Kelly criterion and risk management make this book a great resource for risk managers and prop traders.

image

The following is a deck that I have prepared with some of the main points from the various sections of the book.

 

The above deck in pdf format:

 

image Takeaway :

This book was published in 2003. In the last 10 years, US markets have changed dramatically and so have other markets all over the world. The dominant form of trading is via the electronic order book. Open outcry markets have been almost completely taken over by screen-based trading. Specialists’ roles have become less prominent, with HFT players acting as middlemen.

Given all these rapid developments, this book still does an awesome job of piecing together the various elements of market microstructure. The highlight of this book is that it introduces various stylized traders and analyzes market microstructure mechanisms through the eyes of these traders. In this way, the text provides superb insight into the various interactions of market participants that ultimately define a “market” and its “microstructure”.

image

Introduction

The authors begin their introductory chapter by stating that gone are the days when the primary purpose of the stock market was capital allocation. Instead, they say,

The primary purpose of the stock exchanges has devolved to catering to a class of highly profitable market participants called high frequency traders, or HFTs, who are interested only in hyper-short term trading, investors, be damned

Indeed if one looks at some of the basic numbers that drive volumes, it is clear that HFT firms have become exchanges’ biggest customers.

  • HFTs account for 50–75% of the volume traded on the exchanges each day and a substantial portion of the stock exchanges’ profits.
  • While smaller HFTs churn hundreds of millions of shares per day, a few of the larger HFTs each account for more than 10% of any given day’s trading volume.
  • HFTs earn anywhere from $8 billion to as much as $21 billion a year, which comes at the expense of long-term investors

Gone are the days when human dealers and specialists were the people involved in the price discovery of a stock. Today the pricing of a stock is largely the result of high frequency, algorithmic, automated traders.

The authors say that one of the main reasons for writing this book is to expose the HFTs and the way they are colluding with the exchanges to wreck investor confidence. They are worried that the very foundation of markets – investor confidence – is at stake.

A retail investor a few decades ago was mainly concerned with the bid-ask spread and execution risk (in the case of a limit order). However, in today’s world the same retail investor faces a much bigger and graver problem: his order is being sold to hyper-efficient HFT firms. These HFT firms then either front run the order or use strategies that generate profits at the expense of the retail investor. The common argument favoring HFT players is that the bid-ask spread has come down. However, this is the case only for the 5% of stocks that are actively traded. For the remaining 95%, the spreads have become wider.

Chapter 1 – Broken Markets

The chapter starts off by saying that the stock market used to be a single unified place for investors. Now it is a connected mess of more than 50 exchanges, dark pools and alternative trading venues. The market is like a shattered vase held together by weak glue – HFT firms.

The stock market has changed dramatically in the last 15 years. A specialist executing a trade at the NYSE and an electronic dealer at NASDAQ used to be primarily responsible for final execution. With technological advances, one class of participants has grown extremely big in size: HFT players. When did HFT start?

The authors trace HFT back to Instinet, the world’s first electronic brokerage firm: order driven, anonymous, and with no specialists to facilitate order flow. Instinet began courting a new type of customer who promised large volumes in exchange for Instinet’s top-of-book and depth-of-book data. What these firms did was use quote matching strategies to rip off uninformed block traders. They fed the data into algorithms that gave the probability of seeing an uninformed vs. informed trade. Based on these probabilities, these firms either front ran the block orders or provided liquidity. These quote matching strategies made money and generated massive volume at Instinet. But what made HFT explode was two pieces of regulation:

  • Regulation ATS – This mandated that all orders go to a public quote. With this, the automated traders had the entire ocean in front of them for their predatory strategies
  • Reg NMS – This created the concept of the NBBO (National Best Bid and Offer), which made all the exchanges route orders to the venue quoting the NBBO.

With these two regulations in place, “speed” became the key differentiating factor amongst the various exchanges. These changes turned the market from an investor-focused mechanism that welcomed traders and investors to a sub-second, trader-focused mechanism.

The brokers who enticed retail investors into trading at low commissions actually made money by selling those orders to HFTs, who in turn made money by trading around the order flow. Do HFTs act as specialists? Given the recent crash, it definitely looks like a NO. These firms are the first ones to demand liquidity instead of offering it in times of stress. Even worse, they might disappear and stop trading for a few days, withdrawing liquidity altogether (which might make the fragmented market even more fragile).

Chapter 2 – The curtain pulled back on HFT

This chapter talks about how HFTs’ predatory practices became known to a much wider audience. The defining characteristics of an HFT firm are:

  • Large technological expenditures in hardware, software and data
  • Latency sensitivity (order generation and execution taking place in sub-second speeds)
  • High quantities of orders, each small in size
  • Short holding periods, measured in seconds versus hours, days, or longer
  • Starts and ends each day with virtually no net positions
  • Little human intervention

The privatization of exchanges and their hunger for volume was perfect for HFT players.

High frequency traders need high computing power and ultralow latency (high speed). They get it by renting server space from the stock exchanges. They also need access to big amounts of data, which they analyze and run through algorithmic trading programs to detect patterns in the markets. They get the data from the stock exchanges, too. Then they trade, capitalizing on those patterns. And, in many cases, the exchanges pay them to trade.

The chapter talks about four strategies that HFT firms use to make insane amounts of money:

  • Market making / Rebate arbitrage
    • Exchanges reward HFT firms with rebates for providing liquidity
    • Even if the firms buy and sell at the same price, the exchange rebates made Designated market makers very successful
    • Exchanges allowed parity, which enabled them to buy alongside other customer orders
  • Statistical Arbitrage
    • With explosion of ETFs, stat arb generated tremendous volumes for the exchange and huge profits for HFT players
  • Latency Arbitrage
    • The two-speed market – the slow consolidated SIP feed vs. the direct data feeds used to compute the NBBO – clearly made the information asymmetric.
  • Momentum Ignition
    • Pump up the momentum artificially

It ends with a list of events (the arrest of an HFT programmer from Goldman), media articles and popular TV shows that made HFTs’ predatory practices known to the vast majority of investors.

 

Chapter 3 – Web of Chaos

This chapter traces the history of the US stock market and recounts the developments that led to a fragmented market. It begins by explaining a strange phenomenon that the authors witnessed as agency traders: whenever they submitted an order to the exchange, three things were seen in the order book

  • Quote flickering – bids or offers would mysteriously rise or disappear
  • Penny jumping – somebody already had this information and was using it against the order
  • More impact on prices

As agency brokers, they were stunned to see that the so-called healthy market of 50 competing exchanges was actually making them worse off. What they realized in their investigation was that the market had become one big conflicted, for-profit web of more than 50 trading destinations. What’s the impact on a retail trader or an agency trader? The authors say :

  • If you are a retail investor, there’s a reason why your online brokerage firm charges you only $8, or even nothing, for your orders. It’s because they sell your orders to HFT firms that make money off of you.
  • If you are a professional investor, there’s a reason why your brokerage firm charges you only a half a penny a share if you use a volume weighted average price (VWAP) or percentage of volume (POV) algorithm to execute your trades. (Algos slice a large order into hundreds of smaller orders and feed them into the market.) It’s because your orders are fed to proprietary trading engines that make money off of you. Their algo figures out how your algo works, forcing you to pay more for buys and receive less for sells.
  • And if you are an agency broker, like we are, there’s a reason why some big brokerage firm salesmen offer their VWAP algo for free! It’s because their firm has a way to make money by disadvantaging your orders all day long.

The authors strongly believe that the 50 destinations on which orders are traded, and the attendant complexity, exist for two reasons

  • To maximize your interaction with HFT in a way that disadvantages you
  • To maximize HFTs’ ability to collect exchange rebates

The chapter gives the history of NYSE, NASDAQ, SOES, Instinet, Archipelago, Direct Edge, BATS and recounts the wave of consolidation between various exchanges and ECNs. Today the US has 13 exchanges, 10 of which are owned by the Big Four

  • NYSE, NYSE Amex, and NYSE Arca (3)
  • NASDAQ, NASDAQ PSX, and NASDAQ BX (3)
  • BATS and BATS Y (2)
  • EDGX and EDGA (Direct Edge) (2)

Besides the above 10 exchanges, there has been a proliferation of dark pools and alternative trading systems. The main reason for the mushrooming of these avenues is that it benefits HFT in four critical ways

  1. Exchange Arbitrage – Guys who have speed and technology on their side can arb away the differences between various exchanges. This argument actually has a flip side to it: HFTs want more destinations, and hence the market is going to become more fragmented, with new trading avenues catering to these players.
  2. Rebate Arbitrage – The maker-taker pricing models used by exchanges are perfect for HFTs, as they have become masters at jumping the queue.
  3. Fragmentation – Each exchange sells tools and data feeds that benefit the exchanges and, at the same time, enable HFTs to jump the queue.
  4. Dark pools serve as a conduit through which investor orders can be internalized by brokerage firms.

The takeaway from the chapter is that the enormous competition amongst trading avenues has not led to “executing a trade efficiently and in a cost-effective way”, but to “trading around investor orders”.

 

Chapter 4 – Regulatory Purgatory

This chapter starts off by mentioning a research paper published in 1994 which showed that NASDAQ market makers were avoiding odd-eighth quotes. This research shook US investor confidence, as it exposed the fact that one of the biggest stock markets in the world was conducting business much like a mafia. The SEC was put on the defensive and had to act. What then followed was a series of regulations that have completely changed the way instruments are traded in the US. The chapter lists some of the key aspects of these regulations:

  1. Order Handling Rules (Through these rules, the SEC shut down the private market that brokers and institutions were using to trade)
    1. The display rule made market makers and specialists publicly display the limit orders they receive from customers when those orders are better than the market maker’s or the specialist’s quote.
    2. The quote rule made market makers and specialists publish the best prices at which they are willing to trade. No longer could a market maker or specialist hide a quote on a private trading system
  2. Regulation ATS – Forced all ECNs to display all their orders to the public
    1. Seeing the orders was a major win for HFT traders. They could model order books and predict prices with much greater certainty
    2. Up until this time, trading algos, which sliced up block orders and fed them piecemeal into the market, were primarily used by sophisticated quantitative traders. With Reg ATS and more quotes being displayed, algos were about to be used more widely by institutional investors.
  3. Rule 390 annulled – This rule had prevented member firms from trading NYSE-listed stocks away from the trading floor
  4. 1997–2000 : Decimalization
    1. Quote size reduced
    2. The short-sale uptick rule became useless because it took only a penny to move a stock back into compliance
    3. This wiped out many small and mid-cap market makers. That role was filled by HFTs, who have none of the affirmative and negative obligations that specialists and NASDAQ market makers had
  5. Demutualization of NASDAQ in 2000 and NYSE in 2006 (a change from net trading to transaction-based models)
    1. Suddenly it became a volume game
    2. Attracting traders who bring volume became the priority, and HFT traders became priority clients
  6. The SEC published Reg NMS in Feb 2004, its most devastating regulation. Out of the four parts of the proposal, the trade-through proposal met with heavy criticism and the SEC gave in, modifying the trade-through proposal into the Order Protection Rule. The Order Protection Rule would protect only quotations that were at the top of the book and electronically accessible. If the NYSE wanted to be part of the NBBO, it would have to change from a “slow” market to a “fast” market

These 6 pieces of regulation over a period of 10 years have changed the face of trading forever. From 1997 to 2007, the SEC had fully changed how the equity market functioned. Volumes in listed stocks exploded as competing market centers began fragmenting liquidity. Because the NYSE was becoming obsolete, so was the block trade. Average trade sizes plummeted, as orders began to get chopped up by institutional traders seeking to cloak their larger orders from fast HFT traders. Spreads did shrink, but so did the amount of displayed liquidity in the best bid and offer. The new equity market had arrived, and it was about to wreak havoc on every investor.

 

Chapter 5 – Regulatory Hangover

After the flash order controversy became known to a wider audience in the summer of 2009, the SEC came under fire and issued two proposals, one on stopping flash orders and the second on regulating dark pools. Unfortunately, even though they were proposed in 2009, the SEC has yet to approve any part of either proposal. Flash orders are still legal and dark pools are still gaining market share. The two-tiered market that the SEC feared in 2009 is alive and well and growing every day. The US markets were rocked by the flash crash on May 6, 2010, which showed the fragility of the US system. Since the Flash Crash, rather than address the entire fragmented equity market, the SEC proposed and approved a number of short-term band-aid fixes such as

  • Single stock circuit breakers
  • Elimination of Stub quotes
  • Sponsored access rule
  • Large trader reporting rule
  • Consolidated audit rule

 

Chapter 6 – The Arms Merchants

The authors compare the US stock exchanges to arms merchants who supply weapons but never get caught in the crossfire. Since the demutualization and fragmentation of markets, the revenue mix of the exchanges has completely changed. Nearly 75% of the revenues come from 2% of the clients – HFTs. So the exchanges are bending over backwards to provide products and services to this 2% of clients. The stock exchange companies have also changed the way they look at themselves. They no longer think of themselves as venues for capital allocation decisions, but position themselves as “technology companies”. Revenue from the listings business has come down dramatically and revenue from transaction-based fees has become very high. The nature of the market is also changing: in 2011 there were 302 ETFs listed vs. 125 IPOs. The exchanges had to look for alternative ways to generate profits. The chapter describes some of the products and services that exchanges have introduced that have changed the way trades are executed today. They are

  • Colocation
    • The NYSE data center consumes 28 MW of power, enough to run 4,500 residential homes. It’s equivalent to 7 football fields
    • NASDAQ doesn’t own its data center; it chooses to lease from third parties
  • Private Data feeds
    • The granddaddy of all data feeds is NASDAQ’s ITCH, which was developed by the Island ECN, which, in turn, was developed by Datek Securities, a SOES Bandit brokerage firm. Many other exchanges and ATSs around the world, including the London Stock Exchange, offer private data feeds based on the ITCH protocol
  • Rebates for order flow – The Maker/Taker Model
    • This is at the core of equity market structure problem.
    • It has influenced how most broker-sponsored smart order routers access liquidity. Institutional investors typically enter their algorithmic orders into a smart order router, or SOR, provided by their brokerage firm in exchange for a low commission rate. The “algo” chops up large block orders of 100,000 shares, for example, and doles out small slices of say 100–500 shares each that are routed to various market centers. The purpose is to minimize market impact. However, some orders are not routed to the destination where best execution would dictate, but to the destination where the broker receives the best rebate. While these SORs may be “smart” for the broker, they may be pretty dumb for the client.

The above changes make the stock exchange a completely different animal.

 

Chapter 7 – It’s the Data, Stupid

This chapter talks about the nefarious ways in which stock exchanges are selling ITCH data feeds that reveal information about hidden orders. By creating a data feed structure that makes it easy for an HFT firm to track large hidden orders, the exchanges are working hand in glove with HFT players in ripping off retail and institutional clients. The authors investigate the ITCH feed from NASDAQ, BATS and other exchanges and find that a lot more information is being systematically sent out to HFT firms.

Another area where exchanges are playing to their clients’ tune is the computation of indices. With one in three trades being executed away from the primary exchanges, all the indices are actually phantom indices. They are misrepresented, and this gives the HFTs a fantastic opportunity to arb that difference. This problem could easily be fixed by making the indices accurately reflect all trades intraday in a timely manner, but it’s not being done. This is a clear case of exchanges setting aside investor interests and entertaining HFT clients.

Machine-readable news data feeds are another development in this crazy tech arms race. The algos read the news and spit out trades. The authors cite just one of the many incidents that have led to increased volatility in the markets.

Chapter 8 – Heart of Darkness

The chapter starts off by highlighting the example of one of the largest dark pools, Pipeline, which swindled money in the name of anonymity. Dark pools came into existence to satisfy the demand of institutional traders who wanted anonymity while trading large blocks. Pipeline advertised anonymity but at the same time established an HFT firm to work around the trades and profit from the dark pool. Dark pools / crossing networks served a genuine need when they came into existence. But with regulatory changes and developments in the US market, they have actually become lit pools. All the major brokers, in the name of internalization, feed their owners’ desire for greater revenue, cost reduction and, most damaging to investors, prop trading.

The chapter also talks about the latency arbitrage strategies that HFT players use. Typically, dark pool pricing is 10 to 15 milliseconds behind the real-time prices that HFT firms use. This creates a two-tiered market where the privileged few are ripping off institutional orders.

In December 2010, SEC Chair Mary Schapiro testified before Congress that the Commission was looking into “abusive colocation and data latency arbitrage activity in potential violation of Regulation NMS.” At the close of 2011, however, there had been no action on this subject, and the SEC’s own 2009 Dark Pool Proposal was still sitting in limbo. Almost all the dark pools in existence primarily feed and benefit prop trading. Will there ever be regulation of this?

 

Chapter 9 – Dude, Where’s my order ?

This chapter talks about the path that an institutional order takes through the new web of chaos. The market has become so complex that, in the name of SOR, the order actually travels around ECNs, dark pools and a host of other venues before getting filled.

image

All this complexity makes one wonder whether clients should read SOR not as “smart order routing” but as “sub-optimal order routing”. Is there a choice? Looks like NO. In today’s world the buy-side trader has little clue how his order is going to be filled. Once he chooses one of the algos to execute (VWAP, TWAP, POV, close-targeting, arrival-targeting, dark-liquidity-seeking algos, etc.), the HFTs take over the order. Nowadays buy-side traders spend an enormous amount of time policing their order execution process. Shouldn’t they be spending time on generating alpha rather than policing the execution? Or is execution the only alpha left in the market?

Chapter 10 The Flash crash + Chapter 11 The Aftermath

The next two chapters are guest chapters written by R. T. Leuchtkafer. The basic takeaway from these chapters is that HFT scalpers have become the new middlemen in the market and they are an unregulated lot. They act as liquidity suppliers and liquidity demanders according to their own profit motives. They have no obligation to maintain a continuous market. In fact, as the flash crash illustrates, they are the first ones to run away in times of crisis. These chapters have a tone of disappointment with the SEC. Two and a half years have passed since the flash crash and no great measures have been put in place. The BATS IPO failure is an example that illustrates that markets are still very fragile and the next flash crash can happen anytime.

Chapter 12 – Killing the stock market that laid the golden eggs

This is a guest chapter authored by David Weild and Edward Kim, who say that IPO markets have been battered mainly because of Reg ATS. The main point of the chapter is that spreads have become so low that only a few large-cap, high-volume stocks remain profitable for the exchanges and intermediaries to support. They draw a parallel between roads and stock exchanges: roads collect optimum tolls to fund other development activities, and in a similar way investors should be charged higher commissions to trade in order to bring the market back to favoring IPOs. Will this happen? Once investors are used to low commissions, will they pay higher ones? I don’t think so.

Chapter 13 – Call to Action

The authors say that the only way to restore market confidence is to create a parallel market with human market makers, wider spreads, and obligations on market makers to maintain price continuity, etc. This is like going back to the world of the NYSE–NASDAQ duopoly. Will such a recommendation be taken seriously and acted upon? I think it is very unlikely that such a thing will happen.

The authors list a set of demands to the regulators

  • Ability to opt-out of private data feeds
  • Eliminate the speed differential between the slower public SIP and faster private data feeds
  • Eliminate Phantom indexes
  • Real-time identification of which dark pool traded a stock
  • Eliminate the maker/taker exchange model
  • Introduction of order cancellation fee

The appendix to the book gives all the papers from Themis Trading that were widely quoted in the media.

 

image Takeaway :

US markets are broken. What does that mean? When did it happen? How did it happen? Who allowed it to happen? These questions are succinctly answered in the book. This book needs to be read by anybody interested in markets, as it shows how adverse changes in market microstructure can wreak havoc on the entire financial system. Given that there have not been any significant regulatory changes since the flash crash, the next crash could happen any day now. Reading this book makes you believe that many crashes will happen sooner rather than later (if nothing is done to remedy the current state of broken markets). Knight Capital’s demise is a case in point.

image

With total silence around me and my mind wanting to immerse itself in a book, I picked this book from my inventory. I came across a reference to this work in Aaron Brown’s book on risk management.

First something about the cover:

The young woman on the right is the classical Goddess Fortuna, whom today we might call Lady Luck. The young man on the left is Chance. Fortuna is holding an enormous bunch of fruits, symbolizing the good luck that she can bring. But notice that she has only one sandal. That means that she can also bring bad luck. And she is sitting on a soap bubble! This is to indicate that what you get from luck does not last. Chance is holding lottery tickets. Dosso Dossi was a court painter in the northern Italian city of Ferrara, which is near Venice. Venice had recently introduced a state lottery to raise money. It was not so different from modern state-run lotteries, except that Venice gave you better odds than any state-run lottery today. Art critics say that Dosso Dossi believed that life is a lottery for everyone. Do you agree that life is a lottery for everyone? The painting is in the J. Paul Getty Museum, Los Angeles, and the above note is adapted from notes for a Dossi exhibit, 1999.

The chapter starts with a set of 7 questions and it is suggested that readers solve them before proceeding with the book.

Logic

The first chapter deals with some basic terminology that logicians use. The following terms are defined and examples are given to explain each of them in detail:

  • Argument: A point or series of reasons presented to support a proposition which is the conclusion of the argument.
  • Premises + Conclusion: An argument can be divided in to premises and a conclusion.
  • Propositions: Premises and conclusion are propositions, statements that can be either true or false.
  • Validity of an argument: Validity has to do with the logical connection between premises and conclusion, and not with the truth of the premises or the conclusion. An argument is invalid if the conclusion can be false even when the premises are true.
  • Soundness of an argument: Soundness for deductive logic has to do with both validity and the truth of the premises.
  • Validity vs. Truth: Validity is not truth. Logic takes the premises as given and checks whether the conclusion follows from them. If the premises are false, the reasoning can still be valid, but the conclusion need not be true.

Logic is concerned only with the reasoning. Given the premises, it can tell you whether the conclusion is valid or not. It cannot say anything about the veracity of the premises. Hence there are two ways to criticize a deduction: 1) A premise is false, 2) The argument is invalid. So there is a division of labor. Who is an expert on the truth of premises? Detectives, nurses, surgeons, pollsters, historians, astrologers, zoologists, investigative reporters, you and me. Who is an expert on validity? A logician.

The takeaway of the chapter is that valid arguments are risk-free arguments, i.e., given true premises, you are guaranteed a true conclusion.

Inductive Logic

The chapter introduces risky arguments and inductive logic as a mechanism for reasoning about them. Valid arguments are risk-free arguments. A risky argument is one that is very good, yet its conclusion can be false even when the premises are true. Inductive logic studies risky arguments. There are many forms of risky argument, such as making a statement about a population from a statement about a sample, making a statement about a sample from a statement about a population, or making a statement about one sample based on a statement about another sample. Not all of these can be studied via inductive logic. Also, there may be more to risky arguments than inductive logic: inductive logic does study risky arguments, but maybe not every kind of risky argument. The terms introduced in this chapter are

  • Inference to the best explanation
  • Risky Argument
  • Inductive Logic
  • Testimony
  • Decision theory

The takeaway of the chapter is that Inductive logic analyzes risky arguments using probability ideas.

The Gambler’s fallacy

This chapter talks about the gambler’s fallacy: a gambler justifies betting on a red slot on the roulette wheel given that the last X outcomes on the wheel have been black. His premise is that the wheel is fair, but his action goes against that premise, since he is questioning the independence of outcomes. Informal definitions are given for bias, randomness, complexity and lack of regularity. Serious thinking about risks, which uses probability models, can go wrong in two very different ways: 1) the model may not represent reality well – that is a mistake about the real world; 2) we can draw wrong conclusions from the model – that is a logical error. Criticizing the model is like challenging the premises. Criticizing the analysis of the model is like challenging the reasoning.

Elementary Probability Ideas

This chapter introduces some basic ideas about events, ways to compute the probability of compound events, etc. The chapter also gives an idea of the different terminologies used by statisticians and logicians, even though they mean the same thing. Logicians are interested in arguments that go from premises to conclusions. Premises and conclusions are propositions. So, inductive logic textbooks usually talk about the probability of propositions. Most statisticians and most textbooks on probability talk about the probability of events. So there are two languages of probability. Why learn two languages when one will do? Because some students will talk the event language, and others will talk the proposition language. Some students will go on to learn more statistics, and talk the event language. Other students will follow logic, and talk the proposition language. The important thing is to be able to understand anyone who has something useful to say.

Conditional Probability

This chapter gives formulae for computing conditional probabilities. All the conditioning is done for a discrete random variable. Anything more sophisticated than a discrete RV would have alienated non-math readers of the book. A few examples are given to solidify the notions of conditional probability.

The Basic Rules of Probability & Bayes Rule

Rules of probability such as normality, additivity, total probability, and statistical independence are explained via visuals. I think this chapter and the previous three are geared towards a person who is a total novice in probability theory. The book also gives an intuition into Bayes’ rule using elementary examples that anyone can understand. Concepts such as reliability testing are also discussed.
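
The reliability-testing style of example can be captured in a few lines; the base rate, sensitivity and false positive rate below are hypothetical numbers, chosen only to show how unintuitive the posterior can be.

```python
def bayes_posterior(prior, sensitivity, false_positive_rate):
    """P(condition | positive test) via Bayes' rule."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

# hypothetical reliability-testing numbers: 1% base rate, 95% sensitivity,
# 5% false positive rate -- a positive result is still far from conclusive
print(bayes_posterior(prior=0.01, sensitivity=0.95, false_positive_rate=0.05))
# prints roughly 0.16
```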

How to combine Probabilities and Utilities?

There are three chapters under this section. The chapter on expected value introduces a measure of the utility of a consequence and explores various lottery situations to show that the cards are stacked against every lottery buyer and the lottery owner always holds an edge. The chapter on maximizing expected value says that one way to choose amongst a set of actions is to choose the one that gives the highest expected value. To compute the expected value, one has to represent degrees of belief by probabilities and the consequences of actions by utiles (which can be converted into equivalent monetary units). Despite the obviousness of the expected value rule, there are a few paradoxes, and those are explored in the chapter, the popular one covered being the Allais paradox. All these paradoxes have a common message: the expected value rule does not factor in attitudes such as risk aversion and other behavioral biases, and hence might just be a way to define utilities in the first place. So the whole expected value rule is not as watertight as it might seem. There are also situations where decision theory cannot be of help. One may disagree about the probabilities of the consequences; one may also disagree about the utilities (how dangerous or desirable the consequences are). Often there is disagreement about both probability and utility. Decision theory cannot settle such disagreements. But at least it can analyze the disagreement, so that both parties can see what they are arguing about. The last chapter in this section deals with decision theory. The three decision rules explained in the chapter are 1) the dominance rule, 2) the expected value rule, and 3) the dominant expected value rule. Pascal’s wager is introduced to explain the three decision rules. The basic framework is to come up with a partition of possible states of affairs, possible acts that agents can undertake, and utilities of the consequences of each possible act in each possible state of affairs in the partition.
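
The expected value rule and the Allais paradox are easy to see with a small computation; the payoffs below are the standard textbook numbers (in millions), used here only for illustration.

```python
def expected_value(gamble):
    """Expected value of a gamble given as (probability, payoff) pairs."""
    return sum(p * x for p, x in gamble)

# the standard Allais choices (payoffs in millions)
A = [(1.00, 1)]                              # 1 for sure
B = [(0.10, 5), (0.89, 1), (0.01, 0)]        # mostly 1, small chance of 0
C = [(0.11, 1), (0.89, 0)]
D = [(0.10, 5), (0.90, 0)]

print(expected_value(A), expected_value(B))  # 1.00 vs 1.39 -> rule prefers B
print(expected_value(C), expected_value(D))  # 0.11 vs 0.50 -> rule prefers D
# Many people nevertheless choose A over B yet D over C, a pattern no single
# utility assignment under the expected value rule can justify -- the paradox.
```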

Kinds of Probability

What do you mean ?

This chapter brings out the real meaning of the word “probability” and is probably the most important chapter of the book.

  1. This coin is biased toward heads. The probability of getting heads is about 0.6.
  2. It is probable that the dinosaurs were made extinct by a giant asteroid hitting the Earth.
    1. The probability that the dinosaurs were made extinct by a giant asteroid hitting the Earth is very high— about 0.9.
  3. Taking all the evidence into consideration, the probability that the dinosaurs were made extinct by a giant asteroid hitting the Earth is about 90%.
  4. The dinosaurs were made extinct by a giant asteroid hitting the Earth.

Statements (1) and (4) [but not (3)] are similar in one respect. Statement (4), like (1), is either true or false, regardless of what we know about the dinosaurs. If (4) is true, it is because of how the world is, especially what happened at the end of the dinosaur era. If (3) is true, it is not true because of “how the world is,” but because of how well the evidence supports statement (4). If (3) is true, it is because of inductive logic, not because of how the world is. The evidence mentioned in (3) will go back to laws of physics (iridium), geology (the asteroid), geophysics, climatology, and biology. But these special sciences do not explain why (3) is true. Statement (3) states a relation between the evidence provided by these special sciences, and statement (4), about dinosaurs. We cannot do experiments to test (3). Notice that the tests of (1) may involve repeated tosses of the coin. But it makes no sense at all to talk about repeatedly testing (3). Statement (2.a) is different from (3), because it does not mention evidence. Unfortunately, there are at least two ways to understand (2.a). When people say that so and so is probable, they mean that relative to the available evidence, so and so is probable. This is the interpersonal/evidential way. The other way to understand (2.a) is based on a personal sense of belief.

Statement (4) was a proposition about dinosaur extinction; (2) and (3) are about how credible (believable) (4) is. They are about the degree to which someone believes, or should believe, (4). They are about how confident one can or should be, in the light of that evidence. The use of the word probability in statements (2) and (3) is related to ideas such as belief, credibility, confidence, and evidence, and the general name used to describe them is “belief-type probability”.

In contrast, the truth of statement (1) seems to have nothing to do with what we believe. We seem to be making a completely factual statement about a material object, namely the coin (and the device for tossing it). We could be simply wrong, whether we know it or not. This might be a fair coin, and we may simply have been misled by the small number of times we tossed it. We are talking about a physical property of the coin, which can be investigated by experiment. The use of probability in (1) is related to ideas such as frequency, propensity, and disposition, and the general name used to describe these is “frequency-type probability”.

Belief-type probabilities have been called “epistemic”— from episteme, a Greek word for knowledge. Frequency-type probabilities have been called “aleatory,” from alea, a Latin word for games of chance, which provide clear examples of frequency-type probabilities. These words have never caught on. And it is much easier for most of us to remember plain English words rather than fancy Greek and Latin ones.

Frequency-type probability statements state how the world is. They state, for example, a physical property about a coin and tossing device, or the production practices of Acme and Bolt. Belief-type probability statements express a person’s confidence in a belief, or state the credibility of a conjecture or proposition in the light of evidence.

The takeaway from the chapter is that any statement with the word “probability” carries one of two types of meaning, belief-type or frequency-type. It is important to understand the exact type of probability being talked about in any statement.

Theories about Probability

The chapter describes four theories of probability,

  1. Belief type – Personal Probability
  2. Belief type – Logical Probability – Interpersonal/Evidential probability
  3. Frequency type – Limiting frequency based
  4. Frequency type – Propensity based

Probability as Measure of Belief

Personal Probabilities

This chapter explains the way in which degrees of belief can be represented as betting rates or odds ratios. Let’s say my friend and I enter into a bet on an event A, say, “India wins the next cricket world cup”. If I think that India is 3 times more likely to win than to lose, then to translate this belief into a bet, I would invite my friend to take part in a bet where the total stake is Rs 4000. My friend agrees to bet Rs 1000 AGAINST the event and I take the other side of the bet by putting up Rs 3000. Why is this bet in line with my beliefs? My expected payoff is (1000*3/4) + (-3000*1/4) = 0. My friend’s expected payoff is (-1000*3/4) + (3000*1/4) = 0. Hence from my point of view it is a fair bet. There can be a bet ON the event too: I bet Rs 3000 on the event and my friend is on the other side of the bet with Rs 1000. This is again a fair bet from my belief system, as my expected value is (1000*3/4) + (-3000*1/4) = 0 and my friend’s expected value is (-1000*3/4) + (3000*1/4) = 0. By agreeing to place a bet on or against the event, my friend and I are quantifying MY degree of belief into betting fractions, i.e., my bet/total stake and my friend’s bet/total stake.

It is important to note that this might not be a fair bet according to my FRIEND's belief system. He might think that the event "India wins the next cricket world cup" has a 50/50 chance. In that case, if my friend's belief pans out, he has an edge betting against the event and is at a disadvantage betting for it. Why? In the former case, his expected payoff is (-1000*1/2)+(3000*1/2) > 0, and in the latter case it is (1000*1/2)+(-3000*1/2) < 0. So an agreed bet at these odds matches the belief system of at least one of the two players. Generalizing this to a market where investors buy and sell securities and there is a market maker, you get the picture that placing bets on securities is an act of quantifying the investors' implicit beliefs. A bookmaker/market maker never quotes fair bets; he always adds a component that keeps him safe, i.e., that ensures he doesn't go bankrupt. The first example I came across in the context of pricing financial derivatives was in the book by Baxter and Rennie; their introductory comments describing arbitrage pricing and expectation pricing set the tone for a beautiful adventure of reading the book.
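To make the arithmetic concrete, here is a minimal sketch in Python using the stakes and beliefs from the example above; the helper function and its parameter names are mine, not the book's:

```python
# A small sketch of the betting-rate idea from the example above, using the
# illustrative stakes from the text (Rs 3000 on the event vs Rs 1000 against it).

def expected_payoff(p_event, bettor_stake, opponent_stake, bet_on=True):
    """Expected payoff to the bettor, given the bettor's probability for the event.

    A bettor ON the event wins the opponent's stake if the event happens and
    loses their own stake otherwise; a bettor AGAINST the event is the reverse.
    """
    if bet_on:
        return p_event * opponent_stake + (1 - p_event) * (-bettor_stake)
    return p_event * (-bettor_stake) + (1 - p_event) * opponent_stake

# Under MY belief (P(India wins) = 3/4), both sides of the bet are fair (expectation 0):
print(expected_payoff(3/4, bettor_stake=3000, opponent_stake=1000, bet_on=True))   # 0.0
print(expected_payoff(3/4, bettor_stake=1000, opponent_stake=3000, bet_on=False))  # 0.0

# Under my FRIEND's belief (P = 1/2), staking Rs 1000 AGAINST the event has a
# positive expectation for him, so the quoted odds reveal MY belief, not his:
print(expected_payoff(1/2, bettor_stake=1000, opponent_stake=3000, bet_on=False))  # 1000.0
```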

The takeaways of this chapter are: 1) belief cannot be measured exactly, and 2) one can use artificial randomizers to calibrate degrees of belief.

Coherence

This chapter explains that betting rates ought to satisfy the basic rules of probability. There are three steps to proving this argument:

  1. Personal degrees of belief can be represented by betting rates.
  2. Personal betting rates should be coherent.
  3. A set of betting rates is coherent if and only if it satisfies the basic rules of probability.

Via examples, the chapter shows that any inconsistency in the odds a person quotes for and against an event opens him up to arbitrage, i.e., a sure-loss gamble. Hence the betting fractions, or odds, should satisfy the basic rules of probability.
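To see how a sure-loss contract is built from incoherent rates, here is a minimal sketch; the agent and its quoted rates (0.6 on an event and 0.6 on its complement) are invented for illustration, not an example from the book:

```python
# A minimal sketch of the sure-loss ("Dutch book") argument. The agent below is
# hypothetical: it quotes a betting rate of 0.6 on an event A and 0.6 on not-A,
# which violates the rule P(A) + P(not-A) = 1. At rate p, the agent treats
# paying p for a ticket that pays 1 if the proposition is true as a fair deal.

def net_payoff_to_agent(rate_A, rate_not_A, A_happens):
    cost = rate_A + rate_not_A                      # agent buys both tickets
    payout = (1.0 if A_happens else 0.0) \
           + (0.0 if A_happens else 1.0)            # exactly one ticket pays 1
    return payout - cost

for outcome in (True, False):
    label = "A happens:" if outcome else "A fails:  "
    print(label, round(net_payoff_to_agent(0.6, 0.6, outcome), 2))
# Both outcomes print -0.2: whatever happens, the agent loses 0.2 per unit of
# stake. Coherent rates (summing to 1 over an event and its complement) make
# such a guaranteed-loss book impossible to construct.
```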

The first systematic theory of personal probability was presented in 1926 by F. P. Ramsey, in a talk he gave to a philosophy club in Cambridge, England. He mentioned that if your betting rates don't satisfy the basic rules of probability, then you are open to a sure-loss contract. But he had a much more profound, and difficult, argument that personal degrees of belief should satisfy the probability rules. In 1930, another young man, the Italian mathematician Bruno de Finetti, independently pioneered the theory of personal probability. He invented the word "coherence" and made considerable use of the sure-loss argument.

Learning from Experience

This chapter talks about the application of Bayes' Rule. It is basically a way to combine a personal probability with evidence to arrive at an updated personal probability. The theory of personal probability was independently invented by Frank Ramsey and Bruno de Finetti, but the credit for developing the idea, and for the very name "personal probability", goes to the American statistician L. J. Savage (1917-1971). He clarified the idea of personal probability and combined it with Bayes' Rule. The chapter also talks about the contributions of various statisticians and scientists such as Richard Jeffrey, Harold Jeffreys, Rudolf Carnap, L. J. Savage and I. J. Good.
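As a toy illustration of the kind of updating the chapter describes, here is a minimal sketch of Bayes' Rule in Python; the prior and likelihood numbers are made up and are not taken from the book:

```python
# A toy illustration of updating a personal probability with Bayes' Rule. The
# numbers are made up for illustration: a prior of 0.3 for a hypothesis H, and
# the probability of observing some evidence E under H and under not-H.

def bayes_update(prior_H, p_E_given_H, p_E_given_not_H):
    """Posterior P(H | E) computed via Bayes' Rule."""
    p_E = p_E_given_H * prior_H + p_E_given_not_H * (1 - prior_H)
    return p_E_given_H * prior_H / p_E

posterior = bayes_update(prior_H=0.3, p_E_given_H=0.8, p_E_given_not_H=0.2)
print(round(posterior, 3))   # 0.632 -- the evidence raises the degree of belief in H
```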

Probability as Frequency

The four chapters under this section explore frequentist ideas. The section starts off by describing some deductive connections between probability rules and our intuitions about stable frequencies. Subsequently, a core idea of frequency-type inductive inference, the significance idea, is presented. The last chapter in the section presents a second core idea of frequency-type inductive inference, the confidence idea. This idea explains the way opinion polls are now reported, and it also explains how we can think of the use of statistics as inductive behavior. Basically, the chapters give a crash course on classical statistics without too much math.
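Since the confidence idea is what lies behind the margin of error quoted with opinion polls, here is a minimal sketch of the standard normal-approximation interval for a hypothetical poll; the sample size and counts are invented, and the construction is the usual textbook one rather than anything specific to this book:

```python
import math

# A minimal sketch of the confidence idea behind poll reporting, using a
# hypothetical poll (not from the book): 520 of 1000 respondents support a
# proposal. The usual normal-approximation 95% interval for a proportion is
#   p_hat +/- 1.96 * sqrt(p_hat * (1 - p_hat) / n)

n, successes = 1000, 520
p_hat = successes / n
margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)

print(f"estimate = {p_hat:.3f}, 95% CI = ({p_hat - margin:.3f}, {p_hat + margin:.3f})")
# The frequentist reading: the procedure that produces such intervals covers the
# true proportion about 95% of the time; no probability is attached to any
# single interval. That is the "inductive behavior" sense in which the method,
# rather than any one conclusion, is right most of the time.
```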

Probability applied to Philosophy

The book introduces David Hume's idea that there is no justification for inductive inferences. Karl Popper, another philosopher, agreed with Hume but held the view that it doesn't matter, since we never need inductive inference anyway. According to Popper, "The only good reasoning is deductively valid reasoning. And that is all we need in order to get around in the world or do science". There are two chapters that talk about evading Hume's problem, one via the Bayesian evasion (which argues that Bayes' Rule shows us the rational way to learn from experience) and the other via the behavior evasion (which argues that although there is no justification for any individual inductive inference, there is still a justification for inductive behavior).

The Bayesian's response to Hume is:

Hume, you’re right. Given a set of premises, supposed to be all the reasons bearing on a conclusion, you can form any opinion you like. But you’re not addressing the issue that concerns us! At any point in our grown-up lives (let’s leave babies out of this), we have a lot of opinions and various degrees of belief about our opinions. The question is not whether these opinions are “rational.” The question is whether we are reasonable in modifying these opinions in the light of new experience, new evidence. That is where the theory of personal probability comes in. On pain of incoherence, we should always have a belief structure that satisfies the probability axioms. That means that there is a uniquely reasonable way to learn from experience— using Bayes’ Rule.

The Bayesian evades Hume’s problem by saying that Hume is right. But, continues the Bayesian, all we need is a model of reasonable change in belief. That is sufficient for us to be rational agents in a changing world.

The frequentist response to Hume is:

We do our work in two steps: 1) actively interfering in the course of nature, using a randomized experimental design, and 2) using a method of inference which is right most of the time, say, 95% of the time. The frequentist says: "Hume, you are right, I do not have reasons for believing any one conclusion. But I have a reason for using my method of inference, namely that it is right most of the time."

The chapter ends with a single-case objection and discusses the arguments used by Charles Sanders Peirce. In essence, the chapters under this section point to Peirce's conclusion:

  • An argument form is deductively valid if the conclusion of an argument of such a form is always true when the premises are true.
  • An argument form is inductively good if the conclusion of an argument of such a form is usually true when the premises are true.
  • An argument form is inductively 95% good if the conclusion of an argument of such a form is true in 95% of the cases where the premises are true.

 

image Takeaway:

The field of probability was not discovered; rather, it was created by the confusion of two concepts. The first is the frequency with which certain events recur, and the second is the degree of belief to attach to a proposition. If you want to understand these two schools of thought from a logician's perspective and get a grasp on the various philosophical takes on the word "probability", then this book is a suitable text, as it gives a thorough exposition without too much math.

bookcover_knitr

 

Link :

image Takeaway:

Imagine that you were using a clunky and painful email service and suddenly one day you are shown Gmail. Aren't you thrilled? It's elegant, quick and has a ton of intuitive features. I had the same feeling with knitr after having painfully used Sweave for a long time. I am certain that this package will stand out as the go-to package for literate programming for a very long time to come, because it is elegant, quick and has features that you were always trying to patch in via other packages. Should you read this book? Well, if you have the patience and time to go over the manual and a thousand posts from stackoverflow and other places to learn the various features of the package, you don't need this book. However, if you are like me, short of time, someone who values organized content and prefers to know the key hacks in the package, this book is definitely worth it.

image

This book is written by Ustad Alauddin Khan's great-granddaughter, Sahana. She narrates the story of Ustad Alauddin Khan by piecing together various handwritten manuscripts and stories from her grandparents' house.

Tracing the family tree, Sahana discovers that Alauddin Khan's ancestors were actually Hindus; somewhere along the way, one of his ancestors converted to Islam and married a Muslim. AK's father was a sitar player and the music rubbed off on AK from a very young age. By the age of seven, AK had already decided to give up school and devote his life to music. At the age of eight, AK stole money from his mother's safe box and left home in search of a guru. He travelled ticketless to Calcutta and, not knowing anyone there, survived on food doled out to beggars and slept at the entrance of a dispensary for many days.

Finally, luck smiled on AK. A young boy approached him, listened to his tale and took him to his parents, who were music lovers. After listening to AK's voice, they immediately took him to a well-known guru, Nulo Gopal, who was a court musician of the Raja of Calcutta. Nulo Gopal told AK that he would have to practice sur sadhana for at least 12 years to become good at it. AK immediately agreed, and so began a journey of learning Hindustani vocal music for 8 long years, until one day something happened. AK's family members somehow came to know about him and requested his guru to send him back to their village for a few days. AK was least expecting the marriage preparations already under way in his village. Despite his reluctance, AK was married to Madan Manjari. Then a strange thing happened. On the wedding night, seeing his wife sleeping, AK stole all her garments and ran away to Calcutta so that he could practice music with his guru. Why did he have to steal? He was anyway going to go to Calcutta for his riyaz. Why cause misery to poor Madan Manjari? Nobody knows what made him do such a thing. I guess god gave an answer to his mad act: by the time AK came back to Calcutta, he was told that his vocal guru had passed away.

Heartbroken and with nowhere to go, he broke down. Fortunately, his host guided him to another teacher, Habu Dutta, who was an instrumentalist. AK decided to give up vocal training and pursue instrumental music. Habu Dutta offered to teach AK the violin and the cornet. AK spent about 7 years taking instrumental lessons and, by the end of it, became proficient in various indigenous and foreign musical instruments like the sitar, flute, piccolo, mandolin and banjo. Instead of being satisfied and complacent, AK became frantic to master every instrument that he could lay his hands on and eagerly looked for opportunities to further his musical knowledge. He started finding gurus who could teach him Western instruments. He learnt the Indian and Western styles of violin, as well as the sanai, naquara, pakhawaj, mridangam, tabla, etc. To fund his education in various instruments, he started giving his services to a theatre group. Despite being financially okay, thanks to the theatre job, AK decided that the light-hearted nature of theatre was not the right environment for a serious, in-depth musical experience.

Luck smiled on him again and brought him into close proximity with the great sarode player Ahmed Ali Khan. As things turned out, AK begged Ahmed Ali Khan to be his sarode teacher, and thus AK spent some years learning the sarode. However, AK's relationship with Ahmed Ali Khan ended on a bitter note, mainly because the latter was jealous of his disciple's amazing musical repertoire. AK was soon without a guru and became lonely. He tried committing suicide. Thankfully, at the crucial moment, a maulvi of a mosque rescued him and told him of a plan to meet and learn music from the then popular guru, Ustad Wazir Khan. But things did not work out with Ustad Wazir Khan at first. Soon, AK started scouting for other gurus; subsequently he learnt sarode from someone and dhrupad from somebody else. All the while he was content that he was learning music. He had no idea where his life was taking him.

Meanwhile, Madan Manjari, who had been left alone by AK, was still a devoted wife. Despite her family members pressurizing her to remarry, she never did so. They tried contacting AK and eventually did so by getting in touch with Ustad Wazir Khan. One thing led to another, and soon Wazir Khan took AK as a disciple and taught him all aspects of music for 33 long years. After this long tutorship, Wazir Khan gave AK permission to play in front of audiences. On a side note, I wonder why things are changing nowadays. Students and aspiring artists now want to practice and play an instrument with the primary aim of showcasing it at an event. In fact, my sitar guru says he gave up on at least 20 to 30 of his students when they grew restless with the pace of their learning. They were primarily looking forward to playing at an event, not realizing that the real joy is in playing for oneself. I guess someone should have handed them this book and made them realize that learning music is a lifelong task.

Anyway, coming back to the story: during one of AK's performances, the prime minister of Maihar, a small state located some 120 km from Khajuraho in Madhya Pradesh, was in the audience and was thoroughly impressed with his rendition. He requested AK to come to Maihar to teach music and be a court musician. Maihar became a turning point in AK's life. AK started teaching many students there, and some of them went on to become stalwarts of Indian music. AK single-handedly created an entire gharana, named after Maihar. The book also gives some idea of the rigor that AK followed while teaching music to his disciples. Any aspiring musician or instrumentalist should read the section on practice to get an idea of how someone becomes good at an instrument.

Alauddin Khan passed away in 1972, after spending 100 years of his life learning, practicing and teaching music. He left behind a greatly enriched tradition of music, which has since been passed down to generations of musicians and music lovers, keeping his memory alive. Through this book, his great-granddaughter gives a picturesque glimpse into the various facets of the maestro's life.

 

book_cover

Here is the author's bio from his website:

Tom Heany has been involved with music his whole life, as a student, a teacher, a player, a writer and, yes, a practicer – for 13,000 hours, give or take a few. For 18 years he was the Director of Programming for the National Music Foundation, where he developed and ran the American Music Education Initiative and the Berkshire Music Festival. As a contributing editor for the National Guitar Workshop, he wrote about musical subjects ranging from the Grammy Awards to Tuvan throat-singing. For WorkshopLive, NGW's online learning platform, he interviewed guitar, bass and piano teachers about their views on practicing, performing and playing.

This book distills his years of wisdom into 90-odd pages. Learning any musical instrument involves a huge amount of time spent "practicing" and a very small amount of time spent "playing". What's the difference between the two terms? The former demands a mechanic's mindset and involves tearing apart long compositions, repeating difficult notes, improving and fixing your errors, trying to build muscle memory, listening to others' renditions, writing notes, etc. "Playing", on the other hand, involves invoking muscle memory, working memory and long-term memory to give a performance in front of a public or private audience.

In any budding artist's life, the proportion of practice time is obviously very high. But what constitutes a good practice session? If you play effortlessly for, let's say, 15 minutes daily, does it mean that you have had a good practice session? If you are not struggling in practice, are you really learning anything new? How does one deal with the inevitable frustrations that arise when one starts practicing an instrument regularly? These and many more questions are answered by this book. They say that visuals are the best way to store and retrieve information, and this book uses them well: for each component of a good practice session, it creates little symbols that can serve as anchor points for anyone who practices, or wants to practice, an instrument on a regular basis. Some of the points mentioned in the book look obvious when you read them, but being consciously aware of them before, during and after a practice session is the key.

Here are 7 key ideas from the book that are to be kept in mind before/during/after a practice session:

1_love

If you are not enjoying the practice, change it until you are

The author takes the example of a kid who sits through a piano lesson unwillingly and then leaps out of the class into a basketball practice session. How is basketball practice different from practicing an instrument? Using this analogy, he gives a few tips for people who think that practicing an instrument is tedious and boring.

2_move

Practice movement – music will follow

Music is not what we do; music is the result of what we do. We play music, but we practice movement. We make music by moving our bodies – hands, fingers, arms, back, shoulders, legs, feet, breath – against an instrument. It is essential that we focus on every aspect of movement while we practice. Most of the time we become bored because we are focusing on the music when we should be focusing on the movement. Make that one change, and everything else changes, too.

Focus on movement, and our frame of mind becomes analytical. We start thinking less like artists and more like mechanics and, when it comes to practicing, that's a major step in the right direction.

Focus on movement, and immediately we’re relying on our eyes much more than our ears. Visual input is much more precise than auditory input for most of us. We can describe, analyze and understand what we see more easily than what we hear.

Focus on movement, and we’re dealing with something over which we have direct control. We can make a change and see the effect right away.

Focus on movement, and the enjoyment and satisfaction come from getting the moves right, not from hearing the music – from how it feels, not from how it sounds. Once the move feels right, we want to do it over and over to enjoy that feeling, not to wander off after the melody.

3_practice_play

Playing and Practicing are two different things

Practicing and playing are as different as cooking and eating; in some ways they are almost completely unrelated. Playing music is an artistic, emotional activity. Ideally, you'd like to get your conscious mind out of the way as much as possible when you're playing – the less thinking and the more 'flow', the better. Practicing is an analytical activity, and it requires a completely different mental attitude. When you practice, you break things down into parts and study them. You look at everything – the music you're trying to learn, the movements of your fingers, the results you achieve, etc. – and try to understand how all the pieces work together. You have to think like a mechanic. Develop a mechanic's mindset!

4_move_music

You know it when your hands know it

The main reason most of us practice is so that we can play music better. Nobody practices so that they can talk about music better, or think about it better. Practicing is primarily about playing, and that means practicing is also about the body. What about the mind? The mind is important, obviously, and practicing trains the mind as well. But to get the most out of your practicing, you need to put your brain in the back seat and let your hands drive.

The mind and the body learn in different ways. The mind learns by recognizing patterns that connect new information with things it has already learned. The body, on the other hand, learns by repetition. For your muscles to learn a motion they have to do it over and over again. There's no capturing patterns and filling in the details later. The hands, especially, need the details now, and if you don't pay attention to those details, your hands will learn the wrong ones and it will be very hard to unlearn them.

All that repetition takes time. There aren’t any shortcuts. You can’t hurry it – in fact, hurrying usually makes you move backwards instead of forwards. And you can’t decide in advance how long it should take; it’s going to take however long your fingers need it to take. That’s just how bodies work. What does this mean for us? It means that whenever we practice something, our heads learn it before our hands do.

The more we focus on making sure our hands know and understand what to do – in other words, the more we concentrate on how our hands have to move – the faster we’ll progress.

  • Pay attention to which fingers you use.
  • Pay attention to your fingertips.
  • Pay attention to the whole hand, in Frank Wilson's sense. What are your wrists doing? What are your forearms doing? Where are your elbows?
  • Pay attention to physical sensation.
  • Analyze the sound you make as you play.

5_focus

 You affect everything by concentrating on one thing.

The more specific you can be about the thing you’re practicing, the better your practicing will be. If you concentrate on making one thing better, other things will start to get better, too. Why? Because motion connects everything.

6_hard

Don’t worry about the hard parts.

People learning a new song or a new technique often get nervous or anxious when they reach “the hard part.” 90% of the time the hard part is hard for one (or both) of two reasons: 1) They’re playing it too fast, 2) They don’t understand what they’re doing yet.

Here’s how you can get rid of 90% of the so-called hard parts: 1) Slow down. 2) Cut them into smaller, easier-to-understand pieces and work on those first. Take the hard parts and slice them up and slow them down until you have parts that are manageable – parts that you can practice without struggling. They can be as small as two notes. Then, when you know them well, put them back together.

Practicing music that’s too hard is frustrating and discouraging, and it makes your body tense up. And if you practice frustration, discouragement and tension, you’ll start getting better and better at being frustrated, discouraged and tense. You’ll end up practicing poor performance – practicing your mistakes until you can make them reliably every time. Better to get a bunch of small pieces perfect first, and then put them all together. So, if you find yourself struggling, take that as a sign that you need to change your approach.

7_star

Get your hands and your ears used to “Perfect.”

“Perfect” doesn’t mean playing a 5 minute piece perfectly. It means playing a note as perfectly as you can. It means moving your hands from one note to the next as perfectly as you can. It means aiming for perfect tone and perfect timing. It means aiming for perfection note-by-note, instant-by-instant. Forget about playing a whole song perfectly. Put your effort and intention into practicing perfect hand motions, and perfect performances will follow.

Notice that the idea doesn’t say Play perfectly. That’s a big idea – a little scary, a little vague – that sounds impossible. The idea instead says, Get your hands and your ears used to “Perfect”. Raise your standards. Get your hands used to moving perfectly, and they won’t be satisfied with anything less. Get your ears used to hearing perfect notes and phrases, and anything less than perfect will sound like a mistake. Make perfection at the lowest level a habit. Reach the point where “Good enough” doesn’t sound or feel right, where “Good enough” isn’t good enough. If you aim for “perfect” every time you practice, your practicing and your playing will both improve.

 

Here are the 7 habits mentioned in the book that help you practice better:

tools_1_comfortable

Be Comfortable:

It’s important to be comfortable while you practice. If you’re comfortable, you’ll practice longer. If you’re comfortable, your body will be relaxed, your breathing will be more natural, and your hands, arms, shoulders, etc. will be freer to move.

tools_2_balance

Be Honest:

When you practice, every sound is your doing – after all, you’re the only one there. Wrong notes, weak notes, bad tone, grunts, sighs – whether you mean them or not, they’re all yours. You have to control them, and to do that you have to know – and admit – that you’re making them.

tools_3_optimistic

Be Optimistic:

There are things you can do, and things you can’t do yet. Remember that people all over the world, for hundreds of years, have been learning to play musical instruments. They’ve all gone through the same frustration, and they’ve all learned to do what you’re learning to do. So don’t worry. You’ll get there. In an hour of practice you’ll make literally thousands of small motions and small decisions. They’ll add up to an inch of progress on a good day. The process is slow; it requires patience. But it’s real.

tools_4_persistent

Be Persistent:

When you’re practicing, and you’re dealing with something that seems hard to do, the easiest thing in the world is to give up. That’s what most people do. Either they stop practicing completely, or they stop working on the things they can’t do yet and just play the things they already know. Sometimes they just go through the motions, stumbling through a piece or an exercise over and over without concentrating on it. That’s not practicing – that’s fooling around. And it’s not even good fooling around, because it’s a lousy way to spend your time. If you’re not enjoying it, change it until you do.

People who lift weights know that the real progress happens when they keep going after their bodies and minds tell them to stop. When you practice, you have to do the same. You have to be persistent.

Persistence is partly just making a plan for your practice session and sticking with it. But it's more than just that. If you decide you're going to practice for 15 minutes, you don't stop at 11 minutes. You complete the tasks you set for yourself. But if you're bored at 11 minutes, don't stop, and don't just soldier on, either – first, get un-bored. Find a way to make it interesting. If it's too hard, don't stop, don't soldier on – find a way to make it simpler, and continue practicing. That's persistence in a nutshell – find a way, and continue.

tools_5_scheck

Be Consistent:

Practice works best when you do it every day. Some people would say that’s pretty much the only way it works. Being consistent means practicing the right way, every day. Being consistent means being persistent every day.

Many of the things you’re trying to learn are small movements that are hard to describe in words. (“It sounds better when my fingers go like this instead of like that.”) In order to be able to repeat these movements, you have to be able to remember how your hands feel or look when you make them. You can’t really write them down, and that makes them hard to remember. If you let a few days elapse between practice sessions, you’ll forget them, and you’ll have to find them all over again. It will be as though your last practice session never happened. Being consistent means you have a plan, you stick to it, and you do it every day. Practicing every day is much more important than practicing for a particular amount of time. 20 minutes a day works better than 60 minutes every three days. Consistency makes all the difference. It’s the best way to get there.

Consistency works for another reason. You can’t be consistent without commitment. You can’t make a practice plan, and a schedule, and stick to it for any length of time, without making some kind of promise to yourself. Being consistent means you have jumped in with both feet. It means you have told yourself, “I want to be a musician, and if this is what it takes then I’m going to do it. No excuses, no distractions.” That’s a powerful promise.

tools_6_slow

Go Slow:

Practicing slowly makes it easier for you to focus on the mechanics – that is, on what your hands, arms, shoulders, breath, etc. are doing. This is because the slower you go, the less what you’re working on sounds like music. The less it sounds like music, the easier it is to ignore. And if you’re going to concentrate on the motion, you have to ignore the music.

Practicing slowly encourages relaxation in your hands and arms. Practicing fast does the opposite.

Practicing slowly makes mistakes impossible to overlook. 

tools_7_music

Make Music:

Make Music means to build a musical intention, a musical attitude, a musical effect, into every note, every rest, every finger movement, every breath. When you hit that target, you’ll be unable to play any other way. You want musicality to be something automatic, requiring no thought. The only way to get there is note by note, every time.

 

image Takeaway:

The book contains some valuable ideas that one can keep in mind while practicing any instrument. I loved the idea of keeping a music journal for every practice session: by merely recording some specific aspects of every session, you become aware of a ton of things as you go along. There's also an iPad app (Music Journal) for the same.

image

This book is about a set of letters exchanged between Pascal and Fermat in the year 1654 that led to a completely different way of looking at the future. The main content of the letters revolved around solving a particular problem, called the "problem of points". A simpler version of the problem goes like this:

Suppose two players—call them Blaise and Pierre—place equal bets on who will win the best of five tosses of a fair coin. They start the game, but then have to stop before either player has won. How do they divide the pot? If each has won one toss when the game is abandoned after two throws, then clearly, they split the pot evenly, and if they abandon the game after four tosses when each has won twice, they do likewise. But what if they stop after three tosses, with one player ahead 2 to 1?

It is not known how many letters were exchanged between Pascal and Fermat to solve this problem, but the entire correspondence took place in 1654. By the end of it, Pascal and Fermat had managed to do what was unthinkable till then – predict the future – and, more importantly, act based on those predictions.

Pascal tried to solve the problem using recursion, whereas Fermat did it in a simpler way, i.e., by enumerating the future outcomes had the game continued. The solution gave rise to a new way of thinking, and it is said that this correspondence marked the birth of risk management as we know it today.
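Here is a minimal sketch, in Python, of Fermat's enumeration for the version of the problem stated above (a best-of-five stopped with one player, Blaise, ahead 2 to 1); the variable names are mine:

```python
from itertools import product

# Fermat-style enumeration for the version stated above: a best-of-five on fair
# coin tosses, stopped with Blaise ahead of Pierre 2 to 1. At most two further
# tosses decide the game, so list all four equally likely continuations and
# count how many of them Blaise would have won.

continuations = list(product("BP", repeat=2))   # each remaining toss goes to B or P
blaise_wins = 0
for tosses in continuations:
    blaise, pierre = 2, 1
    for winner in tosses:
        if blaise < 3 and pierre < 3:           # play on only while the game is live
            if winner == "B":
                blaise += 1
            else:
                pierre += 1
    blaise_wins += (blaise == 3)

print(blaise_wins, "of", len(continuations))    # 3 of 4 -> Blaise should get 3/4 of the pot
```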

The book is not so much an analysis of the solution (the author believes that today anyone who has had just a few hours of instruction in probability theory can solve the problem of points with ease) as an account of the developments leading up to 1654 and those that followed. In the process, the book recounts all the important personalities who played a role in turning probability from a gut-based discipline into a rigorous mathematical discipline. The book can easily be read in an hour and could have been a blog post.