September 2011


I went through this book to get some idea of STP. I was completely clueless about what happens at the back end once a trade is placed with a broker. STP was introduced in the Indian markets by FT and a few other vendors almost a decade ago, and it has led to a clear improvement in operational efficiency. Even though this book talks about the .NET platform, I liked the chapter on STP, which explained things using a business case and .NET code.


  • STP (Straight Through Processing) providers have the responsibility of carrying and delivering STP-related instructions and messages. They provide connectivity between brokers, custodians and fund managers, bringing them onto a common network. Once the entities enter the common network, they communicate with each other using standard messaging protocols like SWIFT ISO 15022.
  • In India, FT is the leader in this game, with its STP Gate solution being widely used.
  • ISO 15022 is the accepted communication protocol for achieving STP interoperability in several countries
  • STP was initiated as part of the T+1 settlement initiative in the equities market
  • A SWIFT ISO 15022 terminal is the gateway through which all the market participants get onto this platform.


  • Cryptography is critical in this entire STP layer
    • Substitution Cipher can be used
    • Transposition Cipher can be used
    • Symmetric and asymmetric algorithms are used to protect the confidentiality of the data. I think an asymmetric algorithm is used in FT's case
  • Digital signatures address the non-repudiation issues encountered in high-risk situations.
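The classical ciphers mentioned above are easy to sketch in code. Below is a minimal substitution (shift) cipher in Python, purely as an illustration of the idea; real STP platforms rely on modern symmetric/asymmetric cryptography, not classical ciphers:

```python
import string

def substitute(text, shift):
    """Toy substitution cipher: replace each lowercase letter by the
    letter `shift` positions further along the alphabet."""
    alphabet = string.ascii_lowercase
    table = str.maketrans(alphabet, alphabet[shift:] + alphabet[:shift])
    return text.translate(table)

ciphertext = substitute("settle trade", 3)   # encrypt with shift 3
plaintext = substitute(ciphertext, -3)       # decrypt with the inverse shift
```

Decrypting with the negated shift recovers the original message, which is exactly why such ciphers are trivially breakable and survive only as teaching examples.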

STP Interoperability

  • If there are multiple STP platforms, SOA can be used as a model for the different STP platforms to work together.
  • Web services are being used so that STP platforms can talk to each other. SOAP is the messaging framework used by web services to send and receive messages

The entire structure of the book can be summarized using this visual.


Will go over the contents in greater detail at some time in the future.


The core message of this book by Seth Godin is: “Mass marketing is dead and it's an era of marketing to the weird”. Seth clarifies the word weird by defining it as “people who have chosen to avoid conforming to the masses, at least in some parts of their lives”. In one sense the book is a rehash of Seth's other books. There are some examples and illustrations that might motivate a reader to go over this 90-odd-page book. The book starts off with an example of a zoo in Belgium. The zoo, facing dwindling crowds and hence falling revenues, pulls off a mass marketing campaign by focusing on “a pregnant elephant”. This sort of marketing brings the crowds back to the zoo, but such examples are the exception rather than the rule, says Seth. He describes a few situations that capture the changing marketplace in both the online and offline worlds. Here is a sample:

I’m standing at the corner of First Avenue and 8th Street in Manhattan. Is there a more vibrant commercial corner in New York? Look, there’s the Islamic cultural center across the street, right next to the Atomic Wings buffalo chicken wings joint. Around the corner is Veniero’s, a fabled Italian bakery. I can see a group of twenty Chinese tourists out front, just hanging out. Behind me is Momofuku, a modern take on the traditional Japanese noodle bar, run by a Korean chef who grew up in Virginia. Crossing the street is a rich uptown lawyer walking hand-in-hand with a tattooed downtown kid (or maybe it’s a tattooed lawyer and a well-dressed tourist). The sidewalk is so crowded you have to stand in the street if you want to stand still. I haven’t even mentioned the place selling hydroponic herbs, the guy who sells fresh bulbs of turmeric, or the Nuyorican Poets Café, where poets, famous and unknown, come to jam while the owner heckles from the back. There are gluten-free bakeries, extra-gluten bakeries, vegan bakeries, and probably a few people selling hash brownies as well. There’s nothing going on around me that would be considered normal by a time-traveling visitor from 1965 (except maybe the brownies). Instead, there are collisions of ideas and cultures

Seth offers 4 reasons for the marketplace getting weirder:

  1. Creation is amplified. Publish a book or sell a painting or customize your car or design a house—whatever your passion, it’s easier to do it, it’s faster to do it, and it’s more likely that (part of) the world will notice what you do. The ability to reach and change those around you has been changed forever by the connections of the Internet and the fact that anyone, anywhere can publish to the world.
  2. Being rich allows us to do what we want, and we want to be weird. Here rich means rich enough to care about choice; richness is not measured by financial wealth in this context.
  3. It’s easier than ever to reach particular pockets of weird people with stuff they’re obsessed with. Consider all the niche sites that are catering to the long tail and you will agree.
  4. Tribes are better connected. One hundred and forty-one brave firefighters in Barbados belong to the Bajan firefighters group on Ning. This online community connects them and reassures them that they are normal in their weirdness. They’re obsessed about their craft, and they want to do it right. Not right by our standards (most of us don’t understand why someone would spend 80 hours a week risking one’s life for free) but by the standards of the community of firefighters.

The gradual shift in behavior is illustrated by the caption “Bell Curve is spreading”.


1950’s- 1960’s : Distribution of behaviors was tightly grouped 


1970s–1980s: Distribution of behaviors was spreading



By 2010, the distribution of behaviors had spread to the point where there was more weird outside the box than normal inside it.

Now, as the curve spreads, the geeks are even geekier. They don’t want the new thing; they want the beta version. They don’t want the thing that’s new; they want the thing that’s so new that the other geeks don’t even know about it. And a geek with an eye for denim might be specializing—denim isn’t enough, it has to be selvedge denim, or even better, selvedge denim from Japan.

Seth warns people who cater to the mass by saying:

  • If you cater to the normal, you will disappoint the weird. And as the world gets weirder, that’s a dumb strategy.
  • The chance to become the next Wonder Bread/ Budweiser/Chevy is seductive, but no longer practical. The field is too crowded, and there’s not enough upside after you build a middle-of-the-road normal brand.
  • Average is for marketers who don’t have enough information to be accurate.
  • No mass. No center. Our culture is now a collection of tribes, and each tribe is a community of interests, many of whom get along, some that don’t.

Takeaway:

We have all become weird; our behaviors, tastes, preferences and hobbies are becoming weirder. Be it in business or in creating art, those who adopt a strategy of pursuing the weird have consistently outperformed those who seek to create mass.


The author of this book, Thomas M. Sterner, is a piano technician for a major performing arts center. He admits that, in his career spanning 25 years, he has faced a lot of challenges in keeping himself disciplined and focused (considering that the act of tuning and maintaining a piano, an instrument with 88 keys, is repetitious and tedious by nature). Over a period of time, the author developed a mindset that has kept him productive. Through this book, Sterner shares his method of remaining disciplined.

Process not the Product

In any form of practice, it is important to focus on the process and not on the product. There are umpteen variations of this statement that one gets to hear. The author acknowledges this but goes on to add his own flavor to the statement. He uses music as an example to show that focusing on the process takes care of the product, but not vice versa. He says

Keep yourself process-oriented. Stay in the present. Make the process the goal and use the overall goal as a rudder to steer your efforts. Be deliberate, have an intention about what you want to accomplish, and be aware of that intention. Doing these things will eliminate the judgments and emotions that come from a product-oriented or results-oriented mind.

It’s How You Look At It

The author gives a beautiful analogy of a flower to shift one’s perspective towards everything in life. He says

Ask yourself: at what point in a flower’s life, from seed to full bloom, has it reached perfection? Let’s look at this right now and see what nature is teaching us every day as we walk past the flowers in our garden. At what point is a flower perfect? Is it when it is nothing more than a seed in your hand waiting to be planted? All that it will ever be is there in that moment. Is it when it first starts to germinate unseen under several inches of the soil? This is when it displays the first visible signs of the miracle we call creation. How about when it first pokes its head through the surface and sees the face of the sun for the first time? All of its energies have gone into reaching for this source of life; until this point, it has had nothing more than an inner voice telling it which way to grow to find it. What about when it begins to flower? This is when its own individual properties start to be seen. The shape of the leaves, the number of blooms are all unique to just this one flower, even among the other flowers of the same species. Or is it the stage of full bloom, the crescendo of all of the energy and effort it took to reach this point in its life? Let’s not forget that humble and quiet ending when it returns to the soil from where it came. At what point is the flower perfect? I hope you already know that the answer is that it is always perfect.

Using this example, the author suggests that by following a present-minded approach, one can experience tremendous relief from the fictitious, self-imposed pressures and expectations that only slow one’s progress.

Perception Changes Create Patience

Patience is probably at the top of everyone’s list of most sought-after virtues. One of the reasons we become impatient is that we step out of the NOW. There is a saying that most of what we worry about never comes to pass. Thinking about a situation before you are in it only scatters your energy. The first step toward patience is to become aware of when your internal dialog is running wild and dragging you with it. If you are not aware of this when it is happening, which is probably most of the time, you are not in control. The second part is understanding and accepting that there is no such thing as reaching a point of perfection in anything. True perfection is both always evolving and at the same time always present within you, just like the flower.

Progress is a natural result of staying focused on the process of doing anything. When you stay on purpose, focused in the present moment, the goal comes to you with frictionless ease. However, when you constantly focus on the goal you are aiming for, you push it away instead of pulling it toward you. In every moment of your struggle, by looking at the goal and constantly referencing your position to it, you are affirming to yourself that you haven’t reached it. You only need to acknowledge the goal to yourself occasionally, using it as a rudder to keep you moving in the right direction.

Suppose you are trying to learn how to play a piece of music and you come from this new perspective. Your experience will be totally different than what we usually think of in terms of learning to play a musical instrument. In the old way, you are sure that you are not going to be happy or “successful” until you can play the piece of music flawlessly. Every wrong note you hit, every moment you spend struggling with the piece, is an affirmation that you have not reached your goal. If, however, your goal is learning to play the piece of music, then the feeling of struggle dissolves away. With each moment you spend putting effort into learning the piece, you are achieving your goal. An incorrect note is just part of learning how to play the correct note; it is not a judgment of your playing ability. In each moment you spend with the instrument, you are learning information and gaining energy that will work for you in other pieces of music. Your comprehension of music and the experience of learning it are expanding. All of this is happening with no sense of frustration or impatience. What more could you ask for from just a shift in perspective?

Four S words – Simplify, Small, Short and Slow

The author shows the interconnectedness between these words and the way these four words can be used to structure any work, be it following a fitness regimen, coding an algo, playing an instrument, or cleaning up the house. Any task that is overwhelming at first sight might put us off, and sometimes we tend to permanently shelve it. But once we simplify the goal, divide the simplified goal into small sections, and do these sections in short durations at a slow pace, paradoxically, we get far more done, and far more efficiently. This might sound all very obvious, but then it is relevant to ask oneself the question, “When was the last time you did all four things: simplified a project, took a small section, worked on it for a short time and, more importantly, slowly?” More than any other place, I can relate this to music. You can’t master a raaga without the four components mentioned. You have to simplify the goal of a typical 1.5-hour rendition of a raaga, divide it into small sections, practice one of the small sections for a short interval, let’s say 45 minutes, and, more importantly, practice it slowly. My sitar guru was mentioning the other day Budhaditya Mukherjee, a sitar maestro. In an interview, when asked about his practice regimen, he replied that he practiced 3 hours in the morning and 3 hours in the evening and, more importantly, he does it EVERY SINGLE DAY. Unlike the Ravi Shankars of the world, who quote that they practice 15–20 hours a day, which seems practically impossible for a human being, Budhaditya Mukherjee’s ritual sounds more pragmatic and realistic. In the same interview, he also mentions that a combination of Simplify + Small + Short + Slow is quintessential for mastering a raaga.

Equanimity and DOC

Calmness and an even temper are the words that go along with equanimity. These are the characteristics that are desirable when we are working on something. However, there is an evil beast called “judgement” that sometimes jumps upon us. Judgement is inevitable in our lives, but it becomes pathological when we overdo it, and most of the time we overdo it. We judge everything in life, most of it unconsciously. We imagine hypothetical scenarios and think about the possible outcomes and the possible judgments that we, or others, would make in such scenarios. It’s like running simulations to create parallel worlds and checking the parameter values. That is good in statistics, where alternate worlds give confidence intervals on parameters; it is detrimental when we are working on something, as it robs us of the NOW. It happens to all of us. We are doing something, be it running, reading, programming or playing an instrument, and instead of just being in the present, we start judging it. We think of the next activity that needs to be done, of some odd conversation with someone, or of hypothetical situations. We unconsciously engage ourselves in things that have nothing to do with what we are CURRENTLY doing. The judgement gives rise to emotions, and they stem from a sense that “this is right and that is wrong” or “this is good and that is bad.” Right and good make us happy, while bad and wrong make us upset or sad. We feel that right and good are at least approaching “ideal,” while wrong and bad are moving away from it. We all want to be happy and have an ideal life, but what constitutes right and wrong is neither universal nor constant. The evaluations and judgments we make unconsciously in every second of our lives jump-start our emotions and bring us so much anxiety and stress. What can be done about it? Doing work without the judgement part is far more effective, as the execution of the task becomes that much more clinical.

The author suggests meditation as a means to get out of this ever-judgmental mode of mind. He also offers an adjunct method, which he calls “DOC – Do + Observe + Correct”. He gives anecdotes and experiences from his work as a piano technician that bring out the importance of DOC in performing tasks efficiently. By using DOC repeatedly for most of the tasks in our daily lives, it is possible to remove the judgement that clouds our thinking and execution. The C in DOC refers to evaluation, not judgment: evaluation comes before the act of passing judgement. After evaluating, you skip the judgement part and go over the DOC cycle again, as judgment has no value in the DOC mindset.

Teach and Learn from Children

Time perception is an integral part of the difference between adults and children. In general, children don’t seem to have a sense of where they are going in life. There is today and that’s it. They live in the present moment, but not really by their own choice; it’s just how they are. So making them do an activity like learning to play an instrument, or working on anything that takes time and effort to master, is difficult, as they don’t see a point in doing it. There is no instant gratification in learning math or learning an instrument; it is usually an activity that involves a lot of struggle and failure. So there is a paradox here. What’s frustrating as an adult, with regard to teaching them to stay in the present when they are engaged in something that requires perseverance, is that they can’t see the point. Why work at something that requires a long-term commitment, a perception of time outside the present moment? Children are always in the NOW, and adults find it difficult to be in the NOW. Is there something that parents and children can learn from and teach each other? The author shares his experiences in dealing with his two daughters in this context.

Takeaway:

A Piano technician’s job from the outside looks repetitive. This book provides a different perspective of the work and in doing so, provides valuable guidance for deliberate practice. 


This book by J.R. Norris is a classic reference for Markov chains. It is rare for a recent paper on Markov chains not to mention this book somewhere. I will attempt to summarize the main chapters of the book in this post.

The book starts off by saying that Markov processes are random processes that don’t retain memory. When the state space is restricted to finitely or countably many states, these processes are called Markov chains. The introduction gives the reader great clarity by listing the kinds of chains that will be dealt with in the book. The following gives a basic flavor of the Markov chains dealt with in the introductory chapters:


Fig 1

Fig 2

Fig 3

Fig 1 is a finite Markov chain with three states 1, 2, 3. Since the movement of the random variable is restricted to discrete states, the chain is termed a discrete chain. In Fig 2, the random variable moves from 0 to 1 after waiting for a time T with an exponential density. In Fig 3, the random variable moves from 0 to 1, then from 1 to 2, and so on, with an exponential waiting time in each state, so that the number of jumps forms a Poisson process with rate lambda. The latter two chains correspond to continuous-time Markov chains.

A very useful diagram which summarizes a lot of terminology is the following Markov chain.


The features of this Markov chain are the following :

  • {0}, {1,2,3} and {4,5,6} are communicating classes
  • {1,2,3} and {4,5,6} are closed classes; these closed classes are recurrent
  • {0} is a transient state
  • {4,5,6} is a periodic class
  • {1,2,3} is an aperiodic class

All these terms become intuitively clear in the context of the above Markov chain. Clear definitions of these terms are presented in subsequent chapters. A few questions are posed about the above Markov chain so as to motivate the reader to think about ways to solve the various kinds of problems that arise in the context of discrete Markov chains.

Chapter 1: Discrete-Time Markov Chains

This chapter forms one of the core chapters of the book and the important theorems on the following are stated and proved:

  • Hitting times
  • Strong Markov Property
  • Characterizing recurrence and transience
  • Invariant Distributions
  • Positive recurrence
  • Convergence to equilibrium
  • Reversibility
  • Long run averages

First, the necessary terms are defined: state space, initial distribution, transition matrix, stochastic matrix. Using these objects, a Markov chain is defined; its symbolic representation essentially captures the memory-less property of the chain. The chapter then introduces a method to calculate the probability of being in a particular state after n transitions. The obvious but brute-force method is to multiply the transition matrix by itself n times and then combine the initial distribution with this powered matrix. There is a better way, which this section explains using linear algebra: powers of the matrix are easily calculated by diagonalization. Basically, one finds the eigenvalues of the transition matrix, writes the power of the matrix in terms of the eigenvalues, and then calculates the required probability. This method is computationally efficient. Once the eigenvectors are found, any element of the n-step transition matrix can be written in the form p^(n)_ij = a_1 λ_1^n + a_2 λ_2^n + … + a_m λ_m^n, where λ_1, …, λ_m are the (distinct) eigenvalues of the transition matrix and the constants a_k depend on i and j.
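The diagonalization recipe can be sketched in a few lines. The two-state transition matrix below is made up purely for illustration; the sketch builds the n-step matrix from the eigendecomposition and checks it against brute-force matrix powering:

```python
import numpy as np

# Hypothetical 2-state transition matrix (each row sums to 1).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Diagonalize: P = V diag(lam) V^{-1}, hence P^n = V diag(lam^n) V^{-1}.
lam, V = np.linalg.eig(P)

def n_step(lam, V, n):
    """n-step transition matrix assembled from the eigendecomposition."""
    return V @ np.diag(lam ** n) @ np.linalg.inv(V)

P10 = n_step(lam, V, 10)
```

For large n this is much cheaper than repeated multiplication, since only the scalar eigenvalues get raised to the nth power.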


The chapter introduces the “class structure”, a way to identify different states and categorize them. Two states are said to “communicate” when there is positive probability of reaching each from the other. A set of states in which every state communicates with every other forms a communicating class. A class is “closed” if there is no escape from it: once the chain enters a closed class, it stays there. “Communicates” is actually an equivalence relation, and set theory tells us that we can partition the state space based on this equivalence relation. If the whole chain forms one single class, the chain is called “irreducible”. If a single state forms a closed class on its own, it is called an “absorbing state”: if the chain enters this state, it remains there forever.
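The class structure can be computed mechanically from the transition matrix. In this sketch (the three-state chain is made up), states i and j communicate when each is reachable from the other with positive probability, which we read off from powers of the adjacency pattern of P:

```python
import numpy as np

def communicating_classes(P):
    """Partition the states of the chain with matrix P into communicating classes."""
    n = len(P)
    # Adjacency pattern with self-loops, as integers so we can take matrix powers.
    A = (np.asarray(P) > 0).astype(np.int64) + np.eye(n, dtype=np.int64)
    reach = np.linalg.matrix_power(A, n) > 0   # reach[i, j]: j reachable from i
    classes, seen = [], set()
    for i in range(n):
        if i not in seen:
            cls = sorted(j for j in range(n) if reach[i, j] and reach[j, i])
            classes.append(cls)
            seen.update(cls)
    return classes

# Made-up chain: state 0 can escape into {1, 2}, which is a closed class.
P = [[0.5, 0.5, 0.0],
     [0.0, 0.5, 0.5],
     [0.0, 0.5, 0.5]]
```

Here `communicating_classes(P)` yields `{0}` as a class of its own (transient, not closed) and `{1, 2}` as a closed class.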

Hitting times and absorption probabilities are introduced subsequently. The first time I came across hitting times was in Marek Capinski’s book and subsequently in Shreve’s book. I could not appreciate those concepts at that time as much as I should have. Hitting times were defined and various distributions were computed, some in closed form, some using numerical analysis. I did not have the faintest clue about their application in the real world, except in the American options case. Years later, when I was developing a pairs model, things became clear: modeling the hitting times for the spread and estimating the parameters of the model made me realize the practical importance of hitting times. This book has given me a lens, via Markov chains, through which to view hitting times. The key takeaway from this section is that the vector of hitting probabilities (and of mean hitting times) is found by solving a set of linear equations, and the desired solution is the minimal non-negative one. The fact that the solution is the minimal non-negative one plays an important role in pinning down the solution to the system of equations. This is shown using the gambler’s ruin example and a simple birth-death chain.
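Here is a sketch of the linear-equations approach for the gambler's ruin chain: state i is the gambler's fortune, 0 and N are absorbing, and h_i is the probability of hitting 0 from i. For this finite chain with absorbing barriers the system has a unique solution, which coincides with the minimal non-negative one:

```python
import numpy as np

def ruin_probabilities(N, p):
    """Solve h_i = q*h_{i-1} + p*h_{i+1}, with h_0 = 1 and h_N = 0,
    for the probability of ruin (hitting 0) from each starting fortune i."""
    q = 1.0 - p
    A = np.zeros((N + 1, N + 1))
    b = np.zeros(N + 1)
    A[0, 0] = 1.0; b[0] = 1.0          # boundary: h_0 = 1
    A[N, N] = 1.0; b[N] = 0.0          # boundary: h_N = 0
    for i in range(1, N):
        A[i, i] = 1.0
        A[i, i - 1] = -q
        A[i, i + 1] = -p
    return np.linalg.solve(A, b)

h = ruin_probabilities(10, 0.5)
```

For the fair game p = 0.5 the computed vector matches the classical closed form h_i = 1 − i/N.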

The Strong Markov Property is introduced subsequently. To summarize in plain English, the intention behind the word “strong” is as follows: the Markov property says the chain is memory-less conditional on its value at a fixed time m. A natural generalization to ask about is, “Does the memory-less property also hold at random times, instead of the fixed time in the definition of the Markov chain?” The answer is yes, but only for a specific class of random times called stopping times. A “stopping time” is a random time whose value depends only on the history of the path and not on the future of the chain. In this context, the first passage time and the first hitting time are defined and are used to prove the “Strong Markov Property” theorem, which states that the Markov property is preserved at stopping times. Okay, so what do we gain from this? This property gives us the flexibility to compute the entire probability generating function of the hitting times. If the chain is in state i, the probability of hitting 0, say, can be computed via a recurrence relation together with the condition that the solution of the linear equations is the minimal non-negative one. However, if one has to compute the entire set of probabilities at once, the generating function approach works like a charm. The strong Markov property is used to come up with closed forms for the probability generating functions in the gambler’s ruin and other examples. The reader can at once see the importance of knowing the generating function, as it is basically the signature of the random variable: you can calculate hitting probabilities starting from any state, and you can compute the mean hitting time by simply differentiating the generating function.
So the main takeaway from this section is that the Strong Markov Property helps in computing more general mathematical objects like generating functions, which provide much more information about the hitting time probabilities.

A state in the Markov chain’s state space is recurrent if the chain started there returns to it with probability 1, and transient if the return probability is strictly less than 1. The section on transient and recurrent behavior is very important, as it provides definitions for these terms and links them to the class structure. Some of the main theorems mentioned in this section are:

  • Every recurrent class is closed
  • Every finite closed class is recurrent
  • If C is a communicating class, either all its states are recurrent or all are transient

The first passage time decomposition, given as an exercise problem, is very useful for getting another perspective on characterizing a recurrent state via the probability of return at the nth step. The chapter then moves on to using the following condition for state classification: a state i is recurrent if and only if the series sum over n of p^(n)_ii diverges, and transient if and only if the series converges.


The above conditions are applied to random walks in 1D, 2D and 3D space so that a reader can get an idea of how to use the same approach in other situations.
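The 1D/2D versus 3D distinction (Pólya's theorem: the simple symmetric walk is recurrent in 1D and 2D but transient in 3D) can be eyeballed with a crude Monte Carlo sketch; the step and trial counts below are arbitrary choices:

```python
import random

def return_estimate(dim, steps, trials, seed=0):
    """Estimate the probability that a simple symmetric random walk on Z^dim
    returns to the origin within `steps` steps."""
    rng = random.Random(seed)
    returned = 0
    for _ in range(trials):
        pos = [0] * dim
        for _ in range(steps):
            axis = rng.randrange(dim)            # pick a coordinate...
            pos[axis] += rng.choice((-1, 1))     # ...and step +/-1 along it
            if all(c == 0 for c in pos):
                returned += 1
                break
    return returned / trials

p1 = return_estimate(1, 500, 1000)   # creeps toward 1 as steps grows
p3 = return_estimate(3, 500, 1000)   # stays near the known value of about 0.34
```

The 1D estimate sits close to 1 already for a few hundred steps, while the 3D estimate stalls well below 1 no matter how long the walks run.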

The chapter then shifts to explaining invariant distributions. An invariant distribution of a transition matrix is a distribution with the property of being stationary: if the chain starts in it, the distribution of the chain’s position stays the same at every step, and for a well-behaved chain, whatever initial distribution you start from, the distribution of the chain’s position converges to the invariant distribution as the number of steps grows. In other words, the probability of finding the chain in state j becomes constant as the number of steps increases. The calculation of the invariant distribution becomes easy if one uses linear algebra: compute the eigenvectors of the transpose of the transition matrix and pick the one corresponding to eigenvalue 1; after normalization, that vector is the invariant distribution of the Markov chain. The chapter then introduces the expected time between successive visits to a recurrent state. This expected return time is very useful in classifying states: if it is finite, the state is called “positive recurrent”, and if it is infinite, the state is called “null recurrent”. Thus states can be classified into transient, positive recurrent and null recurrent states; in an irreducible chain, all states are of the same type. Computation of the invariant measure also leads to an easy calculation of expected return times.
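The eigenvector recipe in one short sketch (the three-state transition matrix is made up for illustration):

```python
import numpy as np

# Made-up 3-state transition matrix (rows sum to 1).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])

# Invariant distribution: eigenvector of P-transpose for eigenvalue 1,
# normalized so that its entries sum to 1.
lam, V = np.linalg.eig(P.T)
k = np.argmin(np.abs(lam - 1.0))   # index of the eigenvalue closest to 1
pi = np.real(V[:, k])
pi = pi / pi.sum()
```

The normalized vector satisfies pi P = pi, i.e., it is left unchanged by one step of the chain.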

Another related concept discussed is periodicity. In an irreducible chain, either every state is periodic or every state is aperiodic. Basically, a state is aperiodic if the chain can revisit it at all sufficiently large numbers of steps, and periodic if returns are possible only at multiples of some period greater than 1. The relevance of periodicity is that convergence to the invariant distribution holds for an irreducible, aperiodic chain. An extremely important aspect related to invariant distributions is then discussed: time reversibility. A sufficient condition for a distribution to be invariant is the “detailed balance” condition. This condition is defined and applied to a few examples. The chapter ends with a proof of the ergodic theorem, the pivotal achievement of A. A. Markov.
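The detailed balance condition pi_i p_ij = pi_j p_ji can be checked numerically, and for birth-death chains it even lets us build pi directly up the chain, as in this sketch (the chain is made up):

```python
import numpy as np

# Made-up 3-state birth-death chain (only nearest-neighbor moves).
P = np.array([[0.5, 0.5, 0.0],
              [0.25, 0.5, 0.25],
              [0.0, 0.5, 0.5]])

# For birth-death chains, detailed balance gives pi recursively:
# pi_{i+1} = pi_i * p(i, i+1) / p(i+1, i).
pi = np.array([1.0, 0.0, 0.0])
for i in range(2):
    pi[i + 1] = pi[i] * P[i, i + 1] / P[i + 1, i]
pi /= pi.sum()

# Verify pi_i * p_ij == pi_j * p_ji for every pair of states.
balanced = all(np.isclose(pi[i] * P[i, j], pi[j] * P[j, i])
               for i in range(3) for j in range(3))
```

Since detailed balance holds, pi is automatically invariant for P, without solving the full stationarity equations.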

In the real world, no one hands us the transition matrix; one has to estimate it somehow. Given a realization of a Markov chain and a set of states, it is important to estimate the transition matrix. The chapter shows the approach typically used in most situations, the MLE method: one writes down the likelihood function and uses Lagrange multipliers to compute the transition probabilities. The problem with this approach is that the transition probability estimates are not consistent for transient states, so the approach is modified to make the estimates consistent.
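For a chain observed over a long run, the MLE reduces to "count and normalize": the estimate of p_ij is the number of observed i→j transitions divided by the number of visits to i. A sketch with a made-up two-state sample path:

```python
import numpy as np

def estimate_transition_matrix(path, n_states):
    """MLE of the transition matrix from one observed path:
    count transitions and normalize each row."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(path[:-1], path[1:]):
        counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0   # avoid dividing by zero for unvisited states
    return counts / row_sums

# Hypothetical observed path of a 2-state chain.
path = [0, 1, 1, 0, 1, 1, 1, 0, 0, 1]
P_hat = estimate_transition_matrix(path, 2)
```

Each row of the estimate is a probability distribution over next states, and the estimates sharpen as the path visits each state more often.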

Chapter 2: Continuous-Time Markov Chains – I

Continuous-time Markov chains are more involved, and hence the author has split the material into two chapters: the first equips the reader with the machinery to understand continuous-time processes, and the second uses this machinery to discuss continuous-time Markov chains. This chapter covers the machinery.

The first thing the chapter touches upon is the infinitesimal generator, or Q-matrix. This generator is defined and various properties are explored. Once this is done, the crucial link between the transition matrix and the Q-matrix is provided: the transition matrix is the matrix exponential P(t) = e^(tQ), the solution of the forward equation P′(t) = P(t)Q (and the backward equation P′(t) = QP(t)) with P(0) = I.

This connection between the P and Q matrices gives the flexibility of creating a Markov process at fine grid points. A few examples are provided to illustrate computing the transition matrix given the Q-matrix. The chapter then moves on to a first introduction to continuous-time random processes. A very important result from measure theory says that right-continuous processes are determined by their finite-dimensional distributions. Such right-continuous processes are chosen so that their probabilistic behavior can be summarized with the help of jump times and holding times. The most important thing to notice is this description of a stochastic process using jump times and holding times: if you look carefully at the basic stochastic processes, what one assumes about holding times and jump times is the basis for the realization of the random process. For example, in a Poisson process the holding times are i.i.d. exponentials, i.e., the holding time before a jump is independent of the holding time after the jump. The chapter then introduces the exponential distribution and shows the connection between the exponential distribution and the memory-less property. An important theorem stated in this section is: a sum of independent exponential random variables is either certain to be finite or certain to be infinite, depending on whether the sum of the reciprocals of the rates converges. Another term introduced in this section is the “first explosion time”, defined as the supremum of all jump times. A process that takes the value infinity for all t greater than the first explosion time is called the “minimal” process; “minimal” does not refer to the state of the process but to the interval of time over which the process is active.
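The relation P(t) = e^(tQ) can be sketched with a hand-rolled truncated power series for the matrix exponential. The two-state Q-matrix below is made up: the off-diagonal entries are jump rates, and each row sums to zero:

```python
import numpy as np

def mat_exp(A, terms=40):
    """Matrix exponential via a truncated Taylor series
    (adequate for small matrices and small t)."""
    out = np.eye(len(A))
    term = np.eye(len(A))
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

# Made-up generator: jump 0 -> 1 at rate 2, jump 1 -> 0 at rate 1.
Q = np.array([[-2.0, 2.0],
              [1.0, -1.0]])

P_half = mat_exp(0.5 * Q)   # transition probabilities over a time interval of 0.5
```

Each row of P(t) sums to 1 precisely because the rows of Q sum to 0; for this two-state generator the closed form is P_00(t) = 1/3 + (2/3)e^(−3t), which the series reproduces.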

The section moves on to defining Poisson processes based on holding times and jump times. A simple way to construct a Poisson process is to simulate a sequence of i.i.d. exponential random variables with rate lambda as holding times, and define the jump chain so that it jumps up by 1 unit after every holding time. Three equivalent conditions are stated for identifying a Poisson process, and the proof involves the forward equations. Subsequently a few properties of Poisson processes are explored. A counting process is characterized by the distribution of its increments: the increments might be independent, or they can be stationary, meaning the distribution depends only on the length of the interval. So the first question one must ask when applying a counting process to a problem is, "Does the context have independent increments, stationary increments, or both?"
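The holding-time construction described above is easy to carry out in code. A small sketch (rate and horizon are illustrative values of my own choosing): simulate i.i.d. Exponential(lambda) holding times, take their partial sums as jump times, and count jumps up to time t.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 2.0   # rate of the Poisson process (assumed value)

# Holding times are i.i.d. Exponential(lam); jump times are their partial sums.
holding_times = rng.exponential(scale=1.0 / lam, size=1000)
jump_times = np.cumsum(holding_times)

def N(t):
    """Number of jumps up to time t -- the Poisson process at time t."""
    return int(np.searchsorted(jump_times, t, side="right"))

# E[N(t)] = lam * t, so N(10) should be around 20 for lam = 2.
```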

The section then introduces birth processes, which can be thought of as a generalization of the Poisson process: the holding times are still exponentially distributed, but with state-dependent rates. As in the case of the Poisson process, the section proceeds by building the Q matrix, formulating the forward equation for computing transition probabilities from the Q matrix, and finally defining the birth process in three equivalent ways.
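The only change from the Poisson simulation is that the holding rate now depends on the current state. A minimal sketch, assuming simple linear birth rates q_i = (i + 1) * 0.5 purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_birth_process(rates, x0, T):
    """Simulate a pure birth process: from state i, hold for an
    Exponential(rates[i]) time, then jump to state i + 1."""
    t, x = 0.0, x0
    path = [(0.0, x0)]                    # (time, state) at each jump
    while x + 1 < len(rates):
        hold = rng.exponential(1.0 / rates[x])
        if t + hold > T:                  # horizon reached before next jump
            break
        t += hold
        x += 1
        path.append((t, x))
    return path

# Illustrative state-dependent rates q_i = (i + 1) * 0.5.
rates = [(i + 1) * 0.5 for i in range(50)]
path = simulate_birth_process(rates, x0=0, T=5.0)
```

With constant rates this reduces to the Poisson construction; with q_i growing fast enough that the reciprocals have a finite sum, the process explodes in finite time, which is exactly the "first explosion time" phenomenon discussed earlier.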

The most interesting part of this chapter is the section on jump times and holding times, where continuous-time Markov chains are formulated using the joint distribution of the holding times and the jump chain. It goes on to define a Markov chain as follows:


It then gives three constructions of a Markov chain. The one I liked comes with a good analogy:

Imagine the state-space I as a labyrinth of chambers and passages, each passage shut off by a single door which opens briefly from time to time to allow you through in one direction only. Suppose the door giving access to chamber j from chamber i opens at the jump times of a Poisson process of rate q(i, j), and that you take every chance to move that you can; then you will perform a Markov chain with Q-matrix Q.

The above analogy is then used to construct a Markov chain from a set of Poisson processes. I found this kind of analogy very appealing, as such things are easy to understand and recall in various situations. In fact, this whole construction of Markov processes using holding times and jump times is itself very interesting and makes tremendous sense for getting a general idea of the way such random processes work. The sections on explosion times, forward and backward equations, and non-minimal chains are to be read selectively, as the difficulty level shoots up exponentially. In the author's words, "There is an irreducible level of difficulty here." :)
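The jump-chain/holding-time construction generalizes directly to any Q matrix: hold in state i for an Exponential(q_i) time with q_i = -Q[i][i], then jump according to the jump-chain probabilities Q[i][j] / q_i. A minimal Python sketch, reusing a made-up 3-state generator:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_ctmc(Q, x0, T):
    """Simulate a continuous-time Markov chain from its generator Q
    via the jump chain + holding times construction."""
    n = Q.shape[0]
    t, x = 0.0, x0
    path = [(0.0, x0)]                 # (time, state) at each jump
    while True:
        qi = -Q[x, x]                  # holding rate in state x
        if qi <= 0:                    # absorbing state: hold forever
            break
        t += rng.exponential(1.0 / qi)
        if t > T:
            break
        probs = Q[x].copy()            # jump-chain row: Q[x, j] / qi, j != x
        probs[x] = 0.0
        probs /= qi
        x = int(rng.choice(n, p=probs))
        path.append((t, x))
    return path

# A hypothetical 3-state generator (rows sum to zero).
Q = np.array([[-2.0,  1.0,  1.0],
              [ 1.0, -1.0,  0.0],
              [ 2.0,  1.0, -3.0]])
path = simulate_ctmc(Q, x0=0, T=20.0)
```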

Chapter 3 : Continuous Time Markov Chains – II

With the necessary mathematical tools and framework covered in the previous chapter, the author introduces continuous-time Markov chain concepts in the same order as for the discrete case. Basic terms are introduced: the generator matrix, the initial distribution, the transition matrix and transition probabilities. The class structure of the states is discussed, wherein states are categorized based on whether they communicate. In this context, the systems of equations needed to solve for hitting probabilities and expected hitting times are introduced; given the Q matrix, these systems determine the minimal non-negative solutions for the hitting probabilities and expected hitting times of various states. Subsequently, recurrent and transient states are defined in the context of continuous-time processes, and conditions for verifying the recurrence or transience of a state are given. These are in integral form, whereas in Chapter 1 they were in summation form for discrete Markov chains. Invariant distributions and the ergodic theorem are discussed.
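The expected-hitting-time equations mentioned above reduce to a small linear system: set k_i = 0 for every target state i, and require sum_j Q[i, j] * k_j = -1 for every other state. A sketch with the same illustrative 3-state generator used throughout (not an example from the book), taking state 2 as the target:

```python
import numpy as np

# Hypothetical 3-state generator (rows sum to zero).
Q = np.array([[-2.0,  1.0,  1.0],
              [ 1.0, -1.0,  0.0],
              [ 2.0,  1.0, -3.0]])

# Expected hitting times k of target state 2:
# k_2 = 0, and sum_j Q[i, j] * k_j = -1 for i != 2.
target = 2
others = [i for i in range(Q.shape[0]) if i != target]

# Restrict Q to the non-target states and solve the linear system.
A = Q[np.ix_(others, others)]
k_others = np.linalg.solve(A, -np.ones(len(others)))

k = np.zeros(Q.shape[0])
k[others] = k_others
# For this Q, k works out to [2.0, 3.0, 0.0].
```

Hitting probabilities for a target set satisfy an analogous linear system, with the right-hand side driven by the boundary condition h_i = 1 on the target instead of the constant -1.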

The basic funda that this chapter and the previous chapter (for the discrete case) teach is that one must analyze a Markov chain by posing questions such as the following to the problem at hand:

  • What is the transition matrix?
  • What are the states of the process?
  • Is there an underlying Q matrix for the process?
  • If it is a continuous-time Markov chain, what is the transition matrix of the jump chain?
  • What are the types of states?
  • How can one identify or verify whether a state is recurrent or transient?
  • What are the first-passage-time probabilities and expected hitting times for various states?
  • Does the Markov chain converge to an equilibrium distribution?
  • Is the Markov chain irreducible?
  • Is the Markov chain periodic or aperiodic?
  • Does time reversal hold? This is crucial for MCMC applications, where detailed balance is one of the conditions for chain convergence (though it is a sufficient condition, not a necessary one).
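The last two questions, equilibrium and reversibility, can be checked mechanically from the generator: solve pi Q = 0 with the entries of pi summing to 1, then test detailed balance pi_i * q_ij = pi_j * q_ji. A sketch using the same made-up 3-state generator as before:

```python
import numpy as np

# Hypothetical 3-state generator (rows sum to zero).
Q = np.array([[-2.0,  1.0,  1.0],
              [ 1.0, -1.0,  0.0],
              [ 2.0,  1.0, -3.0]])

# Invariant distribution: solve pi @ Q = 0 subject to sum(pi) = 1 by
# replacing one (redundant) balance equation with the normalization.
n = Q.shape[0]
M = np.vstack([Q.T[:-1], np.ones(n)])
b = np.zeros(n)
b[-1] = 1.0
pi = np.linalg.solve(M, b)          # -> [0.375, 0.5, 0.125] for this Q

# Detailed balance check: pi_i * q_ij == pi_j * q_ji for all i, j.
flows = pi[:, None] * Q
reversible = np.allclose(flows, flows.T)
```

For this particular Q the chain has an invariant distribution but is not reversible, which illustrates the point in the list above: detailed balance is sufficient for invariance, not necessary.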

The first 3 chapters comprise ~60% of the content of the book. I found Chapter 4 very dense and overwhelming, as the author tries to extend Markov chains and cram concepts relating to martingales, potential theory, electrical networks and Brownian motion into just 40 pages. The last chapter gives a flavor of applications of Markov chains with sample models in biology, queuing theory, resource management, Markov decision processes and MCMC.

Takeaway:

The book gives an intuitive yet rigorous introduction to discrete and continuous Markov chains. The decomposition of a Markov chain in terms of holding times and jump chain to explain various concepts of a continuous Markov process is the highlight of this book.