### June 2013

One can conveniently describe a linear programming problem by stating the goal as minimizing/maximizing a function subject to a set of constraints. The word “linear” restricts the objective function and the constraints to a linear form. Despite this seemingly harsh restriction, LP can be used to attack a LOT of problems. This book is a Linear Programming 101 book and a book with visuals. I am a big fan of books with visuals. The first thing the author emphasizes is

It is rather difficult to optimize or even to determine an overall measure of effectiveness, and thus, one attempts to suboptimize— break up the problem into tractable subproblems, each subproblem consisting of an applicable measure of effectiveness and associated constraints.

Given this background, the author urges the reader to look for subproblems and frame a mathematical model around them. The book gives some textbook-type examples to illustrate the basics of LP. Subsequently it describes a few generic problems that LP can solve:

• Problems based on Graphs/Networks
• Traveling Salesman Problem
• Game theory problems that don’t have a saddle point
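As a toy illustration of the kind of problem LP handles, here is a brute-force sketch that maximizes 3x + 2y subject to x + y ≤ 4, x ≤ 2, y ≤ 3, x, y ≥ 0 by enumerating the vertices of the feasible region. The numbers and names are my own example, not from the book; in practice one would use the simplex method or a library solver rather than this enumeration.

```python
from itertools import combinations

def solve_lp_2d(c, constraints):
    """Maximize c[0]*x + c[1]*y subject to a*x + b*y <= rhs constraints.

    A naive vertex-enumeration sketch: if an LP optimum exists, it is
    attained at a vertex, i.e. at an intersection of constraint boundaries.
    """
    eps = 1e-9
    best = None
    for (a1, b1, r1), (a2, b2, r2) in combinations(constraints, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < eps:
            continue                      # parallel boundaries, no vertex
        x = (r1 * b2 - r2 * b1) / det     # Cramer's rule for the 2x2 system
        y = (a1 * r2 - a2 * r1) / det
        if all(a * x + b * y <= r + eps for a, b, r in constraints):
            value = c[0] * x + c[1] * y   # evaluate objective at the vertex
            if best is None or value > best[0]:
                best = (value, x, y)
    return best

# max 3x + 2y  s.t.  x + y <= 4,  x <= 2,  y <= 3,  x >= 0,  y >= 0
constraints = [(1, 1, 4), (1, 0, 2), (0, 1, 3), (-1, 0, 0), (0, -1, 0)]
```

With the toy data above, the optimum is 10 at (x, y) = (2, 2).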

The appendix fleshes out the math behind the simplex method, integer programming, mixed integer programming, etc. The theorems mentioned in the appendix can serve as good motivation to look up more math-oriented books on LP. This book can serve as a good supplementary read for a course on Linear/Integer Programming.

A nice quote from the book relevant to teachers:

Do not try to satisfy your vanity by teaching a great many things. Awake their curiosity. It is enough to open their minds, do not overload them. Put there just a spark. If there is some good inflammable stuff, it will catch fire.

— Anatole France

The book is a story recounted by Turney Duff about his rise and fall on the buy side. Turney Duff graduates from Ohio University in 1994 and starts hunting for a job on Wall Street. Thanks to his uncle Tucker and a random conversation about a soap opera with an HR lady, he gets a job at Morgan Stanley in the Private Wealth group as a sales assistant. Three years later, his dream of getting into trading seems like a mirage, until one day he gets a chance to organize a party. He writes a newsletter that catches the imagination of everyone. The party, meant to boost the morale of the employees, is a grand success, and Turney becomes an instant hit among all the dealers and the top management. He thinks that this event might get him a trading job. Instead, two years pass and he is still a sales assistant at Morgan. His ex-Morgan colleagues help him get a job at Galleon, where he starts trading health care stocks.

Life at Galleon is disheartening for the first few years. He is humiliated and ridiculed by many of the top bosses, and he takes everything in his stride. At Galleon, he learns about all the shady ways the firm makes money: the way money is flipped back and forth between clients' accounts and their private admiral accounts, the reliance on insider information to make a ton of money, flipping a 100% long portfolio into a 100% short portfolio within a span of a few hours based on some hushed conversations with outsiders. The first time Turney makes half a million dollars is via a stranger's call giving him inside info on a stock. Turney's party-organizing skills are in demand at Galleon too, and as always, he delivers. All the employees, clients and brokers love the party hosted by Galleon. His life at Galleon begins to look up. In 2000 Turney makes his first million-dollar trade, again thanks to his information channels (insider info).

In the book, Turney describes the work environment at Galleon, and maybe it is atypical of a Wall Street hedge fund. Raj Rajaratnam creates an unhealthy competition among his own employees, pounds the traders to get additional information about stocks from whatever channels they can, and pushes the traders to meet sell-side brokers, analysts, anybody who can give them proprietary information. Luck favors Turney when the head trader at Galleon leaves and he gets a chance to run the health care fund. Turney follows the philosophy, “It is not what happens BETWEEN the trading hours that matters. Information flows to you based on what you do AFTER trading hours.”

In 2001, one of the founding members of Galleon leaves to start a hedge fund, Argus Partners, and Turney is poached from Galleon by his ex-boss. His life changes at Argus Partners, where he soon starts managing a billion-dollar portfolio of health care stocks. He becomes known on the street as part of the Health Care Mafia, a set of hedge funds that run multibillion-dollar health care portfolios.

An incident at a party changes his life FOREVER. He snorts cocaine casually and his story takes an irreversible twist. Turney gives a spicy description of all the excesses of a successful buy-side trader. Is it the cocaine, or is it the money from trading that makes him do all the crazy things? Well, maybe it's both. He marries a singer and tries settling down. But cocaine has already taken its toll on him. He goes to rehab thinking that a month away would cure his addiction. He comes back from rehab and vows that he will stay away from cocaine and lead the life of a responsible husband and parent. Life, as always, never goes as planned. He is caught up in the addiction again, loses his trading job, switches companies, his wife walks out, and he goes to rehab again. By the end of 2009, ten years after making his first million-dollar trade at Galleon, Turney lands in rehab again. This time though, after returning from rehab, the buy side has shut its doors on him.

There aren't many Python books that talk about data structures. One of the reasons could be that Python provides such a rich set of collections (list, dict, queue, set, etc.) that writing a new data structure might seem unnecessary. That being the case, it is always helpful to know the time complexity of the various operations on list, dict, etc. And for some of the algorithms based on trees and graphs, there is no option but to build an abstract data structure to suit the problem one is trying to solve. This book discusses all types of data structures using Python. Even though the book uses Python and the code eats up a lot of paper, there are bugs in several code fragments and there is no errata anywhere on the web, certainly not on the author's website. In any case, let me list down some of the points from the book.

Chapter 1 : Abstract Data Types :

This chapter introduces ADTs: an ADT is a programmer-defined data type that specifies a set of data values and a collection of well-defined operations that can be performed on those values. ADTs can be viewed as black boxes, with the set of operations grouped into four categories:

1. Constructors
2. Accessors
3. Mutators
4. Iterators

The benefits of working with ADTs:

• One can focus on solving the problem at hand instead of getting bogged down in the implementation details
• One can reduce logical errors that can occur from accidental misuse of storage structures and data types, by preventing direct access to the implementation
• The implementation of the abstract data type can be changed without having to modify the program code that uses the ADT
• It's easier to manage and divide larger programs into smaller modules, allowing different members of a team to work on separate modules

Common terms defined:

• Collection : Group of values with no implied organization or relationship between the individual values
• Container : An ADT that stores and organizes a collection
• Sequence : Container in which the elements are arranged in linear order
• Sorted Sequence : A sequence in which the elements are maintained in a prescribed order

The chapter describes a situation where one needs a bag ADT and then uses a list to implement it. In doing so, it explains the various things one needs to consider while implementing an ADT. There is also a section that explains the details of implementing an iterator for an ADT.
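A minimal sketch of such a list-backed bag (the method names are my own, not necessarily the book's exact API): a bag stores items in no particular order and allows duplicates.

```python
class Bag:
    """List-backed Bag ADT, a sketch of the chapter's exercise."""

    def __init__(self):
        self._items = []

    def __len__(self):
        return len(self._items)

    def __contains__(self, item):
        return item in self._items          # O(n) linear scan

    def __iter__(self):
        return iter(self._items)            # delegate to the list iterator

    def add(self, item):
        self._items.append(item)            # amortized O(1)

    def remove(self, item):
        assert item in self._items, "item not in bag"
        self._items.remove(item)            # O(n): search plus shift
```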

Chapter 2 : Arrays :

Python has a built-in collection object, list, that can be used for a wide variety of tasks. There is one downside to it: the physical space allocated to a list is usually larger than its logical size, because Python over-allocates to keep appends cheap. To work around this overhead, the chapter defines an Array in one and two dimensions using the ctypes module in Python. The chapter also gives a prototype for a Matrix class that essentially builds on the 2D Array class. The main learning from this chapter is that list is not always the best data structure to use. The chapter tabulates the Big-O efficiency of the most common list operations: indexing and index assignment are O(1), append is amortized O(1), while insertion, deletion and containment checks are O(n).

Chapter 3 : Sets and Maps

This chapter defines a set ADT and uses a list to implement this structure. I guess these elementary data structures are meant to give the reader some kind of practice in creating data structures. Somehow I felt that this chapter could have been easily clubbed with the previous chapter as there is really nothing new here.
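Both of these early chapters build on the fixed-size Array ADT from Chapter 2. A minimal sketch of how such an Array can be built with ctypes (class and method names are my own, not the book's exact code):

```python
import ctypes

class Array:
    """Fixed-size, one-dimensional array built on a raw ctypes block."""

    def __init__(self, size):
        assert size > 0, "Array size must be > 0"
        self._size = size
        # Allocate a raw block of `size` Python-object slots.
        PyArrayType = ctypes.py_object * size
        self._elements = PyArrayType()
        self.clear(None)                    # slots start out uninitialized

    def __len__(self):
        return self._size

    def __getitem__(self, index):
        assert 0 <= index < self._size, "index out of range"
        return self._elements[index]

    def __setitem__(self, index, value):
        assert 0 <= index < self._size, "index out of range"
        self._elements[index] = value

    def clear(self, value):
        """Set every slot to the given value."""
        for i in range(self._size):
            self._elements[i] = value
```

Unlike a list, this structure never grows or over-allocates: it is exactly `size` slots for its whole lifetime.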

Chapter 4 : Algorithmic Analysis

The chapter presents the Big-O efficiencies of dict and list operations; the tables cite the worst-case time complexity of each operation. One can also compute the amortized cost. Amortized analysis is the process of computing the time complexity of a sequence of operations by averaging the cost over the entire sequence; amortized cost is not the same as average-case time. For the append operation on a list, the amortized cost is O(1). The chapter ends with an example of a sparse matrix implementation.

Chapter 6 : Searching and Sorting

Most ADTs need a search function. This chapter introduces binary search, which improves on the linear search used in the previous chapters by turning an O(n) operation into an O(log n) one. Another way to improve the search operation is to keep the embedded data sorted. The chapter introduces the bubble sort, selection sort and insertion sort algorithms. These sorting algorithms are then used in the ADTs explained in the previous chapters to store data in a sorted manner. The set ADT implemented in Chapter 3 is enhanced using binary search and a sorted storage scheme, which improves the efficiency of a few of its operations. If you want to implement a bag ADT, you can also use a linked list for the data storage. The main advantage of a linked list is O(1) insertion at the head without over-allocating storage; also, if the data is very dynamic in nature, a linked list avoids the O(n) shifting that list insertions and deletions require. The chapter has implementations of a singly linked list and a doubly linked list, including a linked-list-based polynomial ADT that mimics all the operations on polynomials.
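The binary search the chapter describes can be sketched as follows (the standard algorithm, not the book's exact code); the halving of the search range on every iteration is what turns O(n) into O(log n).

```python
def binary_search(seq, target):
    """Return the index of target in the sorted sequence seq, or -1."""
    low, high = 0, len(seq) - 1
    while low <= high:
        mid = (low + high) // 2
        if seq[mid] == target:
            return mid
        elif seq[mid] < target:
            low = mid + 1   # target can only be in the right half
        else:
            high = mid - 1  # target can only be in the left half
    return -1
```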

Chapter 7 : Stacks

This chapter provides a list-based implementation and a linked-list-based implementation of the stack data structure. The singly-linked-list-based implementation guarantees O(1) for all the operations (the list-based version is amortized O(1) for push). Stacks can be used to evaluate postfix expressions, and the chapter shows the basic algorithm for doing so. The chapter ends with a nice maze puzzle that emphasizes the usefulness of the stack data structure.
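The postfix-evaluation algorithm can be sketched as follows (a standard stack-based version, not the book's exact code): operands are pushed, and an operator pops its two operands and pushes the result.

```python
def eval_postfix(tokens):
    """Evaluate a postfix (RPN) expression given as a list of tokens."""
    stack = []
    ops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
        "/": lambda a, b: a / b,
    }
    for tok in tokens:
        if tok in ops:
            b = stack.pop()      # right operand is on top of the stack
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    assert len(stack) == 1, "malformed expression"
    return stack[0]
```

For example, `eval_postfix("3 4 + 2 *".split())` returns `14.0`, i.e. (3 + 4) * 2.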

Chapter 8 : Queues

One can implement a Queue ADT using the list data structure; however, either enqueue or dequeue then ends up O(n), depending on which end of the list is used. Because of this, alternate data structures are tried out: a circular array and a linked list. In the case of the circular array, even though the operations are O(1), the size of the queue is fixed. The linked list overcomes the size limitation and still provides O(1) efficiency for enqueue and dequeue. There is also an implementation of a bounded priority queue using the linked-list-based queue. The disadvantage of the bounded priority queue is, obviously, that the number of priority levels is fixed; in cases where such a restriction is natural, it can be used to code the problem efficiently. Discrete event simulation can be done using the Queue ADT. Skeleton code is given toward the end of the chapter, and the reader is encouraged to fill in the details in order to simulate a queue. Obviously the implementation is pretty naïve, as it is meant for illustration purposes only; if you want to study, let's say, an M/M/k priority queueing system, the code needs to be substantially improved.
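A minimal sketch of the linked-list-based queue with O(1) enqueue and dequeue, via head and tail pointers (names are my own, not the book's exact API):

```python
class _Node:
    def __init__(self, value):
        self.value = value
        self.next = None

class LinkedQueue:
    """Queue with O(1) enqueue and dequeue using head/tail pointers."""

    def __init__(self):
        self._head = None   # dequeue end
        self._tail = None   # enqueue end
        self._size = 0

    def __len__(self):
        return self._size

    def enqueue(self, value):
        node = _Node(value)
        if self._tail is None:      # queue was empty
            self._head = node
        else:
            self._tail.next = node
        self._tail = node
        self._size += 1

    def dequeue(self):
        assert self._head is not None, "dequeue from empty queue"
        node = self._head
        self._head = node.next
        if self._head is None:      # queue became empty
            self._tail = None
        self._size -= 1
        return node.value
```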

Chapter 10 : Recursion

This chapter explains recursion using various examples:

• Printing numbers in reverse
• Computing factorials
• Fibonacci sequence
• Binary Search
• Towers of Hanoi
• Tic-Tac-Toe
• Eight Queen problem

The above examples give the reader a good enough idea about the power of recursion. Personally I liked the Eight Queens problem. I remember coding it some years ago, rather inefficiently, in R. But coding the same in Python is so much more convenient, thanks to the collection objects in Python and the ADTs that can be built on top of them. I think the biggest learning from this book is that a good programmer has a set of custom-built data structures ready in his/her toolbox to solve various problems.
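The Eight Queens problem mentioned above can be sketched as a recursive backtracking count (a standard formulation, not the book's code): place one queen per row, pruning columns and diagonals already under attack.

```python
def count_queens(n=8):
    """Count solutions to the n-queens problem by backtracking."""
    solutions = 0

    def place(row, cols, diag1, diag2):
        nonlocal solutions
        if row == n:                 # all n queens placed
            solutions += 1
            return
        for col in range(n):
            if col in cols or (row - col) in diag1 or (row + col) in diag2:
                continue             # square attacked by an earlier queen
            place(row + 1,
                  cols | {col},
                  diag1 | {row - col},
                  diag2 | {row + col})

    place(0, frozenset(), frozenset(), frozenset())
    return solutions
```

For n = 8 this counts the well-known 92 solutions.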

Chapter 11 : Hash Tables

In a comparison-based search algorithm, one cannot do better than O(log n). One way to improve the efficiency is to use hash tables. By creating an explicit map between a key and the index at which its value is stored, the efficiency of search can be vastly improved. However, not everything is Nirvana here: the biggest issue one must handle is that of “collisions”. The chapter gives a flavor of the various probing schemes that can be used to reduce clustering effects: the modified linear probe, quadratic probing and double hashing. There is also a mention of the rehashing technique. The efficiency of a hash table depends on the hash function, the size of the table and the type of probe used to resolve collisions. Pseudocode for a hash map ADT is provided; I have noticed a lot of bugs in the code provided. The chapter ends with applying the hash map ADT to displaying a histogram. Again, the reader should not mistake this for a real-life implementation of a histogram. A histogram implementation in statistical packages involves pretty sophisticated math, with the FFT being a popular way of implementing the algorithm. One might be amazed to know that there are quite a few concepts one needs to master to REALLY understand histograms, some of them being kernel density estimation, the cross-validation principle, the Fast Fourier Transform, etc. Sometimes I think there is no limit to the depths that you can plunge to understand anything. But the deeper you go, the better you understand things and the more confident you are in solving real-life problems. However, there is always the risk of getting lost, and I guess the art lies in knowing when to stop.
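The linear-probe idea can be sketched as follows. This is a deliberately minimal version of my own (no deletion/tombstones, no rehashing); quadratic probing and double hashing differ only in how the next slot is chosen.

```python
class ProbeHashMap:
    """Open-addressing hash map with linear probing (a teaching sketch)."""

    _EMPTY = object()                        # sentinel for a free slot

    def __init__(self, capacity=17):
        self._slots = [self._EMPTY] * capacity

    def _probe(self, key):
        """Walk slots from hash(key) until we find the key or a free slot."""
        start = hash(key) % len(self._slots)
        for i in range(len(self._slots)):
            idx = (start + i) % len(self._slots)   # linear probe: step of 1
            slot = self._slots[idx]
            if slot is self._EMPTY or slot[0] == key:
                return idx
        raise RuntimeError("hash table is full")

    def put(self, key, value):
        self._slots[self._probe(key)] = (key, value)

    def get(self, key):
        slot = self._slots[self._probe(key)]
        if slot is self._EMPTY:
            raise KeyError(key)
        return slot[1]
```

Primary clustering shows up here directly: colliding keys pile into adjacent slots, which is exactly what quadratic probing and double hashing try to break up.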

Most sorting algorithms can be divided into two categories: comparison sorts and distribution sorts. In a comparison sort, the data items are arranged in either ascending or descending order by performing pairwise logical comparisons between sort keys; the comparisons are typically based on numerical or lexicographical order. A distribution sort, on the other hand, distributes or divides the sort keys into intermediate groups or collections based on the individual key values. This chapter begins with merge sort, a recursive sort procedure with O(n log n) efficiency. It is followed by a discussion of quicksort, which has an average efficiency of O(n log n) but is O(n^2) in the worst case. What algorithm is used in Python's list data structure? As per the book, quicksort was used in earlier versions, but the newer versions combine the insertion and merge sort algorithms (the hybrid known as Timsort). By this point in the book, five sorting algorithms have been covered: bubble, selection, insertion, merge and quicksort. The first three have a worst-case time of O(n^2), while merge sort has a worst-case time of O(n log n). Quicksort, the algorithm more commonly used in language libraries, is O(n^2) in the worst case but has an expected or average time of O(n log n). In a comparison-based sort, one can do no better than O(n log n).
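The merge sort the chapter opens with can be sketched as (a standard top-down version, not the book's exact code): split in half, sort each half recursively, merge the results.

```python
def merge_sort(values):
    """Return a new sorted list using recursive merge sort, O(n log n)."""
    if len(values) <= 1:
        return list(values)
    mid = len(values) // 2
    left = merge_sort(values[:mid])
    right = merge_sort(values[mid:])

    # Merge the two sorted halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:           # <= keeps the sort stable
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])               # one of these two is empty
    merged.extend(right[j:])
    return merged
```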

The chapter introduces the radix sort algorithm, which drastically cuts down the sorting time for certain kinds of inputs. It is a distribution sort in which the values are distributed into various bins and regrouped from the bins in such a way that, after some iterations, the resulting list is sorted. There are also insertion sort and merge sort algorithms adapted to data stored in linked lists; the linked-list insertion sort is O(n^2) in the worst case, whereas merge sort remains O(n log n).
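A least-significant-digit radix sort for non-negative integers, as a sketch of the distribution-sort idea (my own minimal version): bin the values by one digit at a time, least significant first, and regroup the bins in order.

```python
def radix_sort(numbers, base=10):
    """Sort non-negative integers with LSD radix sort."""
    if not numbers:
        return []
    values = list(numbers)
    max_value = max(values)
    place = 1                                     # 1s, then 10s, then 100s...
    while place <= max_value:
        bins = [[] for _ in range(base)]
        for v in values:
            bins[(v // place) % base].append(v)   # digit at the current place
        values = [v for b in bins for v in b]     # regroup, bins in order
        place *= base
    return values
```

Because each pass is stable, earlier (less significant) digit orderings survive later passes, which is what makes the whole procedure correct.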

Chapter 13 : Binary Trees

Tree structure is introduced by defining the following terms

• Nodes
• Edges
• Root Node
• Path
• Parent Node
• Child Node
• Interior Nodes
• Leaf Node
• Subtree
• Depth of a Node
• Height of a tree
• Width of a tree

Binary trees are defined and explained via rich visuals. Four kinds of tree traversals are mentioned, namely preorder, inorder, postorder and breadth-first traversal. Heaps are introduced in this chapter along with the min-heap and max-heap properties, and Python code for a max-heap is provided; like many other pieces of code in the book, it has a few bugs. A bounded priority queue can also be implemented via heaps, and the book compares the efficiencies of the various implementations. Heap sort, the natural sorting algorithm in this context, is a trivial extension of heaps. The chapter ends with a modified version of heap sort that sorts in place.
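A minimal sketch of an array-based max-heap of the kind the chapter implements (names are my own): the parent of the node at index i sits at (i - 1) // 2, and every parent is >= its children.

```python
class MaxHeap:
    """Array-based max-heap with sift-up insertion and sift-down removal."""

    def __init__(self):
        self._items = []

    def __len__(self):
        return len(self._items)

    def add(self, value):
        self._items.append(value)
        i = len(self._items) - 1
        while i > 0 and self._items[i] > self._items[(i - 1) // 2]:
            parent = (i - 1) // 2          # sift the new value up
            self._items[i], self._items[parent] = \
                self._items[parent], self._items[i]
            i = parent

    def extract_max(self):
        assert self._items, "heap is empty"
        top = self._items[0]
        last = self._items.pop()
        if self._items:
            self._items[0] = last          # move last item to the root...
            self._sift_down(0)             # ...and sift it down
        return top

    def _sift_down(self, i):
        n = len(self._items)
        while True:
            left, right, largest = 2 * i + 1, 2 * i + 2, i
            if left < n and self._items[left] > self._items[largest]:
                largest = left
            if right < n and self._items[right] > self._items[largest]:
                largest = right
            if largest == i:
                return
            self._items[i], self._items[largest] = \
                self._items[largest], self._items[i]
            i = largest
```

Repeatedly calling `extract_max` yields the items in descending order, which is exactly the idea behind heap sort.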

Data Structures + Algorithms = Programs. This book talks more about data structures and less about algorithms. The book can be useful to a Python programmer who wants to understand the Big-O efficiencies of the various built-in collection objects in Python. In the process, one will inevitably realize there is a need to build abstract data types to implement various algorithms.

The Laplace transform is a very useful technique, as it can be applied to a ton of problems in various contexts. Here's a random sample of the places where its use makes computations elegant:

• Solving Difference equations in various Queueing configurations.
• Renewal theory – Solving Key Renewal equation.
• Solving specific type of ODEs.
• Solving specific type of PDEs.
• Computing convolution of two functions.
• Moment generating function computations.

In most applied math books, all one sees is a table, tucked away in the appendix, that lists functions and their Laplace transforms. Books such as this one help one see the real math behind the Laplace transform. The chapters are sequenced in increasing order of difficulty. The book starts off with the basic definition of the transform and discusses its convergence aspects. The following points are covered in the first chapter:

• The Laplace transform exists for functions that have two properties: piecewise continuity and exponential order. However, these properties do not characterize the entire class L of functions that have a Laplace transform, i.e. there are functions that fail one of these properties but still have a Laplace transform.
• Linearity property of Laplace transforms.
• Necessary conditions for swapping the summation operator and Laplace operator.
• Translation theorems.
• Differentiation and Integration of Laplace transforms.
• Using Partial fractions and applying Inverse Laplace transform to retrieve the original function.
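For reference, the definition the book starts from, together with the linearity property, can be written as (standard notation, my own rendering):

```latex
\mathcal{L}\{f\}(s) \;=\; F(s) \;=\; \int_0^{\infty} e^{-st} f(t)\,dt,
\qquad
\mathcal{L}\{\alpha f + \beta g\} \;=\; \alpha\,\mathcal{L}\{f\} + \beta\,\mathcal{L}\{g\}.
```

A first worked entry of the usual table: for $f(t) = e^{at}$ the integral converges for $s > a$ and gives $\mathcal{L}\{e^{at}\}(s) = 1/(s-a)$.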

The second chapter deals with functions that are a bit more difficult than the plain vanilla functions mentioned in the first chapter. Laplace transform for the following functions are derived :

• Functions of the form t^ν, where ν is a real number. The Gamma function is introduced, and a beautiful connection between the Gamma and Beta functions is explored.
• logarithmic function
• Periodic functions
• Derivative functions
• Dirac delta function
• Bessel function
• Error function

To derive the Laplace transforms of the above functions, the chapter introduces more subtle ways of dealing with convergence. For example, the Laplace transform of the Dirac delta function is defined in terms of the Riemann-Stieltjes integral, as the ordinary Riemann integral does not make sense here. The applications covered in the chapter include:

• Solving an ODE
• Solving Systems of Differential equations
• Solving Integro differential equations
• Solving Differential equations with polynomial coefficients
• Determining the asymptotic values of a function
• Convolution
• Solving Difference equations.

Along the way, I have learnt a nifty way to prove Euler's Beta-Gamma identity, by taking the Laplace transform of the Beta-function integral and using the convolution theorem. The first two chapters (114 pages) of the book give enough ammunition to attack problems in various contexts.
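The identity in question is $B(x,y) = \Gamma(x)\Gamma(y)/\Gamma(x+y)$. Using $\mathcal{L}\{t^{p-1}\}(s) = \Gamma(p)/s^{p}$ and the convolution theorem, the proof runs roughly as follows (my own rendering of the standard argument): with $f(t) = t^{x-1}$ and $g(t) = t^{y-1}$,

```latex
(f \ast g)(t) \;=\; \int_0^t \tau^{x-1} (t-\tau)^{y-1}\, d\tau
\;=\; t^{\,x+y-1} B(x,y),
\qquad\text{(substitute } \tau = tu\text{)}

\mathcal{L}\{f \ast g\}(s)
\;=\; \mathcal{L}\{t^{x-1}\}\,\mathcal{L}\{t^{y-1}\}
\;=\; \frac{\Gamma(x)\,\Gamma(y)}{s^{x+y}}
\;=\; B(x,y)\,\frac{\Gamma(x+y)}{s^{x+y}}
\;\Longrightarrow\;
B(x,y) \;=\; \frac{\Gamma(x)\,\Gamma(y)}{\Gamma(x+y)}.
```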

The difficulty level rises steeply from the third chapter onwards, as the author takes the reader into complex algebra. The third chapter contains all the relevant principles of complex analysis that are used in inverting a Laplace transform. I found it hard to follow, as I had long forgotten complex analysis. Instead of going over it at a 10,000 ft. view, I chose to go over the book on Complex Analysis by Howie. Having spent some time understanding the principles behind complex analysis, I found this chapter a breeze. The chapter starts off by introducing complex numbers and techniques to evaluate limits, derivatives and integrals of complex functions. The techniques are very different from what one sees for real-valued functions. Some of the key techniques to evaluate integrals of complex functions are:

• Cauchy integral formula
• Cauchy residue theorem
• Laurent series
• Transform the integral to an integration around a contour.

In a sense, all of the third chapter’s content is geared towards giving enough background to work with inverse Laplace transforms.

The fourth chapter is on the “Complex Inversion Formula”, a formula that uses contour integration to compute the inverse Laplace transform. For a function with finitely many poles, an application of the Cauchy residue theorem gives the inverse Laplace transform; it is a straightforward computation once you are comfortable with complex analysis. However, for a function with infinitely many poles, the problem is vastly more difficult. This chapter made me feel that inverting a Laplace transform is more an art than a science. Why do I feel that way? Well, you have to choose the right kind of contours to encompass all the poles, prove that the function is bounded in a certain way, and then use the Cauchy residue theorem. The residue theorem gives the right answer ONLY when one has ensured that all the prerequisites are taken care of (which I think is very tricky). After slogging through 150 pages of the book, I am kind of disappointed, as I was expecting a magic technique that could be used to solve any inverse Laplace transform. Unfortunately, no such method exists. In any case, I will console myself that at least I can use contour integration to compute the inverse Laplace transform of simple functions. I think I should start looking around for a numerical procedure for inversion. The last chapter deals with using Laplace transforms to solve PDEs, most of them standard PDEs. Looking at the basic formula for the Laplace transform, one might mistake it for a trivial operator; the real math lies in cracking the inverse Laplace transform of a general function. This book gives a rigorous treatment of the Laplace operator and is probably one of the few books out there that give this kind of in-depth treatment without overwhelming the reader.

Complex algebra crops up in a ton of places in applied math. I came across an application of it in the context of inverse Laplace transforms and could not understand some aspects. I looked around a bit and stumbled on this book, which seemed to explain the principles of complex analysis in an elegant way. Let me summarize some of the main points in the book.

Chapter 1: What do I need to know?

The chapter provides a quick recap of all the prerequisites needed for working through the book. All the elementary results pertaining to set theory and calculus are provided, for formality's sake.

Chapter 2: Complex Numbers

Why does one need complex numbers? In one sense, the set of complex numbers is the natural extension of the sets N, Z, Q (the sets of natural numbers, integers and rational numbers). Denoted by C, this set encompasses all the roots of any polynomial equation. The Fundamental Theorem of Algebra says there is no need to look beyond C, for this set contains all the roots of any polynomial equation with complex coefficients. The proof of this theorem is given in Chapter 7 of the book, after enough theorems have been stated and proved. This chapter is brief and lists all the basic properties of complex numbers that one comes across in a typical high-school math curriculum.

Chapter 3: Prelude to Complex Analysis

All the important concepts developed on the real line depend on three properties of R: 1) it is a field, 2) there is a notion of distance between two points, and 3) there are no gaps (the Least Upper Bound Axiom). C has no natural ordering, so the concept of order is tweaked: one uses the absolute values of complex numbers to compare them with one another. As in R, the completeness property holds: every Cauchy sequence in C is convergent.

To create the “interval” equivalent in C, a few terms are defined such as

• Neighborhood of point c in C
• Open subset in C / open disc
• Closed subset of C / closed disc
• Boundary of a subset
• Punctured disc

Functions in the complex plane need to be visualized differently from those on the real line. One needs to draw two planes, one the z plane and the second f(z) plane, so as to see the effects of transformation. The concept of limit for a function at any point on C, is defined in terms of punctured disc at that point.

Chapter 4: Differentiation

The notion of derivative in the complex plane comes with an additional requirement: the rate of change of the function has to be the same in all possible directions. This requirement is formalized by the Cauchy-Riemann equations, which arise from demanding that the rate of change of the function at a point c be the same in the x and y directions. So, what about the rate of change in the other directions? Well, the Cauchy-Riemann conditions serve as a necessary and sufficient condition only for those functions whose partial derivatives exist and are continuous throughout an open disc.
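Writing $f(x+iy) = u(x,y) + i\,v(x,y)$, the Cauchy-Riemann equations referred to above are:

```latex
\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y},
\qquad
\frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x},
```

and when they hold (with continuous partials), the derivative is $f'(z) = u_x + i\,v_x$.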

The chapter subsequently introduces the terms holomorphic and entire function; the former denotes differentiability on a subset of C and the latter denotes differentiability on the whole of C. One of the most important theorems of real analysis is the Mean Value Theorem. One cannot directly translate the mean value theorem to the complex plane; it has to be tweaked a bit, and this tweak goes by the name of Goursat's lemma.

A brief introduction to power series is given, and one finds that most of the ideas from the real line can be easily extended to the complex plane. All the ideas, such as the radius of convergence and the procedures to compute it, are analogous in C to the ones on R. The radius of convergence plays a key role for higher-order derivatives, for they are guaranteed to exist only within the radius of convergence. The exponential function, sin, cos, cosh and sinh have infinity as their radius of convergence. This means that they are infinitely differentiable everywhere on C; hence these functions fall in the entire-function category.

The logarithm of a complex number is vastly different from the logarithm of a real number: the log function on C is a multi-valued function. To pin down and work with a single value, one computes the principal logarithm. Basic identities involving logarithms fail unless one takes into consideration the multiple-valued nature of the logarithmic function; one needs to keep in mind that expressions such as (1+i)^i are multi-valued and be careful about reusing the identities from R. The formula for the differentiation of the real function log(x) extends easily to the complex plane, as all the values of the multifunction log on C differ by a constant.
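A sketch of the definitions involved (standard notation, my own rendering): for $z \neq 0$,

```latex
\log z = \ln|z| + i\,(\operatorname{Arg} z + 2\pi k),\quad k \in \mathbb{Z},
\qquad
\operatorname{Log} z = \ln|z| + i\,\operatorname{Arg} z,
\quad \operatorname{Arg} z \in (-\pi, \pi].
```

All the values of the multifunction $\log z$ differ by integer multiples of $2\pi i$, which is why $\frac{d}{dz}\operatorname{Log} z = 1/z$ carries over to every branch.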

The chapter introduces branch points and branch cuts. A branch of a multiple-valued function f is any single-valued function F that is analytic in some domain, such that at each point z in the domain the value F(z) is one of the values of f(z). A branch cut is a line or curve that is removed from the complex plane in order to define a branch of a multiple-valued function, and a branch point is a point that is on every branch cut for a particular function. The two most common multifunctions used in the book are log z and z^(1/n), whose branch points are 0 and infinity. The point of defining branch points is that, on any contour that does not go through a branch point, the function changes continuously and returns to its original value.

There is a section on singularities that deals with explaining the terms such as pole, order of the pole, meromorphic in a subset etc. These terms are useful for describing functions that are differentiable in a subset of C, at all points excepting the poles.

Chapter 5 : Complex Integration

The chapter starts off by proving the Heine-Borel theorem, which says that for a closed bounded subset S of C, every open covering of S contains a finite subcovering of S. This theorem is then used to prove the existence of the supremum and infimum of the modulus of a complex function whose domain is a closed bounded set. In real analysis, integration means computing an integral over an interval or a set; in complex analysis, integration is computed along a curve or path. To represent a path, a parametric representation is much better than the standard representation in x and y coordinates. Why? If a curve crosses itself or becomes vertical, the standard approach has problems. Complex integration is done over a curve, and hence the right parameterization of the curve can make the computations quick and easy. The chapter introduces terms to describe the various curves along which the integration can be performed:

• Simple curve : One that doesn’t cross itself
• Closed curve : end points are the same
• Simple closed curve : One that doesn’t cross itself and whose ends points are the same
• Rectifiable curve: A curve whose length can be defined in terms of supremum. For any rectifiable curve, one has a convenient expression for computing the length.
• Smooth curve : The parameterized function has continuous derivatives in the domain
• Contour : Piecewise smooth, simple and closed
• Interior of a contour
• Exterior of a contour
• Convex contour: All the line segments joining two points in the interior of the contour lie completely in the interior of the contour.

After introducing the terminology and types of curves around which a complex function might be integrated, the chapter takes a detour and talks about various bounds of a complex integral. Instead of evaluating explicitly, there are certain inequalities that are stated and proved. The chapter ends with a discussion of uniform convergence that is a key idea behind swapping differential and summation operators on an infinite series.

Chapter 6: Cauchy’s Theorem

An entire chapter on this theorem obviously means that it is one of the most important theorems in complex analysis. The theorem states that if one integrates a holomorphic function around a contour, the resulting value is 0. The proof is easy to follow for contours that are triangles or polygons; for a generic contour, the proof requires a greater effort from the reader. The other highlight of this chapter is the deformation theorem. It states that the integral of a function between two end points is independent of the smooth curve chosen between those end points, and that if a contour A lies inside a contour B, the integral around both contours gives the same value (provided the function is holomorphic in the region between them).

Chapter 7: Some consequences of Cauchy’s Theorem

This chapter reveals a fundamental difference between complex analysis and real analysis. For a differentiable real function f, we can make no deduction at all about the values of the function in (a, b) from its values at a and at b. In complex analysis, it is possible to compute the value of a holomorphic function inside a contour from its values on the contour. The theorem that makes this computation possible is Cauchy's Integral Formula, which expresses the value of a function at any point a in the interior of the contour as a contour integral; one can also derive an analogous expression for the nth-order derivative of the function at a point. The chapter has a section that proves the fundamental theorem of algebra by using Liouville's theorem, which states that any bounded entire function is a constant. The chapter ends with a discussion of the Taylor series expansion of a holomorphic function; the Taylor series of the function is unique (once one chooses the center around which the expansion is desired).
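The formulas in question are the standard ones: for a contour $\gamma$ with the point $a$ in its interior,

```latex
f(a) = \frac{1}{2\pi i}\oint_{\gamma} \frac{f(z)}{z-a}\,dz,
\qquad
f^{(n)}(a) = \frac{n!}{2\pi i}\oint_{\gamma} \frac{f(z)}{(z-a)^{n+1}}\,dz.
```

The second formula is what makes a holomorphic function automatically infinitely differentiable, something with no analogue on the real line.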

Chapter 8: Laurent Series and the Residue Theorem

If a function has a singularity at a point c, then it cannot have a Taylor series centered on c. Instead it has a Laurent series, a generalized version of a Taylor series in which there are negative as well as positive powers of (z - c). The connection between Laurent series and contour integrals exists via residues: once the function has been expanded as a Laurent series, one reads off the coefficient of 1/(z - c) to compute the residue and hence the relevant integral. The concept of singularities can also be understood better from the Laurent-series perspective: if the series has infinitely many negative-power terms, the singularity is called an essential singularity. Given this background, the chapter states the residue theorem, whereby an integral around a contour is evaluated using the residues at the various poles of the function being integrated. A few tricks are shown for computing the residues of a function at its various poles. The chapter gives the reader enough practice in computing integrals of complex functions around contours whose interiors contain the poles of the function.
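In symbols, the residue theorem reads (standard form): for f holomorphic inside and on a contour $\gamma$ except at poles $c_1, \dots, c_m$ in its interior,

```latex
\oint_{\gamma} f(z)\,dz \;=\; 2\pi i \sum_{k=1}^{m} \operatorname{Res}(f, c_k),
\qquad
\operatorname{Res}(f, c) = a_{-1}
\;\text{ in }\;
f(z) = \sum_{n=-\infty}^{\infty} a_n (z-c)^n .
```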

Chapter 9: Application of Contour Integration

This chapter brings together the ideas and principles of all the previous chapters in solving various contour-integration problems. The first application is solving real integrals that are reasonably tedious to evaluate directly: in such cases one can transform the integral into an integral over a semicircular contour and use the Cauchy residue theorem to evaluate it elegantly. The second application is rewriting an integral around a unit circle, a familiar trick that one usually comes across in elementary calculus. The third application is the use of Jordan's lemma to expand the range of real integrals that can be evaluated using the Cauchy residue theorem. The fourth application involves choosing special contours to make integral computations easier. The chapter ends with the application of the residue theorem to the summation of infinite series.
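A standard example of the first application: for $f(z) = 1/(1+z^2)$, integrating over a semicircle of radius $R$ in the upper half-plane encloses the single pole at $z = i$, so

```latex
\int_{-\infty}^{\infty} \frac{dx}{1+x^{2}}
\;=\; 2\pi i \,\operatorname{Res}\!\left(\frac{1}{(z-i)(z+i)},\, z = i\right)
\;=\; 2\pi i \cdot \frac{1}{2i} \;=\; \pi,
```

since the contribution of the circular arc is $O(1/R)$ and vanishes as $R \to \infty$.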

The last three chapters give an introduction to some advanced topics in complex analysis. I have skipped them, but a serious math buff might want to go over those chapters, as they give an exposure to open problems yet to be cracked in this field.

Complex Analysis, contrary to its name, makes computations easier in a variety of applied math problems. This book is structured in a way that it highlights all the essential theorems of the subject and interlaces them with abundant examples and illustrations.