I enjoyed reading Nicholas Carr's previous book, "The Shallows," which discusses at length the various ways in which the Internet affects our brains. In this book, the author takes "automation" as his subject and explores its human consequences. In this blog post, I will summarize the main points of its chapters.
Alert for Operators
This chapter opens with a terse notice issued by the Federal Aviation Administration in January 2013. The note, addressed to all US airlines and commercial air carriers, said,
The FAA encourages operators to promote manual flight operations when appropriate.
Airline cockpits in today's world are almost totally automated, reducing the pilot's role to that of a baby-sitter. Why does the FAA want operators to do things manually? What was lost in the process of completely automating flight operations? These are some of the teaser questions that motivate a reader to start reading the book.
The author narrates his experience of transitioning from a manual-transmission car to an automatic one. After the initial euphoria subsided, he realized that driving had become a boring task, as he had been reduced to a passenger in his own car.
Most of the knowledge we gather over a lifetime can be categorized as either tacit or explicit. It is easy to argue that explicit knowledge can be programmed and hence automated. Tacit knowledge, on the other hand, one might think is hard to automate. The author argues that this is no longer the case. Google's self-driving car, which can travel on busy streets without any human aid, is a case in point. The distinction between tacit and explicit knowledge is being blurred.
There are many disciplines where automation is changing the workflow. The flip side of "automation bringing in efficiency and cost savings" is that it is marginalizing the role of humans. Why do we allow automation in the first place? There seems to be an increasing trend of people wanting things to be automated. The author traces this behavior to a study by Mihaly Csikszentmihalyi and Judith LeFevre, who stumbled upon a strange pattern in a social science experiment involving hundreds of blue-collar and white-collar workers. They found that people were happier and felt more fulfilled by what they were doing while at work than during their leisure hours. In their free time, most of the workers tended to feel bored and anxious. And yet they did not like being at work: when they were on the job, they expressed a strong desire to be off it, and when they were off the job, the last thing they wanted was to go back. This is a paradox: people have more positive feelings at work than in leisure, yet say they "wish to be doing something else" when they are at work, not when they are at leisure. Psychologists have given this behavior a poetic name: "miswanting".
We are inclined to desire things we don’t like and to like things we don’t desire
This type of behavior, the author argues, is one of the reasons people welcome automation. By reducing the burden of their work, they think they can indulge in doing better things at work. On the contrary, when automation goes the whole hog, it removes complexity from jobs, diminishing the challenge they provide for the person at work. Automation is not intrinsically bad. The problem is that we don't know when to say "enough". The deck is stacked, economically and emotionally, in automation's favor. We overestimate automation's benefits, and that, coupled with our bias for leisure over work, makes it easy for companies to adopt automation rapidly.
The Robot at the Gate
This chapter traces the history of automation on the machine floor. If you think robots are still years away from finding a place on the factory floor, just watch this CBS 60 Minutes video:
Link : Are robots hurting job growth?
The author more or less summarizes the content of the above video in this chapter, adding some additional references. The basic theme running through the chapter is that automation is growing at a rapid pace and displacing people from their jobs. Job creation is not keeping pace with the rate at which jobs are being automated.
On Autopilot
This chapter is about the way automation has affected the airline industry. It traces the history of automation in the context of aviation and discusses various issues that have become part of present-day reality. The story of flight automation began on June 18, 1914, when Elmer Sperry and Lawrence Sperry (a father-son duo) demonstrated their "gyroscopic stabilizer apparatus," an instrument that could sense, with remarkable precision, a plane's orientation along its three axes of rotation. An aerial display of the instrument's capabilities took everyone's imagination by storm.
It took another twenty years for the US military to refine these ideas into a gyroscopic autopilot, with which the first commercial flight of three hours was completed without any human aid. The introduction of the gyroscopic autopilot set the stage for a momentous expansion of aviation's role in warfare and transport. Over the next forty years, several technological advances led to the launch of the A320, a 150-seat plane and the first commercial aircraft that was truly computerized. Dials and gauges gave way to glass screens, and the pilot's role changed forever. His cockpit was now a "glass cockpit" that displayed various aspects of the running plane. The computer was in charge of everything, and the human's role was significantly marginalized. The essence of the pilot's job consisted of sending digital inputs to the computers and monitoring their digital outputs, while the computers governed the plane's moving parts and chose its course. The computer even had the final say in times of extreme maneuvers. The commercial pilot lost the aura of romance and adventure.
What was the result of this automation? The first outcome of severing the tactile link between pilot and plane was that pilots' motor skills became rusty. Many studies have shown that automation has led to a gradual erosion of pilots' expertise: it has dulled their reflexes, diminished their attentiveness, degraded their situational awareness, and weakened their hand-flying skills. If you pause for a moment, it seems logical that increased automation would result in all these things. But then what is the role of the pilot? Should there be human pilots at all? If so, how should they be trained, given that the computer has completely taken over?
The Degeneration Effect
Retrieving a concept from memory changes the memory associated with that concept. This observation has led to many interesting study hacks, and the umbrella term used to capture it is the "generation effect". The act of actively retrieving information enhances the way that information is stored in our brains. Generating ideas before you encounter something new, or before you reread something, leads to better retention and comprehension. The author says the use of ever more automated tools actually reverses the "generation effect," and he terms this the "degeneration effect".
When we tackle a task with the aid of computers, we run the risk of two things:
Automation complacency: our tendency to trust the system so much that we drift away and lose attention.
Automation bias: our tendency to ignore warning signs when the system goes wrong. We somehow believe that the system is still working, and working well.
The author is of the opinion that these two aspects of automation will rob us of the deep learning of a task. Diminished intensity of thinking leads to poor encoding of information in our brains. I don't fully agree with the content of this chapter. Let's say there is a tool that generates a boxplot automatically. Does it rob the analyst of deep learning? Not really. The painful tasks of sorting the data, picking the first, second, and third quartiles, and drawing the outliers are automated by the visualization software. This leaves the data analyst with more time to actually understand the data. The same goes for an IDE. Some might think that an IDE dulls one's brain, but that argument is also a stretch. Yes, an IDE with code-completion features makes one type less, makes workspace management easy, and so on. There is indeed some learning in doing all these things manually, but for many, an IDE is simply a way to write code and get things done. For getting things done, does it matter whether I use a convenient IDE, something as involved as Emacs, or the command line? Does an IDE make one a dull boy? I don't think so. Look at the rise of IPython in the Python community or RStudio in the R community. By relieving the analyst of many of a programmer's chores, these tools give far more time to actually analyze data and build models. Somehow I find it difficult to buy the argument that automation goes against deep learning. Automating painful tasks is good and should always be welcome.
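To make the boxplot example concrete, here is a minimal Python sketch of the chores that visualization software automates away; the data values and the function name are my own illustration, not from the book:

```python
import statistics

def boxplot_stats(values):
    """Compute the five-number summary and outlier fences by hand --
    exactly the chores a plotting tool performs behind the scenes."""
    xs = sorted(values)                          # step 1: sort the data
    q1, q2, q3 = statistics.quantiles(xs, n=4)   # step 2: the three quartiles
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr      # step 3: Tukey's fences
    outliers = [x for x in xs if x < lo or x > hi]
    return {"min": xs[0], "q1": q1, "median": q2, "q3": q3,
            "max": xs[-1], "outliers": outliers}

# 100 sits far beyond the upper fence, so it is flagged as an outlier
print(boxplot_stats([1, 2, 3, 4, 5, 6, 7, 8, 9, 100]))
```

Doing this once by hand is instructive; doing it for every dataset is drudgery, which is why I am happy to let the software take over.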
Interlude, with Dancing Mice
The author discusses the Yerkes-Dodson law, formulated by the psychologists Robert Yerkes and John Dodson after a long series of experiments on mice. The experimental setup had mice choosing between two paths, one of which delivered an electric shock. When the shock in that path was barely noticeable, the mice barely learned to distinguish the two paths. The same was true with high-intensity shocks. Only when the shock level was moderate were the mice able to differentiate between the paths. Since the publication of this research in 1908, the law has been found to hold in many fields. The author uses it to explain the effects of automation. When automation reduces the workload too much, humans tend to drift away from the work and lose attention; this is like moving to the left of the Yerkes-Dodson curve, where under-stimulation reduces performance. Similarly, if automation increases the number of things to take care of (especially in times of crisis), humans tend to buckle under the load; this is like moving to the right of the curve. The ideal is somewhere in between: automation should reduce our burden just enough that we are neither under-stimulated nor over-stimulated.
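The inverted-U relationship between stimulation and performance can be sketched with a toy model. The Gaussian functional form and the parameter values below are my own illustrative assumptions, not anything from the book; the only point is that both ends of the curve score worse than the middle:

```python
import math

def performance(arousal, optimum=0.5, width=0.25):
    """Toy inverted-U model of the Yerkes-Dodson law: performance peaks
    at a moderate level of stimulation and falls off on either side.
    (The bell-shaped form is an illustrative assumption.)"""
    return math.exp(-((arousal - optimum) / width) ** 2)

# Under-stimulation (too much automation) and over-stimulation
# (crisis overload) both yield lower performance than a moderate workload.
for a in (0.1, 0.5, 0.9):
    print(f"arousal={a:.1f} -> performance={performance(a):.2f}")
```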
White-Collar Computer
The author then turns to specific domains and discusses the effects of automation in each. Medical records automation was heralded as an event that would change the face of the US healthcare industry. To date, there is no evidence that the decade-long, taxpayer-funded project has yielded a significant reduction in US healthcare costs. Most medical record systems operate as silos, and the only people likely to benefit, at least in the near future, are the IT personnel who will write the code for these disparate systems to talk to each other. The frequency of medical tests was supposed to have gone down after automation. However, empirical studies show that doctors with easily accessible medical histories recommend more tests than they did in the pre-automation era.
In this age of big data, machine learning and predictive models are pervading every domain, and it is very likely that algorithms will take over many tasks that were previously considered complex and required human personnel. In finance, traders and dealers are being replaced by computers. Execution algorithms have taken over the sell side on Wall Street and in many other countries, turning "sell-side traders" into "sell-side computer operators". Wall Street and many other financial firms love this, as they have to deal less and less with the human side of things. The automation is also visible on the buy side, where fund managers have tools to send orders directly to the exchange over the broker's infrastructure. Most quant shops and HFT players largely employ computer professionals who love automated systems. The machinist is turning into a machine operator. The author talks about the legal profession too, where pattern matching and statistical learning techniques are reducing jobs of several hundred man-hours to mere hours. Given the way technology is going, the title of the chapter is apt: we will increasingly see white-collar jobs being done by a computer.
World and Screen
Location aids have kept improving as technology evolved. Simple maps, trail markers, star maps, nautical charts, compasses, sextants, lighthouses, buoys, and highway signs have all made our lives easier by helping us navigate the external world. These have been "aids" for us in navigating spaces. One might think that GPS is just another step in this evolution. The author argues that GPS is a fundamentally different kind of aid; in fact, he desists from calling it an aid at all, since GPS gives you step-by-step instructions from origin to destination and you tend to follow them rather blindly. Unlike the analog aids of the past, there is no "figuring out" to be done. The whole process of going from point A to point B becomes mechanistic. OK, so what? Is it not fine to reach the destination and let the details of the travel be taken care of by some software? Because paper maps do not shepherd us from one place to another; they teach us how to think about space.
The author cites several research articles on memory and argues that the more we use GPS, the fewer cognitive maps we build, and hence our memory-building skills suffer. There are specific areas of the brain that grow when we put effort into wayfinding. The more automatic wayfinding becomes, the less chance there is for growth in "place cells" and "grid cells". Does it matter if we do not have enough of those cells?
In a 2013 article in Nature Neuroscience, Edvard Moser and his colleague György Buzsáki provided extensive experimental evidence that “the neuronal mechanisms that evolved to define the spatial relationship among landmarks can also serve to embody associations among objects, events and other types of factual information.” Out of such associations we weave the memories of our lives. It may well be that the brain’s navigational sense — its ancient, intricate way of plotting and recording movement through space — is the evolutionary font of all memory.
The author presents a scary scenario in which he compares our loss of spatial sense to the onset of Alzheimer's disease. If we keep using automated tools for wayfinding, will we lose our spatial sense completely and become vulnerable to Alzheimer's? The author cites a few experts who are of the view that, in the years to come, dementia will strike people earlier and earlier.
Another area where automation has started showing harmful effects is design and architecture. The author's target is CAD, the software that has indeed changed the way architects and engineers design buildings. In its initial versions, CAD was meant to translate a design into execution plans; the physical act of drawing was replaced by software clicks, and the whole task of designing changed. Architects used to spend many years in apprenticeship before they built deep knowledge of the various design elements. Over many versions, CAD has come to embody a great many features. Current versions enable a young architect to put in specifications, play with the software-generated design, and come up with all sorts of funky designs. The young architect might think he is developing an innovative design for a mall or a building, when in many cases he is just producing the standardized design the software is built to churn out. The divorce of physical action from the design process seems to have had a perverse effect on the design community as a whole, and the author cites many examples of building architecture becoming more standardized.
Automation for the People
The author talks about the need for a more humane approach to designing software. One can infer from the Yerkes-Dodson law that automation should neither under-stimulate nor over-stimulate the human operator. This means it might sometimes be good for the software to hand the reins back to the human in charge, so that his skills do not become rusty. This is easier said than done, as companies the world over do not want to compromise on efficiency and profits. Learning always entails some inefficiency, and hence firms might not be willing to invest time and effort in developing humane interfaces. The tension can be clearly seen in the way passenger aircraft are designed. Airbus takes a completely technology-oriented approach, where the pilots are meant to "baby-sit" the cockpit. Boeing takes a more human-centered approach, giving pilots pseudo-tactile feedback so that they do not run the risk of tuning out through automation complacency or automation bias. Michael Lewis's book Flash Boys narrates the story of IEX, an exchange trying to bring a humane approach to trading by slowing down execution times. The chapter ends with the author taking a not-so-rosy view of the future: even though human-computer interfaces need to become more like human-human interaction, it is unlikely that such interfaces will be built. The force of automation in many fields is marginalizing the human role and making it a passive one.
Your Inner Drone
In any activity where computers replace humans, there are moral aspects of the work that need to be taken care of. This would not be an issue if automation had been introduced merely to replace mundane tasks. But in today's world, computers perform the entire gamut of activities in fields as diverse as the military, design and architecture, commercial aviation, agriculture, finance, and health care.
The author talks about the use of LARs (Lethal Autonomous Robots) in war, and notes that a LAR takes decisions based entirely on past patterns and real-time data. A LAR is different from a drone: the latter is automatic only to a certain extent, and humans have a say in the location and timing of an attack. When a LAR sees a civilian, how does it weigh the decision of whether or not to strike? After all, the basic software is written by humans, so one would assume humans could write the logic for morality too. Typically, software of this nature follows a top-down approach, a bottom-up approach, or a hybrid; it is abundantly clear that programming LARs would require a hybrid approach. But will a hybrid approach work? To what extent can such issues be programmed at all? To what extent should LARs be allowed to take decisions? These are moral dilemmas, and with the ongoing pace of automation they will become more and more important to think about. There is a possibility that the increasing use of LARs will change the way we think about wars: since LARs decrease the number of human casualties, the threshold for launching a war against a rival nation may go down. "Change the tool, change the war" is a possible reality.
The Love That Lays the Swale in Rows
The last chapter of the book explores the human relationship with tools. Tools are extensions of human thought; we show our creativity in designing them and, more importantly, in using them. It is this nature that many misunderstand, for various reasons. The common thinking is that tools are slaves and we are the masters: the more work we hand over to our tools/slaves, the more time we get for the things we like. This line of thinking pushes us to automate all kinds of tasks, assuming it will free up our time. Once automation passes the peak of the Yerkes-Dodson curve, we soon realize that the computer is the master and we are the slave. We are reduced to passive observers, watchmen and spectators rather than players. So what? Isn't the outcome all that matters? The author argues that tools, and the way we create and use them, define us. If we actively use tools to learn something deeply, immerse ourselves with them in an activity that is productive as well as exhausting, and use them to fail as well as to learn, we get to know ourselves and the world better. Thinking of "tools as an extension of us" is a better frame of mind than "master and slave" thinking, and it will make us better designers and better users of automation tools.
The book discusses the consequences of automation in various fields: airline cockpits, medical record systems, factory workflows, trading in the financial markets, automobile maneuvering, and military warfare, among others. The effects of automation in these fields have fundamentally changed the way people work. From a firm's perspective, automation is a no-brainer, as it brings efficiency and reduces costs. From the perspective of the person whose job is being automated, however, the consequences are dreadful. If you think your job has a decent chance of being automated, you might want to read this book and ponder the various points the author discusses.
In the last few decades, enormous computational speed has become accessible to many. A modern-day desktop has enough memory and processing power to enable a data analyst to compute probabilities and perform statistical inference by writing computer programs. In this context, this book can serve as a starting point for anyone who wishes to explore the subject of computational probability. It contains 21 puzzles that can be solved via simulation.
Solving a puzzle has its own advantages. Give a dataset with one dependent variable and a set of predictors to a dozen people and ask them to fit a regression model, and I bet you will see at least a dozen models, each of which could be argued to be plausible. Puzzles are different: the constraints placed around the problem force you to find the ONE RIGHT solution. In doing so, you develop much more sophisticated thinking skills.
In the introductory chapter, the author provides a basic framework for computational probability by showing ways to simulate random experiments and compute probabilities. This chapter gives the reader all the ammunition required to solve the various puzzles in the book. The author provides detailed solutions, including the relevant MATLAB code, to all 21 puzzles.
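The simulate-and-count framework behind all the puzzles fits in a few lines. Here is a minimal Python sketch of the idea; the example experiment (the classic birthday problem) and all names are my own illustration, not from the book, which uses MATLAB:

```python
import random

def estimate_probability(event, trials=100_000, seed=42):
    """Generic Monte Carlo framework: run a random experiment many times
    and report the fraction of trials in which the event occurred."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    hits = sum(event(rng) for _ in range(trials))
    return hits / trials

def shared_birthday(rng, people=23):
    """One trial of the birthday problem: do any two of 23 people
    share a birthday? (Exact answer is about 0.5073.)"""
    days = [rng.randrange(365) for _ in range(people)]
    return len(set(days)) < len(days)

print(estimate_probability(shared_birthday))
```

With 100,000 trials the estimate lands close to the exact value, and the same scaffold works for problems where no closed-form answer is within reach.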
Some of my favorite puzzles from the book, enlightening as well as paradoxical, are:
The Gamow-Stern Elevator
The Pipe Smoker’s Discovery
A Toilet Paper Dilemma
How Long Is the Wait to Get the Potato Salad?
The Appeals Court Paradox
Here is the link to my document that fleshes out the details of all 21 puzzles in the book:
What’s in the above document?
I have written R code that aims to computationally solve each of the puzzles in the book. For each puzzle there are two subsections: the first spells out my attempt at solving the puzzle, and the second contains what I learned from reading the author's solution. The author's MATLAB code is so detailed that even someone with no exposure to MATLAB can follow the logic; in many cases, the code snippets in the book read like elaborate pseudocode. There are many good references for each puzzle, so interested readers can explore further. In most cases, the reader will realize that closed-form solutions are extremely tedious to derive, while simulation-based procedures make it easy to obtain solutions to many otherwise intractable problems.