One of the stories of the early days of operational research during the Second World War concerns a study of where warplanes were damaged when they returned to the U.K. Each plane that landed was examined, and the places where there were holes in the wings and fuselage were plotted on a schematic. Once sufficient data had been collected, it was clear that the holes were clustered in certain sections of the warplanes. The story goes that someone in the O.R. team pointed out that the important lessons were not about the location of the holes, but about which parts of the schematic had no holes. These were the places where no planes had survived to return, and they therefore indicated the vulnerable parts of the planes, the parts that needed extra protection.
I was reminded of this when I read a newspaper article about the American surgeon Atul Gawande. He is obsessed with failure in the medical services, and especially in surgery. Most operations in hospitals go successfully, but attention should be concentrated on those operations that go wrong. He asks the question: "Why?" Atul is especially concerned with surgery in the developing world, with the aim of saving lives. So he has written about failure: how it happens, how we learn from it, and how we can do better. And he is working with the World Health Organisation to develop tools to help surgeons.
The simplest tool he has popularised is a checklist that should be followed before every operation: "Is this the right patient? Is this the right limb?" It takes two minutes, but it saves lives and prevents complications. However, one item on the list is expensive: an oxygen monitor. Atul has identified this as the obstacle to implementing the checklist, has persuaded a company to make the monitors cheaply, and there is a charity, Lifebox, which helps provide them.
So how can we learn from this in O.R.? Gene Woolsey has written about lessons that he learnt from some of his mistakes, but generally we crow about our successes and say little about our failures. Maybe practitioners ought to examine their failures more closely? I remember a couple of my projects which came to nothing because I took the textbook attitude that the initial description indicated there was very little relevant data, and I said so. The clients reached the conclusion that the project was doomed from the outset. Maybe academics can also learn from mistakes; I advised my research students to document the "dead ends" in their research programmes.