Monday 15 December 2008

The British Post Office

At the end of last week, many news bulletins in the UK carried a story that British postal delivery workers ("postmen" or "postwomen") were unhappy with a new computer system which had changed their delivery routes. According to the stories, some postpersons were having to walk at 4 mph for three and a half hours. (Example story.)

The stories were short on detail about the "computer system", except that it holds details of 27 million addresses and comes from Canada under the name Pegasus Europe. It seems to have been in use for several years, but evidently there have been recent changes to the system which have led to the unhappiness.

Reading the story, I could see at once that behind Pegasus lay a mathematical model of a familiar kind -- optimal (or efficient) route planning for vehicles (or, here, postpersons) subject to constraints (loads, time). Someone, somewhere had been using operational research to help the postal service. And as an operational researcher, I ought to feel proud.
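For readers unfamiliar with this class of model, here is a minimal, purely illustrative sketch of the idea -- a greedy nearest-neighbour heuristic for one delivery round, checked against a time constraint. The addresses, coordinates and limits are my own invented assumptions; the only figures borrowed from the news story are the 4 mph walking speed and the 3.5 hour walk. This is not how Pegasus itself works.

```python
import math

# Invented delivery points (x, y) in miles; not real Royal Mail data.
addresses = {
    "depot": (0.0, 0.0),
    "A": (0.3, 0.4),
    "B": (0.9, 0.1),
    "C": (0.5, 0.8),
    "D": (1.2, 0.6),
}

WALK_SPEED_MPH = 4.0   # assumed walking speed (the figure quoted in the news)
MAX_WALK_HOURS = 3.5   # assumed limit on continuous walking time

def distance(p, q):
    """Straight-line distance in miles (a real system would use street-network distances)."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def nearest_neighbour_route(points, start="depot"):
    """Greedy heuristic: repeatedly visit the closest unvisited address, then return to start."""
    unvisited = set(points) - {start}
    route = [start]
    while unvisited:
        current = points[route[-1]]
        nxt = min(unvisited, key=lambda name: distance(current, points[name]))
        route.append(nxt)
        unvisited.remove(nxt)
    route.append(start)
    return route

def route_length(route, points):
    return sum(distance(points[a], points[b]) for a, b in zip(route, route[1:]))

route = nearest_neighbour_route(addresses)
miles = route_length(route, addresses)
hours = miles / WALK_SPEED_MPH

print("Route:", " -> ".join(route))
print(f"Distance: {miles:.2f} miles, walking time: {hours:.2f} h")
if hours > MAX_WALK_HOURS:
    print("Constraint violated: the walk exceeds the allowed time.")
```

A production routing system would, of course, use real street distances, proper optimisation rather than a greedy heuristic, and many more constraints; the sketch is only meant to show where assumptions such as walking speed enter the model.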

But I don't. The news reports showed that something was lacking in the O.R. process. I can recognise several possibilities for what was wrong, but without further information I can't give a full diagnosis. Maybe someone from the post office can help. So, in no particular order, here are my observations and questions:

(1) Was this system developed and tested for the UK postal service, or was it an off-the-shelf system into which British data was inserted? If the latter, did anyone verify the assumptions that had been made by the designers?
(2) Given the size of the database, it seems likely that the system is largely, if not wholly, deterministic. If so, what sensitivity analysis was carried out (a toy illustration follows this list), and what changes were made to the data and the model as a result? If not, why not?
(3) How much communication was there before, during and after the development of the system, and with whom? Did the creators and users of the system discuss what they were doing with staff at all levels?
(4) Was the objective simply cost-based? Or were there other criteria?
(5) Did anyone concerned with the data collection, input, modelling and recommendations actually go out and test the results? Or, to put it bluntly, would the modeller trust the model's output if they were asked to do a postperson's job?
(6) Did the modellers collect feedback from the postal service about the implementation?
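To make question (2) concrete, here is a trivial one-way sensitivity check. The 14-mile figure is simply 4 mph multiplied by 3.5 hours, as quoted in the news story; the alternative speeds are my own assumptions, chosen only to show how fragile a deterministic plan can be to one input.

```python
# Toy sensitivity check on the walking-speed assumption for a fixed 14-mile round.
ROUTE_MILES = 14.0
MAX_WALK_HOURS = 3.5

for speed_mph in (3.0, 3.5, 4.0, 4.5):
    hours = ROUTE_MILES / speed_mph
    flag = "exceeds the 3.5 h limit" if hours > MAX_WALK_HOURS else "within the limit"
    print(f"assumed speed {speed_mph:.1f} mph -> {hours:.2f} h of walking ({flag})")
```

Even this crude check shows that a round which is feasible at 4 mph becomes infeasible if the real walking pace is only slightly slower -- exactly the kind of question a sensitivity analysis should have raised.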

For years, I have taught that an essential part of O.R. is a feedback loop: an O.R. project is not properly implemented until it has been accepted (and possibly welcomed) into the practice of the organisation. It seems, from the press reports, that this project lacked such a loop.

We are often reminded of places in companies where there is "OR inside". I feel that this is a story of "OR inside" which omitted the essential, friendly face of OR outside.

Where was Genchi Genbutsu -- Toyota's principle of "go and see for yourself"?
