Thoughts on ways to improve the management of professional services firms

Saturday, February 10, 2007

Lessons for the Professions from Evidence Based Medicine 3 - Problems with Evidence Based Management 2

In my last post I introduced the concept of evidence based management as one outcome from the spread of evidence based medicine. This post looks at some of the problems associated with the approach.

To start with a somewhat slippery distinction: that between management based on evidence and evidence based management.

All managers try, or should try, to base their decisions on the best information available to them. Further, they should review results and adjust their actions accordingly. We can call this management based on evidence. However, it is not quite the same as evidence based management, although it forms a key part of it.

To understand this, look at the first two core elements in evidence based medicine as I defined them in my introductory post on the lessons from evidence based medicine.

Element one was research evidence, essentially what works and why. Element two was professional expertise, our capacity to understand and apply our professional knowledge in the circumstances of the particular case.

If you look at much of what passes for evidence based management in practice, it focuses on the second element, management and the act of managing in a particular organisation, while largely ignoring the first element, research evidence.

The world is presently obsessed with measurement.

Since the Second World War we have seen wave after wave of measurement related movements: quality management, program evaluation, benchmarking, key performance indicators, performance measurement, performance improvement, process re-engineering.

These movements have affected every aspect of life, not just management. As I write, there is a political dispute in Australia over the best way of measuring school performance. Evidence based medicine itself is another example of the measurement focus.

Basing things on tangible measures seems intuitively sensible and scientific. The problem, however, is that many of the measurement approaches used in management do not appear to work and may even have adverse consequences.

Take performance appraisal as an example. There are many books on appraisal systems. Yet my experience as a consultant and manager has been that most appraisal systems simply do not work. That is, they have no discernible positive effect on organisational performance. Further, they continue in place even though staff at all levels will tell you that they are not working! Why is this so?

I think that part of the answer lies in management inertia, and part in the fact that appraisal systems in bigger organisations are really devices for control and information collection rather than performance improvement. But part of the answer also lies in the difficulty of evaluating the contribution of the process itself: distinguishing between faults in design and faults in implementation.

We now have a number of different problems that can be linked to the two knowledge areas I talked about before.

I suggested that the first element was research evidence, essentially what works and why.

To be effective, evidence based management depends upon broader research intended to test what works and why. So in the case of performance appraisal we need research addressing the question of what works and why.

This then needs to be linked to element two, professional expertise, our capacity to understand and apply our professional knowledge in the circumstances of the particular case. This is not always easy. Material can be widely spread and not always easily accessible.

Here I want to pay a compliment to Bruce MacEwen's Adam Smith Esq. One of the reasons why Bruce's blog is so widely read in law and should be read more broadly is that he does report on the results of management research that he sees as relevant to practice management.

Even where the results of research and practice are linked, there is a further problem: actually evaluating outcomes at the organisational level. Here another difficulty comes in, one linked to the problem of professional silos, namely defining the appropriate evaluation path.

I first entered management consulting from a Government background with both Government and private sector clients. In talking to private sector clients, I was astonished to discover that they had never heard of program evaluation. I suspect that this is still true.

Program evaluation began in the 1960s "as a disciplined way of assessing the merit, value, and worth of projects and programs". It provides a complete tool kit of evaluation techniques. Yet because it comes out of a Government environment, its application in private sector management and by consultants and advisers helping private sector clients has been very limited.

I will extend this analysis in my next post in this series.
