In my last post on evidence based medicine, I suggested that professional training focused on transferring existing knowledge and skills to the new professional, who then applied them. The professional subsequently built on this base through practice, at the same time using various professional development activities to try to keep in touch with new developments.
I then posed the question: what happens if that existing knowledge base is in fact wrong? I looked briefly at the reasons why this proved to be the case in medicine, a discovery that led to the development of evidence based medicine. I concluded that, as with any other approach, evidence based medicine had its own methodological problems, but that it also held important lessons for other fields of professional practice.
The Quality Movement and Quantification
In a series of posts on my personal blog I explored some of the changes that had taken place in public administration since the war, looking at the influences on those changes.
In one of those posts I looked in part at the way in which standards, the Quality Movement and the importance of measurement had become major global influences. I also suggested that the outcomes here had not always been positive.
Evidence based medicine forms part of the global standards and quantification revolution and suffers from some of the same weaknesses. These weaknesses need to be recognised.
Problems with Evidence Based Medicine: Perception Bias
The first problem can be called simply perception bias.
In another post on my personal blog on science and political correctness I looked at ways in which dominant views acted to exclude alternatives.
Evidence based medicine is neither value nor perception free. The questions selected for test and evaluation, a process that can be very expensive, are influenced by prevailing views. Valuable alternatives may be excluded simply because they fall outside conventional wisdom. As evidence based medicine becomes the dominant mode, the effect may be, as it has been in other areas, to narrow fields of investigation and action.
This links to a second problem, one that I have discussed before, the tendency for all professions to see answers within a frame or world view set by their profession.
A lawyer will give you a legal answer to a problem, a doctor a medical answer. If you have a back problem and see a surgeon, he/she is likely to think about surgical solutions. Go to a chiropractor with the same problem and he/she is likely to recommend spinal manipulation. So professional background helps determine the way the problem is defined and the solution applied.
This flows through into the application of evidence based approaches because the things tested are generally set within the frame of the tester. So evidence based medicine focuses on the efficacy of medical treatment and may leave non-medical alternatives aside.
Problems with Evidence Based Medicine: Causation
As part of my history honours year in my first degree I studied philosophy of history under Ted Tapp. Ted was a reflective man who required us to think about, to debate, the philosophical underpinnings of science and scientific method.
One core problem was the difference between correlation (a and b occur together) and causation (a causes b).
This problem applies in evidence based approaches. Just because a study shows an apparently strong relationship between a treatment and positive patient outcomes does not necessarily say anything about the causal relationship between the two. This has to be deduced and further tested.
Problems with Evidence Based Medicine: Problems of Epidemiological Studies
The problem of correlation vs causation links to another group of problems with evidence based medicine.
By its nature, evidence based medicine deals with large groups, populations.
As trials become larger and more complex, with more variables in play, it becomes more difficult in statistical terms to separate the effects of the different variables and to establish which relationships are genuinely significant.
This creates another problem, the establishment of a clear relationship between the outcomes of trials at population level and subsequent application at individual level.
As the Wikipedia article notes:
Critics of EBM say lack of evidence and lack of benefit are not the same, and that the more data are pooled and aggregated, the more difficult it is to compare the patients in the studies with the patient in front of the doctor — that is, EBM applies to populations, not necessarily to individuals.
This can create very real difficulties for individual clinicians, leading Tonelli to argue in The limits of evidence-based medicine that:
the knowledge gained from clinical research does not directly answer the primary clinical question of what is best for the patient at hand.
Tonelli concludes that proponents of evidence-based medicine discount the value of clinical experience (source Wikipedia).
Problems with Evidence Based Medicine: Impact of the Observer
Another problem with evidence based medicine, one often seen in all evidence based approaches, is the way the observer affects the observed. This happens at several levels.
The first problem is that the simple act of participation in the trial may itself have an impact on individual outcomes, an impact that is not clearly seen. In medicine, this is usually managed by use of a control group receiving a placebo. The efficacy of the treatment is then measured by the difference in outcomes between the control group and those receiving the treatment.
A second linked problem is the impact on patient behaviour of the trial itself. By their nature, clinical trials are closely managed. This means that patient compliance with the treatment routine is likely to be high.
This need not hold in subsequent clinical use since ordinary patients are more likely to fail to follow treatment processes by, for example, failing to take medication exactly as prescribed. This means that actual patient outcomes may not be as good as the trial results.
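The compliance point above can be sketched in arithmetic with entirely made-up numbers: if a treatment only delivers its benefit when actually taken, the average benefit seen in practice scales with the compliance rate.

```python
# Hypothetical numbers: the benefit a compliant patient gets, and the
# compliance rates in a closely managed trial versus ordinary clinical use.
true_effect = 10.0          # benefit (arbitrary units) for a compliant patient
trial_compliance = 0.95     # closely managed trial
real_world_compliance = 0.60

# Average benefit is diluted by whatever fraction of patients do not comply.
trial_benefit = true_effect * trial_compliance
real_world_benefit = true_effect * real_world_compliance

print(f"average benefit in trial:    {trial_benefit:.1f}")
print(f"average benefit in practice: {real_world_benefit:.1f}")
```

On these assumed figures, the benefit observed outside the trial is little more than half that reported in it, even though the treatment itself is unchanged.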
Problems with Evidence Based Medicine: Limitations in Application
A further problem is that the most rigorous gold standard approaches dictated by evidence based medicine can only be applied in narrowly defined circumstances, leaving a range of medical approaches that have to be tested by less rigorous means.
This should not matter so long as the limitations are recognised. In practice, it risks introducing two distinct distortions into medicine and the health system.
The first is the risk that investigation may be biased towards those things that can be measured through more rigorous techniques, reducing thought and investigation in areas less amenable to measurement.
The second related risk is that treatment itself may become biased.
At clinician level, this links back to my earlier point about perception bias. Doctors trained in evidence based medicine may, consciously or unconsciously, come to focus in treatment terms on those things that can be measured, ruling out other less easily measured options.
This tendency may be reinforced by actions from those managing or funding the provision of health care services who may refuse to allow/pay for certain types of services notwithstanding the views of individual clinicians.
I have focused in this post on problems associated with evidence based medicine. In my next post I will look at the lessons of evidence based medicine for other professions.
Previous Posts in the "Towards a Discipline of Practice" Series
- 21 September 2006 Towards a Discipline of Practice
- 22 September 2006 Role of the diagnostic in professional practice - medicine vs law
- 8 January 2007 Reflections on Professional Practice and Practices
- 16 January 2007 Towards a Discipline of Practice - Evidence Based Medicine 1
- 22 January 2007 Towards a Discipline of Practice - Evidence Based Medicine 2