If you’re a therapist, it’s easy to get frustrated by the lack of research into simple techniques that can be readily integrated into your existing practice. Most psychotherapy research requires following a specific “program” or manual of instructions and exercises that most therapists, in the real world, have trouble sticking to. After all, if you’ve been practicing for 5, 10 or 20 years, you’re not likely to throw out everything you’re currently doing just because some new research suggests a different technique might be more effective.
Researchers, on the other hand, often have a hard time understanding what it’s like to be a clinician. Most work within a very narrow niche of psychology, studying a single well-defined problem or treatment protocol. They often spend their entire careers in that niche, becoming experts in the area and publishing a steady stream of research to back up their hypotheses about the importance of their niche or treatment protocols.
Research studies are designed to try to remove or account for all the variables that might have an impact on what they’re measuring, so researchers can say, “Treatment X caused this positive gain in psychotherapy.” But by doing so, they often set up conditions that are rarely seen (or understood) in the real world.
Researchers who work with psychotherapy treatments often find themselves stymied by the lack of clinicians using or trying out their research-proven techniques. They wonder, “Look, the research says this works. How come no one’s using it?”
One reason is that, nowadays, researchers have to become a bit of marketers and self-promoters to cut through the noise of a crowded research literature. Clinicians get bombarded with new treatments to try (and the accompanying workshops and continuing education courses that teach them). They sometimes feel overwhelmed by it all, because being a good clinician means continuing to learn long after graduate school. That is, of course, in addition to seeing 20 or 30 patients a week.
But perhaps more importantly, clinicians have a difficult time incorporating significant new treatments or techniques into their toolbox because (a) their toolbox is already overflowing with past empirically supported techniques; and (b) the new technique was validated under controlled conditions completely unlike the patient population they actually see.
Michael Nash, a professor at the University of Tennessee, believes he has an answer. He has developed a simple “user’s guide” to help clinicians better apply scientific research in their day-to-day work:
The authors describe a research method known as the case-based time series design which can be applied to one or just a few patients.
In essence, the time-series design involves tracking the patient’s symptoms very closely before, during and after treatment, and then applying specialized statistical analyses to detect whether there is reliable improvement.
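To make the idea concrete, here is a minimal sketch of a single-case phase comparison. The function names, the weekly symptom ratings, and the choice of a simple Welch’s t statistic are all illustrative assumptions; Nash’s actual method uses specialized time-series statistics that also account for trend and autocorrelation, which this toy example ignores.

```python
import math
from statistics import mean, stdev

def phase_comparison(baseline, treatment):
    """Welch's t statistic comparing baseline-phase vs. treatment-phase scores.

    A large positive value (for symptom scales where higher = worse)
    suggests the treatment phase shows reliably lower symptom levels.
    Note: this ignores autocorrelation, which real single-case
    time-series analyses must handle.
    """
    m1, m2 = mean(baseline), mean(treatment)
    v1, v2 = stdev(baseline) ** 2, stdev(treatment) ** 2
    n1, n2 = len(baseline), len(treatment)
    se = math.sqrt(v1 / n1 + v2 / n2)  # standard error of the mean difference
    return (m1 - m2) / se

# Hypothetical weekly anxiety ratings for one patient (higher = worse)
baseline = [22, 24, 23, 25, 24, 23]    # pre-treatment tracking phase
treatment = [20, 18, 17, 15, 14, 12]   # during-treatment phase

t = phase_comparison(baseline, treatment)
print(f"Welch t = {t:.2f}")
```

The point is not the particular statistic but the workflow: a clinician tracks one patient’s symptoms closely across phases, then applies a formal test rather than relying on impression alone.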
Nash thinks the issue is a lack of knowledge about how to conduct simple, empirically sound single-case studies. But most psychologists learned of such designs in graduate school, and in some programs, they may even have practiced such designs with actual clients during training.
But I’m not sure that’s really the problem. I think the problem is far more complex and involves psychologists’ own motivations in therapy and their careers.
Clinicians have little incentive to track their clients’ outcomes — whether they improve or worsen with therapy. Why not? Don’t professionals care if their patients improve or not?
Most do, but not to the point of being responsible for a possible outcome measurement showing their therapy is actually hurting the patient. The results could be demoralizing to therapists. Instead, many clinicians rely mostly on their own clinical judgment (with an occasional objective measure thrown in from time to time to track specific symptom progress). The key is that if one doesn’t conduct such efforts in a rigorous empirical manner and gets negative results, one can always say, “Well, it’s not like this is a scientific research study or anything.”
There are, of course, no easy answers to this dilemma. Clinicians’ only incentives to help a client get better are intrinsic to the job — that’s why most got into the field, to help people get better. (The old cynicism that a therapist will see a client for as long as they have the ability to pay leaves out the fact that most therapists have a waiting list, meaning there is rarely a shortage of people willing to pay.) Clinicians can help people get better, faster, if they can find a way to meaningfully incorporate key research findings into their practice. But until researchers find a way to make their protocols and techniques more digestible to the complex chaos that is most therapists’ caseloads, the problem will remain.
Read the full article from Psych Central News: New Method Tracks Psychotherapy Effectiveness
Last reviewed: By John M. Grohol, Psy.D. on 18 Feb 2008
Published on PsychCentral.com. All rights reserved.
Grohol, J. (2008). Psychotherapy and the Divide Between Practice and Research. Psych Central. Retrieved on November 28, 2014, from http://psychcentral.com/blog/archives/2008/02/18/psychotherapy-and-the-divide-between-practice-and-research/