It’s not every day you wake up to find your profession in the midst of a holy war.
Yet that seems to be what’s happening in the profession of clinical psychology. A new journal article to be published next month by Timothy B. Baker, Richard M. McFall, and Varda Shoham (2009) suggests that psychology is falling apart. Specifically, the researchers argue that graduate training programs for psychologists studying to become psychotherapists have taken a wrong turn and need to be turned around before it’s too late.
So what steps could be taken to fix the apparent problem? Funny you should ask, because not only do the authors have a prescription, they actually started implementing their prescription more than a year ago.
Is Psychology Like Medicine?
Baker et al.’s argument relies largely on comparing psychology to medicine. After all, both help people get better. To me, though, such a comparison betrays some psychologists’ inferiority complex: always wanting to be “real” doctors, garnering the same kind of respect that “real” doctors do.
The crux of the argument hinges on whether it’s a fair comparison — is psychology like medicine? If so, then perhaps there’s some merit in looking at the medical model for its training. If not, then looking at how medicine trains doctors — while an interesting intellectual exercise — is engaging in a logical fallacy.
The human body is a complicated piece of plumbing and electrical work all put together in one messy piece of organic material. But it’s solid, real. You take a scalpel to the skin and you know exactly how much pressure to apply to make an incision. We now know to scrub our hands before surgery to prevent infection from organisms that live on our hands.
We still have basically no idea how the brain works, however. We can’t flowchart someone’s imagination, or their emotional reaction to a traumatic event. Sure, we can treat these things, but is our knowledge and treatment of the mind really comparable to our knowledge and treatment of the human body?
A fairer, “apples to apples” comparison would look not at how doctors train in medicine (since very few doctors do anything like psychotherapy), but rather at how other professions train their students to become psychotherapists. After all, you wouldn’t look toward an electrician’s training to understand how to train a good programmer (although both share many commonalities, such as good problem-solving skills and the ability to design complex systems).
Despite the fact that other professions provide more psychotherapy than psychologists do, these researchers apparently believe those professions don’t have much to offer psychology’s training programs. Master’s-level training is simply assumed to be inferior by definition.
If Psychology is Like Medicine, Is Medicine Creating Good Science Practitioners?
Let’s say that the researchers’ comparison is somehow valid. Is medical training really the “gold standard,” creating good doctors who keep up on the research and their medical training throughout their career? Do most doctors use evidence-based procedures in their profession?
The answers are not at all clear. Medical science advances at such a rate (there are over 5,000 biomedical journals in publication, and over 400,000 research citations are added to MEDLINE every year) that it would be irrational to suggest most medical doctors keep up with the research. If they did, the medical profession wouldn’t only now be getting around to having practitioners actually follow evidence-based medicine guidelines. If the medical model of training were one worth emulating, why has it taken 60 or more years for doctors to start doing what the research tells them works?
Research suggests that many physicians don’t practice what their training supposedly preaches anyway. Buchbinder et al. (2009), for instance, found that in a study of 3,381 general practitioners who actually have a special interest in back pain, the physicians held pain management beliefs contrary to the best available evidence.
Hay et al. (2008) noted in a different survey of physicians that, “Physicians reported that when making clinical decisions, they more often rely on clinical experience, the opinions of colleagues and evidence-based medicine summarizing electronic clinical resources rather than refer directly to the evidence-based medicine literature.” Sounds familiar, doesn’t it? The medical literature is littered with similar examples. Medicine is not exactly doing an exemplary job of training scientist-practitioners after all: doctors study one thing and practice another.
Even if we grant that some physicians do keep up with the research, is that inherently a good thing? With research that’s been ghostwritten by pharmaceutical companies and clinical trials that bear no relationship to reality, it’s legitimate to ask: what research can we trust and generalize from? Most research studies are designed and conducted to minimize other factors that might influence the results, but because of this, most real-life patients don’t resemble the participants in those studies. There’s no way to know whether a particular research study will stand up to the test of time.
A Solution to the Imaginary Problem
A straw man argument is when one side creates a position that distorts or exaggerates the other side. I’d argue that, sadly, this is exactly what Baker and his colleagues have done.
Psychology isn’t failing to churn out good therapists so much as it’s failing to churn out psychologists who meet the authors’ own arbitrary definition of what constitutes a “good clinician”: those with a rigorous background in research. Would you expect any different argument from three Ph.D. academic researchers?
I say “arbitrary definition” because the researchers make a weak case for why a good researcher equals a good psychotherapist. Stewart and Chambless’s (2007) survey of 591 psychologists found that while most psychologists surveyed said they relied largely on their own past clinical experience, they also relied on treatment outcome research (it wasn’t an “either/or” type of question). Cognitive-behavioral therapists, followed by those who used an eclectic approach, were more likely to do so than those who used other approaches, such as psychodynamic therapy. Stewart and Chambless wrote, “Clinicians also indicated that they often use the following: treatment materials informed by psychotherapy outcome research findings, treatment materials based on clinical case observations, and discussions with colleagues.” Does that sound like clinicians in the field today are ignoring or aren’t using the research?
Perhaps one of the reasons clinicians don’t use empirically supported treatments as often as some would like is because, as Stewart and Chambless (2007) note, the research supporting their use over treatment as usual is “in its infancy.” Is it really wise to start retooling all of psychology training based upon a largely unproven area of psychology, one with many holes?
A New House of Cards
Baker et al. (2009) seem to be arguing from a position of elitism rather than from the more basic question: How do we train top-notch clinicians who produce better and faster client outcomes? Their entire article centers on how to make graduate school programs more elite, in order to grant them yet another new credential (to add to the existing credential soup that already confuses most consumers and even many professionals).
Indeed, when you see the article for what it is — a sales pitch for the brand-new PCSAS accreditation process — you understand why the argument was crafted in the manner it was. This isn’t about training psychologists to become better psychotherapists, it’s about offering a new credential to training programs that train psychologists to meet the authors’ definition of what makes a good clinician.
Left out of the article (or at least the version I have) was any conflict of interest statement. Two of the three researchers work for the PCSAS organization, and the person who wrote the accompanying editorial praising the study (Walter Mischel) is on the PCSAS advisory board. Is it any wonder that the article finds that the solution to the “problem” is an organization two of the three authors work for?
The researchers’ belief is that if we just do a better job of training psychologists in research at the beginning of their careers, they are more likely to utilize said research throughout their careers. But if all of this were simply about reforming clinical psychologists “for the public good,” it seems haphazard to stop at psychology. Wouldn’t the public benefit from most therapists being trained in this manner? If this is the best way to guarantee positive client outcomes more quickly, shouldn’t we be asking virtually all professions to train under this model?
The authors also make a false dichotomy argument: that there are only two possible roads on which to train good clinical psychologists, a greater research emphasis or the status quo. That’s it. I would argue there are many other models of legitimate training for psychotherapists.
I also can’t help wondering what happens if such accreditation takes hold among some new psychologists. Existing clinical psychologists will apparently be left out in the cold, and the process would likely create a two-tiered system of mental health care. If you’re well off and can afford to see a graduate of one of these elite training programs, you do. If not, you’re stuck seeing the same old psychologist who doesn’t have the “elite” training. Yet another divide in an already fragmented profession and model of care.
I don’t think anyone will argue that being aware of and using more research-validated treatments (or empirically supported treatments, as some researchers call them) is a bad idea. But I also don’t believe that trying to create a two-tiered level of training programs is going to do much to help the profession. Instead of bringing more psychologists together and trying to bridge the gap between science and practice, it’s likely to drive an even greater wedge between those who support greater use and promotion of such treatments, and those who do not.
Baker, T.B., McFall, R.M., & Shoham, V. (2009). Current status and future prospects of clinical psychology: Toward a scientifically principled approach to mental and behavioral health care. Perspectives on Psychological Science, 9(2).
Buchbinder, R., Staples, M., & Jolley, D. (2009). Doctors with a special interest in back pain have poorer knowledge about how to treat back pain. Spine, 34(11), 1218–1226.
Hay, M.C., Weisner, T.S., Subramanian, S., Duan, N., Niedzinski, E.J., & Kravitz, R.L. (2008). Harnessing experience: Exploring the gap between evidence-based medicine and clinical practice. Journal of Evaluation in Clinical Practice, 14(5), 707–713.
Mischel, W. (2009). Connecting clinical practice to scientific progress. Perspectives on Psychological Science, 9(2).
Stewart, R.E., & Chambless, D.L. (2007). Does psychotherapy research inform treatment decisions in private practice? Journal of Clinical Psychology, 63, 267–281.
You can also read Newsweek’s uncritical take on the article, Why Psychologists Reject Science.
Last reviewed: By John M. Grohol, Psy.D. on 1 Apr 2011
Published on PsychCentral.com. All rights reserved.
Grohol, J. (2009). Is Psychology Rotten to the Core?. Psych Central. Retrieved on March 8, 2014, from http://psychcentral.com/blog/archives/2009/10/03/is-psychology-rotten-to-the-core/