There are a lot of misunderstandings when it comes to the future of psychotherapy — especially therapy that involves a computer. Won’t we all just be talking to computers in the future for our therapy needs? Isn’t it unethical to conduct computerized therapy?
Here are the top 5 myths of computerized therapy — and why they’re wrong.
- In the future, everyone will see their therapist using Skype.
Computerized therapy and online interventions are closely related, so when I tell people about my research interests, a common misconception is that I’m investigating ways to deliver traditional therapy over the Internet.
I believe that the field of teletherapy is overhyped. I’m not claiming that it doesn’t work, or that it doesn’t have some real advantages. While living in Australia, I had regular Skype sessions with clients located in remote and isolated areas of the outback, and this worked out well for everyone involved. However, those needing to overcome geographical barriers or mobility problems are a minority.
In my experience, Skype sessions appeal primarily to therapists who dream of working from home (or perhaps somewhere more exotic). For the majority of patients teletherapy offers little added value over face-to-face sessions. I do not see real-world clinics joining video rental shops in the graveyard of obsolete business models anytime soon.
- Advances in artificial intelligence will lead to virtual therapists that look and behave like humans.
I have to admit that I learned this lesson the hard way. In a previous article I talked about ELIZA, an early example of a chatterbot — a computer program designed to hold an “intelligent” conversation with a human. An annual competition, the Loebner Prize, awards the chatterbot judged most human-like: a panel of judges sits in a room and tries to determine whether they are in (typed) conversation with a real person or a computer.
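For readers curious what an ELIZA-style chatterbot actually does under the hood: it relies on simple keyword pattern matching and pronoun reflection rather than any real understanding. The sketch below is an illustrative toy in that spirit — the patterns and responses are my own invention, not ELIZA’s actual script:

```python
import re

# Pronoun swaps used to turn the user's words back on them,
# in the spirit of ELIZA's Rogerian style.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# (pattern, response template) pairs; the captured fragment is
# reflected and echoed back inside the response.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(utterance: str) -> str:
    """Return a canned response by matching surface patterns only."""
    text = utterance.lower().strip().rstrip(".!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # fallback when no keyword matches

print(respond("I feel anxious about my exams"))
# → Why do you feel anxious about your exams?
```

Because the program matches only surface patterns, any input outside its small rule set falls through to a canned fallback — which is why a single unexpected question is often enough to expose a chatterbot.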
I attended the Loebner Prize in 2011 and 2012. In 2011, I had high expectations and was eager to see how well I would fare at distinguishing human from machine. To say I was disappointed would be an understatement, as it quickly became clear that none of the entries could fool anyone into thinking they were human. In fact, a single question was usually enough to make the chatterbot reply with a non sequitur.
Over the past few decades an important lesson has been learned time and time again in the artificial intelligence (AI) research community: to solve a challenging problem, it is rarely the best strategy to approach it in the same manner a person would. Computers and humans have different strengths. For example, IBM didn’t design the world’s top chess player by reverse-engineering the brain of a grandmaster. Rather, they used a supercomputer’s unrivalled number-crunching abilities to evaluate 200 million positions per second.
The same reasoning holds for computerized therapy. If automated therapy is to be an effective form of treatment, it will succeed by focusing on areas where computers have advantages over people, not by designing an artificial mind that also does therapy. Fortunately, great strides have already been made, and in another article I’ll discuss some of the exciting developments on the horizon.
- Computerized therapy is unethical.
I’ve been confronted with this objection several times. The accusation is often directed toward Internet-based interventions in general. What surprises me most is that some critics are prominent academics who have written widely read self-help books. When I ask why a self-guided treatment delivered through a website is less ethical than the same treatment in a book, I usually get the response “they’re just different.” My best guess is that this attitude stems from a lack of understanding of the technology involved.
There are some legitimate ethical questions that deserve attention. Who is liable if a depressed person commits suicide after completing an online or computerized therapy program? This thorny issue needs to be addressed from both legal and ethical perspectives, and I don’t claim to have the answer. However, I would argue that this is not a new problem – is the author of a self-help book responsible for the actions of its readers?
It is also worth considering risks vs. rewards, and the consequences of inaction. There are countless people worldwide who are living with debilitating, but treatable, mental illnesses due to lack of access to treatment. With that in mind, I would argue that it is unethical not to pursue the development of novel delivery methods.
- Computerized therapies are only for young people.
Some people assume that computerized therapies are primarily for the young and tech-savvy. This could not be further from the truth. Thanks to Apple, intuitive user interfaces have come a long way in recent years, and minimal technical knowledge is necessary to operate the latest devices.
Some of the best results I’ve seen for online and computerized therapy systems are for older demographics, including those over 70 years old. It may be that older people who have lived with a problem for many years are more likely to have the motivation, patience, and discipline necessary to work through computerized therapies by themselves.
- New technology will take jobs away from human therapists.
I’ve met several therapists who oppose online therapy and computerized treatments for reasons unrelated to any ethical considerations. Overall I think professionals are becoming more comfortable with the idea, but there are still pockets of resistance. I assume that these people are either afraid of losing their jobs, or offended by the very idea that they can be replaced by an algorithm. In both cases, the concerns are unfounded.
As noted in myth no. 2, the goal of computerized therapy is not to replicate and replace humans, but to find new and effective ways to disseminate treatments to those who need them. Second, there will always be situations where computerized therapy is not an appropriate alternative to live therapy. Third, history has taught us that new technologies often complement traditional techniques rather than replace them. Finally, I believe that mental health treatment as a whole will benefit from the knowledge gained through researching and developing computerized systems.
There is a large demand for new developments in this area. In my view, the biggest risk we face is that the void will be filled by unqualified opportunists. As mental health professionals, we cannot afford to be Luddites. Rather, we need to be the ones who are driving the technology forward so we can ensure a high standard of care.