Robotherapy, Freud 2.0 and Automation

The Issue:

One of the profound changes brought about by the pandemic has been the realization that many helping professions were never trained for its challenges of thinking and working. Counsellors, critical and long-term care nurses, clergy, and others are struggling to adapt. Looking into the future, how do we deal with the new genies that are already out of the bottle: for example, therapy by teleconferencing, nursing care delivered by robots, and counselling using artificial intelligence? What will the new normal look like?

A Generation into the Future:

Imagine the year 2050. You’ve been feeling depressed for some time now, but you’ve resisted seeing a therapist. You might not feel right doing so for a variety of reasons: maybe you worry about meeting someone new and important in your life. Maybe you feel generally anxious about taking what is, after all, a potentially challenging first step into the workings of your own mind. You thought about doing it during your time on the permanent Moon base, but the whole situation there seems monitored and therefore less than confidential. The entire solar system could learn the details, for heaven’s sake.

Or maybe you just haven’t done it before, and it feels downright scary. Really scary. Don’t despair: the artificial intelligence and robotics of 2050 have combined to provide you with a practical and even affordable solution: Jimmy Psyche, the android therapist.

Jimmy is a perfect replica of a human therapist. You can order him or her in any shape or form that you like, selecting for gender, age, and speaking style. For a bit more, there’s even a line of psychotherapist lookalikes, fashioned in the likeness of great therapists from the past, like Jimmy Freud 2.0, Jenny Bowlby 2.0, and Jimmy Berne 2.0. Choose your mentor, knowing that the robotherapist before you has been skillfully programmed in accordance with that mentor’s school of therapy. It is a supreme example of a social robot.

You can see Jenny as many times a week as you like, in the chair or on the couch, and she will never be sick, go on vacation, or double-book you. To top it all off, after the original purchase price, you won’t have to worry about monthly fees for service, although you may like to upgrade her software periodically. You may even have grown fond of a sexbot well before 2050—stay tuned—so why not push the robotic envelope just a bit further with a robotherapist? Sounds robotically perfect, doesn’t it? Or maybe not.

As both a mental health practitioner and a philosopher of technology, I have wondered if either one of my two sides is on the verge of automated obsolescence. I’ll focus on the mental health practitioner side of me for now.         

Robotherapy: Future vs Present:

I’m not convinced that a large portion of society will come to see robotherapy as a desirable, or even generally acceptable, substitute for the real organic thing. I hold this to be true even though the current COVID-19 lockdown has vastly increased the amount of time many millions of people around the world spend working and socializing online, courtesy of the countless bots and automated systems that keep the internet running. Yes, the post-lockdown world will be more accustomed to online activities of all sorts, and there may well be a greater openness to the digitization and automation of many activities. But that doesn’t mean that anything goes in this field, or in others. I teach philosophy of technology and data studies to some of the most high-tech people on our planet: American information studies undergraduates in London. Whenever I raise the prospect of their seeing a robotic counsellor or psychotherapist, the reaction is far from enthusiastic: “Doesn’t sound reassuring,” “No, thanks,” and “Not the same thing” are among the most consistent responses I get. This isn’t a scientific survey, but I suspect that they are not alone.

Let’s take a look at the last one. Why isn’t it the same thing? Can I rest assured that I will still be seen as at least largely preferable to a robot, organic imperfections and all? Contemporary social robotics researchers, including Kate Darling, have clearly and consistently observed a strong human tendency to anthropomorphize robots, and it may well be something we increasingly have to work with.

In the 1960s, MIT’s Joseph Weizenbaum decided to push the envelope of artificial intelligence by developing a computer psychotherapist program named ‘Eliza.’ Eliza’s programmed responses were modelled on the technique of Carl Rogers, a famous person-centered therapist. Weizenbaum was perturbed by the degree and speed with which participants in his study tended to anthropomorphize Eliza, that is, to behave as if the program were truly a conscious therapist. This was true even though Eliza’s responses appeared as text on a box-like 1960s computer terminal, rather than coming from a realistic android. Imagine if it had been a realistic android…
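Part of what made those reactions so striking is how little machinery Eliza needed. The snippet below is only a loose, hypothetical sketch of the idea in modern Python, not Weizenbaum’s actual program: a handful of keyword patterns plus pronoun “reflection” yields replies that sound vaguely Rogerian while understanding nothing at all.

```python
import re

# Toy illustration only: this is not Weizenbaum's original code, just a minimal
# sketch of the Eliza idea. There is no understanding here, only keyword
# patterns and pronoun reflection, yet the mirrored questions can sound
# surprisingly person-centered.

REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)", "Please go on."),  # fallback when nothing else matches
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words so the reply mirrors the speaker."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(statement: str) -> str:
    """Return a reply from the first rule whose pattern matches the statement."""
    cleaned = statement.lower().strip(" .!?")
    for pattern, template in RULES:
        match = re.match(pattern, cleaned)
        if match:
            return template.format(*(reflect(group) for group in match.groups()))

print(respond("I feel anxious about my work."))
# -> Why do you feel anxious about your work?
```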

Human Choices and Preferences:

Yet how different is any of this, really, from talking to a wall? If the AI is genuinely indistinguishable from a real therapist, would you care whether you were working with a human being or a machine? Would not knowing that the therapist is an AI be acceptable, as far as both ethics and effectiveness go? At the very least, it would seem only honest and clinically transparent to require full disclosure to clients as to whether they will be talking to a human being or to a machine. If machine ethics, an increasingly important field, is to realize its full scope, then the sort of honesty we expect around fees for service should also apply to the potential growth industry of robotherapy. This matters all the more when clients are voluntarily revealing the subtle and private intricacies of their own minds in therapy. People have a right to know to whom, or to what, they are talking.

Various comparisons could be made to robotherapy where artificial intelligence is concerned; a particularly useful one is the growing use of robot caregivers in various institutions. One effect of the current COVID-19 lockdown is a heightened awareness of how long-term care homes might increase safety for caregivers and residents. In this case, at what point do you say that a particular task should be performed by a flesh-and-blood human being? And why would you say that, especially if the AI or robot could perform that task more efficiently and more safely than a human being could? Is this a reasonable preference, or an illogical prejudice?

I think that people are likely to choose sides on this. For some of us, certain tasks would be fine if performed by robots, especially where safety and efficiency matter more than the emotional angle. Call them life-and-death functions. For example, it would be hard to find people who insist that bombs planted by terrorists must be defused by human sappers risking their lives in the process; why would anyone not wish for full robotization of that function? Or consider any other dangerous task in which humans could be replaced by robots with no loss of efficiency, or even with an improved standard. Examples include toxic fume detectors, guided fire-fighting equipment and, arguably, closely guided police battering rams in hostage-taking situations.

But what of situations that involve great danger to humans yet also require a human touch or bedside manner? I’m thinking of locked-down hospitals and care homes today. Robots neither contract nor spread viruses. Front-line care-home workers do both, and this is why they are rightly praised for their courage in the line of duty. If the workers facing the most dangerous levels of exposure could be replaced with robots, we could significantly reduce the spread of viruses like COVID-19, at the possible sacrifice of some patient and resident reassurance. In other words, what we gain in infection control we may lose in greater emotional and social distance between the infected and their health-care providers.

What makes this different from a fully automated psychotherapist, an AI performing a task that also requires a human touch? It really is different. In the case of the life-and-death functions, the human touch isn’t required: you don’t have to be especially empathetic to be a good sapper or firefighter, and no one loses out psychologically if your most dangerous functions are performed by machines. In fact, the psychological benefit to humans would be increased by automating these functions, if only because people, especially the workers’ loved ones, would know that the workers’ odds of surviving danger have been improved by advanced technology.

The Impacts of Lockdowns:

In the case of an imagined automated therapist, even in lockdown, it’s likely that the very opposite would hold. Until and unless we develop AIs that we can be certain are conscious and fully empathetic, knowing that your psychotherapist is really a machine will not be conducive to trust, to building a therapeutic relationship, or to personal growth. And if our robotherapists are such convincing simulacra that they fool clients, are we being honest in our work if we never tell them, unless a glitch in the algorithms gives the game away, say, seven months into therapy? If the robotherapist suddenly repeats ‘tell me about your siblings’ dozens of times, that is very likely to affect your perception of it.

One controversial exception might be explicitly automated text-messaging apps for what might be termed casual interaction in times of stress. One on the market now, Woebot, is a chatbot programmed along the lines of cognitive-behavioural therapy (CBT), a rival to psychodynamic therapy. Journalist Erin Brodwin found it helpful for her own anxiety, while making clear in her reporting that its creator, Alison Darcy, sees it as an aid rather than a full replacement for a human therapist.
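For a sense of how modest such an app can be under the hood, here is a purely hypothetical Python sketch of a CBT-style “thought record” check-in. It is not Woebot’s code or design; it simply illustrates how a text app can give its prompts a CBT structure (situation, automatic thought, evidence, reframe) without any claim to empathy or understanding.

```python
# Hypothetical sketch, not Woebot's actual implementation: a bare-bones
# CBT-style thought-record check-in driven by fixed prompts.

CBT_PROMPTS = [
    "What situation is on your mind right now?",
    "What automatic thought went through your head?",
    "What evidence supports that thought, and what goes against it?",
    "How might you restate the thought in a more balanced way?",
]

def check_in() -> dict:
    """Walk the user through a simple thought record and return their answers."""
    return {prompt: input(prompt + "\n> ") for prompt in CBT_PROMPTS}

if __name__ == "__main__":
    record = check_in()
    print("\nThanks for checking in. Here is your thought record:")
    for prompt, answer in record.items():
        print(f"- {prompt} {answer}")
```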

Some paths are best not started down in the first place. In lockdown, that means therapy by teleconferencing which, although controversial, at least involves a human therapist who is undoubtedly conscious, with everything that implies, including empathy and a genuine interpersonal connection. If there is a shortage of mental health-care professionals, the answer may be to train more humans, not to automate the pros. Full automation can be a very good thing for a variety of tasks, but psychotherapy just isn’t one of them. At least I hope not.


Further Reading

Brodwin, Erin. “I spent 2 weeks texting a bot about my anxiety — and found it to be surprisingly helpful”. Business Insider, 30 January, 2018. Accessed on 16 April, 2021 at:
https://www.businessinsider.com/therapy-chatbot-depression-app-what-its-like-woebot-2018-1?r=US&IR=T

Corbyn, Zoe. “AI Ethicist Kate Darling: Robots Can be our Partners”. The Guardian, 17 April, 2021. Accessed on 18 April, 2021 at:
https://www.theguardian.com/technology/2021/apr/17/ai-ethicist-kate-darling-robots-can-be-our-partners?CMP=Share_iOSApp_Other

Duncan, David Ewing. Talking to Robots: A Brief Guide to our Robot-Human Futures. New York: Dutton, 2019.

Jordan, John. Robots. Cambridge, Massachusetts and London: The MIT Press, 2016.

Lin, Patrick, Abney, Keith, and Bekey, George A., editors. Robot Ethics: The Ethical and Social Implications of Robotics. Cambridge, Massachusetts, and London: The MIT Press, 2012.

Morris, Margaret. Left to our Own Devices: Outsmarting Smart Technology to Reclaim our Relationships, Health, and Focus. Cambridge, Massachusetts, and London: The MIT Press, 2018.

Obituary. “Professor Joseph Weizenbaum: Creator of the ‘Eliza’ program”. The Independent, 18 March, 2008. Indicates how perturbed Weizenbaum was by reactions to Eliza.
https://www.independent.co.uk/news/obituaries/professor-joseph-weizenbaum-creator-eliza-program-797162.html

Turkle, Sherry. Reclaiming Conversation: The Power of Talk in a Digital Age. New York: Penguin Books, 2015.
