Artificial intelligence in psychiatry: robot-psychiatrists coming soon?

Corine Moriou

Beware of biases and risks.

In a context where psychological and psychiatric disorders are on the rise, artificial intelligence seems well suited to meeting a strong demand, particularly for diagnosis. But beware: despite the proliferation of disorders that arose during the Covid-19 crisis, the success of AI tools, and their acceptance by health professionals, is far from assured, because algorithms always carry biases.

Psychological and psychiatric help applications developed rapidly during lockdown, when people were plunged into stress and anxiety without easy access to health professionals. These applications are used to combat loneliness and provide support, to manage stress, and to facilitate teleconsultation. They are also used to support healthcare staff.

Digital tools for mental health are therefore many and varied. While some people find it easier to confide in a machine, the limits of exchanges with an artificial intelligence, particularly for the collection of sensitive data, remain at the heart of the debate over algorithmic psychological assistance.

Robot-psychiatrists: what are the advantages?

Robots have already replaced waiters in restaurants, journalists, factory workers, and even customer service agents and certain administrative positions. Will they one day also replace our psychiatrists? Automated software and applications guarantee availability 24 hours a day, 7 days a week, but contrary to what one might think, they cannot ensure neutral expertise or greater objectivity than humans. Indeed, according to Vincent Martin, doctor in computer science at the University of Bordeaux, and Christophe Gauld, child psychiatrist and sleep doctor at the University of Paris 1 Panthéon-Sorbonne, we must never forget that artificial intelligences only reproduce biases, and are therefore never neutral.

Lack of empathy, lack of objectivity, vulnerability of sensitive data… The disadvantages remain numerous

For Thomas Gouritin, the main problem is the lack of empathy. A chatbot expert, Gouritin explains that these assistants rely on keyword detection. A chatbot is programmed to respond to certain signal words, such as “sadness” or “loneliness”, but it may not provide a truly empathetic, tailored response. And not being understood by a chatbot, just as by a human, can generate frustration and heighten anxiety…
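
To make Gouritin's point concrete, here is a minimal sketch in Python of what keyword-based response selection can look like. The keywords, replies, and function name are illustrative assumptions, not the logic of any real assistant:

    # Hypothetical sketch of keyword-based response selection;
    # illustrative only, not the logic of any real assistant.

    CANNED_RESPONSES = {
        "sad": "I'm sorry you're feeling down. Would you like to talk about it?",
        "lonely": "Feeling lonely is hard. Is there someone you could reach out to?",
    }

    FALLBACK = "I see. Can you tell me more?"

    def reply(message: str) -> str:
        """Return the canned reply for the first signal word found."""
        text = message.lower()
        for keyword, response in CANNED_RESPONSES.items():
            if keyword in text:
                return response
        return FALLBACK

    # Keyword matching ignores context: both messages below trigger the
    # same "sad" reply, though only the first actually expresses sadness.
    print(reply("I have felt so sad since the lockdown began."))
    print(reply("I'm not sad, just a bit tired lately."))

Because the match is a bare substring test, negation and context are invisible to the program, which is precisely why such a reply can feel mechanical rather than empathetic.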

As Vincent Martin and Christophe Gauld explain, artificial intelligences' lack of empathy prevents them from adapting to a patient's responses in real time; they remain confined to a rigid, pre-established script. In addition, gestures, the position of the body in space, reading emotions on a face, and recognizing implicit social signals are generally beyond them, and all of this represents a significant loss in the patient-caregiver relationship, which is itself an important part of care.

Finally, because the answers and data of patients who interact with these machines are stored, that information can always be stolen. In Finland, for example, the Vastaamo psychotherapy center, with clinics in cities across the country, was the victim of an unprecedented hack. Victims now think twice before entrusting their data and psychological problems to machines.