AI could soon replace psychiatrists

Could artificial intelligence, after sex, football and even art, soon replace our doctor? The idea is not new: it appears in many science fiction films, and it is increasingly becoming a reality. This time it is our psyche that the robots are taking on. Conversational chatbots have come a long way since the first one was invented in 1966, and the idea of an AI that could look after our mental health is no longer as dystopian as it seems.

A robot therapist soon?

According to several recent studies, human patients are no longer as reluctant as they once were to confide their moods to a robot. Mental health is an increasingly discussed topic, and AI could offer several significant benefits. First, the absence of a human therapist can give patients greater freedom of expression, without fear of human judgment. According to its proponents, AI would also be uniquely capable of making truly objective decisions while remaining available at any hour of the day or night.

Note, however, that even if the future of psychiatry does lie with robotic therapists, for now it mostly means chatbots and, more rarely, interactive videos. The lack of a physical interface with a human aspect could therefore be a real barrier to creating a bond between patients and their "practitioner". In addition, several healthcare professionals are already concerned about an AI's inability to detect certain warning behaviors, for example in the case of a suicide attempt.

For now, and despite all the advances in AI over the past few years (to the point where a former Google employee came to see a chatbot as a true colleague), robots remain a long way from matching humans, both in empathy and in their internal mental models. For a healthcare professional in particular, these unconscious mental models integrate all past experience and acquired knowledge, making a genuinely human diagnosis possible.

Moreover, the objectivity claimed by AI's defenders has yet to be proven. Robots are still programmed by fallible humans and therefore do not escape certain biases, whether sexist, racist or social. Recall that Tay, the artificial intelligence Microsoft launched in 2016, took only a few hours to start spouting Nazi rhetoric after spending some time on Twitter.
