February 17, 2025 - Technology media outlet The Decoder published a blog post yesterday (February 16) reporting on new research in which OpenAI's ChatGPT passed a Turing-test-style evaluation in the field of therapy: participants had difficulty distinguishing ChatGPT's responses from therapeutic advice written by human therapists, and the AI's responses were generally perceived as more empathetic.
Note: Applying the concept of the Turing test, the researchers asked 830 participants to distinguish between responses written by ChatGPT and by human therapists. Participants identified the authors correctly at a rate only slightly above random guessing: they correctly identified the human therapist's response 56.1% of the time and ChatGPT's response 51.2% of the time.
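For context, a quick binomial check shows what "slightly above random guessing" means at this sample size. The sketch below is illustrative only and assumes each of the 830 participants made one judgment per response type; the study's actual trial structure may differ:

```python
from scipy.stats import binomtest

N = 830  # participants (assumed one judgment per response type; illustrative only)

# Identification rates as reported in the article
for label, rate in [("human therapist", 0.561), ("ChatGPT", 0.512)]:
    correct = round(rate * N)
    # Two-sided binomial test against chance-level guessing (p = 0.5)
    result = binomtest(correct, N, p=0.5)
    print(f"{label}: {correct}/{N} correct, p = {result.pvalue:.4f}")
```

Under these assumptions, the 56.1% rate comes out significantly above chance while the 51.2% rate does not, which is consistent with the framing that ChatGPT's responses were especially hard to tell apart from a human's.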
The study says ChatGPT's responses scored higher than the human experts' in the areas of therapeutic alliance, empathy, and cultural competence, and that the AI's responses were typically longer, had a more positive tone, and used more nouns and adjectives, which may have made them appear more detailed and empathetic.
The study revealed a bias: participants gave lower ratings when they thought they were reading an AI-generated response, regardless of who the actual author was. Conversely, AI-generated responses received the highest ratings when they were mistakenly thought to have been written by a human therapist.
This is not the first study to demonstrate AI's potential in an advisory role. Research from the University of Melbourne and the University of Western Australia found that ChatGPT provided more balanced, comprehensive, and empathetic advice on social dilemmas than human columnists, with preference rates of 70% to 85%. Even so, a majority of participants said they would still prefer a human advisor, despite rating the AI's responses more highly.