March 23, 2026
Chatbots, Clients, and the Conversations We Need to Be Ready For
Four anchors to frame conversations about AI & mental health
The first time I heard mention of AI use in session, I froze.
I was not prepared for the conversation. Then it happened again.
This month, I'll be joining a panel discussion on conversational AI and mental health in university settings, alongside co-panelists including Etienne Brisson from The Human Line Project, and I've been thinking about those moments ever since.
AI chatbots are already part of how many young people search for information, reflect, cope, and sometimes seek something that feels like support.
23% of Canadian young adults aged 16-29 reported using AI for mental health support (MHRC, 2025)
This is an emerging area. The technology is moving quickly, the research is still developing, and the clinical implications are not yet settled.
We may not, as therapists, have a polished response to every comment about using AI for mental health - I certainly do not. But we are often among the first to hear how these tools are showing up in people's lives, including when something has gone wrong. These issues are already front-page news for our profession: in recent reporting by The New York Times, clinicians describe dangerous emergencies, such as psychosis or thoughts of self-harm, linked to chatbot use.
My response, for now, is a set of working notes on what seems worth keeping in view as AI tools become more present in mental health contexts. I've drawn them from the still-limited but growing research, from clinical experience, from consultation with thoughtful fellow psychologists who are also actively learning about AI and mental health, and from input from Hassan (our data science collaborator at TIL).
While phrased directly for ease of reading, these are suggestions, not directives, as we continue learning how best to support those seeking mental health care. Clinical judgment remains central.
I've shaped them into four anchors I can return to when framing conversations about chatbot use.
I keep the goal in mind and share it when possible: facilitate open discussion, reduce harm, and help the person stay connected to their social network. This helps me stay anchored in widely shared values and in what we already know is protective for mental health, such as social connection.
Four anchors for conversations about chatbot use
1. Start from your foundation
The technology is new, but we can still rely on our training for conversations about it. Approaches like motivational interviewing and behavioural analysis, as well as safety planning, boundaries, referral, and clinical judgment, still apply here.
Clinicians already understand patterns like reassurance-seeking, avoidance, dependency, emotional over-reliance, and reinforcement. Those same dynamics can show up in interactions with chatbots. For example, if a client is repeatedly turning to a chatbot for certainty, you'll want to assess reassurance-seeking behaviours.
2. Lead with openness but assess carefully
Experts suggest that clinicians ask about AI use, beginning at intake and on an ongoing basis. We can do so with openness and with the intent to clarify: (1) whether the client is using it (and which kind of chatbot); (2) what role it plays; and (3) what effect it has on their mental health and well-being.
Have you been using AI tools like ChatGPT or other chatbots? What do you use them for, and when do you tend to turn to them? What do you get from them in those moments? And how do you usually feel afterward?
If useful, you can also invite the person to summarize a recent interaction or to share an excerpt.
3. Be clear about what the tool is, and is not
A chatbot is a consumer product that generates plausible language in response to a user's input. It is not a therapist, doctor, or crisis responder, nor a reliable replacement for those professions. It can be overly agreeable and reinforcing. It also carries risks, including security and privacy risks and a tendency to become unreliable in long chats and in high-stakes situations such as crisis evaluation. Even when models attain high accuracy, the helpfulness of a chatbot's responses depends on the quality of the user's prompts and interactions.
Part of your conversation with clients may involve helping them distinguish the feeling of support from the presence of actual care.
4. Assess vulnerability-specific risks and signs of unsafe or unhelpful use
Watch for factors such as the intensity and timing of use, especially prolonged or late-night interactions, and reliance on unmoderated chatbots. Warning signs may include social withdrawal, sleep disruption, secrecy, a growing sense that the chatbot is real, and worsening paranoia or reduced contact with reality. For a complete overview of risks and vulnerabilities, see Hudon and Stip (2025).
Consider not only harm reduction, but also keeping people connected to real relationships and appropriate supports.
Becoming literate in AI
Staying informed about the field of AI and mental health can go a long way toward having better conversations and guiding clients toward safer use of AI.
It helps to keep up with major platform changes, including changes to safety features, which are generally announced on vendors' websites (e.g., OpenAI updates: https://openai.com/news/); advisories and guidelines on the topic (see APA's health advisory on chatbots); the apps themselves and their research base (if any); and the research on the performance and safety of models in delivering mental health services, including their limitations. Fundamental knowledge may include, but is not limited to, AI fundamentals, the different kinds of chatbots, and the common risks of generative AI.
Clinicians may feel intimidated by not knowing as much about AI as more technically savvy clients. Yet our job, after all, is to provide therapy, not technical advice. Keeping these considerations in mind helps me feel more prepared for these conversations.
Thanks for reading! If you learned something useful from this post, help us grow by sharing it with a like-minded colleague.