This article offers practical guidance to help therapists and coaches make safe, ethical, and professionally responsible choices when considering the use of AI in their client work.
AI tools can support reflection, organisation, and learning - but used poorly, they create ethical, legal, security, and clinical risks. Your primary responsibility is to your client's wellbeing, safety, and rights. AI does not change this.
Before using any AI tool, ask:
- Does this align with my professional code of ethics and Unmind's Practitioner terms of agreement?
- Would I feel confident explaining this use of AI to my client, supervisor, or regulator?
- Does this use protect client autonomy, confidentiality, and rights?
1. Protect Client Confidentiality and Data Privacy
This is your most important obligation.
- Never input client information, including personal data, into AI tools unless you have explicit consent and a clear legal basis (under GDPR and other applicable data protection regulations).
- Do not use tools that train AI models on your inputs - this compromises privacy and trust.
- Choose tools that clearly state how data is handled, stored, and protected.
How to check if a tool trains on your data:
- Look for "Privacy" or "Data Control" settings with toggles to disable "training" or "model improvements". If there isn't a clear toggle to disable "training" or "model improvements," assume your inputs are being used to develop the tool.
- Search the Privacy Policy for terms like "improve," "training," or "develop". If the text says your data helps "improve our services," the tool is likely learning from your information.
- Be cautious with free tools - paid "Enterprise" or "Business" tiers are more likely to offer "no-training" guarantees.
If you cannot clearly explain where client data goes, who can access it, whether it's used for training, and how long it's kept - do not use the tool.
2. AI Is an Assistive Tool Only - Not for Decision-Making
AI is a prompt for your own professional thinking, not a decision‑maker.
Appropriate uses:
- Drafting session summaries or notes (with careful review)
- Generating psychoeducational materials
- Supporting practitioner reflection or learning
Never use AI to:
- Assess risk
- Diagnose mental health conditions
- Decide interventions
- Replace supervision or consultation
Your responsibilities:
- Review and edit all AI outputs carefully
- Apply your clinical or coaching expertise
- Take full responsibility for the final content
- Maintain human judgment and accountability
Rule of thumb: If the outcome could harm a client, AI should not be in the decision loop.
3. Obtain Informed Consent and Be Transparent
If AI plays any role in your work with a client, you must be transparent.
Informed consent should include:
- What the AI tool is used for and what it is not used for
- What data is shared with the tool
- The client's right to decline AI involvement without penalty
Consent is not a one-time checkbox - revisit it as your use of AI evolves.
4. Maintain Professional Standards
- Be aware of bias: AI systems can reproduce cultural, gender, racial, and socioeconomic biases that can harm clients. Actively question AI suggestions.
- Discuss AI use in supervision or peer consultation.
- Stay up to date with guidance from professional bodies.
- Document your rationale for using (or not using) AI and review whether AI is genuinely helping your practice.
- If AI reduces your curiosity, presence, or ethical clarity - pause and reassess.
- Check whether your professional indemnity insurance requires you to disclose AI use. For example, if you're using AI for clinical records, confirm whether your policy requires disclosure.
Conclusion: Use AI Ethically, Transparently, and Thoughtfully
AI can be a helpful assistant, but it is not a therapist, coach, or ethical agent. When in doubt, prioritise client safety, transparency, and human responsibility.
Using AI responsibly is essential because it directly impacts client safety, confidentiality, and the trust that underpins your relationship with them. Responsible AI use also keeps ethical and legal accountability with you, rather than delegating critical clinical judgments to systems that lack human understanding, moral reasoning, and the ability to respond to the nuanced needs of vulnerable individuals.