AI Psychological Support Chatbot

An AI psychological support chatbot is not a replacement for a psychotherapist. It is a tool for primary support in cases where someone needs to talk, get structure for self-analysis, or learn basic self-help techniques — right now, at 3 AM, without waiting for an appointment.

The technical and ethical responsibility here is higher than in any other chatbot project. An error in handling expressions like "I'd be better off dead" is not a user inconvenience but a potentially dangerous situation.

Safety-Focused Architecture

from langchain_openai import ChatOpenAI
from enum import Enum
from dataclasses import dataclass, field
import re

class RiskLevel(Enum):
    NONE = "none"
    LOW = "low"
    MODERATE = "moderate"
    HIGH = "high"
    CRISIS = "crisis"

@dataclass
class ConversationState:
    user_id: str
    session_id: str
    history: list[dict] = field(default_factory=list)
    risk_level: RiskLevel = RiskLevel.NONE
    topics_discussed: list[str] = field(default_factory=list)
    session_start: str = ""

class SafetyClassifier:
    """First layer: risk assessment before each response"""

    CRISIS_PATTERNS = [
        r"\b(suicide|suicidal|kill myself|end it|don't want to live)\b",
        r"\b(self-harm|cutting|hurt myself)\b",
        r"\b(goodbye forever|final goodbye|last message)\b",  # plain "goodbye" would flag every farewell
    ]

    RISK_INDICATORS = [
        r"\b(no point|everything is meaningless|nobody needs me)\b",
        r"\b(can't take it|everything is bad|no way out)\b",
    ]

    def assess_risk(self, message: str) -> RiskLevel:
        message_lower = message.lower()

        for pattern in self.CRISIS_PATTERNS:
            if re.search(pattern, message_lower):
                return RiskLevel.CRISIS

        risk_count = sum(
            1 for pattern in self.RISK_INDICATORS
            if re.search(pattern, message_lower)
        )

        if risk_count >= 2:
            return RiskLevel.HIGH
        elif risk_count == 1:
            return RiskLevel.MODERATE

        return RiskLevel.NONE
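A quick sanity check of the classifier's behavior. This is a condensed, self-contained restatement of the class above so the snippet runs on its own; note that multiple matches within a single pattern still count as one indicator, so HIGH requires hits across different patterns:

```python
import re
from enum import Enum

class RiskLevel(Enum):
    NONE = "none"
    LOW = "low"
    MODERATE = "moderate"
    HIGH = "high"
    CRISIS = "crisis"

class SafetyClassifier:
    CRISIS_PATTERNS = [
        r"\b(suicide|suicidal|kill myself|end it|don't want to live)\b",
        r"\b(self-harm|cutting|hurt myself)\b",
    ]
    RISK_INDICATORS = [
        r"\b(no point|everything is meaningless|nobody needs me)\b",
        r"\b(can't take it|everything is bad|no way out)\b",
    ]

    def assess_risk(self, message: str) -> RiskLevel:
        message_lower = message.lower()
        # Crisis patterns short-circuit everything else
        for pattern in self.CRISIS_PATTERNS:
            if re.search(pattern, message_lower):
                return RiskLevel.CRISIS
        # Each pattern counts at most once, however many alternatives match
        risk_count = sum(1 for p in self.RISK_INDICATORS if re.search(p, message_lower))
        if risk_count >= 2:
            return RiskLevel.HIGH
        if risk_count == 1:
            return RiskLevel.MODERATE
        return RiskLevel.NONE

clf = SafetyClassifier()
print(clf.assess_risk("Some days are hard, but I'm managing").value)       # none
print(clf.assess_risk("There's no point in trying anymore").value)         # moderate
print(clf.assess_risk("There's no point, I can't take it anymore").value)  # high
print(clf.assess_risk("I want to kill myself").value)                      # crisis
```

Keyword rules like these are deliberately high-recall and cheap; in production they would sit in front of (not instead of) an LLM-based or fine-tuned risk model.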

class PsychSupportBot:
    SYSTEM_PROMPT = """You are an AI psychological support assistant trained in active listening and basic CBT and DBT techniques.

Working principles:
- Empathy and non-judgmental acceptance
- Active listening: paraphrasing, clarifying, validating feelings
- Don't give advice until you fully understand the situation
- Don't diagnose or prescribe treatment
- At any sign of crisis — immediately provide crisis hotline

You can:
- Offer grounding techniques (5-4-3-2-1, breathing exercises)
- Walk through basic CBT techniques (identifying cognitive distortions, thought records)
- Teach DBT skills: mindfulness, distress tolerance
- Refer to professionals when needed

You do not and cannot:
- Replace psychotherapy
- Work with psychosis, severe depression, or bipolar disorder
- Take responsibility for user decisions"""

    CRISIS_RESPONSE = """I hear that you're in a lot of pain right now. This matters.

Please reach out to a crisis line immediately:
📞 **988** (US Suicide & Crisis Lifeline, call or text, 24/7)
📞 **1-800-273-8255** (legacy Lifeline number, routes to 988)

Trained specialists are there to listen and help.
You're not alone in this."""

    def __init__(self):
        self.llm = ChatOpenAI(model="gpt-4o", temperature=0.3)
        self.safety = SafetyClassifier()

    async def respond(self, message: str, state: ConversationState) -> dict:
        # Step 1: risk assessment (ALWAYS first)
        risk = self.safety.assess_risk(message)
        # Ratchet: session-level risk only escalates, never decreases
        state.risk_level = max(state.risk_level, risk, key=list(RiskLevel).index)

        if risk == RiskLevel.CRISIS:
            return {
                "message": self.CRISIS_RESPONSE,
                "risk_level": risk.value,
                "alert_supervisor": True  # notify moderator
            }

        # Step 2: enrich system prompt with risk context
        system = self.SYSTEM_PROMPT
        if risk == RiskLevel.HIGH:
            system += "\n\nWARNING: User message shows signs of elevated distress. Be especially attentive and gentle. At the end, gently suggest speaking to a professional."

        state.history.append({"role": "user", "content": message})

        response = await self.llm.ainvoke([
            {"role": "system", "content": system},
            *state.history[-12:]
        ])

        answer = response.content
        state.history.append({"role": "assistant", "content": answer})

        return {
            "message": answer,
            "risk_level": risk.value,
            "alert_supervisor": risk in (RiskLevel.HIGH, RiskLevel.MODERATE)
        }
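The `max(..., key=...)` line in step 1 implements a ratchet: once a session is marked HIGH, a calmer follow-up message never lowers it. Isolated to show just that behavior:

```python
from enum import Enum

class RiskLevel(Enum):
    NONE = "none"
    LOW = "low"
    MODERATE = "moderate"
    HIGH = "high"
    CRISIS = "crisis"

def escalate(session_risk: RiskLevel, message_risk: RiskLevel) -> RiskLevel:
    # Declaration order doubles as severity order, so list(...).index ranks the levels
    return max(session_risk, message_risk, key=list(RiskLevel).index)

print(escalate(RiskLevel.NONE, RiskLevel.HIGH).value)      # high
print(escalate(RiskLevel.HIGH, RiskLevel.MODERATE).value)  # high (no de-escalation)
```

Keeping the session at its peak risk level means a user who expressed distress earlier still gets the gentler, professional-referral framing even if their later messages read as neutral.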

Support Techniques: CBT Exercise Implementation

CBT_EXERCISES = {
    "thought_record": """Let's work through this thought together.

Record step by step:
1. **Situation**: What exactly happened?
2. **Automatic thought**: What did you think at that moment?
3. **Emotion**: What did you feel? (and how intense, 0–10)
4. **Evidence for**: What supports this thought?
5. **Evidence against**: What contradicts it?
6. **Balanced thought**: How else could you look at this?""",

    "grounding_5_4_3_2_1": """Let's try a grounding technique. It helps return to the present moment.

Answer slowly:
👁 **5 things** you see right now
✋ **4 things** you can touch
👂 **3 sounds** you hear
👃 **2 smells** (real or ones you like)
👅 **1 taste**

Take your time."""
}

Moderation and Escalation

class SupervisorAlert:
    def __init__(self, notification_client):
        # Any async client exposing a send(payload) coroutine, e.g. a Slack or PagerDuty wrapper
        self.notification_client = notification_client

    async def notify(self, user_id: str, risk_level: str, last_messages: list):
        """Notifies a human moderator on high risk"""
        await self.notification_client.send({
            "channel": "crisis-alerts",
            "priority": "high",
            "user_id": user_id,
            "risk": risk_level,
            "context": last_messages[-3:],
            "action_required": "Check on user"
        })

Important: the system always has human moderators on duty. AI is the first layer of support, not the only one.
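To exercise the escalation path without a real messaging backend, the alert can be wired to a stub client. The stub and its `send(payload)` signature are assumptions standing in for whatever alerting service is actually used:

```python
import asyncio

class StubNotificationClient:
    """Hypothetical stand-in for a real alerting backend (Slack, PagerDuty, etc.)."""
    def __init__(self):
        self.sent = []

    async def send(self, payload: dict):
        self.sent.append(payload)

class SupervisorAlert:
    def __init__(self, notification_client):
        self.notification_client = notification_client

    async def notify(self, user_id: str, risk_level: str, last_messages: list):
        await self.notification_client.send({
            "channel": "crisis-alerts",
            "priority": "high",
            "user_id": user_id,
            "risk": risk_level,
            "context": last_messages[-3:],  # only the tail, limiting sensitive data in alerts
            "action_required": "Check on user",
        })

client = StubNotificationClient()
alert = SupervisorAlert(client)
asyncio.run(alert.notify("user-42", "high", ["m1", "m2", "m3", "m4"]))
print(client.sent[0]["context"])  # ['m2', 'm3', 'm4']
```

Trimming the context to the last three messages is a privacy trade-off: moderators get enough to act on, without the full transcript leaving the conversation store.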

Timeframe: a basic bot with the SafetyClassifier and crisis hotlines takes 2–3 weeks; adding CBT techniques, personalization, and human moderation extends this to 6–8 weeks.