A LangGraph-based mental support chat agent that interacts with users via the command line. It analyzes the user's status, detects suicide risk, retrieves knowledge from a local database, and generates supportive responses using a large language model (LLM; default: Gemini 2.5 Pro).
- Status Classification – Analyzes the user's current mental state using a local `status_classification` model.
- Suicide Risk Detection – Detects potential suicide risk using a local `suicide_detection` model.
- Knowledge Retrieval – Fetches relevant information from a local knowledge base.
- Response Generation – Sends user input, classification results, and retrieved knowledge to an LLM to generate supportive responses.
- Command-line Chat – Users can interact with the agent directly in the terminal.
- Clone the repository:

  ```bash
  git clone <repository_url>
  cd <project_directory>
  ```

- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Start the command-line agent:

  ```bash
  python main.py
  ```

  Chat with the agent directly in the terminal.
## Node Workflow

1. User Input → `status_classification_node` → determine user status
2. User Input → `suicide_detection_node` → detect suicide risk
3. Knowledge Retrieval → fetch relevant information from the local knowledge base
4. Integrate Information → send to the LLM (`gemini_client`) → generate response
5. Return Response → display to the user
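The workflow above can be sketched as nodes that read from and write to one shared state object, which is how a LangGraph graph is typically organized. The sketch below is dependency-free and simplified: the node bodies are placeholders standing in for the project's actual classifier, detector, retriever, and `gemini_client` call.

```python
from typing import TypedDict


class AgentState(TypedDict, total=False):
    user_input: str
    status: str
    suicide_risk: bool
    knowledge: list[str]
    response: str


# Placeholder nodes mirroring status_classification_node / suicide_detection_node.
def status_classification_node(state: AgentState) -> AgentState:
    state["status"] = "stressed"  # stand-in for the local classification model
    return state


def suicide_detection_node(state: AgentState) -> AgentState:
    state["suicide_risk"] = False  # stand-in for the local detection model
    return state


def knowledge_retrieval_node(state: AgentState) -> AgentState:
    state["knowledge"] = ["coping strategies for stress"]  # stand-in retrieval
    return state


def response_node(state: AgentState) -> AgentState:
    # Stand-in for the gemini_client LLM call that integrates all information.
    state["response"] = (
        f"Status: {state['status']}; "
        f"using {len(state['knowledge'])} knowledge snippet(s)."
    )
    return state


def run_pipeline(user_input: str) -> AgentState:
    """Run every node in order over one shared state, as in the workflow above."""
    state: AgentState = {"user_input": user_input}
    for node in (
        status_classification_node,
        suicide_detection_node,
        knowledge_retrieval_node,
        response_node,
    ):
        state = node(state)
    return state
```

In the real project the nodes are wired through a LangGraph `StateGraph` rather than a plain loop, but the data flow through the shared state is the same.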
## Configuration

- LLM Model – Default is Gemini 2.5 Pro; can be changed in `gemini_client.py`.
- Local Models – Paths are configurable in `status_classifier.py` and `suicide_detector.py`.
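One common pattern for such a setting is a module-level constant with an optional environment-variable override. The constant and variable names below are illustrative, not necessarily what `gemini_client.py` actually uses:

```python
import os

# Illustrative only: the actual constant name in gemini_client.py may differ.
DEFAULT_MODEL = "gemini-2.5-pro"


def get_model_name() -> str:
    """Return the LLM model name, allowing an environment override."""
    return os.environ.get("CHAT_AGENT_MODEL", DEFAULT_MODEL)
```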
## Knowledge Base

Place knowledge JSON files in `data\knowledge\jsons`.
## Notes

- Ensure the local models and the knowledge base are properly deployed before running the agent.
- Suicide risk detection is for reference only. If a high risk is detected, contact professional help immediately.