LiteLLM Discord Bot
A Discord bot that interfaces with LiteLLM proxy to provide AI-powered responses in your Discord server. Supports multiple LLM providers through LiteLLM, conversation history management, image analysis, and configurable system prompts.
Features
- 🤖 LiteLLM Integration: Use any LLM provider supported by LiteLLM (OpenAI, Anthropic, Google, local models, etc.)
- 💬 Conversation History: Intelligent message history with token-aware truncation
- 🖼️ Image Support: Analyze images attached to messages (for vision-capable models)
- ⚙️ Configurable System Prompts: Customize bot behavior via file-based prompts
- 🔄 Async Architecture: Efficient async/await design for responsive interactions
- 🐳 Docker Support: Easy deployment with Docker
Prerequisites
- Python 3.11+ (for local development) or Docker (for containerized deployment)
- Discord Bot Token (How to create one)
- LiteLLM Proxy instance running (LiteLLM setup guide)
Quick Start
Option 1: Running with Docker (Recommended)
- Clone the repository:
git clone <repository-url>
cd OpenWebUI-Discordbot
- Configure environment variables:
cd scripts
cp .env.sample .env
# Edit .env with your actual values
- Build and run with Docker:
docker build -t discord-bot .
docker run --env-file scripts/.env discord-bot
Option 2: Running Locally
- Clone the repository and navigate to scripts directory:
git clone <repository-url>
cd OpenWebUI-Discordbot/scripts
- Install dependencies:
pip install -r requirements.txt
- Copy and configure environment variables:
cp .env.sample .env
# Edit .env with your configuration
- Run the bot:
python discordbot.py
Configuration
Environment Variables
Create a .env file in the scripts/ directory with the following variables:
# Discord Bot Token - Get from https://discord.com/developers/applications
DISCORD_TOKEN=your_discord_bot_token
# LiteLLM API Configuration
LITELLM_API_KEY=sk-1234
LITELLM_API_BASE=http://localhost:4000
# Model name (any model supported by your LiteLLM proxy)
MODEL_NAME=gpt-4-turbo-preview
# System Prompt Configuration (optional)
SYSTEM_PROMPT_FILE=./system_prompt.txt
# Maximum tokens to use for conversation history (optional, default: 3000)
MAX_HISTORY_TOKENS=3000
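A rough sketch of how these variables might be consumed at startup; the helper name `load_config` and the defaults shown are illustrative, not the bot's actual code:

```python
import os

def load_config(env=os.environ):
    """Read bot settings from environment variables, with assumed defaults."""
    return {
        "discord_token": env.get("DISCORD_TOKEN", ""),
        "litellm_api_key": env.get("LITELLM_API_KEY", ""),
        "litellm_api_base": env.get("LITELLM_API_BASE", "http://localhost:4000"),
        "model_name": env.get("MODEL_NAME", "gpt-4-turbo-preview"),
        "system_prompt_file": env.get("SYSTEM_PROMPT_FILE", "./system_prompt.txt"),
        "max_history_tokens": int(env.get("MAX_HISTORY_TOKENS", "3000")),
    }
```

Passing a plain dict instead of `os.environ` makes the loader easy to test without touching the real environment.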
System Prompt Customization
The bot's behavior is controlled by a system prompt file. Edit scripts/system_prompt.txt to customize how the bot responds:
You are a helpful AI assistant integrated into Discord. Users will interact with you by mentioning you or sending direct messages.
Key behaviors:
- Be concise and friendly in your responses
- Use Discord markdown formatting when helpful (code blocks, bold, italics, etc.)
- When users attach images, analyze them and provide relevant insights
...
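Loading the prompt file might look like the sketch below; the fallback string and function name are assumptions, not the bot's actual implementation:

```python
from pathlib import Path

# Assumed fallback when the prompt file is missing or empty.
DEFAULT_PROMPT = "You are a helpful AI assistant integrated into Discord."

def load_system_prompt(path: str) -> str:
    """Return the contents of the system prompt file, or a default."""
    p = Path(path)
    if p.is_file():
        text = p.read_text(encoding="utf-8").strip()
        if text:
            return text
    return DEFAULT_PROMPT
```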
Setting Up LiteLLM Proxy
Quick Setup (Local)
- Install LiteLLM:
pip install litellm
- Run the proxy:
litellm --model gpt-4-turbo-preview --api_key YOUR_OPENAI_KEY
# Or for local models:
litellm --model ollama/llama3.2-vision
Production Setup (Docker)
docker run -p 4000:4000 \
-e OPENAI_API_KEY=your_key \
ghcr.io/berriai/litellm:main-latest
For advanced configuration, create a litellm_config.yaml:
model_list:
- model_name: gpt-4-turbo
litellm_params:
model: gpt-4-turbo-preview
api_key: os.environ/OPENAI_API_KEY
- model_name: claude
litellm_params:
model: claude-3-sonnet-20240229
api_key: os.environ/ANTHROPIC_API_KEY
Then run:
litellm --config litellm_config.yaml
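Because the LiteLLM proxy exposes an OpenAI-compatible API, the bot can talk to it with the standard `openai` SDK. A minimal sketch of assembling the request payload (the `build_chat_request` helper is illustrative, not the bot's actual code):

```python
def build_chat_request(model, system_prompt, history, user_text):
    """Assemble an OpenAI-style chat.completions payload for the proxy.

    `history` is a list of prior {"role": ..., "content": ...} dicts.
    """
    messages = [{"role": "system", "content": system_prompt}]
    messages.extend(history)
    messages.append({"role": "user", "content": user_text})
    return {"model": model, "messages": messages}

# With the openai SDK pointed at the proxy, the call would look like:
#   client = openai.OpenAI(base_url="http://localhost:4000", api_key="sk-1234")
#   client.chat.completions.create(**build_chat_request(...))
```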
See LiteLLM documentation for more details.
Usage
Triggering the Bot
The bot responds to:
- @mentions in any channel where it has read access
- Direct messages (DMs)
Example:
User: @BotName what's the weather like?
Bot: I don't have access to real-time weather data, but I can help you with other questions!
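The trigger check can be sketched as follows. Discord renders mentions in message content as `<@id>` or `<@!id>`; in practice discord.py also exposes this via `message.mentions`. The helper below is illustrative:

```python
def should_respond(content: str, bot_id: int, is_dm: bool) -> bool:
    """Answer any DM, or any channel message that mentions the bot
    via Discord's <@id> / <@!id> mention syntax."""
    if is_dm:
        return True
    return f"<@{bot_id}>" in content or f"<@!{bot_id}>" in content
```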
Image Analysis
Attach images to your message (requires vision-capable model):
User: @BotName what's in this image? [image.png]
Bot: The image shows a beautiful sunset over the ocean with...
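Under the hood, attachments are typically forwarded as OpenAI-style multimodal content parts, which LiteLLM passes through to vision-capable models. A sketch of building such a message (helper name is illustrative):

```python
def build_image_message(text: str, image_urls: list) -> dict:
    """Build a user message mixing a text part with one
    image_url part per attached image."""
    parts = [{"type": "text", "text": text}]
    for url in image_urls:
        parts.append({"type": "image_url", "image_url": {"url": url}})
    return {"role": "user", "content": parts}
```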
Message History
The bot automatically maintains conversation context:
- Retrieves recent relevant messages from the channel
- Limits history based on token count (configurable via `MAX_HISTORY_TOKENS`)
- Only includes messages where the bot was mentioned, plus the bot's own responses
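Token-aware truncation can be sketched as keeping the most recent messages that fit under the budget, dropping the oldest first. The word-count estimator below is a stand-in for a real tokenizer such as tiktoken; the function is illustrative, not the bot's actual code:

```python
def truncate_history(messages, max_tokens,
                     count_tokens=lambda m: len(m["content"].split())):
    """Keep the newest messages whose combined token estimate stays
    under max_tokens; older messages are dropped first."""
    kept, total = [], 0
    for msg in reversed(messages):
        cost = count_tokens(msg)
        if total + cost > max_tokens:
            break
        kept.append(msg)
        total += cost
    return list(reversed(kept))
```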
Architecture Overview
Key Improvements from OpenWebUI Version
- LiteLLM Integration: Switched from OpenWebUI to LiteLLM for broader model support
- Proper Conversation Format: Messages use correct role attribution (system/user/assistant)
- Token-Aware History: Intelligent truncation to stay within model context limits
- Async Image Downloads: Uses `aiohttp` instead of synchronous `requests`
- File-Based System Prompts: Easy customization without code changes
- Better Error Handling: Improved error messages and validation
Project Structure
OpenWebUI-Discordbot/
├── scripts/
│ ├── discordbot.py # Main bot code (production)
│ ├── system_prompt.txt # System prompt configuration
│ ├── requirements.txt # Python dependencies
│ └── .env.sample # Environment variable template
├── v2/
│ └── bot.py # Development/experimental version
├── Dockerfile # Docker containerization
├── README.md # This file
└── claude.md # Development roadmap & upgrade notes
Upgrading from OpenWebUI
If you're upgrading from the previous OpenWebUI version:
- Update environment variables: Rename `OPENWEBUI_API_BASE` → `LITELLM_API_BASE` and `OPENAI_API_KEY` → `LITELLM_API_KEY`
- Set up LiteLLM proxy: Follow the setup instructions above
- Install new dependencies: Run `pip install -r requirements.txt`
- Optional: Customize `system_prompt.txt` for your use case
See claude.md for detailed upgrade documentation and future roadmap (MCP tools support, etc.).