# LiteLLM Discord Bot
A Discord bot that interfaces with LiteLLM proxy to provide AI-powered responses in your Discord server. Supports multiple LLM providers through LiteLLM, conversation history management, image analysis, and configurable system prompts.
## Features
- 🤖 **LiteLLM Integration**: Use any LLM provider supported by LiteLLM (OpenAI, Anthropic, Google, local models, etc.)
- 💬 **Conversation History**: Intelligent message history with token-aware truncation
- 🖼️ **Image Support**: Analyze images attached to messages (for vision-capable models)
- ⚙️ **Configurable System Prompts**: Customize bot behavior via file-based prompts
- 🔄 **Async Architecture**: Efficient async/await design for responsive interactions
- 🐳 **Docker Support**: Easy deployment with Docker
## Prerequisites
- **Python 3.11+** (for local development) or **Docker** (for containerized deployment)
- **Discord Bot Token** ([How to create one](https://www.writebots.com/discord-bot-token/))
- **LiteLLM Proxy** instance running ([LiteLLM setup guide](https://docs.litellm.ai/docs/proxy/quick_start))
## Quick Start
### Option 1: Running with Docker (Recommended)
1. Clone the repository:
```bash
git clone <repository-url>
cd OpenWebUI-Discordbot
```
2. Configure environment variables:
```bash
cd scripts
cp .env.sample .env
# Edit .env with your actual values
```
3. Build and run with Docker:
```bash
docker build -t discord-bot .
docker run --env-file scripts/.env discord-bot
```
### Option 2: Running Locally
1. Clone the repository and navigate to scripts directory:
```bash
git clone <repository-url>
cd OpenWebUI-Discordbot/scripts
```
2. Install dependencies:
```bash
pip install -r requirements.txt
```
3. Copy and configure environment variables:
```bash
cp .env.sample .env
# Edit .env with your configuration
```
4. Run the bot:
```bash
python discordbot.py
```
## Configuration
### Environment Variables
Create a `.env` file in the `scripts/` directory with the following variables:
```env
# Discord Bot Token - Get from https://discord.com/developers/applications
DISCORD_TOKEN=your_discord_bot_token
# LiteLLM API Configuration
LITELLM_API_KEY=sk-1234
LITELLM_API_BASE=http://localhost:4000
# Model name (any model supported by your LiteLLM proxy)
MODEL_NAME=gpt-4-turbo-preview
# System Prompt Configuration (optional)
SYSTEM_PROMPT_FILE=./system_prompt.txt
# Maximum tokens to use for conversation history (optional, default: 3000)
MAX_HISTORY_TOKENS=3000
```
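A minimal sketch of how these variables might be read at startup (names and defaults mirror the template above; the helper function `load_config` is illustrative, not part of the bot's actual API):

```python
import os

def load_config() -> dict:
    """Read bot configuration from environment variables, with sample defaults."""
    return {
        "discord_token": os.environ.get("DISCORD_TOKEN", ""),
        "api_key": os.environ.get("LITELLM_API_KEY", "sk-1234"),
        "api_base": os.environ.get("LITELLM_API_BASE", "http://localhost:4000"),
        "model": os.environ.get("MODEL_NAME", "gpt-4-turbo-preview"),
        "system_prompt_file": os.environ.get("SYSTEM_PROMPT_FILE", "./system_prompt.txt"),
        # Env vars are strings, so numeric settings need explicit conversion
        "max_history_tokens": int(os.environ.get("MAX_HISTORY_TOKENS", "3000")),
    }
```

In practice a package such as `python-dotenv` can populate the process environment from the `.env` file before this code runs.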
### System Prompt Customization
The bot's behavior is controlled by a system prompt file. Edit `scripts/system_prompt.txt` to customize how the bot responds:
```txt
You are a helpful AI assistant integrated into Discord. Users will interact with you by mentioning you or sending direct messages.
Key behaviors:
- Be concise and friendly in your responses
- Use Discord markdown formatting when helpful (code blocks, bold, italics, etc.)
- When users attach images, analyze them and provide relevant insights
...
```
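Loading such a prompt file at startup can be sketched as follows (the fallback text and function name are illustrative assumptions, not the bot's exact implementation):

```python
from pathlib import Path

DEFAULT_PROMPT = "You are a helpful AI assistant integrated into Discord."

def load_system_prompt(path: str = "./system_prompt.txt") -> str:
    """Read the system prompt file, falling back to a default if it is missing or empty."""
    try:
        text = Path(path).read_text(encoding="utf-8").strip()
        return text or DEFAULT_PROMPT
    except FileNotFoundError:
        return DEFAULT_PROMPT
```

Reading the file once at startup keeps edits simple: restart the bot after changing `system_prompt.txt` to pick up the new behavior.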
## Setting Up LiteLLM Proxy
### Quick Setup (Local)
1. Install LiteLLM:
```bash
pip install litellm
```
2. Run the proxy:
```bash
litellm --model gpt-4-turbo-preview --api_key YOUR_OPENAI_KEY
# Or for local models:
litellm --model ollama/llama3.2-vision
```
### Production Setup (Docker)
```bash
docker run -p 4000:4000 \
  -e OPENAI_API_KEY=your_key \
  ghcr.io/berriai/litellm:main-latest
```
For advanced configuration, create a `litellm_config.yaml`:
```yaml
model_list:
- model_name: gpt-4-turbo
litellm_params:
model: gpt-4-turbo-preview
api_key: os.environ/OPENAI_API_KEY
- model_name: claude
litellm_params:
model: claude-3-sonnet-20240229
api_key: os.environ/ANTHROPIC_API_KEY
```
Then run:
```bash
litellm --config litellm_config.yaml
```
See the [LiteLLM documentation](https://docs.litellm.ai/) for more details.
## Usage
### Triggering the Bot
The bot responds to:
- **@mentions** in any channel where it has read access
- **Direct messages (DMs)**
Example:
```
User: @BotName what's the weather like?
Bot: I don't have access to real-time weather data, but I can help you with other questions!
```
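The trigger logic above boils down to a simple check, sketched here as a pure helper (the function name is hypothetical; in `discord.py` the three flags would come from `isinstance(message.channel, discord.DMChannel)`, `bot.user in message.mentions`, and `message.author.bot`):

```python
def should_respond(is_dm: bool, bot_mentioned: bool, author_is_bot: bool) -> bool:
    """Decide whether the bot should answer a given message."""
    # Ignore messages from bots (including our own) to avoid reply loops
    if author_is_bot:
        return False
    # Respond to direct messages and to channel messages that mention the bot
    return is_dm or bot_mentioned
```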
### Image Analysis
Attach images to your message (requires vision-capable model):
```
User: @BotName what's in this image? [image.png]
Bot: The image shows a beautiful sunset over the ocean with...
```
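For vision-capable models, attached images are typically forwarded as an OpenAI-style multimodal message, which LiteLLM accepts for compatible providers. A sketch (the helper name is illustrative; the message shape follows the OpenAI chat-completions format):

```python
def build_vision_message(text: str, image_urls: list[str]) -> dict:
    """Build an OpenAI-style multimodal user message combining text and image URLs."""
    content = [{"type": "text", "text": text}]
    for url in image_urls:
        content.append({"type": "image_url", "image_url": {"url": url}})
    return {"role": "user", "content": content}
```

In the bot, `image_urls` would come from `message.attachments`, with each attachment's URL (or downloaded bytes encoded as a data URL) passed through.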
### Message History
The bot automatically maintains conversation context:
- Retrieves recent relevant messages from the channel
- Limits history based on token count (configurable via `MAX_HISTORY_TOKENS`)
- Only includes messages that mention the bot, plus the bot's own responses
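Token-aware truncation can be sketched as follows, walking history from newest to oldest and stopping once the budget is spent (the character-based token estimate and function names are illustrative assumptions; a real implementation might use `tiktoken` for exact counts):

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token; swap in a real tokenizer for accuracy
    return max(1, len(text) // 4)

def truncate_history(messages: list[dict], max_tokens: int = 3000) -> list[dict]:
    """Keep the most recent messages whose combined estimated size fits the budget."""
    kept, total = [], 0
    for msg in reversed(messages):  # iterate newest first
        cost = estimate_tokens(msg["content"])
        if total + cost > max_tokens:
            break
        kept.append(msg)
        total += cost
    return list(reversed(kept))  # restore chronological order
```

Dropping the oldest messages first preserves the immediate context of the conversation while keeping requests within the model's context window.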
## Architecture Overview
### Key Improvements from OpenWebUI Version
1. **LiteLLM Integration**: Switched from OpenWebUI to LiteLLM for broader model support
2. **Proper Conversation Format**: Messages use correct role attribution (system/user/assistant)
3. **Token-Aware History**: Intelligent truncation to stay within model context limits
4. **Async Image Downloads**: Uses `aiohttp` instead of synchronous `requests`
5. **File-Based System Prompts**: Easy customization without code changes
6. **Better Error Handling**: Improved error messages and validation
### Project Structure
```
OpenWebUI-Discordbot/
├── scripts/
│   ├── discordbot.py       # Main bot code (production)
│   ├── system_prompt.txt   # System prompt configuration
│   ├── requirements.txt    # Python dependencies
│   └── .env.sample         # Environment variable template
├── v2/
│   └── bot.py              # Development/experimental version
├── Dockerfile              # Docker containerization
├── README.md               # This file
└── claude.md               # Development roadmap & upgrade notes
```
## Upgrading from OpenWebUI
If you're upgrading from the previous OpenWebUI version:
1. **Update environment variables**: Rename `OPENWEBUI_API_BASE` → `LITELLM_API_BASE` and `OPENAI_API_KEY` → `LITELLM_API_KEY`
2. **Set up LiteLLM proxy**: Follow the setup instructions above
3. **Install new dependencies**: Run `pip install -r requirements.txt`
4. **Optional**: Customize `system_prompt.txt` for your use case
See `claude.md` for detailed upgrade documentation and future roadmap (MCP tools support, etc.).