# LiteLLM Discord Bot

A Discord bot that interfaces with a LiteLLM proxy to provide AI-powered responses in your Discord server. It supports multiple LLM providers through LiteLLM, conversation history management, image analysis, and configurable system prompts.

## Features

- 🤖 **LiteLLM Integration**: Use any LLM provider supported by LiteLLM (OpenAI, Anthropic, Google, local models, etc.)
- 💬 **Conversation History**: Intelligent message history with token-aware truncation
- 🖼️ **Image Support**: Analyze images attached to messages (for vision-capable models)
- ⚙️ **Configurable System Prompts**: Customize bot behavior via file-based prompts
- 🔄 **Async Architecture**: Efficient async/await design for responsive interactions
- 🐳 **Docker Support**: Easy deployment with Docker

## Prerequisites

- **Python 3.11+** (for local development) or **Docker** (for containerized deployment)
- **Discord Bot Token** ([How to create one](https://www.writebots.com/discord-bot-token/))
- **LiteLLM Proxy** instance running ([LiteLLM setup guide](https://docs.litellm.ai/docs/proxy/quick_start))

## Quick Start

### Option 1: Running with Docker (Recommended)

1. Clone the repository:

   ```bash
   git clone <repository-url>
   cd OpenWebUI-Discordbot
   ```

2. Configure environment variables:

   ```bash
   cd scripts
   cp .env.sample .env
   # Edit .env with your actual values
   ```

3. Build and run with Docker (from the repository root):

   ```bash
   cd ..
   docker build -t discord-bot .
   docker run --env-file scripts/.env discord-bot
   ```

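For repeated deployments, the two Docker commands above can be captured in a Compose file. The sketch below is illustrative and not part of the repository; it assumes the `Dockerfile` at the repository root and the `scripts/.env` file created in step 2:

```yaml
# docker-compose.yml (hypothetical; build context and env_file path
# follow the manual docker commands in the steps above)
services:
  discord-bot:
    build: .
    env_file:
      - scripts/.env
    restart: unless-stopped
```

Then `docker compose up -d` replaces the manual build-and-run steps.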
### Option 2: Running Locally

1. Clone the repository and navigate to the `scripts` directory:

   ```bash
   git clone <repository-url>
   cd OpenWebUI-Discordbot/scripts
   ```

2. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

3. Copy and configure environment variables:

   ```bash
   cp .env.sample .env
   # Edit .env with your configuration
   ```

4. Run the bot:

   ```bash
   python discordbot.py
   ```

## Configuration

### Environment Variables

Create a `.env` file in the `scripts/` directory with the following variables:

```env
# Discord Bot Token - Get from https://discord.com/developers/applications
DISCORD_TOKEN=your_discord_bot_token

# LiteLLM API Configuration
LITELLM_API_KEY=sk-1234
LITELLM_API_BASE=http://localhost:4000

# Model name (any model supported by your LiteLLM proxy)
MODEL_NAME=gpt-4-turbo-preview

# System Prompt Configuration (optional)
SYSTEM_PROMPT_FILE=./system_prompt.txt

# Maximum tokens to use for conversation history (optional, default: 3000)
MAX_HISTORY_TOKENS=3000
```

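As a sketch of how these variables might be consumed at startup (the loader below is illustrative; `discordbot.py` may structure its configuration differently):

```python
import os

def load_config() -> dict:
    """Read the bot's settings from environment variables.

    Variable names and defaults match the README table above; this
    helper itself is a hypothetical example, not the bot's actual code.
    """
    return {
        "discord_token": os.environ["DISCORD_TOKEN"],  # required, no default
        "litellm_api_key": os.getenv("LITELLM_API_KEY", ""),
        "litellm_api_base": os.getenv("LITELLM_API_BASE", "http://localhost:4000"),
        "model_name": os.getenv("MODEL_NAME", "gpt-4-turbo-preview"),
        "system_prompt_file": os.getenv("SYSTEM_PROMPT_FILE", "./system_prompt.txt"),
        "max_history_tokens": int(os.getenv("MAX_HISTORY_TOKENS", "3000")),
    }
```

Reading the token with `os.environ[...]` rather than `os.getenv` makes a missing `DISCORD_TOKEN` fail fast with a `KeyError` instead of starting a bot that cannot log in.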
### System Prompt Customization

The bot's behavior is controlled by a system prompt file. Edit `scripts/system_prompt.txt` to customize how the bot responds:

```txt
You are a helpful AI assistant integrated into Discord. Users will interact with you by mentioning you or sending direct messages.

Key behaviors:
- Be concise and friendly in your responses
- Use Discord markdown formatting when helpful (code blocks, bold, italics, etc.)
- When users attach images, analyze them and provide relevant insights
...
```

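Since `SYSTEM_PROMPT_FILE` is optional, the prompt is presumably read with a fallback when the file is absent. A minimal sketch of that pattern (function name and default text are illustrative, not taken from the bot's source):

```python
from pathlib import Path

DEFAULT_PROMPT = "You are a helpful AI assistant integrated into Discord."

def load_system_prompt(path: str = "./system_prompt.txt") -> str:
    """Return the prompt file's contents, or a built-in default if missing."""
    p = Path(path)
    return p.read_text(encoding="utf-8").strip() if p.exists() else DEFAULT_PROMPT
```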
## Setting Up LiteLLM Proxy

### Quick Setup (Local)

1. Install LiteLLM:

   ```bash
   pip install litellm
   ```

2. Run the proxy:

   ```bash
   litellm --model gpt-4-turbo-preview --api_key YOUR_OPENAI_KEY
   # Or for local models:
   litellm --model ollama/llama3.2-vision
   ```

### Production Setup (Docker)

```bash
docker run -p 4000:4000 \
  -e OPENAI_API_KEY=your_key \
  ghcr.io/berriai/litellm:main-latest
```

For advanced configuration, create a `litellm_config.yaml`:

```yaml
model_list:
  - model_name: gpt-4-turbo
    litellm_params:
      model: gpt-4-turbo-preview
      api_key: os.environ/OPENAI_API_KEY
  - model_name: claude
    litellm_params:
      model: claude-3-sonnet-20240229
      api_key: os.environ/ANTHROPIC_API_KEY
```

Then run:

```bash
litellm --config litellm_config.yaml
```

See the [LiteLLM documentation](https://docs.litellm.ai/) for more details.

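The proxy speaks the OpenAI-compatible chat completions API, so you can smoke-test it independently of Discord. The helper below only builds the request (function name is illustrative; the base URL and the sample key `sk-1234` come from the configuration section above); uncomment the last line to actually send it against a running proxy:

```python
import json
import urllib.request

def build_chat_request(model: str, user_text: str,
                       base: str = "http://localhost:4000",
                       api_key: str = "sk-1234") -> urllib.request.Request:
    """Build a POST to the proxy's OpenAI-compatible chat completions endpoint."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_text}],
    }
    return urllib.request.Request(
        f"{base}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

# resp = urllib.request.urlopen(build_chat_request("gpt-4-turbo", "ping"))  # needs a running proxy
```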
## Usage

### Triggering the Bot

The bot responds to:

- **@mentions** in any channel where it has read access
- **Direct messages (DMs)**

Example:

```
User: @BotName what's the weather like?
Bot: I don't have access to real-time weather data, but I can help you with other questions!
```

### Image Analysis

Attach images to your message (requires a vision-capable model):

```
User: @BotName what's in this image? [image.png]
Bot: The image shows a beautiful sunset over the ocean with...
```

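The trigger rule above reduces to a small predicate. This is an illustrative sketch, not the bot's actual code; in a `discord.py` `on_message` handler it would be fed roughly `message.guild is None` (DM), `client.user in message.mentions`, and `message.author.bot`:

```python
def should_respond(is_dm: bool, bot_mentioned: bool, author_is_bot: bool) -> bool:
    """Reply to DMs and @mentions, but never to bots (including ourselves)."""
    if author_is_bot:
        return False  # avoids reply loops between bots
    return is_dm or bot_mentioned
```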
### Message History

The bot automatically maintains conversation context:

- Retrieves recent relevant messages from the channel
- Limits history by token count (configurable via `MAX_HISTORY_TOKENS`)
- Includes only messages that mention the bot, plus the bot's own responses

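The token-aware limiting described above can be sketched as follows. The ~4-characters-per-token estimate and function names are assumptions for illustration; the actual bot may count tokens differently (e.g. with `tiktoken`):

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    return max(1, len(text) // 4)

def truncate_history(messages: list[dict], max_tokens: int = 3000) -> list[dict]:
    """Keep the newest messages whose combined token estimate fits the budget.

    `messages` is oldest-first; we walk backwards from the newest so recent
    context survives truncation, then restore chronological order.
    """
    kept, used = [], 0
    for msg in reversed(messages):
        cost = estimate_tokens(msg["content"])
        if used + cost > max_tokens:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))
```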
## Architecture Overview

### Key Improvements from the OpenWebUI Version

1. **LiteLLM Integration**: Switched from OpenWebUI to LiteLLM for broader model support
2. **Proper Conversation Format**: Messages use correct role attribution (system/user/assistant)
3. **Token-Aware History**: Intelligent truncation to stay within model context limits
4. **Async Image Downloads**: Uses `aiohttp` instead of synchronous `requests`
5. **File-Based System Prompts**: Easy customization without code changes
6. **Better Error Handling**: Improved error messages and validation

### Project Structure

```
OpenWebUI-Discordbot/
├── scripts/
│   ├── discordbot.py       # Main bot code (production)
│   ├── system_prompt.txt   # System prompt configuration
│   ├── requirements.txt    # Python dependencies
│   └── .env.sample         # Environment variable template
├── v2/
│   └── bot.py              # Development/experimental version
├── Dockerfile              # Docker containerization
├── README.md               # This file
└── claude.md               # Development roadmap & upgrade notes
```

## Upgrading from OpenWebUI

If you're upgrading from the previous OpenWebUI version:

1. **Update environment variables**: Rename `OPENWEBUI_API_BASE` → `LITELLM_API_BASE` and `OPENAI_API_KEY` → `LITELLM_API_KEY`
2. **Set up the LiteLLM proxy**: Follow the setup instructions above
3. **Install new dependencies**: Run `pip install -r requirements.txt`
4. **Optional**: Customize `system_prompt.txt` for your use case

See `claude.md` for detailed upgrade documentation and the future roadmap (MCP tools support, etc.).