Josh Knapp dc95e5ac55
Fix MCP tool execution to use proper JSON-RPC 2.0 format
The /mcp/call_tool endpoint expects JSON-RPC 2.0 format requests.
Updated to send proper RPC structure and parse RPC responses.

Request format:
{
  "jsonrpc": "2.0",
  "method": "tools/call",
  "params": {
    "name": "tool_name",
    "arguments": {...}
  },
  "id": 1
}

Response parsing updated to extract result from JSON-RPC envelope:
result.result.content[0].text

This fixes the 400 validation error:
"Field required: JSONRPCRequest.method, jsonrpc, id"


LiteLLM Discord Bot

A Discord bot that interfaces with LiteLLM proxy to provide AI-powered responses in your Discord server. Supports multiple LLM providers through LiteLLM, conversation history management, image analysis, and configurable system prompts.

Features

  • 🤖 LiteLLM Integration: Use any LLM provider supported by LiteLLM (OpenAI, Anthropic, Google, local models, etc.)
  • 💬 Conversation History: Intelligent message history with token-aware truncation
  • 🖼️ Image Support: Analyze images attached to messages (for vision-capable models)
  • ⚙️ Configurable System Prompts: Customize bot behavior via file-based prompts
  • 🔄 Async Architecture: Efficient async/await design for responsive interactions
  • 🐳 Docker Support: Easy deployment with Docker

Prerequisites

  • A Discord bot token (create one at https://discord.com/developers/applications)
  • A running LiteLLM proxy (see "Setting Up LiteLLM Proxy" below)
  • Docker, or Python 3 with pip, to run the bot

Quick Start

Option 1: Running with Docker

  1. Clone the repository:
git clone <repository-url>
cd OpenWebUI-Discordbot
  2. Configure environment variables:
cd scripts
cp .env.sample .env
# Edit .env with your actual values
  3. Build and run with Docker:
docker build -t discord-bot .
docker run --env-file scripts/.env discord-bot

Option 2: Running Locally

  1. Clone the repository and navigate to the scripts directory:
git clone <repository-url>
cd OpenWebUI-Discordbot/scripts
  2. Install dependencies:
pip install -r requirements.txt
  3. Copy and configure environment variables:
cp .env.sample .env
# Edit .env with your configuration
  4. Run the bot:
python discordbot.py

Configuration

Environment Variables

Create a .env file in the scripts/ directory with the following variables:

# Discord Bot Token - Get from https://discord.com/developers/applications
DISCORD_TOKEN=your_discord_bot_token

# LiteLLM API Configuration
LITELLM_API_KEY=sk-1234
LITELLM_API_BASE=http://localhost:4000

# Model name (any model supported by your LiteLLM proxy)
MODEL_NAME=gpt-4-turbo-preview

# System Prompt Configuration (optional)
SYSTEM_PROMPT_FILE=./system_prompt.txt

# Maximum tokens to use for conversation history (optional, default: 3000)
MAX_HISTORY_TOKENS=3000
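
For orientation, here is a minimal sketch of how these settings might be read at startup. The variable names match the sample above; the fallback values are assumptions, except MAX_HISTORY_TOKENS, whose default of 3000 is stated above. Docker's --env-file flag (used in the Quick Start) injects the .env values into the environment.

import os

# Required settings: fail fast if they are missing.
DISCORD_TOKEN = os.environ["DISCORD_TOKEN"]
LITELLM_API_KEY = os.environ["LITELLM_API_KEY"]

# Optional settings with fallbacks.
LITELLM_API_BASE = os.getenv("LITELLM_API_BASE", "http://localhost:4000")
MODEL_NAME = os.getenv("MODEL_NAME", "gpt-4-turbo-preview")
SYSTEM_PROMPT_FILE = os.getenv("SYSTEM_PROMPT_FILE", "./system_prompt.txt")
MAX_HISTORY_TOKENS = int(os.getenv("MAX_HISTORY_TOKENS", "3000"))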

System Prompt Customization

The bot's behavior is controlled by a system prompt file. Edit scripts/system_prompt.txt to customize how the bot responds:

You are a helpful AI assistant integrated into Discord. Users will interact with you by mentioning you or sending direct messages.

Key behaviors:
- Be concise and friendly in your responses
- Use Discord markdown formatting when helpful (code blocks, bold, italics, etc.)
- When users attach images, analyze them and provide relevant insights
...
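
A small sketch of how such a file might be loaded, with a fallback so the bot can still start if the file is missing. The function name and fallback text are illustrative, not the bot's actual code.

from pathlib import Path

def load_system_prompt(path: str) -> str:
    # Use the file's contents if it exists; otherwise fall back to a generic prompt.
    prompt_file = Path(path)
    if prompt_file.is_file():
        return prompt_file.read_text(encoding="utf-8").strip()
    return "You are a helpful AI assistant integrated into Discord."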

Setting Up LiteLLM Proxy

Quick Setup (Local)

  1. Install LiteLLM:
pip install litellm
  2. Run the proxy:
litellm --model gpt-4-turbo-preview --api_key YOUR_OPENAI_KEY
# Or for local models:
litellm --model ollama/llama3.2-vision

Production Setup (Docker)

docker run -p 4000:4000 \
  -e OPENAI_API_KEY=your_key \
  ghcr.io/berriai/litellm:main-latest

For advanced configuration, create a litellm_config.yaml:

model_list:
  - model_name: gpt-4-turbo
    litellm_params:
      model: gpt-4-turbo-preview
      api_key: os.environ/OPENAI_API_KEY
  - model_name: claude
    litellm_params:
      model: claude-3-sonnet-20240229
      api_key: os.environ/ANTHROPIC_API_KEY

Then run:

litellm --config litellm_config.yaml

See the LiteLLM documentation for more details.
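
Because the LiteLLM proxy exposes an OpenAI-compatible API, the bot can talk to it with a standard OpenAI client pointed at LITELLM_API_BASE. A minimal sketch under that assumption (the client library choice and function name are illustrative; the environment variables are the ones described in Configuration):

import os
from openai import AsyncOpenAI

# Point a standard OpenAI client at the LiteLLM proxy.
client = AsyncOpenAI(
    api_key=os.environ["LITELLM_API_KEY"],
    base_url=os.getenv("LITELLM_API_BASE", "http://localhost:4000"),
)

async def ask_model(messages: list[dict]) -> str:
    response = await client.chat.completions.create(
        model=os.getenv("MODEL_NAME", "gpt-4-turbo-preview"),
        messages=messages,
    )
    return response.choices[0].message.content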

Usage

Triggering the Bot

The bot responds to:

  • @mentions in any channel where it has read access
  • Direct messages (DMs)

Example:

User: @BotName what's the weather like?
Bot: I don't have access to real-time weather data, but I can help you with other questions!
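
A rough sketch of that trigger check with discord.py; the helper name and structure are illustrative, not the bot's exact implementation:

import discord

def should_respond(bot_user: discord.ClientUser, message: discord.Message) -> bool:
    # Ignore messages from bots, including our own replies.
    if message.author.bot:
        return False
    # Respond to direct messages or to messages that mention the bot.
    is_dm = message.guild is None
    return is_dm or bot_user.mentioned_in(message)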

Image Analysis

Attach images to your message (requires a vision-capable model):

User: @BotName what's in this image? [image.png]
Bot: The image shows a beautiful sunset over the ocean with...
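
Under the hood, image attachments have to be converted into the content format vision models expect. Here is one way to do it, consistent with the README's note that images are downloaded asynchronously with aiohttp; the helper name, base64 data-URL approach, and message shape are assumptions, not the bot's confirmed implementation.

import base64
import aiohttp

async def attachment_to_image_part(session: aiohttp.ClientSession, url: str, content_type: str) -> dict:
    # Download the attachment asynchronously and embed it as a base64 data URL
    # in an OpenAI-style "image_url" content part.
    async with session.get(url) as resp:
        resp.raise_for_status()
        data = await resp.read()
    encoded = base64.b64encode(data).decode("ascii")
    return {
        "type": "image_url",
        "image_url": {"url": f"data:{content_type};base64,{encoded}"},
    }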

Message History

The bot automatically maintains conversation context:

  • Retrieves recent relevant messages from the channel
  • Limits history based on token count (configurable via MAX_HISTORY_TOKENS); see the sketch after this list
  • Only includes messages that mentioned the bot, plus the bot's own responses
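
A rough sketch of the truncation described above. The character-based token estimate is a stand-in (the actual bot may use a real tokenizer), and the function name is illustrative.

def truncate_history(messages: list[dict], max_tokens: int = 3000) -> list[dict]:
    def estimate_tokens(message: dict) -> int:
        # Crude estimate: roughly four characters per token of English text.
        return max(1, len(str(message.get("content", ""))) // 4)

    kept: list[dict] = []
    total = 0
    for message in reversed(messages):  # walk from newest to oldest
        cost = estimate_tokens(message)
        if total + cost > max_tokens:
            break
        kept.append(message)
        total += cost
    return list(reversed(kept))  # restore chronological order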

Architecture Overview

Key Improvements from OpenWebUI Version

  1. LiteLLM Integration: Switched from OpenWebUI to LiteLLM for broader model support
  2. Proper Conversation Format: Messages use correct role attribution (system/user/assistant)
  3. Token-Aware History: Intelligent truncation to stay within model context limits
  4. Async Image Downloads: Uses aiohttp instead of synchronous requests
  5. File-Based System Prompts: Easy customization without code changes
  6. Better Error Handling: Improved error messages and validation

Project Structure

OpenWebUI-Discordbot/
├── scripts/
│   ├── discordbot.py          # Main bot code (production)
│   ├── system_prompt.txt      # System prompt configuration
│   ├── requirements.txt       # Python dependencies
│   └── .env.sample           # Environment variable template
├── v2/
│   └── bot.py                # Development/experimental version
├── Dockerfile                # Docker containerization
├── README.md                 # This file
└── claude.md                 # Development roadmap & upgrade notes

Upgrading from OpenWebUI

If you're upgrading from the previous OpenWebUI version:

  1. Update environment variables: Rename OPENWEBUI_API_BASE → LITELLM_API_BASE and OPENAI_API_KEY → LITELLM_API_KEY
  2. Set up LiteLLM proxy: Follow setup instructions above
  3. Install new dependencies: Run pip install -r requirements.txt
  4. Optional: Customize system_prompt.txt for your use case

See claude.md for detailed upgrade documentation and future roadmap (MCP tools support, etc.).
