Add MCP tools integration for Discord bot
All checks were successful
OpenWebUI Discord Bot / Build-and-Push (push) Successful in 1m2s
Major improvements to the LiteLLM Discord bot with MCP (Model Context Protocol) tools support.

Features added:
- MCP tools discovery and integration with the LiteLLM proxy
- Fetch and convert 40+ GitHub MCP tools to OpenAI format
- Tool-calling flow with placeholder execution (pending MCP endpoint confirmation)
- Dynamic tool injection based on LiteLLM MCP server configuration
- Enhanced system prompt with tool-usage guidance
- Added ENABLE_TOOLS environment variable for an easy toggle
- Comprehensive debug logging for troubleshooting

Technical changes:
- Added httpx>=0.25.0 dependency for async MCP API calls
- Implemented get_available_mcp_tools() to query the /v1/mcp/server and /v1/mcp/tools endpoints
- Convert MCP tool schemas to the OpenAI function-calling format
- Detect and handle tool_calls in model responses
- Added system_prompt.txt for customizable bot behavior
- Updated README with better documentation and setup instructions
- Created claude.md with detailed development notes and an upgrade roadmap

Configuration:
- New ENABLE_TOOLS flag in .env to control MCP integration
- DEBUG_LOGGING for detailed execution logs
- System prompt file support for easy customization

Known limitations:
- Tool execution currently uses placeholders (the MCP execution endpoint needs verification)
- Limited to 50 tools to avoid overwhelming the model
- Requires a LiteLLM proxy with an MCP server configured

Next steps:
- Verify the correct LiteLLM MCP tool execution endpoint
- Implement actual tool execution via the MCP proxy
- Test end-to-end GitHub operations through Discord

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
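The schema conversion and tool cap described above could look roughly like this. This is a minimal sketch, not the commit's actual code: `mcp_tool_to_openai` and `convert_tools` are hypothetical names, and the MCP fields (`name`, `description`, `inputSchema`) are assumed from the MCP tool schema.

```python
# Hypothetical sketch of the MCP -> OpenAI conversion described above.
# Assumes each MCP tool exposes `name`, `description`, and a JSON-Schema
# `inputSchema`, which maps onto OpenAI's `parameters` field.

MAX_TOOLS = 50  # cap mirrors the "limited to 50 tools" note above


def mcp_tool_to_openai(tool: dict) -> dict:
    """Convert one MCP tool schema to OpenAI function-calling format."""
    return {
        "type": "function",
        "function": {
            "name": tool["name"],
            "description": tool.get("description", ""),
            "parameters": tool.get(
                "inputSchema", {"type": "object", "properties": {}}
            ),
        },
    }


def convert_tools(mcp_tools: list[dict]) -> list[dict]:
    """Convert a list of MCP tools, truncated to MAX_TOOLS entries."""
    return [mcp_tool_to_openai(t) for t in mcp_tools[:MAX_TOOLS]]
```

The converted list would then be passed as the `tools` parameter on chat-completion requests so the model can emit `tool_calls`.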
@@ -1,4 +1,24 @@
+# Discord Bot Token - Get from https://discord.com/developers/applications
 DISCORD_TOKEN=your_discord_bot_token
-OPENAI_API_KEY=your_openai_api_key
-OPENWEBUI_API_BASE=http://your.api.endpoint/v1
-MODEL_NAME="Your_Model_Name"
+
+# LiteLLM API Configuration
+LITELLM_API_KEY=sk-1234
+LITELLM_API_BASE=http://localhost:4000
+
+# Model name (any model supported by your LiteLLM proxy)
+MODEL_NAME=gpt-4-turbo-preview
+
+# System Prompt Configuration (optional)
+SYSTEM_PROMPT_FILE=./system_prompt.txt
+
+# Maximum tokens to use for conversation history (optional, default: 3000)
+MAX_HISTORY_TOKENS=3000
+
+# Enable debug logging (optional, default: false)
+# Set to 'true' to see detailed logs for troubleshooting
+DEBUG_LOGGING=false
+
+# Enable MCP tools integration (optional, default: false)
+# Set to 'true' to allow the bot to use tools configured in your LiteLLM proxy
+# Tools are auto-executed without user confirmation
+ENABLE_TOOLS=false
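At startup, the bot would read these variables along these lines. A hedged sketch: the variable names and defaults match the env file above, but the `env_flag` helper is a hypothetical illustration, not the commit's actual code.

```python
import os


def env_flag(name: str, default: str = "false") -> bool:
    """Parse a boolean .env-style flag; 'true' (any case) enables it."""
    return os.getenv(name, default).strip().lower() == "true"


# Defaults mirror the comments in the env example above.
ENABLE_TOOLS = env_flag("ENABLE_TOOLS")        # MCP tools off by default
DEBUG_LOGGING = env_flag("DEBUG_LOGGING")      # verbose logs off by default
MAX_HISTORY_TOKENS = int(os.getenv("MAX_HISTORY_TOKENS", "3000"))
SYSTEM_PROMPT_FILE = os.getenv("SYSTEM_PROMPT_FILE", "./system_prompt.txt")
```

Treating unset flags as `false` keeps the MCP integration opt-in, which matches the "default: false" notes in the file.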