Compare commits
2 commits: b5e98b9828 ... 28370e4b2a

Commits: 28370e4b2a, adcf47247c

README.md | 55 lines changed
@@ -1,3 +1,54 @@
-# OWUI-Mult-Agent-Processing
+# OWUI-Multi-Agent-Collaboration

-Use OWUI Functions to pass request and information to multiple Agents
+A powerful pipe function for Open WebUI that enables collaboration between multiple AI models as agents.

[TOC]
## Overview

OWUI-Multi-Agent-Collaboration (v0.5.3) lets you leverage the strengths of different language models by having them work together on the same prompt. The system functions as a pipeline: each agent processes the input sequentially, and a final operator model synthesizes all of the agent responses.
## Features

- **Multiple Agent Support**: Chain multiple language models together to work on the same task
- **Sequential Processing**: Each agent's response is added to the conversation context for the next agent
- **Operator Model**: A designated model synthesizes all agent responses to produce a final answer
- **Customizable Configuration**: Choose which models to use as agents and which to use as the operator
## How It Works

1. The user sends a prompt to the system
2. The prompt is processed sequentially by each model in the agent list
3. Each agent's response is appended to the conversation context
4. The operator model (Claude 3.5 Sonnet by default) receives the enriched context containing all agent responses
5. The operator synthesizes a final response that is returned to the user
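The steps above can be sketched as a minimal simulation. This is not the actual Open WebUI runtime: `fake_completion` is a stand-in for `generate_chat_completion`, and the model names are placeholders. It only illustrates how the conversation context grows as each agent's reply is appended before the operator sees it.

```python
def fake_completion(model: str, messages: list) -> str:
    # Stand-in for generate_chat_completion: reports what the model would see
    return f"reply from {model} to {len(messages)} messages"


def run_pipeline(prompt: str, agents: list, operator: str) -> str:
    messages = [{"role": "user", "content": prompt}]
    for agent in agents:
        response = fake_completion(agent, messages)
        # Each agent's answer becomes assistant context for the next agent
        messages.append(
            {
                "role": "assistant",
                "content": f"{response} \n (Provided by Agent: {agent})",
            }
        )
    # The operator synthesizes a final answer from the enriched context
    return fake_completion(operator, messages)


final = run_pipeline("Summarize X", ["model-a", "model-b"], "operator-model")
print(final)  # the operator sees 1 user message + 2 agent replies
```

With two agents, the operator receives three messages: the original prompt plus one appended reply per agent.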
## Installation

1. Ensure you have Open WebUI installed
2. Navigate to `Admin Panel` -> `Functions`
3. Click `Import Functions` and select the `multi-agent-collaboration.py` file
4. Enable the new Function
## Configuration

Users can customize their experience with these configurable options:

- `agent_list`: List of model IDs to use as processing agents
- `operator_model`: Model ID for the final operator (default: `us.anthropic.claude-3-5-sonnet-20241022-v2:0`)
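For illustration, a settings object might look like the sketch below. The two agent model IDs are placeholders, not real models — substitute IDs that are available in your own Open WebUI instance; only the operator default comes from the source.

```python
# Illustrative UserValves settings; "model-one" and "model-two" are
# placeholder agent IDs -- replace them with models from your instance.
example_valves = {
    "agent_list": ["model-one", "model-two"],  # agents run in list order
    "operator_model": "us.anthropic.claude-3-5-sonnet-20241022-v2:0",  # default
}
print(example_valves)
```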
## Example Use Cases

- **Research Analysis**: Have specialized models analyze different aspects of a research question
- **Creative Collaboration**: Use different models for idea generation, refinement, and evaluation
- **Decision Support**: Gather perspectives from different models before making a final recommendation
## Requirements

- An Open WebUI installation
- API access to the models you wish to use as agents, where required
## Limitations

- Each additional agent increases token usage and processing time
- Models must be available through your Open WebUI configuration
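The token-usage limitation can be made concrete with a back-of-envelope model. The figures here are assumptions, not measurements: if the user prompt is `p` tokens and every agent reply adds roughly `r` tokens of context, then agent `i` reads `p + i*r` input tokens, so total input tokens across `n` agents plus the operator grow quadratically with `n`.

```python
def total_input_tokens(n_agents: int, p: int, r: int) -> int:
    # Sum of input sizes for each agent call (i = 0..n-1)
    # plus the final operator call (i = n_agents)
    return sum(p + i * r for i in range(n_agents + 1))


# Example with assumed sizes: 500-token prompt, ~300-token agent replies
print(total_input_tokens(2, 500, 300))  # 500 + 800 + 1100 = 2400
```

Doubling the agent count therefore more than doubles total input tokens once replies dominate the prompt.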
multi-agent-collaboration.py | 53 lines (new file)

@@ -0,0 +1,53 @@
"""
title: Multi Agent Collaboration System for Open WebUI
Description: Allows for Multiple Models to act as Agents in collaboration
version: 0.5.3
"""

from pydantic import BaseModel, Field
from fastapi import Request
from typing import Optional

from open_webui.models.users import Users
from open_webui.utils.chat import generate_chat_completion


class Pipe:

    class UserValves(BaseModel):
        agent_list: list = Field(
            default=[], description="List of Models to process as agents"
        )
        operator_model: str = Field(
            default="us.anthropic.claude-3-5-sonnet-20241022-v2:0",
            description="Default Operator Model to use",
        )

    def __init__(self):
        pass

    async def pipe(self, body: dict, __user__: dict, __request__: Request) -> str:
        # Use the unified endpoint with the updated signature
        user = Users.get_user_by_id(__user__["id"])
        agents = __user__["valves"].agent_list
        operator_model = __user__["valves"].operator_model
        number_of_agents = len(agents)
        if number_of_agents > 0:
            # Process through each agent in the list
            for agent_model in agents:
                # Temporarily change the model to the agent model
                body["model"] = agent_model
                print(f"Model being used: {agent_model}")
                response = await generate_chat_completion(__request__, body, user)
                # Add the agent's response to the conversation context
                body["messages"].append(
                    {
                        "role": "assistant",
                        "content": f"{response} \n (Provided by Agent: {agent_model})",
                    }
                )
        # Set the operator model for final processing
        body["model"] = operator_model
        print(f"Model being used: {operator_model}")
        # print(f"Body Response: {body['messages']}")
        return await generate_chat_completion(__request__, body, user)