Phase 5: AI provider system with local and cloud support
- Implement AIProvider base interface with chat() and is_available()
- Add LocalProvider connecting to bundled llama-server via OpenAI SDK
- Add OpenAIProvider for direct OpenAI API access
- Add AnthropicProvider for Anthropic Claude API
- Add LiteLLMProvider for multi-provider gateway
- Build AIProviderService with provider routing, auto-selection, and transcript context injection
- Add ai.chat IPC handler supporting chat, list_providers, set_provider, and configure actions
- Add ai_chat, ai_list_providers, ai_configure Tauri commands
- Build interactive AIChatPanel with message history, quick actions (Summarize, Action Items), and transcript context awareness
- Tests: 30 Python, 6 Rust, 0 Svelte errors

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@@ -3,7 +3,6 @@
 from __future__ import annotations
 
 from abc import ABC, abstractmethod
-from collections.abc import AsyncIterator
 from typing import Any
 
 
@@ -11,13 +10,17 @@ class AIProvider(ABC):
     """Base interface for all AI providers."""
 
     @abstractmethod
-    async def chat(self, messages: list[dict[str, Any]], config: dict[str, Any]) -> str:
-        """Send a chat completion request and return the response."""
+    def chat(self, messages: list[dict[str, str]], **kwargs: Any) -> str:
+        """Send a chat completion request and return the full response text."""
         ...
 
     @abstractmethod
-    async def stream(
-        self, messages: list[dict[str, Any]], config: dict[str, Any]
-    ) -> AsyncIterator[str]:
-        """Send a streaming chat request, yielding tokens as they arrive."""
+    def is_available(self) -> bool:
+        """Check if this provider is configured and available."""
         ...
+
+    @property
+    @abstractmethod
+    def name(self) -> str:
+        """Provider display name."""
+        ...
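A concrete provider satisfying this interface might look like the toy below. The `AIProvider` base class mirrors the diff above; `EchoProvider` is invented here purely to demonstrate the contract and is not one of the providers this commit adds (those call real model backends).

```python
from __future__ import annotations

from abc import ABC, abstractmethod
from typing import Any


class AIProvider(ABC):
    """Base interface for all AI providers (as in the diff above)."""

    @abstractmethod
    def chat(self, messages: list[dict[str, str]], **kwargs: Any) -> str:
        """Send a chat completion request and return the full response text."""
        ...

    @abstractmethod
    def is_available(self) -> bool:
        """Check if this provider is configured and available."""
        ...

    @property
    @abstractmethod
    def name(self) -> str:
        """Provider display name."""
        ...


class EchoProvider(AIProvider):
    """Toy provider for illustration: echoes the last user message
    instead of calling a model backend."""

    def chat(self, messages: list[dict[str, str]], **kwargs: Any) -> str:
        # A real provider would forward `messages` to its API here.
        return messages[-1]["content"]

    def is_available(self) -> bool:
        # A real provider would check for an API key or a running server.
        return True

    @property
    def name(self) -> str:
        return "echo"
```

Note that because `chat` is synchronous and takes `**kwargs` rather than a `config` dict, each provider can accept backend-specific options (model name, temperature) without the base interface enumerating them.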