Phase 1 foundation: Tauri shell, Python sidecar, SQLite database
Tauri v2 + Svelte + TypeScript frontend:
- App shell with workspace layout (waveform, transcript, speakers, AI chat)
- Placeholder components for all major UI areas
- Typed stores (project, transcript, playback, AI)
- TypeScript interfaces matching the database schema
- Tauri bridge service with typed invoke wrappers
- svelte-check passes with 0 errors
Rust backend:
- Tauri v2 app entry point with command registration
- SQLite database layer (rusqlite with bundled SQLite)
- Full schema: projects, media_files, speakers, segments, words,
ai_outputs, annotations (with indexes)
- Model structs with serde serialization
- CRUD queries for projects, speakers, segments, words
- Segment text editing preserves original text
- Schema versioning for future migrations
- 6 tests passing
- Command stubs for project, transcribe, export, AI, settings, system
- App state management
Python sidecar:
- JSON-line IPC protocol (stdin/stdout; sketched below this message)
- Message types: IPCMessage, progress, error, ready
- Handler registry with routing and error handling
- Ping/pong handler for connectivity testing
- Service stubs: transcribe, diarize, pipeline, AI, export
- Provider stubs: local (llama-server), OpenAI, Anthropic, LiteLLM
- Hardware detection stubs
- 14 tests passing, ruff clean
Also adds:
- Testing strategy document (docs/TESTING.md)
- Validation script (scripts/validate.sh)
- Updated .gitignore for Svelte, Rust, Python artifacts
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 15:16:06 -08:00
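
The sidecar items above are only summarized in this message, so here is a minimal sketch, in the sidecar's Python, of a JSON-line IPC loop with a handler registry and a ping handler. The message fields ("type", "method", "id", "params") and the handler decorator are assumptions for illustration, not the sidecar's actual protocol.

# Hypothetical sketch only: field names and the registry API are assumed,
# not taken from the real sidecar.
from __future__ import annotations

import json
import sys
from typing import Any, Callable

Handler = Callable[[dict[str, Any]], dict[str, Any]]
HANDLERS: dict[str, Handler] = {}


def handler(name: str) -> Callable[[Handler], Handler]:
    """Register a function as the handler for one IPC method."""

    def register(fn: Handler) -> Handler:
        HANDLERS[name] = fn
        return fn

    return register


@handler("ping")
def ping(_params: dict[str, Any]) -> dict[str, Any]:
    # Connectivity test: the Rust side sends ping and expects pong back.
    return {"pong": True}


def main() -> None:
    # Announce readiness, then route one JSON object per stdin line,
    # converting handler failures into error messages instead of crashing.
    print(json.dumps({"type": "ready"}), flush=True)
    for line in sys.stdin:
        msg = json.loads(line)
        fn = HANDLERS.get(msg.get("method", ""))
        try:
            if fn is None:
                raise ValueError(f"unknown method: {msg.get('method')}")
            reply = {"type": "result", "id": msg.get("id"), "result": fn(msg.get("params", {}))}
        except Exception as exc:
            reply = {"type": "error", "id": msg.get("id"), "error": str(exc)}
        print(json.dumps(reply), flush=True)


if __name__ == "__main__":
    main()
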
Phase 5: AI provider system with local and cloud support
- Implement AIProvider base interface with chat() and is_available()
  (a sketch of the interface follows this message)
- Add LocalProvider connecting to bundled llama-server via OpenAI SDK
- Add OpenAIProvider for direct OpenAI API access
- Add AnthropicProvider for Anthropic Claude API
- Add LiteLLMProvider for multi-provider gateway
- Build AIProviderService with provider routing, auto-selection,
and transcript context injection
- Add ai.chat IPC handler supporting chat, list_providers, set_provider,
and configure actions
- Add ai_chat, ai_list_providers, ai_configure Tauri commands
- Build interactive AIChatPanel with message history, quick actions
(Summarize, Action Items), and transcript context awareness
- Tests: 30 Python, 6 Rust, 0 Svelte errors
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 16:25:10 -08:00
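
The LocalProvider module that follows imports AIProvider from voice_to_notes.providers.base, which is not part of this excerpt. A minimal sketch of that base interface, inferred from how LocalProvider uses it (chat(), is_available(), and a name property), might look like this; the real module may carry more:

# Inferred sketch of voice_to_notes/providers/base.py; the signatures are
# taken from LocalProvider's usage below, the rest is assumption.
from __future__ import annotations

from abc import ABC, abstractmethod
from typing import Any


class AIProvider(ABC):
    """Shared interface for the local, OpenAI, Anthropic, and LiteLLM providers."""

    @abstractmethod
    def chat(self, messages: list[dict[str, str]], **kwargs: Any) -> str:
        """Send chat messages and return the assistant's reply text."""

    @abstractmethod
    def is_available(self) -> bool:
        """Cheap health check used for provider auto-selection."""

    @property
    @abstractmethod
    def name(self) -> str:
        """Human-readable provider name."""
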
"""Local AI provider — bundled llama-server (OpenAI-compatible API)."""

from __future__ import annotations

from typing import Any

from voice_to_notes.providers.base import AIProvider


class LocalProvider(AIProvider):
    """Connects to bundled llama-server via its OpenAI-compatible API."""

    def __init__(self, base_url: str = "http://localhost:8080", model: str = "local") -> None:
        self._base_url = base_url.rstrip("/")
        self._model = model
        self._client: Any = None  # lazily constructed OpenAI client

    def _ensure_client(self) -> Any:
        if self._client is not None:
            return self._client

        try:
            from openai import OpenAI

            self._client = OpenAI(
                base_url=f"{self._base_url}/v1",
                api_key="not-needed",  # llama-server doesn't require an API key
            )
        except ImportError as exc:
            raise RuntimeError(
                "openai package is required for local AI. Install with: pip install openai"
            ) from exc
        return self._client

    def chat(self, messages: list[dict[str, str]], **kwargs: Any) -> str:
        client = self._ensure_client()
        response = client.chat.completions.create(
            model=self._model,
            messages=messages,
            temperature=kwargs.get("temperature", 0.7),
            max_tokens=kwargs.get("max_tokens", 2048),
        )
        return response.choices[0].message.content or ""

    def is_available(self) -> bool:
        try:
            import urllib.request

            req = urllib.request.Request(f"{self._base_url}/health", method="GET")
            with urllib.request.urlopen(req, timeout=2) as resp:
                return resp.status == 200
        except Exception:
            return False

    @property
    def name(self) -> str:
        return "Local (llama-server)"
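
To tie the pieces together, here is a hedged usage sketch of the auto-selection idea the Phase 5 message attributes to AIProviderService: probe is_available() down a preference-ordered list and use the first provider that answers. pick_provider is a made-up helper and the module path voice_to_notes.providers.local is assumed; the real service's routing and transcript context injection are not shown in this excerpt.

# Illustrative only: AIProviderService's actual routing and transcript
# context injection live elsewhere in the codebase.
from voice_to_notes.providers.base import AIProvider
from voice_to_notes.providers.local import LocalProvider


def pick_provider(providers: list[AIProvider]) -> AIProvider:
    """Return the first provider whose health check succeeds."""
    for provider in providers:
        if provider.is_available():
            return provider
    raise RuntimeError("no AI provider is currently reachable")


local = LocalProvider()  # is_available() probes http://localhost:8080/health
provider = pick_provider([local])
print(provider.name)
reply = provider.chat(
    [
        {"role": "system", "content": "You summarize meeting transcripts."},
        {"role": "user", "content": "Summarize the transcript in three bullets."},
    ],
    temperature=0.3,
)
print(reply)

Listing the local provider first keeps the default path offline-capable, with cloud providers available as fallbacks further down the list.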