Cross-platform distribution, UI improvements, and performance optimizations
- PyInstaller frozen sidecar: spec file, build script, and ffmpeg path resolver for self-contained distribution without Python prerequisites
- Dual-mode sidecar launcher: frozen binary (production) with dev mode fallback
- Parallel transcription + diarization pipeline (~30-40% faster)
- GPU auto-detection for diarization (CUDA when available)
- Async run_pipeline command for real-time progress event delivery
- Web Audio API backend for instant playback and seeking
- OpenAI-compatible provider replacing LiteLLM client-side routing
- Cross-platform RAM detection (Linux/macOS/Windows)
- Settings: speaker count hint, token reveal toggles, dark dropdown styling
- Loading splash screen, flexbox layout fix for viewport overflow
- Gitea Actions CI/CD pipeline (Linux, Windows, macOS ARM)
- Updated README and CLAUDE.md documentation

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
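The parallel transcription + diarization pipeline can be sketched with `concurrent.futures`: the two stages are independent until merge, so they run concurrently. The stage functions below are hypothetical stand-ins, not the repo's implementations:

```python
from concurrent.futures import ThreadPoolExecutor


def _transcribe(path: str) -> list[dict]:
    # Stand-in for the real transcription stage (e.g. Whisper).
    return [{"start": 0.0, "end": 1.0, "text": "hello"}]


def _diarize(path: str) -> list[dict]:
    # Stand-in for the real diarization stage (e.g. pyannote).
    return [{"start": 0.0, "end": 1.0, "speaker": "SPEAKER_00"}]


def run_parallel(path: str) -> tuple[list[dict], list[dict]]:
    # Run both stages concurrently; the heavy work in each stage
    # (ffmpeg decode, model inference) releases the GIL, so threads
    # are enough to overlap them without multiprocessing overhead.
    with ThreadPoolExecutor(max_workers=2) as pool:
        transcript = pool.submit(_transcribe, path)
        turns = pool.submit(_diarize, path)
        return transcript.result(), turns.result()
```

With stages of similar duration, overlapping them roughly halves the wall-clock cost of this phase, consistent with the ~30-40% end-to-end speedup claimed above.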
@@ -260,10 +260,12 @@ def make_ai_chat_handler() -> HandlerFunc:
             model=config.get("model", "claude-sonnet-4-6"),
         ))
     elif provider_name == "litellm":
-        from voice_to_notes.providers.litellm_provider import LiteLLMProvider
-        service.register_provider("litellm", LiteLLMProvider(
+        from voice_to_notes.providers.litellm_provider import OpenAICompatibleProvider
+
+        service.register_provider("litellm", OpenAICompatibleProvider(
             model=config.get("model", "gpt-4o-mini"),
             api_key=config.get("api_key"),
             api_base=config.get("api_base"),
         ))
     return IPCMessage(
         id=msg.id,