Switch local AI from Ollama to bundled llama-server, add MIT license
- Replace Ollama dependency with bundled llama-server (llama.cpp) so users need no separate install for local AI inference
- Rust backend manages llama-server lifecycle (spawn, port, shutdown)
- Add MIT license for open source release
- Update architecture doc, CLAUDE.md, and README accordingly

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
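The second bullet is the core of the change: the Rust backend now owns the llama-server process end to end. Below is a minimal sketch of what spawn, port allocation, and shutdown could look like; the struct name, binary path handling, and the exact llama-server flags (`-m`, `--host`, `--port`) are assumptions for illustration, not the project's actual code.

```rust
// Hypothetical sketch: managing a bundled llama-server child process from the
// Rust backend. Names and flags are assumptions, not the project's real code.
use std::net::TcpListener;
use std::process::{Child, Command};

/// Owns the llama-server child process; killed on drop so the app
/// never leaves an orphaned server behind.
pub struct LlamaServer {
    child: Child,
    pub port: u16,
}

impl LlamaServer {
    /// Ask the OS for a free port by binding to port 0, then release it.
    fn free_port() -> std::io::Result<u16> {
        let listener = TcpListener::bind("127.0.0.1:0")?;
        Ok(listener.local_addr()?.port())
    }

    /// Spawn the bundled binary with a model file and the allocated port.
    pub fn spawn(binary: &str, model: &str) -> std::io::Result<Self> {
        let port = Self::free_port()?;
        let child = Command::new(binary)
            .args(["-m", model, "--host", "127.0.0.1", "--port", &port.to_string()])
            .spawn()?;
        Ok(Self { child, port })
    }
}

impl Drop for LlamaServer {
    fn drop(&mut self) {
        // Best-effort shutdown when the app exits or the server is replaced.
        let _ = self.child.kill();
        let _ = self.child.wait();
    }
}
```

Note that releasing the probe listener before spawning leaves a small race window on the port; a real implementation would retry on bind failure or confirm readiness before routing requests.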
@@ -7,7 +7,8 @@ Desktop app for transcribing audio/video with speaker identification. Runs local
 - **Desktop shell:** Tauri v2 (Rust backend + Svelte/TypeScript frontend)
 - **ML pipeline:** Python sidecar process (faster-whisper, pyannote.audio, wav2vec2)
 - **Database:** SQLite (via rusqlite in Rust)
-- **AI providers:** LiteLLM, OpenAI, Anthropic, Ollama (local)
+- **Local AI:** Bundled llama-server (llama.cpp) — default, no install needed
+- **Cloud AI providers:** LiteLLM, OpenAI, Anthropic (optional, user-configured)
 - **Caption export:** pysubs2 (Python)
 - **Audio UI:** wavesurfer.js
 - **Transcript editor:** TipTap (ProseMirror)
@@ -15,7 +16,9 @@ Desktop app for transcribing audio/video with speaker identification. Runs local
 
 ## Key Architecture Decisions
 - Python sidecar communicates with Rust via JSON-line IPC (stdin/stdout)
 - All ML models must work on CPU. GPU (CUDA) is optional acceleration.
-- AI cloud providers are optional. Local models (Ollama) are a first-class option.
+- AI cloud providers are optional. Bundled llama-server (llama.cpp) is the default local AI — no separate install needed.
+- Rust backend manages llama-server lifecycle (start/stop/port allocation).
+- Project is open source (MIT license).
 - SQLite database is per-project, stored alongside media files.
 - Word-level timestamps are required for click-to-seek playback sync.
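As context for the "default local AI" decision above: upstream llama-server exposes an OpenAI-compatible HTTP API, so existing provider code can be pointed at the locally managed port. A hypothetical sketch follows, assuming the `reqwest` and `serde_json` crates; the function name and request shape are illustrative, not taken from the project.

```rust
// Hypothetical sketch: calling the bundled llama-server over its
// OpenAI-compatible HTTP API. Assumes the reqwest and serde_json crates;
// the endpoint shape follows upstream llama.cpp, not project code.
use serde_json::json;

async fn local_chat(port: u16, prompt: &str) -> Result<String, reqwest::Error> {
    let body = json!({
        "messages": [{ "role": "user", "content": prompt }],
        "temperature": 0.2
    });
    let resp: serde_json::Value = reqwest::Client::new()
        .post(format!("http://127.0.0.1:{port}/v1/chat/completions"))
        .json(&body)
        .send()
        .await?
        .json()
        .await?;
    // Pull the first choice's message content out of the response JSON.
    Ok(resp["choices"][0]["message"]["content"]
        .as_str()
        .unwrap_or_default()
        .to_string())
}
```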