- Implement LlamaManager in Rust to manage the llama-server lifecycle: spawn with
port allocation, health check, clean shutdown on Drop, model listing
- Add llama_start/stop/status/list_models Tauri commands (see the lifecycle sketch below)
- Add load_settings/save_settings commands with JSON persistence (see the settings
sketch below)
- Build SettingsModal with tabs for Transcription, AI Provider, and Local AI
settings (model size, device, language, API keys, provider selection)
- Wire settings into pipeline calls (model, device, language, skip diarization)
- Configure Tauri packaging: asset protocol for local audio files,
CSP policy, bundle metadata, Linux .deb/.AppImage and Windows .msi config
- Add keyboard shortcuts: Space (play/pause), Ctrl+O (import),
Ctrl+, (settings), Escape (close menus/modals)
- Close export dropdown on outside click
- Tests: 30 Python, 6 Rust, 0 Svelte errors
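
For reference, a minimal sketch of the lifecycle pattern described above; the
server flags, the TCP-connect health probe (a stand-in for polling the server's
HTTP health endpoint), the .gguf directory scan, and the managed-state shape are
illustrative assumptions rather than the actual code:

```rust
use std::net::{SocketAddr, TcpListener, TcpStream};
use std::path::{Path, PathBuf};
use std::process::{Child, Command};
use std::time::Duration;

/// Owns a llama-server child process; the port is chosen at spawn time
/// and the process is killed when the manager is dropped.
pub struct LlamaManager {
    child: Child,
    pub port: u16,
}

impl LlamaManager {
    /// Ask the OS for a free port by binding port 0, then hand it to
    /// llama-server (flag names assumed from llama.cpp's server binary).
    pub fn spawn(server_bin: &Path, model: &Path) -> std::io::Result<Self> {
        let port = TcpListener::bind("127.0.0.1:0")?.local_addr()?.port();
        let child = Command::new(server_bin)
            .arg("-m").arg(model)
            .arg("--port").arg(port.to_string())
            .spawn()?;
        Ok(Self { child, port })
    }

    /// Cheap liveness probe: can we open a TCP connection to the port?
    /// The real implementation would poll the server's HTTP health endpoint.
    pub fn is_healthy(&self) -> bool {
        let addr: SocketAddr = format!("127.0.0.1:{}", self.port).parse().unwrap();
        TcpStream::connect_timeout(&addr, Duration::from_millis(300)).is_ok()
    }

    /// Model listing: *.gguf files found in the models directory.
    pub fn list_models(dir: &Path) -> Vec<PathBuf> {
        std::fs::read_dir(dir)
            .map(|entries| {
                entries
                    .filter_map(Result::ok)
                    .map(|e| e.path())
                    .filter(|p| p.extension().map_or(false, |ext| ext == "gguf"))
                    .collect()
            })
            .unwrap_or_default()
    }
}

impl Drop for LlamaManager {
    fn drop(&mut self) {
        // Clean shutdown when the manager (or the whole app) goes away.
        let _ = self.child.kill();
        let _ = self.child.wait();
    }
}

/// One of the Tauri commands; llama_start/stop/list_models follow the same pattern.
#[tauri::command]
fn llama_status(state: tauri::State<'_, std::sync::Mutex<Option<LlamaManager>>>) -> bool {
    state.lock().unwrap().as_ref().map_or(false, |m| m.is_healthy())
}
```

Tying shutdown to Drop keeps the bundled server from outliving the app even
when no explicit llama_stop call is made.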
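
The settings commands are plain JSON round-trips; a rough sketch with
placeholder field names and file location (the real path would come from the
app's config directory):

```rust
use serde::{Deserialize, Serialize};

// Field names are placeholders mirroring the settings listed above.
#[derive(Serialize, Deserialize, Default)]
pub struct Settings {
    pub model_size: String,
    pub device: String,
    pub language: String,
    pub provider: String,
    pub api_key: Option<String>,
    pub skip_diarization: bool,
}

#[tauri::command]
fn load_settings() -> Result<Settings, String> {
    // In practice the path is resolved from the app's config directory.
    match std::fs::read_to_string("settings.json") {
        Ok(text) => serde_json::from_str(&text).map_err(|e| e.to_string()),
        Err(_) => Ok(Settings::default()), // first run: no file yet
    }
}

#[tauri::command]
fn save_settings(settings: Settings) -> Result<(), String> {
    let json = serde_json::to_string_pretty(&settings).map_err(|e| e.to_string())?;
    std::fs::write("settings.json", json).map_err(|e| e.to_string())
}
```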
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Implement ExportService using pysubs2 for caption formats (SRT, VTT, ASS)
and custom formatters for plain text and Markdown
- SRT exports with a [Speaker]: prefix, WebVTT with <v Speaker> voice tags,
ASS with color-coded speaker styles (see the samples below)
- Plain text groups by speaker with labels, Markdown adds timestamps
- Add export.start IPC handler and export_transcript Tauri command (sketched below)
- Add export dropdown menu in header (appears after transcription)
- Use the native save dialog for output file selection
- Add pysubs2 dependency
- Tests: 30 Python (6 export tests), 6 Rust, 0 Svelte errors
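
For reference, the speaker conventions above produce output along these lines
(timestamps and speaker labels are invented for illustration). SRT:

```
1
00:00:01,000 --> 00:00:03,200
[SPEAKER_00]: Hello and welcome.
```

WebVTT:

```
WEBVTT

00:00:01.000 --> 00:00:03.200
<v SPEAKER_00>Hello and welcome.
```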
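
A rough sketch of the Rust half of this flow; SidecarClient and the
export.start payload are hypothetical stand-ins for the app's actual sidecar
transport:

```rust
use serde_json::{json, Value};

/// Hypothetical handle to the Python sidecar; the real transport
/// (e.g. JSON over stdin/stdout) is not shown here.
pub struct SidecarClient;

impl SidecarClient {
    pub async fn request(&self, _method: &str, _params: Value) -> Result<Value, String> {
        unimplemented!("sidecar transport omitted from this sketch")
    }
}

/// Forward the export request to the Python side, which does the actual
/// formatting (pysubs2 for captions, custom formatters for text/Markdown).
#[tauri::command]
async fn export_transcript(
    format: String,      // "srt" | "vtt" | "ass" | "txt" | "md"
    output_path: String, // chosen by the user in the native save dialog
    sidecar: tauri::State<'_, SidecarClient>,
) -> Result<(), String> {
    sidecar
        .request("export.start", json!({ "format": format, "output_path": output_path }))
        .await
        .map(|_| ())
}
```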
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Auto-scroll the transcript to the active segment during playback, pausing
while the user scrolls manually (auto-scroll resumes after 3s)
- Replace prompt() with the native Tauri file dialog (with file-type filters)
for audio/video import
- Add inline transcript editing via double-click (Enter saves, Esc cancels),
preserving the original text for change tracking
- Show "edited" badge on modified segments
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Replace Ollama dependency with bundled llama-server (llama.cpp)
so users need no separate install for local AI inference
- Rust backend manages llama-server lifecycle (spawn, port, shutdown)
- Add MIT license for open source release
- Update architecture doc, CLAUDE.md, and README accordingly
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Detailed architecture covering Tauri + Svelte frontend, Rust backend,
Python sidecar for ML (faster-whisper, pyannote.audio), IPC protocol,
SQLite schema, AI provider system, export formats, and phased
implementation plan with agent work breakdown.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Establish the voice-to-notes project with documentation covering
goals, platform targets, and planned feature set.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>