Switch local AI from Ollama to bundled llama-server, add MIT license

- Replace Ollama dependency with bundled llama-server (llama.cpp)
  so users need no separate install for local AI inference
- Rust backend manages llama-server lifecycle (spawn, port, shutdown)
- Add MIT license for open source release
- Update architecture doc, CLAUDE.md, and README accordingly
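The lifecycle management described above could be sketched roughly as follows. This is a hypothetical illustration, not the actual backend code: the `LlamaServer` struct, its method names, and the model path are invented here; the `-m`, `--host`, and `--port` flags are standard llama-server options.

```rust
use std::process::{Child, Command, Stdio};

/// Hypothetical sketch of a handle owning a bundled llama-server process.
struct LlamaServer {
    child: Child,
    port: u16,
}

impl LlamaServer {
    /// Spawn the bundled binary, binding it to localhost on the given port.
    fn spawn(binary: &str, model: &str, port: u16) -> std::io::Result<Self> {
        let child = Command::new(binary)
            .args(["-m", model, "--host", "127.0.0.1", "--port", &port.to_string()])
            .stdout(Stdio::null())
            .stderr(Stdio::null())
            .spawn()?;
        Ok(Self { child, port })
    }

    /// Base URL the frontend would use for inference requests.
    fn base_url(&self) -> String {
        format!("http://127.0.0.1:{}/v1", self.port)
    }

    /// Kill and reap the child on app shutdown so no orphan survives.
    fn shutdown(mut self) -> std::io::Result<()> {
        self.child.kill()?;
        self.child.wait()?;
        Ok(())
    }
}

fn main() -> std::io::Result<()> {
    // "sleep" stands in for the bundled llama-server binary in this sketch.
    let server = LlamaServer::spawn("sleep", "30", 18080)?;
    println!("{}", server.base_url());
    server.shutdown()
}
```

In a real Tauri-style backend the handle would live in managed state, with `shutdown` called from the window-close handler.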

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 09:00:47 -08:00
parent 0edb06a913
commit c450ef3c0c
4 changed files with 61 additions and 13 deletions


@@ -27,4 +27,4 @@ A desktop application that transcribes audio/video recordings with speaker ident
 ## License
-TBD
+MIT