Commit Graph

3 Commits

c450ef3c0c Switch local AI from Ollama to bundled llama-server, add MIT license
- Replace Ollama dependency with bundled llama-server (llama.cpp)
  so users need no separate install for local AI inference
- Rust backend manages llama-server lifecycle (spawn, port, shutdown)
- Add MIT license for open source release
- Update architecture doc, CLAUDE.md, and README accordingly

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 09:00:47 -08:00
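The lifecycle management described above (spawn, port, shutdown) could be sketched in Rust roughly as follows. This is a minimal illustration, not the project's actual code: the `LlamaServer` type and binary/model paths are hypothetical, and it assumes the bundled `llama-server` accepts `--model` and `--port` flags.

```rust
use std::net::TcpListener;
use std::process::{Child, Command};

/// Hypothetical manager for a bundled llama-server process
/// (name and structure are illustrative, not the project's API).
struct LlamaServer {
    port: u16,
    child: Option<Child>,
}

impl LlamaServer {
    /// Ask the OS for a free port by binding to port 0, then releasing it.
    fn pick_port() -> std::io::Result<u16> {
        let listener = TcpListener::bind("127.0.0.1:0")?;
        Ok(listener.local_addr()?.port())
    }

    /// Spawn the bundled binary on a free port; paths are assumptions.
    fn spawn(binary: &str, model: &str) -> std::io::Result<Self> {
        let port = Self::pick_port()?;
        let child = Command::new(binary)
            .args(["--model", model, "--port", &port.to_string()])
            .spawn()?;
        Ok(Self { port, child: Some(child) })
    }
}

impl Drop for LlamaServer {
    /// Kill the child on shutdown so no orphaned server outlives the app.
    fn drop(&mut self) {
        if let Some(mut c) = self.child.take() {
            let _ = c.kill();
            let _ = c.wait();
        }
    }
}
```

Tying shutdown to `Drop` means the server dies with the backend even on an early return or panic, which is one plausible way to guarantee the "shutdown" step of the lifecycle.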
0edb06a913 Add architecture document and project guidelines
Detailed architecture covering Tauri + Svelte frontend, Rust backend,
Python sidecar for ML (faster-whisper, pyannote.audio), IPC protocol,
SQLite schema, AI provider system, export formats, and phased
implementation plan with agent work breakdown.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 08:37:45 -08:00
d2bdbe3315 Initial project setup with README and gitignore
Establish the voice-to-notes project with documentation covering
goals, platform targets, and planned feature set.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 08:11:57 -08:00