voice-to-notes/docs/TESTING.md
Josh Knapp 503cc6c0cf Phase 1 foundation: Tauri shell, Python sidecar, SQLite database
Tauri v2 + Svelte + TypeScript frontend:
- App shell with workspace layout (waveform, transcript, speakers, AI chat)
- Placeholder components for all major UI areas
- Typed stores (project, transcript, playback, AI)
- TypeScript interfaces matching the database schema
- Tauri bridge service with typed invoke wrappers
- svelte-check passes with 0 errors

Rust backend:
- Tauri v2 app entry point with command registration
- SQLite database layer (rusqlite with bundled SQLite)
  - Full schema: projects, media_files, speakers, segments, words,
    ai_outputs, annotations (with indexes)
  - Model structs with serde serialization
  - CRUD queries for projects, speakers, segments, words
  - Segment text editing preserves original text
  - Schema versioning for future migrations
  - 6 tests passing
- Command stubs for project, transcribe, export, AI, settings, system
- App state management

Python sidecar:
- JSON-line IPC protocol (stdin/stdout)
- Message types: IPCMessage, progress, error, ready
- Handler registry with routing and error handling
- Ping/pong handler for connectivity testing
- Service stubs: transcribe, diarize, pipeline, AI, export
- Provider stubs: local (llama-server), OpenAI, Anthropic, LiteLLM
- Hardware detection stubs
- 14 tests passing, ruff clean

Also adds:
- Testing strategy document (docs/TESTING.md)
- Validation script (scripts/validate.sh)
- Updated .gitignore for Svelte, Rust, Python artifacts

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 15:16:06 -08:00


Voice to Notes — Testing Strategy

Overview

Each layer has its own test approach. Agents should run tests after every significant change before considering work complete.


1. Rust Backend Tests

Tool: cargo test (built-in)
Location: src-tauri/src/ (inline #[cfg(test)] modules)

cd src-tauri && cargo test

What to test:

  • SQLite schema creation and migrations
  • Database CRUD operations (projects, media files, segments, words, speakers)
  • IPC message serialization/deserialization (serde)
  • llama-server port allocation logic
  • File path handling across platforms
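The CRUD tests follow a create → read → update → delete round-trip against an in-memory database. A sketch of that shape, using Python's sqlite3 for brevity (the real tests use rusqlite in #[cfg(test)] modules, and the actual projects table has more columns than this assumed minimal version):

```python
import sqlite3

# In-memory database stands in for the app's SQLite file.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE projects (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        created_at TEXT DEFAULT CURRENT_TIMESTAMP
    )"""
)

# Create, read back, update, delete — the same cycle the Rust tests cover.
cur = conn.execute("INSERT INTO projects (name) VALUES (?)", ("Demo",))
project_id = cur.lastrowid

name = conn.execute(
    "SELECT name FROM projects WHERE id = ?", (project_id,)
).fetchone()[0]
assert name == "Demo"

conn.execute("UPDATE projects SET name = ? WHERE id = ?", ("Renamed", project_id))
conn.execute("DELETE FROM projects WHERE id = ?", (project_id,))
assert conn.execute("SELECT COUNT(*) FROM projects").fetchone()[0] == 0
```

Using an in-memory database keeps each test isolated and fast; the schema-creation test is the same pattern with the full DDL.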

Lint/format check:

cd src-tauri && cargo fmt --check && cargo clippy -- -D warnings

2. Python Sidecar Tests

Tool: pytest
Location: python/tests/

cd python && python -m pytest tests/ -v

What to test:

  • IPC protocol: JSON-line encode/decode, message routing
  • Transcription service (mock faster-whisper for unit tests)
  • Diarization service (mock pyannote for unit tests)
  • AI provider adapters (mock HTTP responses)
  • Export formatters (SRT, WebVTT, text output correctness)
  • Hardware detection logic

Lint check:

cd python && ruff check . && ruff format --check .

3. Frontend Tests

Tool: vitest (Vite-native, works with Svelte)
Location: src/lib/**/*.test.ts

npm run test

What to test:

  • Svelte store logic (transcript, playback, project state)
  • Tauri bridge service (mock tauri::invoke)
  • Audio sync calculations (timestamp → word mapping)
  • Export option formatting
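The timestamp → word mapping reduces to a binary search over word start times: the active word is the last one whose start is at or before the playhead. The store under test is TypeScript, but the algorithm can be sketched language-neutrally (Python here, with illustrative data):

```python
import bisect

# Each word carries a start time in seconds; this segment is illustrative data.
words = [
    {"text": "hello", "start": 0.00},
    {"text": "and", "start": 0.42},
    {"text": "welcome", "start": 0.61},
    {"text": "everyone", "start": 1.10},
]
starts = [w["start"] for w in words]

def word_at(timestamp: float) -> dict:
    """Return the word active at `timestamp`: the last word with start <= timestamp."""
    i = bisect.bisect_right(starts, timestamp) - 1
    return words[max(i, 0)]

assert word_at(0.50)["text"] == "and"
assert word_at(2.00)["text"] == "everyone"   # past the end clamps to the last word
assert word_at(0.00)["text"] == "hello"
```

Edge cases worth pinning down in the vitest suite: a timestamp before the first word, a timestamp past the last word, and exact boundary hits.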

Lint/format check:

npm run check   # svelte-check (TypeScript)
npm run lint    # eslint

4. Integration Tests

IPC Round-trip Test

Verify Rust ↔ Python communication works end-to-end:

# From project root
cd src-tauri && cargo test --test ipc_integration

This spawns the real Python sidecar, sends a ping message, and verifies the response.
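The pattern that test exercises — spawn a child process, write one JSON line to its stdin, read one line back from its stdout — can be prototyped with a stand-in child. A sketch (the real integration test talks to the actual sidecar over the same framing; the inline child below only simulates the ping handler):

```python
import json
import subprocess
import sys

# Stand-in sidecar: reads one JSON line from stdin, answers on stdout.
CHILD = r"""
import json, sys
msg = json.loads(sys.stdin.readline())
reply = {"type": "pong"} if msg.get("type") == "ping" else {"type": "error"}
sys.stdout.write(json.dumps(reply) + "\n")
sys.stdout.flush()
"""

proc = subprocess.Popen(
    [sys.executable, "-c", CHILD],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)
out, _ = proc.communicate(json.dumps({"type": "ping"}) + "\n", timeout=10)
assert json.loads(out) == {"type": "pong"}
```

The timeout matters: a sidecar that hangs on a malformed line should fail the test rather than stall the suite.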

Tauri App Launch Test

Verify the app starts and the frontend loads:

cd src-tauri && cargo test --test app_launch

5. Quick Validation Script

Agents should run this after any significant change:

#!/bin/bash
# scripts/validate.sh — Run all checks
set -e

echo "=== Rust checks ==="
cd src-tauri
cargo fmt --check
cargo clippy -- -D warnings
cargo test
cd ..

echo "=== Python checks ==="
cd python
ruff check .
ruff format --check .
python -m pytest tests/ -v
cd ..

echo "=== Frontend checks ==="
npm run check
npm run lint
npm run test -- --run

echo "=== All checks passed ==="

6. Manual Testing (Final User Test)

These require a human to verify since they involve audio playback and visual UI:

  • Import a real audio file and run transcription
  • Verify waveform displays correctly
  • Click words → audio seeks to correct position
  • Rename speakers → changes propagate
  • Export caption files → open in VLC/subtitle editor
  • AI chat → get responses about transcript content

7. Test Fixtures

Store small test fixtures in tests/fixtures/:

  • short_clip.wav — 5-second audio clip with 2 speakers (for integration tests)
  • sample_transcript.json — Pre-built transcript data for UI/export tests
  • sample_ipc_messages.jsonl — Example IPC message sequences

Agents should create mock/fixture data rather than requiring real audio files for unit tests.
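One way to keep audio fixtures out of the repository is to generate them. A sketch that synthesizes a deterministic 5-second tone with Python's stdlib wave module (the committed short_clip.wav would contain real two-speaker speech; this only demonstrates programmatic creation, and writes to a temp directory rather than tests/fixtures/):

```python
import math
import os
import struct
import tempfile
import wave

SAMPLE_RATE = 16_000
DURATION_S = 5
FREQ_HZ = 440.0

# 16-bit PCM samples of a quiet sine tone.
frames = b"".join(
    struct.pack("<h", int(32767 * 0.3 * math.sin(2 * math.pi * FREQ_HZ * n / SAMPLE_RATE)))
    for n in range(SAMPLE_RATE * DURATION_S)
)

path = os.path.join(tempfile.gettempdir(), "short_clip.wav")
with wave.open(path, "wb") as out:
    out.setnchannels(1)            # mono
    out.setsampwidth(2)            # 16-bit PCM
    out.setframerate(SAMPLE_RATE)
    out.writeframes(frames)

# Verify the fixture: exactly 5 seconds of mono audio at 16 kHz.
with wave.open(path, "rb") as check:
    assert check.getnframes() == SAMPLE_RATE * DURATION_S
```

A generator like this can run in a pytest fixture (or a conftest.py session fixture) so integration tests never depend on binaries checked into git.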