Update README and CLAUDE.md for Tauri rewrite

Update both docs to reflect the new architecture:
- Tauri v2 + Svelte 5 frontend replacing PySide6/Qt
- Headless Python backend with FastAPI control API
- Cross-platform support (Windows, macOS, Linux)
- Deepgram remote transcription (managed/BYOK)
- Gitea CI/CD workflows for automated builds
- New project structure with backend/, src/, src-tauri/
- Updated development commands and build instructions

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Developer
2026-04-06 13:34:10 -07:00
commit 47ca74e75d (parent 25d2a55efb)
2 changed files with 342 additions and 295 deletions

CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Project Overview
Local Transcription is a cross-platform desktop application for real-time speech-to-text transcription designed for streamers. It supports local Whisper models and cloud-based Deepgram transcription, with OBS browser source integration and optional multi-user sync.
**Architecture:** Two-process model — a Tauri v2 shell (Svelte 5 frontend) communicates with a headless Python backend (sidecar) via REST API and WebSocket.
**Key Features:**
- Cross-platform desktop app (Windows, macOS, Linux) via Tauri v2 + Svelte 5
- Headless Python backend with FastAPI control API
- Dual transcription modes: local Whisper or cloud Deepgram (managed/BYOK)
- Built-in web server for OBS browser source at `http://localhost:8080`
- Optional multi-user sync via Node.js server
- CUDA, MPS (Apple Silicon), and CPU support
- Auto-updates, custom fonts, configurable colors
> **Legacy GUI:** The original PySide6/Qt GUI (`main.py`, `gui/`) still works during the transition. New features should target the Tauri frontend and headless backend.
## Project Structure
```
local-transcription/
├── src/                                # Svelte 5 frontend (Tauri UI)
│   ├── App.svelte                      # Main app shell
│   ├── app.css                         # Global dark theme styles
│   ├── main.ts                         # Svelte mount point
│   ├── lib/components/                 # UI components
│   │   ├── Header.svelte               # Title bar + settings button
│   │   ├── StatusBar.svelte            # State indicator, device, user info
│   │   ├── Controls.svelte             # Start/Stop, Clear, Save buttons
│   │   ├── TranscriptionDisplay.svelte # Scrolling transcript view
│   │   └── Settings.svelte             # Full settings modal (all sections)
│   └── lib/stores/                     # Svelte 5 reactive stores ($state/$derived)
│       ├── backend.ts                  # WebSocket + REST API client
│       ├── config.ts                   # App configuration fetch/update
│       └── transcriptions.ts           # Transcript data management
├── src-tauri/                          # Tauri v2 Rust shell
│   ├── src/lib.rs                      # Plugin registration (shell, dialog, process)
│   ├── src/main.rs                     # Entry point
│   ├── tauri.conf.json                 # Window, bundle, plugin config
│   └── Cargo.toml                      # Rust dependencies
├── backend/                            # Headless Python backend (the sidecar)
│   ├── app_controller.py               # Core orchestration (engine, sync, config)
│   ├── api_server.py                   # FastAPI REST endpoints + /ws/control
│   └── main_headless.py                # Headless entry point (prints JSON to stdout)
├── client/                             # Core transcription modules (used by backend)
│   ├── audio_capture.py                # Audio input handling
│   ├── transcription_engine_realtime.py # RealtimeSTT / Whisper engine
│   ├── deepgram_transcription.py       # Deepgram WebSocket cloud transcription
│   ├── noise_suppression.py            # VAD and noise reduction
│   ├── device_utils.py                 # CPU/GPU/MPS detection
│   ├── config.py                       # YAML config management (~/.local-transcription/)
│   ├── server_sync.py                  # Multi-user server sync client
│   ├── instance_lock.py                # Single-instance PID lock
│   └── update_checker.py               # Gitea release update checker
├── gui/                                # Legacy PySide6/Qt GUI (still functional)
│   ├── main_window_qt.py               # Main window (orchestration lives here in legacy)
│   ├── settings_dialog_qt.py           # Settings dialog
│   └── transcription_display_qt.py     # Display widget
├── server/
│   ├── web_display.py                  # FastAPI OBS display server (WebSocket + HTML)
│   └── nodejs/                         # Optional multi-user sync server
├── .gitea/workflows/                   # CI/CD
│   ├── release.yml                     # Tauri app builds (Linux/Windows/macOS)
│   └── build-sidecar.yml               # Python sidecar builds (CUDA + CPU)
├── config/default_config.yaml          # Default settings template
├── main.py                             # Legacy PySide6 GUI entry point
├── main_cli.py                         # CLI version for testing
├── version.py                          # Version string (__version__)
├── local-transcription.spec            # PyInstaller config (legacy, includes PySide6)
├── local-transcription-headless.spec   # PyInstaller config (headless sidecar, no Qt)
├── pyproject.toml                      # Python deps (uv, CUDA PyTorch index)
├── package.json                        # Node/Tauri deps
└── vite.config.ts                      # Vite build config ($lib alias)
```
## Development Commands
### Frontend (Tauri + Svelte)
```bash
# Install npm dependencies
npm install
# Run Tauri in development mode (hot-reload)
npm run tauri dev
# Build frontend only (for testing)
npx vite build
# Type-check Svelte
npx svelte-check
# Check Rust compiles
cd src-tauri && cargo check
```
### Backend (Python)
```bash
# Install Python dependencies
uv sync
# Run the headless backend standalone (for development)
uv run python -m backend.main_headless --port 8080
# Run the legacy PySide6 GUI
uv run python main.py
# Run CLI version (headless, for testing)
uv run python main_cli.py
# List available audio devices
uv run python main_cli.py --list-devices
# Install with CUDA support (if needed)
uv pip install torch --index-url https://download.pytorch.org/whl/cu121
```
### Building Executables
### Building
```bash
# Build Tauri app (produces platform installer)
npm run tauri build
# Build headless Python sidecar (no PySide6)
uv run pyinstaller local-transcription-headless.spec
# Output: dist/local-transcription-backend/
# Build legacy PySide6 app
uv run pyinstaller local-transcription.spec
# Or use: ./build.sh (Linux) / build.bat (Windows)
```
**Important:** All builds include CUDA support via `pyproject.toml` configuration. CUDA builds can be created on systems without NVIDIA GPUs. The PyTorch CUDA runtime is bundled, and the app automatically falls back to CPU if no GPU is available.
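The automatic CPU fallback can be sketched as follows — a simplified stand-in for the selection logic in `client/device_utils.py` (the real module also handles compute types; function name here is illustrative):

```python
def pick_device() -> str:
    """Return the best available compute device, falling back to CPU.

    Sketch of the selection order used by the app: CUDA, then MPS
    (Apple Silicon), then CPU. Assumes torch may be absent entirely,
    as in a CPU-only environment.
    """
    try:
        import torch
    except ImportError:
        return "cpu"
    if torch.cuda.is_available():
        return "cuda"
    # MPS is PyTorch's Metal backend on Apple Silicon
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        return "mps"
    return "cpu"

print(pick_device())
```

Because the CUDA runtime is bundled rather than required, this check is the only thing separating a GPU run from a CPU run on the same binary.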
### Testing
```bash
# Run component tests
uv run python test_components.py
# Check CUDA availability
uv run python check_cuda.py
# Test web server manually
uv run python -m uvicorn server.web_display:app --reload
```
## Architecture Details
### Communication: Tauri <-> Python Backend
The Svelte frontend connects to the Python backend via two channels:
**REST API** (on port 8081 by default):
- `GET /api/status` — app state, device info, version
- `POST /api/start` / `POST /api/stop` — transcription control
- `GET /api/config` / `PUT /api/config` — read/write settings (dot-notation keys)
- `GET /api/audio-devices` / `GET /api/compute-devices` — device enumeration
- `POST /api/reload-engine` — reload with new model/device
- `GET /api/transcriptions` / `POST /api/clear` — transcript management
- `POST /api/save-file` — write text to a file path
- `GET /api/check-update` / `POST /api/skip-version` — update management
- `POST /api/login` / `POST /api/register` / `GET /api/balance` — managed mode proxy
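For example, a settings write is a `PUT /api/config` whose body is a flat JSON object keyed by dot-notation paths. A minimal sketch (the exact payload fields are assumptions inferred from the endpoint list above):

```python
import json
from urllib import request

# Hypothetical payload: flat dict of dot-notation keys -> new values,
# as accepted by PUT /api/config
payload = {
    "transcription.model": "base",
    "display.font_size": 18,
}

req = request.Request(
    "http://localhost:8081/api/config",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="PUT",
)
# request.urlopen(req)  # requires the backend to be running on port 8081
```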
**WebSocket** `/ws/control`:
- Pushes real-time events: `state_changed`, `transcription`, `preview`, `error`, `credits_low`
- Client sends keepalive pings
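A minimal client-side dispatcher for these events might look like the following. (Python sketch for illustration only — the real consumer is the Svelte `backend.ts` store, and the message fields beyond the event name are assumptions.)

```python
import json

def handle_event(raw: str, handlers: dict) -> None:
    """Route one /ws/control message to a handler keyed by its event name.

    Assumes messages are JSON objects with an "event" field; unknown
    events are ignored rather than raising.
    """
    msg = json.loads(raw)
    handler = handlers.get(msg.get("event"))
    if handler:
        handler(msg)

lines = []
handlers = {
    # "text" and "state" field names are assumptions for this sketch
    "transcription": lambda m: lines.append(m["text"]),
    "state_changed": lambda m: print("state:", m["state"]),
}
handle_event('{"event": "transcription", "text": "hello"}', handlers)
```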
The OBS display server runs separately on port 8080 (`GET /` for HTML, `WebSocket /ws` for transcriptions).
### Backend Process Lifecycle
1. `main_headless.py` starts, acquires instance lock, creates `AppController`
2. `AppController.initialize()` starts the OBS web server (port 8080) and engine init thread
3. `APIServer` wraps the controller with FastAPI routes, runs on port 8081
4. Backend prints `{"event": "ready", "port": 8080}` to stdout for Tauri to discover
5. On shutdown: engine stopped, web server stopped, lock released
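The stdout handshake in step 4 means the shell only has to scan the sidecar's output for one JSON line. A sketch of that consumer (the real one lives on the Tauri/Rust side; Python here is just for illustration):

```python
import json

def parse_ready_line(line: str):
    """Return the announced port if this stdout line is the ready event.

    Non-JSON lines (ordinary log output) and other events return None.
    """
    try:
        msg = json.loads(line)
    except json.JSONDecodeError:
        return None
    if isinstance(msg, dict) and msg.get("event") == "ready":
        return msg.get("port")
    return None

print(parse_ready_line('{"event": "ready", "port": 8080}'))  # 8080
```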
### Headless Backend vs Legacy GUI
The `AppController` class (`backend/app_controller.py`) extracts all orchestration logic from `gui/main_window_qt.py` into a Qt-free class. The mapping:
| Legacy (MainWindow) | Headless (AppController) |
|---------------------|--------------------------|
| `_initialize_components()` | `_initialize_engine()` |
| `_start_transcription()` | `start_transcription()` |
| `_stop_transcription()` | `stop_transcription()` |
| `_on_settings_saved()` | `apply_settings()` |
| `_reload_engine()` | `reload_engine()` |
| `_start_web_server_if_enabled()` | `_start_web_server()` |
| `_start_server_sync()` | `_start_server_sync()` |
| Qt signals | Callbacks (`on_state_changed`, `on_transcription`, etc.) |
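The last row of the table is the key structural change: where the legacy window emitted Qt signals, `AppController` exposes plain callable attributes that consumers assign. Roughly (a sketch, not the actual class body):

```python
class AppController:
    """Qt-free orchestration core (simplified stand-in for
    backend/app_controller.py)."""

    def __init__(self):
        # Consumers (APIServer, tests) overwrite these no-op callbacks,
        # replacing the Qt signal/slot wiring of the legacy GUI.
        self.on_state_changed = lambda state: None
        self.on_transcription = lambda text: None
        self._running = False

    def start_transcription(self):
        self._running = True
        self.on_state_changed("transcribing")  # state name is illustrative

received = []
ctrl = AppController()
ctrl.on_state_changed = received.append
ctrl.start_transcription()
# received == ["transcribing"]
```

This is why the same controller can drive both the FastAPI/WebSocket layer and unit tests without a Qt event loop.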
### Threading Model (Headless)
- Main thread: Uvicorn (FastAPI) event loop
- Engine init thread: Downloads models, initializes VAD
- Web server thread: Separate asyncio loop for OBS display
- Audio capture: Runs in engine callback threads
- All results flow through `AppController` callbacks -> `APIServer` WebSocket broadcast
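The hand-off in the last bullet crosses threads: transcription results arrive on engine callback threads but must be broadcast from the server's asyncio loop. A minimal sketch of that pattern (names are illustrative):

```python
import asyncio
import threading

# Event loop running in its own thread, as the web server thread does
loop = asyncio.new_event_loop()
threading.Thread(target=loop.run_forever, daemon=True).start()

received = []

async def broadcast(text: str):
    # Stand-in for the APIServer's WebSocket broadcast coroutine
    received.append(text)

def on_transcription(text: str):
    # Called from an engine/audio thread: marshal the coroutine onto
    # the server loop instead of calling it directly
    fut = asyncio.run_coroutine_threadsafe(broadcast(text), loop)
    fut.result(timeout=1)  # block only so this example is deterministic

on_transcription("hello")
loop.call_soon_threadsafe(loop.stop)
print(received)  # ['hello']
```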
### Svelte Frontend
Uses Svelte 5 runes throughout (`$state`, `$derived`, `$effect`, `$props`). No Svelte 4 patterns.
**Stores** (`src/lib/stores/`):
- `backend.ts` — WebSocket connection + REST helpers (`apiGet`, `apiPost`, `apiPut`), auto-reconnect
- `config.ts` — fetches/updates config from backend API
- `transcriptions.ts` — manages transcript list, listens for `CustomEvent`s from backend store
**Key patterns:**
- Backend store dispatches `CustomEvent`s on `window` for cross-store communication
- Settings component collects all changed values into a `Record<string, any>` with dot-notation keys, sends via `PUT /api/config`
- Controls use Tauri dialog plugin for native file save, falls back to blob download
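The dot-notation collection step amounts to diffing the edited values against the loaded config and flattening the changed leaves. A Python equivalent of what `Settings.svelte` does (function name hypothetical):

```python
def changed_keys(original: dict, edited: dict, prefix: str = "") -> dict:
    """Flatten nested dicts into dot-notation keys, keeping only leaves
    whose values differ from the original config."""
    changes = {}
    for key, value in edited.items():
        path = f"{prefix}{key}"
        if isinstance(value, dict):
            changes.update(changed_keys(original.get(key, {}), value, path + "."))
        elif original.get(key) != value:
            changes[path] = value
    return changes

before = {"transcription": {"model": "base", "language": "en"}}
after = {"transcription": {"model": "small", "language": "en"}}
print(changed_keys(before, after))  # {'transcription.model': 'small'}
```

The resulting flat dict is exactly the shape sent via `PUT /api/config`.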
## CI/CD
Two Gitea Actions workflows in `.gitea/workflows/`:
- **`release.yml`**: Triggers on push to `main`. Auto-bumps version, builds Tauri app on Linux/Windows/macOS, uploads `.deb`, `.rpm`, `.msi`, `.dmg` to Gitea release.
- **`build-sidecar.yml`**: Triggers on changes to `client/`, `server/`, `backend/`, `pyproject.toml`. Builds headless Python sidecar via PyInstaller. CUDA + CPU for Linux/Windows, CPU-only for macOS.
Both require a `BUILD_TOKEN` secret (Gitea API token with release write access).
## Common Patterns
### Adding a New Setting
1. Add default to [config/default_config.yaml](config/default_config.yaml)
2. Add UI control in [src/lib/components/Settings.svelte](src/lib/components/Settings.svelte)
3. Ensure the setting is included in the save handler's config update
4. Apply in `AppController.apply_settings()` or the relevant component
5. For legacy GUI: also update [gui/settings_dialog_qt.py](gui/settings_dialog_qt.py)
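On the backend side, each dot-notation key from the config update has to be written back into the nested YAML structure. Roughly (a sketch — not the actual `client/config.py` API):

```python
def set_by_path(config: dict, dotted_key: str, value) -> None:
    """Apply one dot-notation key (e.g. 'transcription.model') to a
    nested dict, creating intermediate sections as needed."""
    *parents, leaf = dotted_key.split(".")
    node = config
    for part in parents:
        node = node.setdefault(part, {})
    node[leaf] = value

config = {"transcription": {"model": "base"}}
set_by_path(config, "transcription.model", "small")
set_by_path(config, "display.font_size", 18)
print(config)
```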
### Adding a New API Endpoint
1. Add route in [backend/api_server.py](backend/api_server.py) `_setup_routes()`
2. Add supporting logic in [backend/app_controller.py](backend/app_controller.py) if needed
3. Call from Svelte via `backendStore.apiGet/apiPost/apiPut`
### Modifying Transcription Display
- Tauri UI: [src/lib/components/TranscriptionDisplay.svelte](src/lib/components/TranscriptionDisplay.svelte)
- OBS display: [server/web_display.py](server/web_display.py) (HTML in `_get_html()`)
- Multi-user display: [server/nodejs/server.js](server/nodejs/server.js) (display page in `/display` route)
## Dependencies
**Frontend:** Tauri v2, Svelte 5, Vite, TypeScript
**Backend:** Python 3.9+, FastAPI, Uvicorn, RealtimeSTT, faster-whisper, PyTorch (CUDA), sounddevice
**Build:** PyInstaller (sidecar), Tauri CLI (app), uv (Python packages)
**CI:** Gitea Actions with platform-specific runners
## Platform-Specific Notes
### Linux
- Tauri needs: `libgtk-3-dev`, `libwebkit2gtk-4.1-dev`, `libappindicator3-dev`, `librsvg2-dev`, `patchelf`
- Audio: PulseAudio/ALSA via sounddevice
### Windows
- Tauri needs: WebView2 (usually pre-installed on Windows 10+)
- Audio: WASAPI via sounddevice
### Cross-Building
- **Cannot cross-compile** - must build on target platform
- CI/CD should use platform-specific runners
## Troubleshooting
### Model Loading Issues
- Models download to `~/.cache/huggingface/`
- First run requires internet connection
- Check disk space (models: 75MB-3GB depending on size)
### Audio Device Issues
- Run `uv run python main_cli.py --list-devices`
- Check permissions (microphone access)
- Try different device indices in settings
### GPU Not Detected
- Run `uv run python check_cuda.py`
- Install CUDA drivers (not CUDA toolkit - bundled in build)
- Verify PyTorch sees GPU: `python -c "import torch; print(torch.cuda.is_available())"`
### Web Server Port Conflicts
- Default port: 8080
- Change in [gui/main_window_qt.py](gui/main_window_qt.py) or config
- Use `lsof -i :8080` (Linux) or `netstat -ano | findstr :8080` (Windows)
## OBS Integration
### Local Display (Single User)
1. Start Local Transcription app
2. In OBS: Add "Browser" source
3. URL: `http://localhost:8080`
4. Set dimensions (e.g., 1920x300)
### Multi-User Display (Node.js Server)
1. Deploy Node.js server (see [server/nodejs/README.md](server/nodejs/README.md))
2. Each user configures Server URL: `http://your-server:3000/api/send`
3. Enter same room name and passphrase
4. In OBS: Add "Browser" source
5. URL: `http://your-server:3000/display?room=ROOM&fade=10&timestamps=true&maxlines=50&fontsize=16`
6. Customize URL parameters as needed:
- `timestamps=false` - Hide timestamps
- `maxlines=30` - Show max 30 lines (prevents scroll bars)
- `fontsize=18` - Larger font
- `fontfamily=Courier` - Different font
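The display URL in step 5 is just a base path plus query parameters, so it can be generated rather than hand-typed. A small sketch:

```python
from urllib.parse import urlencode

def display_url(server: str, room: str, **params) -> str:
    """Build an OBS browser-source URL for the multi-user display page."""
    query = urlencode({"room": room, **params})
    return f"{server}/display?{query}"

url = display_url("http://your-server:3000", "ROOM",
                  fade=10, timestamps="true", maxlines=50, fontsize=16)
print(url)
```

`urlencode` also takes care of escaping room names or font families containing spaces.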
## Performance Optimization
**For Real-Time Transcription:**
- Use `tiny` or `base` model (faster)
- Enable GPU if available (5-10x faster)
- Increase chunk_duration for better accuracy (higher latency)
- Decrease chunk_duration for lower latency (less context)
- Enable VAD to skip silent audio
**For Build Size Reduction:**
- Don't bundle models (download on demand)
- Use CPU-only build if no GPU users
- Enable UPX compression (already in spec)
### macOS
- Tauri needs: Xcode Command Line Tools
- Audio: CoreAudio via sounddevice
- GPU: MPS (Apple Silicon) detected by `device_utils.py`
- `Info.plist` must include `NSMicrophoneUsageDescription` for mic access
- No CUDA builds — CPU/MPS only
## Related Documentation
- [README.md](README.md) — User-facing documentation
- [BUILD.md](BUILD.md) — Detailed build instructions
- [INSTALL.md](INSTALL.md) — Installation guide
- [server/nodejs/README.md](server/nodejs/README.md) — Node.js server setup