Previously, per-OS build workflows triggered on tag-push events, but
Gitea doesn't fire events for tags pushed by other workflows. Now:
- release.yml dispatches build-app-{linux,windows,macos}.yml via
the Gitea API after creating the tag and release
- sidecar-release.yml dispatches build-sidecar-{linux,windows,macos}.yml
The per-OS workflows changed from push+dispatch triggers to dispatch-only,
with the tag as a required input. To re-run a failed build for the same
version, dispatch the specific OS workflow with the same tag -- the
upload logic replaces existing assets automatically.
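The dispatch step in release.yml can be sketched roughly as below -- the
step name, secret name, and ref are placeholders, and the endpoint shape
assumes a Gitea version whose Actions API mirrors GitHub's
workflow-dispatch route:

```yaml
# Sketch only: secret name, ref, and step name are illustrative.
- name: Dispatch Linux app build
  run: |
    curl -sf -X POST \
      -H "Authorization: token ${{ secrets.RELEASE_TOKEN }}" \
      -H "Content-Type: application/json" \
      -d '{"ref": "main", "inputs": {"tag": "${{ env.TAG }}"}}' \
      "${{ github.server_url }}/api/v1/repos/${{ github.repository }}/actions/workflows/build-app-linux.yml/dispatches"
```

The same shape repeats for the Windows and macOS workflows, and
sidecar-release.yml does the equivalent for the build-sidecar-* files.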
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Refactored from 2 monolithic workflows into 8 targeted ones:
Coordinators (version bump + tag + release creation):
- release.yml: bumps app version, tags v*, creates Gitea release
- sidecar-release.yml: bumps sidecar version, tags sidecar-v*
Per-OS app builds (triggered by v* tags or workflow_dispatch):
- build-app-linux.yml: .deb, .rpm, .AppImage
- build-app-windows.yml: .msi, -setup.exe
- build-app-macos.yml: .dmg
Per-OS sidecar builds (triggered by sidecar-v* tags or workflow_dispatch):
- build-sidecar-linux.yml: CUDA + CPU variants
- build-sidecar-windows.yml: CUDA + CPU variants
- build-sidecar-macos.yml: CPU only
Each build workflow can be re-triggered independently without
re-running the version bump or rebuilding other platforms.
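At this stage each per-OS workflow's trigger block would look roughly
like the sketch below (input description text is illustrative; the
tag-push trigger is what a later commit removes in favor of API
dispatch):

```yaml
# Trigger sketch for a per-OS build workflow, e.g. build-app-linux.yml.
on:
  push:
    tags:
      - "v*"
  workflow_dispatch:
    inputs:
      tag:
        description: "Tag to build (e.g. v1.2.3)"
        required: true
```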
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The CPU build steps used `uv run pyinstaller`, which re-resolves
dependencies from pyproject.toml's [tool.uv.sources] before running,
pulling CUDA torch back in after the CPU-only reinstall. As a result,
the CPU and CUDA zips came out the same size.
Fix: run pyinstaller directly from the venv (.venv/bin/pyinstaller
on Linux/macOS, .venv\Scripts\pyinstaller.exe on Windows) to skip
uv's dependency resolution entirely.
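As a workflow-step sketch (step names illustrative; binary paths and
the spec filename as described above):

```yaml
# Before: `uv run pyinstaller ...` re-resolved [tool.uv.sources] and
# reinstalled CUDA torch. After: invoke the venv's binary directly.
- name: Build CPU sidecar (Linux/macOS)
  run: .venv/bin/pyinstaller local-transcription-headless.spec
- name: Build CPU sidecar (Windows)
  run: .venv\Scripts\pyinstaller.exe local-transcription-headless.spec
```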
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
macOS sidecar: `uv run` re-resolves dependencies using CUDA sources
even after `uv sync --no-sources`. Use UV_NO_SOURCES=1 env var instead
so it applies to all uv commands in the step.
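As a step-level sketch (step name illustrative), so every uv invocation
in the step ignores [tool.uv.sources]:

```yaml
- name: Build macOS sidecar (CPU)
  env:
    UV_NO_SOURCES: "1"   # applies to uv sync and any other uv command below
  run: |
    uv sync
    .venv/bin/pyinstaller local-transcription-headless.spec
```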
Blank window: when the Tauri app started without the Python backend
running, it showed a completely blank window. It now shows a "Connecting
to backend..." spinner, or an error state with instructions to start
the backend manually.
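The readiness gate can be sketched as a retry loop like the one below.
The function and type names (`waitForBackend`, `Probe`) and the retry
defaults are hypothetical, not the app's actual code:

```typescript
// Hypothetical sketch of a backend-readiness gate. While this loop
// runs, the UI shows the "Connecting to backend..." spinner; if it
// gives up, the UI switches to the error state with manual-start
// instructions.
type Probe = () => Promise<boolean>;

async function waitForBackend(
  probe: Probe,
  maxAttempts = 10,
  delayMs = 500,
): Promise<boolean> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    if (await probe()) return true; // backend answered: render the app
    await new Promise((resolve) => setTimeout(resolve, delayMs)); // retry
  }
  return false; // backend never came up: show the error state
}
```

In the real frontend the probe would presumably be a fetch against the
sidecar's health endpoint; the two return paths map onto the
spinner-then-app and error-state branches described above.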
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
macOS: pyproject.toml's [tool.uv.sources] forces torch from the CUDA
index which has no macOS ARM wheels. Use `uv sync --no-sources` to
bypass this and get torch from PyPI (which includes MPS support).
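The macOS dependency step then reduces to something like (step name
illustrative):

```yaml
- name: Install Python deps (macOS)
  run: uv sync --no-sources   # torch resolves from PyPI, with MPS support
```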
Windows: Add additional uv PATH locations ($LOCALAPPDATA\uv\bin) for
robustness with different runner environments.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Two workflows adapted from voice-to-notes:
- release.yml: Builds the Tauri app shell (.deb/.rpm for Linux, .msi
for Windows, .dmg for macOS) on push to main. Auto-bumps version,
creates Gitea release, uploads platform binaries.
- build-sidecar.yml: Builds the headless Python backend sidecar via
PyInstaller when client/server/backend code changes. Produces CUDA
and CPU variants for Linux/Windows, CPU-only for macOS. Uses the new
local-transcription-headless.spec (no PySide6 dependencies).
Also adds local-transcription-headless.spec -- a simplified PyInstaller
config for the headless backend that excludes all Qt/PySide6 imports.
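The Qt exclusion amounts to an excludes list in the spec's Analysis
block -- a minimal sketch assuming a standard single-file spec layout;
the entry-point filename and exact excludes list are placeholders:

```python
# local-transcription-headless.spec (sketch; entry script name assumed).
# Analysis/PYZ/EXE are globals PyInstaller injects when running a spec.
a = Analysis(
    ["backend_main.py"],
    excludes=["PySide6", "shiboken6"],  # keep the GUI toolkit out of the bundle
)
pyz = PYZ(a.pure)
exe = EXE(pyz, a.scripts, a.binaries, a.datas,
          name="local-transcription-headless")
```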
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>