Fix CPU sidecar builds bundling CUDA torch instead of CPU
All checks were successful
Release / Bump version and tag (push) Successful in 7s
Release / Build App (macOS) (push) Successful in 1m8s
Release / Build App (Windows) (push) Successful in 2m8s
Release / Build App (Linux) (push) Successful in 3m23s

The CPU build steps used `uv run pyinstaller` which re-resolves
dependencies from pyproject.toml's [tool.uv.sources] before running,
pulling CUDA torch back in after the CPU-only reinstall. This made
CPU and CUDA zips the same size.
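The failure mode above is detectable from the torch version string alone: wheels from the pytorch.org CPU index carry a `+cpu` local version suffix, while CUDA wheels carry `+cuXXX`. A minimal sketch of a hypothetical CI guard (not part of this commit) that would have caught the regression:

```shell
# Hypothetical guard: fail fast if the "CPU" venv still contains a
# CUDA torch wheel. CPU wheels from the pytorch.org CPU index carry
# a "+cpu" local version suffix.
check_cpu_torch() {
  case "$1" in
    *+cpu*) return 0 ;;  # e.g. "2.6.0+cpu"   -> CPU wheel
    *)      return 1 ;;  # e.g. "2.6.0+cu124" -> CUDA wheel leaked in
  esac
}

# Intended use in the workflow (version queried from the venv):
#   check_cpu_torch "$(.venv/bin/python -c 'import torch; print(torch.__version__)')" \
#     || { echo "CUDA torch bundled into CPU build" >&2; exit 1; }
```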

Fix: run pyinstaller directly from the venv (.venv/bin/pyinstaller
on Linux/macOS, .venv\Scripts\pyinstaller.exe on Windows) to skip
uv's dependency resolution entirely.
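The two per-OS paths named above could be resolved by one hypothetical helper (the helper itself is an illustration, not part of the workflow):

```shell
# Hypothetical helper: pick the venv's pyinstaller entry point,
# mirroring the per-OS paths used in the workflow.
venv_pyinstaller() {
  if [ -x ".venv/bin/pyinstaller" ]; then
    echo ".venv/bin/pyinstaller"          # Linux/macOS venv layout
  else
    echo ".venv/Scripts/pyinstaller.exe"  # Windows venv layout
  fi
}

# Usage sketch:
#   "$(venv_pyinstaller)" local-transcription-headless.spec
```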

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Developer
2026-04-06 17:23:26 -07:00
parent fff37992b1
commit 4a186d1de6


@@ -152,7 +152,9 @@ jobs:
         run: |
           rm -rf dist/local-transcription-backend build/
           uv pip install torch torchaudio --index-url https://download.pytorch.org/whl/cpu --force-reinstall
-          uv run pyinstaller local-transcription-headless.spec
+          # Run pyinstaller directly from venv to prevent uv run from
+          # re-resolving torch back to the CUDA version via pyproject.toml sources
+          .venv/bin/pyinstaller local-transcription-headless.spec
       - name: Package sidecar (CPU)
         run: |
@@ -268,7 +270,9 @@ jobs:
         run: |
           Remove-Item -Recurse -Force dist\local-transcription-backend, build -ErrorAction SilentlyContinue
           uv pip install torch torchaudio --index-url https://download.pytorch.org/whl/cpu --force-reinstall
-          uv run pyinstaller local-transcription-headless.spec
+          # Run pyinstaller directly from venv to prevent uv run from
+          # re-resolving torch back to the CUDA version via pyproject.toml sources
+          .venv\Scripts\pyinstaller.exe local-transcription-headless.spec
       - name: Package sidecar (CPU)
         shell: powershell
@@ -370,10 +374,9 @@ jobs:
         run: |
           # UV_NO_SOURCES bypasses pyproject.toml's [tool.uv.sources] which forces
           # torch from the CUDA index (no macOS ARM wheels there).
-          # Applies to both uv sync AND uv run (which re-resolves).
           # Default PyPI torch includes MPS (Apple Silicon GPU) support.
           uv sync
-          uv run pyinstaller local-transcription-headless.spec
+          .venv/bin/pyinstaller local-transcription-headless.spec
       - name: Package sidecar (CPU)
         run: |