Remove CUDA sidecar builds, keep CPU + Cloud only
CUDA sidecars are ~2 GB and too slow to upload from the Windows runner. Cloud (Deepgram) provides faster transcription anyway.

Removed:
- CUDA build steps from the Windows and Linux sidecar workflows
- CUDA option from the SidecarSetup download screen

Remaining sidecar variants:
- Cloud (Deepgram): ~50 MB - recommended for most users
- Local CPU: ~500 MB - for offline/privacy use

CUDA can be revisited once the managed Deepgram service is ready.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@@ -40,21 +40,11 @@ jobs:
           sudo apt-get update
           sudo apt-get install -y portaudio19-dev
 
-      - name: Build sidecar (CUDA)
-        run: |
-          uv sync --frozen || uv sync
-          uv run pyinstaller local-transcription-headless.spec
-
-      - name: Package sidecar (CUDA)
-        run: |
-          cd dist/local-transcription-backend && zip -9 -r ../../sidecar-linux-x86_64-cuda.zip .
-
       - name: Build sidecar (CPU)
         env:
           UV_NO_SOURCES: "1"
         run: |
           rm -rf dist/local-transcription-backend build/
           uv pip install torch torchaudio --index-url https://download.pytorch.org/whl/cpu --force-reinstall
           # Run pyinstaller directly from venv to prevent uv run from
           # re-resolving torch back to the CUDA version via pyproject.toml sources
           uv sync
           .venv/bin/pyinstaller local-transcription-headless.spec
 
       - name: Package sidecar (CPU)
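The CPU build step above works because PyTorch wheels from the CPU index carry a `+cpu` local version segment, while CUDA wheels carry tags such as `+cu121`; that is what the `--index-url` swap and the direct-venv `pyinstaller` call are protecting. A small sanity check along these lines could guard the workflow (hypothetical helper, not part of this diff):

```python
def is_cpu_wheel(version: str) -> bool:
    """Return True if a torch version string names a CPU-only wheel.

    Wheels from https://download.pytorch.org/whl/cpu carry a "+cpu"
    local version segment; CUDA wheels carry tags such as "+cu121".
    """
    # Split off the local version segment after "+", if any.
    _, _, local = version.partition("+")
    return local == "cpu"

print(is_cpu_wheel("2.5.1+cpu"))    # True  (CPU index wheel)
print(is_cpu_wheel("2.5.1+cu121"))  # False (CUDA wheel)
```

In CI this could be a one-liner after the install step, e.g. `.venv/bin/python -c "import torch; assert '+cpu' in torch.__version__"`, to fail fast if a later `uv sync` silently re-resolves torch back to CUDA.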
@@ -54,23 +54,12 @@ jobs:
             choco install 7zip -y
           }
 
-      - name: Build sidecar (CUDA)
-        shell: powershell
-        run: |
-          uv sync --frozen
-          if ($LASTEXITCODE -ne 0) { uv sync }
-          uv run pyinstaller local-transcription-headless.spec
-
-      - name: Package sidecar (CUDA)
-        shell: powershell
-        run: |
-          7z a -tzip -mx=9 sidecar-windows-x86_64-cuda.zip .\dist\local-transcription-backend\*
-
       - name: Build sidecar (CPU)
         shell: powershell
         env:
           UV_NO_SOURCES: "1"
         run: |
           Remove-Item -Recurse -Force dist\local-transcription-backend, build -ErrorAction SilentlyContinue
           uv pip install torch torchaudio --index-url https://download.pytorch.org/whl/cpu --force-reinstall
           uv sync
           .venv\Scripts\pyinstaller.exe local-transcription-headless.spec
 
       - name: Package sidecar (CPU)
@@ -126,23 +126,6 @@
           </div>
         </label>
 
-        <label class="variant-option" class:selected={variant === "cuda"}>
-          <input
-            type="radio"
-            name="variant"
-            value="cuda"
-            bind:group={variant}
-          />
-          <div class="variant-info">
-            <span class="variant-name">Local - GPU (NVIDIA CUDA)</span>
-            <span class="variant-desc">~2 GB download</span>
-            <span class="variant-detail">
-              Runs Whisper AI models locally using your NVIDIA GPU for fast
-              transcription. No internet needed after download. Requires an
-              NVIDIA GPU with CUDA support.
-            </span>
-          </div>
-        </label>
       </div>
 
       <button class="download-btn" onclick={startDownload}>