Remove CUDA sidecar builds, keep CPU + Cloud only

CUDA sidecars are ~2 GB and too slow to upload from the Windows runner,
and Cloud (Deepgram) provides faster transcription anyway. Removed:

- CUDA build steps from Windows and Linux sidecar workflows
- CUDA option from the SidecarSetup download screen

Remaining sidecar variants:
- Cloud (Deepgram): ~50 MB - recommended for most users
- Local CPU: ~500 MB - for offline/privacy use
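After this change the variant union shrinks to two members, so a stale `"cuda"` value is rejected at compile time. A minimal sketch of the remaining variants, assuming hypothetical type and helper names (`SidecarVariant`, `describeVariant`) that follow the commit message rather than the actual codebase:

```typescript
// Hypothetical sketch: the sidecar variants left after removing CUDA.
// Names and sizes come from the commit message, not the real component.
type SidecarVariant = "cloud" | "cpu";

interface VariantInfo {
  name: string;
  approxDownloadMB: number; // rough download size shown on the setup screen
  offline: boolean;         // whether transcription works without internet
}

const VARIANTS: Record<SidecarVariant, VariantInfo> = {
  cloud: { name: "Cloud (Deepgram)", approxDownloadMB: 50, offline: false },
  cpu:   { name: "Local CPU", approxDownloadMB: 500, offline: true },
};

// Passing "cuda" here is now a type error, since it is no longer
// a member of the SidecarVariant union.
function describeVariant(v: SidecarVariant): string {
  const info = VARIANTS[v];
  return `${info.name}: ~${info.approxDownloadMB} MB`;
}
```

Narrowing the union (rather than just hiding the radio button) means any leftover CUDA code paths surface as compile errors instead of silent dead branches.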

CUDA can be revisited once the managed Deepgram service is ready.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Developer
2026-04-08 09:49:36 -07:00
parent ce64cacc5e
commit fb02a24334
3 changed files with 6 additions and 44 deletions


@@ -126,23 +126,6 @@
           </div>
         </label>
-        <label class="variant-option" class:selected={variant === "cuda"}>
-          <input
-            type="radio"
-            name="variant"
-            value="cuda"
-            bind:group={variant}
-          />
-          <div class="variant-info">
-            <span class="variant-name">Local - GPU (NVIDIA CUDA)</span>
-            <span class="variant-desc">~2 GB download</span>
-            <span class="variant-detail">
-              Runs Whisper AI models locally using your NVIDIA GPU for fast
-              transcription. No internet needed after download. Requires an
-              NVIDIA GPU with CUDA support.
-            </span>
-          </div>
-        </label>
       </div>
       <button class="download-btn" onclick={startDownload}>