Simplify build process: CUDA support now included by default

Since pyproject.toml is configured to use the PyTorch CUDA index by default,
all builds automatically include CUDA support. Removed the redundant separate
CUDA build scripts and updated the documentation.
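
For reference, the pyproject.toml wiring looks roughly like this (illustrative
sketch only; the project name "local-transcription" and index name
"pytorch-cu121" are assumed, and the real file may differ):

```toml
# Hypothetical excerpt: uv resolves torch from the PyTorch CUDA (cu121) wheel
# index, so a plain `uv sync` installs the CUDA-enabled build everywhere.
[project]
name = "local-transcription"        # assumed project name
dependencies = ["torch"]

[tool.uv.sources]
torch = { index = "pytorch-cu121" } # resolve torch from the CUDA index

[[tool.uv.index]]
name = "pytorch-cu121"
url = "https://download.pytorch.org/whl/cu121"
explicit = true                     # used only for packages that opt in
```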

Changes:
- Removed build-cuda.sh and build-cuda.bat (no longer needed)
- Updated build.sh and build.bat to include CUDA by default (sketched below)
  - Added "uv sync" step to ensure CUDA PyTorch is installed
  - Updated messages to clarify CUDA support is included
- Updated BUILD.md to reflect simplified build process
  - Removed separate CUDA build sections
  - Clarified all builds include CUDA support
  - Updated GPU support section
- Updated CLAUDE.md with simplified build commands
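
A rough sketch of what the simplified build.sh now amounts to (hypothetical;
the actual script may print different messages, but the steps mirror the
manual build shown in the diff below):

```bash
#!/usr/bin/env bash
# Hypothetical sketch of the simplified build.sh; build.bat is the Windows
# equivalent. CUDA support comes from pyproject.toml, with CPU fallback at runtime.
set -euo pipefail

echo "Building local-transcription (CUDA support included)..."
uv sync                      # install dependencies, incl. CUDA-enabled PyTorch
uv pip uninstall -q enum34   # remove incompatible enum34 package
uv run pyinstaller local-transcription.spec
echo "Build complete. Runs on both GPU and CPU systems."
```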

Benefits:
- Simpler build process (one script per platform instead of two)
- Less confusion about which script to use
- All builds work on any system (GPU or CPU)
- Automatic fallback to CPU if no GPU is available (sketched below)
- pyproject.toml is the single source of truth for dependencies
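
The fallback relies on PyTorch's runtime GPU detection, roughly like this
(illustrative only, not the app's actual device-selection code):

```python
# Hypothetical sketch: the bundled CUDA-enabled PyTorch reports whether a
# usable GPU is present, and the app runs on CPU otherwise.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Transcription will run on: {device}")
```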

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
commit d34d272cf0
parent be53f2e962
2025-12-28 19:09:36 -08:00
6 changed files with 42 additions and 186 deletions


@@ -64,23 +64,19 @@ uv pip install torch --index-url https://download.pytorch.org/whl/cu121
### Building Executables
```bash
-# Linux (CPU-only)
+# Linux (includes CUDA support - works on both GPU and CPU systems)
./build.sh
-# Linux (with CUDA support - works on both GPU and CPU systems)
-./build-cuda.sh
-# Windows (CPU-only)
+# Windows (includes CUDA support - works on both GPU and CPU systems)
build.bat
-# Windows (with CUDA support)
-build-cuda.bat
# Manual build with PyInstaller
+uv sync # Install dependencies (includes CUDA PyTorch)
uv pip uninstall -q enum34 # Remove incompatible enum34 package
uv run pyinstaller local-transcription.spec
```
-**Important:** CUDA builds can be created on systems without NVIDIA GPUs. The PyTorch CUDA runtime is bundled, and the app automatically falls back to CPU if no GPU is available.
+**Important:** All builds include CUDA support via `pyproject.toml` configuration. CUDA builds can be created on systems without NVIDIA GPUs. The PyTorch CUDA runtime is bundled, and the app automatically falls back to CPU if no GPU is available.
### Testing
```bash