Simplify build process: CUDA support now included by default

Since pyproject.toml is configured to use the PyTorch CUDA index by default,
all builds automatically include CUDA support. Removed the now-redundant
separate CUDA build scripts and updated the documentation accordingly.

Changes:
- Removed build-cuda.sh and build-cuda.bat (no longer needed)
- Updated build.sh and build.bat to include CUDA by default
  - Added "uv sync" step to ensure CUDA PyTorch is installed
  - Updated messages to clarify CUDA support is included
- Updated BUILD.md to reflect simplified build process
  - Removed separate CUDA build sections
  - Clarified all builds include CUDA support
  - Updated GPU support section
- Updated CLAUDE.md with simplified build commands

Benefits:
- Simpler build process (one script per platform instead of two)
- Less confusion about which script to use
- All builds work on any system (GPU or CPU)
- Automatic fallback to CPU if no GPU available
- pyproject.toml is single source of truth for dependencies
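The last point — pyproject.toml as the single source of truth — refers to uv's index configuration. A hypothetical sketch of that pattern (index name, torch version, and CUDA variant are illustrative, not taken from this repo):

```toml
# Hypothetical sketch of the pyproject.toml pattern described above.
# The actual index name and torch constraint in this repo may differ.
[project]
name = "local-transcription"
dependencies = ["torch"]

# Route torch to the PyTorch CUDA wheel index instead of PyPI.
[tool.uv.sources]
torch = { index = "pytorch-cu121" }

[[tool.uv.index]]
name = "pytorch-cu121"
url = "https://download.pytorch.org/whl/cu121"
explicit = true
```

With a setup like this, a plain `uv sync` resolves torch from the CUDA index, which is why the build scripts no longer need a separate CUDA variant.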

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-28 19:09:36 -08:00
parent be53f2e962
commit d34d272cf0
6 changed files with 42 additions and 186 deletions


@@ -10,7 +10,7 @@ This guide explains how to build standalone executables for Linux and Windows.
## Building for Linux
-### Standard Build (CPU-only):
+### Standard Build (includes CUDA support):
```bash
# Make the build script executable (first time only)
@@ -20,20 +20,8 @@ chmod +x build.sh
./build.sh
```
-### CUDA Build (GPU Support):
-Build with CUDA support even without NVIDIA hardware:
-```bash
-# Make the build script executable (first time only)
-chmod +x build-cuda.sh
-# Run the CUDA build script
-./build-cuda.sh
-```
This will:
-- Install PyTorch with CUDA 12.1 support
+- Install PyTorch with CUDA 12.1 support (configured in pyproject.toml)
- Bundle CUDA runtime libraries (~600MB extra)
- Create an executable that works on both GPU and CPU systems
- Automatically fall back to CPU if no CUDA GPU is available
@@ -45,6 +33,12 @@ The executable will be created in `dist/LocalTranscription/LocalTranscription`
# Clean previous builds
rm -rf build dist
+# Sync dependencies (includes CUDA PyTorch)
+uv sync
# Remove incompatible enum34 package
uv pip uninstall -q enum34
# Build with PyInstaller
uv run pyinstaller local-transcription.spec
```
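After the manual steps above, the executable should exist at the path noted earlier. A quick sanity check could look like the following hypothetical helper (not part of the repo; the demo default probes the `sh` binary so the sketch runs anywhere, but in practice you would pass `dist/LocalTranscription/LocalTranscription`):

```shell
#!/bin/sh
# Hypothetical helper: verify a build artifact exists and is executable
# before packaging it. Takes the artifact path as an optional argument.
check_build() {
    out="$1"
    if [ -x "$out" ]; then
        echo "build ok: $out"
    else
        echo "build missing or not executable: $out" >&2
        return 1
    fi
}

# Demo default: the sh binary (always present). For a real check, pass
# the PyInstaller output path, e.g. dist/LocalTranscription/LocalTranscription
check_build "${1:-$(command -v sh)}"
```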
@@ -57,24 +51,15 @@ tar -czf LocalTranscription-Linux.tar.gz LocalTranscription/
## Building for Windows
-### Standard Build (CPU-only):
+### Standard Build (includes CUDA support):
```cmd
# Run the build script
build.bat
```
-### CUDA Build (GPU Support):
-Build with CUDA support even without NVIDIA hardware:
-```cmd
-# Run the CUDA build script
-build-cuda.bat
-```
This will:
-- Install PyTorch with CUDA 12.1 support
+- Install PyTorch with CUDA 12.1 support (configured in pyproject.toml)
- Bundle CUDA runtime libraries (~600MB extra)
- Create an executable that works on both GPU and CPU systems
- Automatically fall back to CPU if no CUDA GPU is available
@@ -87,6 +72,12 @@ The executable will be created in `dist\LocalTranscription\LocalTranscription.ex
rmdir /s /q build
rmdir /s /q dist
+# Sync dependencies (includes CUDA PyTorch)
+uv sync
# Remove incompatible enum34 package
uv pip uninstall -q enum34
# Build with PyInstaller
uv run pyinstaller local-transcription.spec
```
@@ -129,7 +120,7 @@ By default, the console window is visible (for debugging). To hide it:
### GPU Support
#### Building with CUDA (Recommended for Distribution)
-**Yes, you CAN build with CUDA support on systems without NVIDIA GPUs!**
+**CUDA support is included by default** in all builds via the PyTorch CUDA configuration in `pyproject.toml`.
@@ -140,41 +131,16 @@ PyTorch provides CUDA-enabled builds that bundle the CUDA runtime libraries. Thi
3. **Automatic fallback** - the app detects available hardware and uses GPU if available, CPU otherwise
4. **Larger file size** - adds ~600MB-1GB to the executable size
-**How it works:**
-```bash
-# Linux
-./build-cuda.sh
-# Windows
-build-cuda.bat
-```
-The build script will:
-- Install PyTorch with bundled CUDA 12.1 runtime
-- Package all CUDA libraries into the executable
-- Create a universal build that runs on any system
**When users run the executable:**
- If they have an NVIDIA GPU with drivers: Uses GPU acceleration
- If they don't have NVIDIA GPU: Automatically uses CPU
- No configuration needed - it just works!
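The runtime fallback described above happens inside the app (PyTorch selects GPU or CPU when it loads). As a rough external analogue — illustrative only, not part of the build scripts — a shell probe for an NVIDIA driver might look like:

```shell
#!/bin/sh
# Illustrative only: approximate the app's GPU-vs-CPU decision from the
# shell. The app itself decides via PyTorch at runtime; this just checks
# whether an NVIDIA driver (nvidia-smi) is present and responsive.
if command -v nvidia-smi >/dev/null 2>&1 && nvidia-smi >/dev/null 2>&1; then
    echo "NVIDIA driver present: the bundled CUDA runtime can use the GPU"
else
    echo "no NVIDIA driver: the app falls back to CPU"
fi
```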
-#### Alternative: CPU-Only Builds
-If you only want CPU support (smaller file size):
-```bash
-# Linux
-./build.sh
-# Windows
-build.bat
-```
#### AMD GPU Support
- **ROCm**: Requires special PyTorch builds from AMD
- Not recommended for general distribution
-- Better to use CUDA build (works on all systems) or CPU build
+- The default CUDA build already works on all systems (NVIDIA GPU, AMD GPU, or CPU-only)
### Optimizations