Simplify build process: CUDA support now included by default
Since pyproject.toml is configured to use PyTorch CUDA index by default, all builds automatically include CUDA support. Removed redundant separate CUDA build scripts and updated documentation.

Changes:
- Removed build-cuda.sh and build-cuda.bat (no longer needed)
- Updated build.sh and build.bat to include CUDA by default
  - Added "uv sync" step to ensure CUDA PyTorch is installed
  - Updated messages to clarify CUDA support is included
- Updated BUILD.md to reflect simplified build process
  - Removed separate CUDA build sections
  - Clarified all builds include CUDA support
  - Updated GPU support section
- Updated CLAUDE.md with simplified build commands

Benefits:
- Simpler build process (one script per platform instead of two)
- Less confusion about which script to use
- All builds work on any system (GPU or CPU)
- Automatic fallback to CPU if no GPU available
- pyproject.toml is single source of truth for dependencies

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
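The simplified flow amounts to three commands per platform. A minimal sketch of the Linux version, wrapped in a hypothetical helper function for illustration (the individual commands come from the updated build.sh in this commit):

```shell
# Hypothetical wrapper illustrating the simplified Linux build flow;
# the individual commands are taken from the updated build.sh.
build_local_transcription() {
    # Clean previous builds
    rm -rf build dist

    # Sync dependencies; pyproject.toml points at the PyTorch CUDA index,
    # so this pulls CUDA-enabled torch automatically
    uv sync

    # enum34 is incompatible with PyInstaller, so drop it if present
    uv pip uninstall -q enum34 2>/dev/null || true

    # Build the standalone executable
    uv run pyinstaller local-transcription.spec
}
```

The resulting executable lands in `dist/LocalTranscription/`, and CUDA-vs-CPU selection happens at runtime, not at build time.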
BUILD.md (70 lines changed)
````diff
@@ -10,7 +10,7 @@ This guide explains how to build standalone executables for Linux and Windows.
 
 ## Building for Linux
 
-### Standard Build (CPU-only):
+### Standard Build (includes CUDA support):
 
 ```bash
 # Make the build script executable (first time only)
@@ -20,20 +20,8 @@ chmod +x build.sh
 ./build.sh
 ```
 
-### CUDA Build (GPU Support):
-
-Build with CUDA support even without NVIDIA hardware:
-
-```bash
-# Make the build script executable (first time only)
-chmod +x build-cuda.sh
-
-# Run the CUDA build script
-./build-cuda.sh
-```
-
 This will:
-- Install PyTorch with CUDA 12.1 support
+- Install PyTorch with CUDA 12.1 support (configured in pyproject.toml)
 - Bundle CUDA runtime libraries (~600MB extra)
 - Create an executable that works on both GPU and CPU systems
 - Automatically fall back to CPU if no CUDA GPU is available
@@ -45,6 +33,12 @@ The executable will be created in `dist/LocalTranscription/LocalTranscription`
 # Clean previous builds
 rm -rf build dist
 
+# Sync dependencies (includes CUDA PyTorch)
+uv sync
+
+# Remove incompatible enum34 package
+uv pip uninstall -q enum34
+
 # Build with PyInstaller
 uv run pyinstaller local-transcription.spec
 ```
@@ -57,24 +51,15 @@ tar -czf LocalTranscription-Linux.tar.gz LocalTranscription/
 
 ## Building for Windows
 
-### Standard Build (CPU-only):
+### Standard Build (includes CUDA support):
 
 ```cmd
 # Run the build script
 build.bat
 ```
 
-### CUDA Build (GPU Support):
-
-Build with CUDA support even without NVIDIA hardware:
-
-```cmd
-# Run the CUDA build script
-build-cuda.bat
-```
-
 This will:
-- Install PyTorch with CUDA 12.1 support
+- Install PyTorch with CUDA 12.1 support (configured in pyproject.toml)
 - Bundle CUDA runtime libraries (~600MB extra)
 - Create an executable that works on both GPU and CPU systems
 - Automatically fall back to CPU if no CUDA GPU is available
@@ -87,6 +72,12 @@ The executable will be created in `dist\LocalTranscription\LocalTranscription.exe`
 rmdir /s /q build
 rmdir /s /q dist
 
+# Sync dependencies (includes CUDA PyTorch)
+uv sync
+
+# Remove incompatible enum34 package
+uv pip uninstall -q enum34
+
 # Build with PyInstaller
 uv run pyinstaller local-transcription.spec
 ```
@@ -129,7 +120,7 @@ By default, the console window is visible (for debugging). To hide it:
 
 ### GPU Support
 
-#### Building with CUDA (Recommended for Distribution)
+**CUDA support is included by default** in all builds via the PyTorch CUDA configuration in `pyproject.toml`.
 
 **Yes, you CAN build with CUDA support on systems without NVIDIA GPUs!**
 
@@ -140,41 +131,16 @@ PyTorch provides CUDA-enabled builds that bundle the CUDA runtime libraries. This
 3. **Automatic fallback** - the app detects available hardware and uses GPU if available, CPU otherwise
 4. **Larger file size** - adds ~600MB-1GB to the executable size
 
-**How it works:**
-```bash
-# Linux
-./build-cuda.sh
-
-# Windows
-build-cuda.bat
-```
-
-The build script will:
-- Install PyTorch with bundled CUDA 12.1 runtime
-- Package all CUDA libraries into the executable
-- Create a universal build that runs on any system
-
 **When users run the executable:**
 - If they have an NVIDIA GPU with drivers: Uses GPU acceleration
 - If they don't have NVIDIA GPU: Automatically uses CPU
 - No configuration needed - it just works!
 
-#### Alternative: CPU-Only Builds
-
-If you only want CPU support (smaller file size):
-```bash
-# Linux
-./build.sh
-
-# Windows
-build.bat
-```
-
 #### AMD GPU Support
 
 - **ROCm**: Requires special PyTorch builds from AMD
 - Not recommended for general distribution
-- Better to use CUDA build (works on all systems) or CPU build
+- The default CUDA build already works on all systems (NVIDIA GPU, AMD GPU, or CPU-only)
 
 ### Optimizations
````
CLAUDE.md (14 lines changed)
````diff
@@ -64,23 +64,19 @@ uv pip install torch --index-url https://download.pytorch.org/whl/cu121
 
 ### Building Executables
 ```bash
-# Linux (CPU-only)
+# Linux (includes CUDA support - works on both GPU and CPU systems)
 ./build.sh
 
-# Linux (with CUDA support - works on both GPU and CPU systems)
-./build-cuda.sh
-
-# Windows (CPU-only)
+# Windows (includes CUDA support - works on both GPU and CPU systems)
 build.bat
 
-# Windows (with CUDA support)
-build-cuda.bat
-
 # Manual build with PyInstaller
+uv sync  # Install dependencies (includes CUDA PyTorch)
 uv pip uninstall -q enum34  # Remove incompatible enum34 package
 uv run pyinstaller local-transcription.spec
 ```
 
-**Important:** CUDA builds can be created on systems without NVIDIA GPUs. The PyTorch CUDA runtime is bundled, and the app automatically falls back to CPU if no GPU is available.
+**Important:** All builds include CUDA support via `pyproject.toml` configuration. CUDA builds can be created on systems without NVIDIA GPUs. The PyTorch CUDA runtime is bundled, and the app automatically falls back to CPU if no GPU is available.
 
 ### Testing
 ```bash
````
build-cuda.bat (deleted)

````diff
@@ -1,61 +0,0 @@
-@echo off
-REM Build script for Windows with CUDA support
-
-echo Building Local Transcription with CUDA support...
-echo ==================================================
-echo.
-echo This will create a build that supports both CPU and CUDA GPUs.
-echo The executable will be larger (~2-3GB) but will work on any system.
-echo.
-
-set /p INSTALL_CUDA="Install PyTorch with CUDA support? (y/n) "
-if /i "%INSTALL_CUDA%"=="y" (
-    echo Installing PyTorch with CUDA 12.1 support...
-
-    REM Uninstall CPU-only version if present
-    REM Note: uv doesn't support -y flag, it uninstalls without confirmation
-    uv pip uninstall torch 2>nul
-
-    REM Install CUDA-enabled PyTorch
-    REM This installs PyTorch with bundled CUDA runtime
-    uv pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
-
-    echo CUDA-enabled PyTorch installed
-    echo.
-)
-
-REM Clean previous builds
-echo Cleaning previous builds...
-if exist build rmdir /s /q build
-if exist dist rmdir /s /q dist
-
-REM Remove enum34 if present (incompatible with PyInstaller)
-echo Removing enum34 (if present)...
-uv pip uninstall -q enum34 2>nul
-
-REM Build with PyInstaller
-echo Running PyInstaller...
-uv run pyinstaller local-transcription.spec
-
-REM Check if build succeeded
-if exist "dist\LocalTranscription" (
-    echo.
-    echo Build successful!
-    echo Executable location: dist\LocalTranscription\LocalTranscription.exe
-    echo.
-    echo CUDA Support: YES (falls back to CPU if CUDA not available^)
-    echo.
-    echo To run the application:
-    echo   cd dist\LocalTranscription
-    echo   LocalTranscription.exe
-    echo.
-    echo To create a distributable package:
-    echo   - Compress the dist\LocalTranscription folder to a ZIP file
-    echo   - Name it: LocalTranscription-Windows-CUDA.zip
-    echo.
-    echo Note: This build will work on systems with or without NVIDIA GPUs.
-) else (
-    echo.
-    echo Build failed!
-    exit /b 1
-)
````
build-cuda.sh (deleted)

````diff
@@ -1,62 +0,0 @@
-#!/bin/bash
-# Build script for Linux with CUDA support
-
-echo "Building Local Transcription with CUDA support..."
-echo "=================================================="
-echo ""
-echo "This will create a build that supports both CPU and CUDA GPUs."
-echo "The executable will be larger (~2-3GB) but will work on any system."
-echo ""
-
-# Check if we should install CUDA-enabled PyTorch
-read -p "Install PyTorch with CUDA support? (y/n) " -n 1 -r
-echo
-if [[ $REPLY =~ ^[Yy]$ ]]
-then
-    echo "Installing PyTorch with CUDA 12.1 support..."
-    # Uninstall CPU-only version if present
-    # Note: uv doesn't support -y flag, it uninstalls without confirmation
-    uv pip uninstall torch 2>/dev/null || true
-
-    # Install CUDA-enabled PyTorch
-    # This installs PyTorch with bundled CUDA runtime
-    uv pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
-
-    echo "✓ CUDA-enabled PyTorch installed"
-    echo ""
-fi
-
-# Clean previous builds
-echo "Cleaning previous builds..."
-rm -rf build dist
-
-# Remove enum34 if present (incompatible with PyInstaller)
-echo "Removing enum34 (if present)..."
-uv pip uninstall -q enum34 2>/dev/null || true
-
-# Build with PyInstaller
-echo "Running PyInstaller..."
-uv run pyinstaller local-transcription.spec
-
-# Check if build succeeded
-if [ -d "dist/LocalTranscription" ]; then
-    echo ""
-    echo "✓ Build successful!"
-    echo "Executable location: dist/LocalTranscription/LocalTranscription"
-    echo ""
-    echo "CUDA Support: YES (falls back to CPU if CUDA not available)"
-    echo ""
-    echo "To run the application:"
-    echo "  cd dist/LocalTranscription"
-    echo "  ./LocalTranscription"
-    echo ""
-    echo "To create a distributable package:"
-    echo "  cd dist"
-    echo "  tar -czf LocalTranscription-Linux-CUDA.tar.gz LocalTranscription/"
-    echo ""
-    echo "Note: This build will work on systems with or without NVIDIA GPUs."
-else
-    echo ""
-    echo "✗ Build failed!"
-    exit 1
-fi
````
build.bat (10 lines changed)
````diff
@@ -1,15 +1,21 @@
 @echo off
-REM Build script for Windows
+REM Build script for Windows with CUDA support (falls back to CPU if no GPU)
 
 echo Building Local Transcription for Windows...
 echo ==========================================
 echo.
+echo This build includes CUDA support and works on both GPU and CPU systems.
+echo.
 
 REM Clean previous builds
 echo Cleaning previous builds...
 if exist build rmdir /s /q build
 if exist dist rmdir /s /q dist
 
+REM Sync dependencies (uses PyTorch CUDA from pyproject.toml)
+echo Installing dependencies with CUDA support...
+uv sync
+
 REM Remove enum34 if present (incompatible with PyInstaller)
 echo Removing enum34 (if present)...
 uv pip uninstall -q enum34 2>nul
@@ -24,6 +30,8 @@ if exist "dist\LocalTranscription" (
     echo Build successful!
     echo Executable location: dist\LocalTranscription\LocalTranscription.exe
     echo.
+    echo CUDA Support: YES (automatically falls back to CPU if no GPU detected^)
+    echo.
     echo To run the application:
     echo   cd dist\LocalTranscription
     echo   LocalTranscription.exe
````
build.sh (11 lines changed)
````diff
@@ -1,13 +1,20 @@
 #!/bin/bash
-# Build script for Linux
+# Build script for Linux with CUDA support (falls back to CPU if no GPU)
 
 echo "Building Local Transcription for Linux..."
 echo "========================================="
 echo ""
+echo "This build includes CUDA support and works on both GPU and CPU systems."
+echo ""
 
 # Clean previous builds
 echo "Cleaning previous builds..."
 rm -rf build dist
 
+# Sync dependencies (uses PyTorch CUDA from pyproject.toml)
+echo "Installing dependencies with CUDA support..."
+uv sync
+
 # Remove enum34 if present (incompatible with PyInstaller)
 echo "Removing enum34 (if present)..."
 uv pip uninstall -q enum34 2>/dev/null || true
@@ -22,6 +29,8 @@ if [ -d "dist/LocalTranscription" ]; then
     echo "✓ Build successful!"
     echo "Executable location: dist/LocalTranscription/LocalTranscription"
     echo ""
+    echo "CUDA Support: YES (automatically falls back to CPU if no GPU detected)"
+    echo ""
     echo "To run the application:"
     echo "  cd dist/LocalTranscription"
     echo "  ./LocalTranscription"
````