local-transcription/build-cuda.sh
Josh Knapp 472233aec4 Initial commit: Local Transcription App v1.0
Phase 1 Complete - Standalone Desktop Application

Features:
- Real-time speech-to-text with Whisper (faster-whisper)
- PySide6 desktop GUI with settings dialog
- Web server for OBS browser source integration
- Audio capture with automatic sample rate detection and resampling
- Noise suppression with Voice Activity Detection (VAD)
- Configurable display settings (font, timestamps, fade duration)
- Settings apply without restart (with automatic model reloading)
- Auto-fade for web display transcriptions
- CPU/GPU support with automatic device detection
- Standalone executable builds (PyInstaller)
- CUDA build support (works on systems without CUDA hardware)
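
For illustration, the sketch below shows one common way to implement the automatic device selection and input sample-rate detection listed above, using torch and sounddevice; the function names (pick_whisper_device, default_input_sample_rate) are illustrative, not the app's actual API.

# Minimal sketch (assumed names, not the app's actual module).
import sounddevice as sd

def pick_whisper_device() -> str:
    """Prefer CUDA when a compatible GPU is present; otherwise fall back to CPU."""
    try:
        import torch
        if torch.cuda.is_available():
            return "cuda"
    except ImportError:
        pass
    return "cpu"

def default_input_sample_rate() -> int:
    """Read the default input device's native sample rate (e.g. 44100 or 48000 Hz)."""
    info = sd.query_devices(kind="input")
    return int(info["default_samplerate"])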

Components:
- Audio capture with sounddevice
- Noise reduction with noisereduce + webrtcvad
- Transcription with faster-whisper
- GUI with PySide6
- Web server with FastAPI + WebSocket
- Configuration system with YAML
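
A minimal sketch of the transcription step with faster-whisper, using its built-in VAD filter to skip silence; the model size, audio file name, and language option here are assumptions, not the app's actual configuration.

# Minimal sketch (assumed settings, not the app's actual pipeline).
from faster_whisper import WhisperModel

# The real app would pass the detected device ("cuda" or "cpu") and a
# matching compute type instead of these hard-coded values.
model = WhisperModel("base", device="cpu", compute_type="int8")

segments, info = model.transcribe("sample.wav", vad_filter=True, language="en")
for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")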

Build System:
- Standard builds (CPU-only): build.sh / build.bat
- CUDA builds (universal): build-cuda.sh / build-cuda.bat
- Comprehensive BUILD.md documentation
- Cross-platform support (Linux, Windows)
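
Both build scripts hand off to local-transcription.spec (referenced by build-cuda.sh below). That spec is not reproduced here, but a minimal one-folder PyInstaller spec generally has this shape; the entry point and bundled config file are assumptions.

# Minimal sketch of a one-folder spec; PyInstaller provides Analysis, PYZ,
# EXE, and COLLECT when it executes the spec, so no imports are needed.
a = Analysis(
    ["main.py"],                    # assumed entry point
    datas=[("config.yaml", ".")],   # assumed bundled default config
    hiddenimports=[],
)
pyz = PYZ(a.pure)
exe = EXE(
    pyz,
    a.scripts,
    exclude_binaries=True,          # one-folder build: binaries live in COLLECT
    name="LocalTranscription",
    console=True,
)
coll = COLLECT(exe, a.binaries, a.datas, name="LocalTranscription")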

Documentation:
- README.md with project overview and quick start
- BUILD.md with detailed build instructions
- NEXT_STEPS.md with future enhancement roadmap
- INSTALL.md with setup instructions

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-25 18:48:23 -08:00

58 lines · 1.7 KiB · Bash · Executable File

#!/bin/bash
# Build script for Linux with CUDA support

echo "Building Local Transcription with CUDA support..."
echo "=================================================="
echo ""
echo "This will create a build that supports both CPU and CUDA GPUs."
echo "The executable will be larger (~2-3GB) but will work on any system."
echo ""

# Check if we should install CUDA-enabled PyTorch
read -p "Install PyTorch with CUDA support? (y/n) " -n 1 -r
echo

if [[ $REPLY =~ ^[Yy]$ ]]; then
    echo "Installing PyTorch with CUDA 12.1 support..."

    # Uninstall CPU-only version if present
    uv pip uninstall -y torch

    # Install CUDA-enabled PyTorch
    # This installs PyTorch with bundled CUDA runtime
    uv pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121

    echo "✓ CUDA-enabled PyTorch installed"
    echo ""
fi

# Clean previous builds
echo "Cleaning previous builds..."
rm -rf build dist

# Build with PyInstaller
echo "Running PyInstaller..."
uv run pyinstaller local-transcription.spec

# Check if build succeeded
if [ -d "dist/LocalTranscription" ]; then
    echo ""
    echo "✓ Build successful!"
    echo "Executable location: dist/LocalTranscription/LocalTranscription"
    echo ""
    echo "CUDA Support: YES (falls back to CPU if CUDA not available)"
    echo ""
    echo "To run the application:"
    echo "  cd dist/LocalTranscription"
    echo "  ./LocalTranscription"
    echo ""
    echo "To create a distributable package:"
    echo "  cd dist"
    echo "  tar -czf LocalTranscription-Linux-CUDA.tar.gz LocalTranscription/"
    echo ""
    echo "Note: This build will work on systems with or without NVIDIA GPUs."
else
    echo ""
    echo "✗ Build failed!"
    exit 1
fi