Vosk Model Setup Instructions

Step 1: Download the Model

Download the small English model from Vosk:

Direct Link: https://alphacephei.com/vosk/models/vosk-model-small-en-us-0.15.zip

Size: ~40 MB

Step 2: Extract the Model

  1. Extract vosk-model-small-en-us-0.15.zip
  2. You should have a folder named vosk-model-small-en-us-0.15

Step 3: Add to Android Project

  1. Create the assets folder if it doesn't exist:

    mkdir -p ~/.openclaw/workspace/alfred-mobile/app/src/main/assets
    
  2. Move the extracted model folder:

    mv ~/Downloads/vosk-model-small-en-us-0.15 ~/.openclaw/workspace/alfred-mobile/app/src/main/assets/
    
  3. Verify the structure:

    app/src/main/assets/
    └── vosk-model-small-en-us-0.15/
        ├── am/
        ├── conf/
        ├── graph/
        ├── ivector/
        └── README
    

Step 4: Rebuild the App

The model will be bundled with the APK. This increases the app size by ~40 MB but allows completely offline wake word detection.
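
Unpacking and decoding the bundled model at runtime is typically done through the official vosk-android library. If the project does not already declare it, here is a minimal sketch of the Gradle dependency, assuming the app module uses the Kotlin DSL (artifact versions are assumptions; check Maven Central for current releases):

    // app/build.gradle.kts (assumed module path and Kotlin DSL)
    dependencies {
        // Vosk speech recognition engine for Android
        implementation("com.alphacephei:vosk-android:0.3.47")
        // JNA is needed by Vosk's native bindings (version is an assumption)
        implementation("net.java.dev.jna:jna:5.13.0@aar")
    }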

Alternative: Smaller Model

If 40 MB is too large, you can use an even smaller model:

vosk-model-small-en-us-0.4 (~10 MB)

Verification

Once the model is in place, the app will (see the Kotlin sketch after this list):

  1. Automatically unpack it to internal storage on first run
  2. Load it into memory
  3. Start listening for "alfred", "hey alfred", or "ok alfred"
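
A minimal Kotlin sketch of that first-run flow, assuming the standard vosk-android APIs (StorageService, Recognizer, SpeechService) and the asset folder name from Step 3; the class and wake-phrase handling below are illustrative, not taken from the Alfred Mobile source:

    import android.content.Context
    import org.vosk.Model
    import org.vosk.Recognizer
    import org.vosk.android.RecognitionListener
    import org.vosk.android.SpeechService
    import org.vosk.android.StorageService
    import java.io.IOException

    // Illustrative wake word bootstrap; names here are hypothetical, not from the app source.
    class WakeWordEngine(private val context: Context) : RecognitionListener {

        private var speechService: SpeechService? = null

        fun start() {
            // 1. Unpack the bundled model from assets/ into internal storage on first run.
            //    The source name must match the folder placed in app/src/main/assets/.
            StorageService.unpack(
                context,
                "vosk-model-small-en-us-0.15",  // folder inside assets/
                "model",                        // target directory in internal storage
                { model: Model ->
                    // 2. Model loaded; restrict decoding to the wake phrases plus [unk].
                    val grammar = """["alfred", "hey alfred", "ok alfred", "[unk]"]"""
                    val recognizer = Recognizer(model, 16000.0f, grammar)
                    // 3. Stream microphone audio into the recognizer.
                    speechService = SpeechService(recognizer, 16000.0f)
                    speechService?.startListening(this)
                },
                { e: IOException -> e.printStackTrace() }  // unpack failed
            )
        }

        override fun onResult(hypothesis: String?) {
            // Final result JSON such as {"text": "hey alfred"}; trigger the assistant here.
        }

        override fun onPartialResult(hypothesis: String?) { /* optional early detection */ }
        override fun onFinalResult(hypothesis: String?) { /* end of utterance */ }
        override fun onError(e: Exception?) { /* handle recognizer errors */ }
        override fun onTimeout() { /* restart listening if needed */ }
    }

Restricting the recognizer to a small grammar keeps wake word detection cheap and cuts false positives from ordinary speech. Note that starting the SpeechService requires the RECORD_AUDIO runtime permission.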

Once the model is in place, rebuild the app and continue with the UI integration.