Compare commits

..

34 Commits

Author SHA1 Message Date
090aad6bc6 Fix project rename, remove confirmation, and auth mode change bugs
All checks were successful
Build App / build-macos (push) Successful in 2m43s
Build App / build-windows (push) Successful in 4m31s
Build App / build-linux (push) Successful in 4m41s
Build App / sync-to-github (push) Successful in 9s
- Add double-click-to-rename on project names in the sidebar
- Replace window.confirm() with inline React confirmation for project
  removal (confirm dialog didn't block in Tauri webview)
- Add serde(default) to skip_serializing fields in Rust models so
  deserialization doesn't fail when frontend omits secret fields

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-05 17:39:34 -08:00
c023d80c86 Hide mic toggle UI until upstream /voice WSL detection is fixed
All checks were successful
Build App / build-macos (push) Successful in 2m22s
Build App / build-windows (push) Successful in 3m57s
Build App / build-linux (push) Successful in 5m17s
Build App / sync-to-github (push) Successful in 11s
Claude Code's /voice command incorrectly detects containers running
on WSL2 hosts as unsupported WSL environments. Remove the mic button
from project cards and microphone settings from the settings panel,
but keep the useVoice hook and MicrophoneSettings component so they
can be re-enabled once the upstream issue is resolved.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-05 07:12:35 -08:00
33f02e65c0 Move mic button from terminal overlay to project action buttons
All checks were successful
Build App / build-macos (push) Successful in 2m53s
Build App / build-windows (push) Successful in 3m26s
Build App / build-linux (push) Successful in 5m59s
Build App / sync-to-github (push) Successful in 11s
Relocates the voice/mic toggle from a floating overlay on the terminal
view to the project command row (alongside Stop, Terminal, Config) so
it no longer blocks access to the terminal window.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-05 06:58:03 -08:00
c5e28f9caa feat: add microphone selection to settings
All checks were successful
Build App / build-macos (push) Successful in 2m21s
Build App / build-windows (push) Successful in 3m28s
Build App / build-linux (push) Successful in 5m42s
Build App / sync-to-github (push) Successful in 18s
Adds a dropdown in Settings to choose which audio input device to
use for voice mode. Enumerates devices via the browser's
mediaDevices API and persists the selection in AppSettings.
The useVoice hook passes the selected deviceId to getUserMedia().

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-05 06:15:47 -08:00
86176d8830 feat: add voice mode support via mic passthrough to container
Some checks failed
Build App / build-macos (push) Successful in 2m21s
Build App / build-windows (push) Successful in 3m24s
Build App / sync-to-github (push) Has been cancelled
Build App / build-linux (push) Has been cancelled
Build Container / build-container (push) Successful in 54s
Enables Claude Code's /voice command inside Docker containers by
capturing microphone audio in the Tauri webview and streaming it
into the container via a FIFO pipe.

Container: fake rec/arecord shims read PCM from a FIFO instead of
a real mic. Audio bridge exec writes PCM from Tauri into the FIFO.
Frontend: getUserMedia() + AudioWorklet captures 16kHz mono PCM
and streams it to the container via invoke("send_audio_data").
UI: "Mic Off/On" toggle button in the terminal view.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-05 06:11:33 -08:00
58a10c65e9 feat: add OSC 52 clipboard support for container-to-host copy
All checks were successful
Build App / build-macos (push) Successful in 2m24s
Build App / build-windows (push) Successful in 3m57s
Build App / build-linux (push) Successful in 8m28s
Build Container / build-container (push) Successful in 1m47s
Build App / sync-to-github (push) Successful in 12s
Programs inside the container (e.g. Claude Code's "hit c to copy") can
now write to the host system clipboard. A shell script shim installed as
xclip/xsel/pbcopy emits OSC 52 escape sequences, which the xterm.js
frontend intercepts and forwards to navigator.clipboard.writeText().

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-05 05:47:42 -08:00
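The shim pattern this commit describes can be sketched in a few lines of shell. This is an illustrative reconstruction, not the repo's actual script; the function name `osc52_copy` is invented here:

```shell
# Hypothetical sketch of an xclip/xsel/pbcopy replacement: base64-encode
# stdin and wrap it in an OSC 52 "set clipboard" escape sequence, which a
# terminal frontend (here, xterm.js) can intercept and forward to the
# host clipboard.
osc52_copy() {
  payload=$(base64 | tr -d '\n')      # OSC 52 payloads are single-line base64
  printf '\033]52;c;%s\a' "$payload"  # ESC ] 52 ; c ; <data> BEL
}
```

Installed under the names programs look for (xclip, pbcopy, …), any in-container "copy" action ends up emitting this sequence instead of touching a nonexistent display server.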
d56c6e3845 fix: validate AWS SSO session before launching Claude for Bedrock Profile auth
All checks were successful
Build App / build-macos (push) Successful in 2m20s
Build App / build-windows (push) Successful in 3m21s
Build App / build-linux (push) Successful in 5m41s
Build Container / build-container (push) Successful in 1m27s
Build App / sync-to-github (push) Successful in 12s
When using AWS Profile auth (SSO) with Bedrock, expired SSO sessions
caused Claude Code to spin indefinitely. Three root causes fixed:

1. Mount host .aws at /tmp/.host-aws (read-only) and copy to
   /home/claude/.aws in entrypoint, mirroring the SSH key pattern.
   This gives AWS CLI writable sso/cache and cli/cache directories.

2. For Bedrock Profile projects, wrap the claude command in a bash
   script that validates credentials via `aws sts get-caller-identity`
   before launch. If the SSO session is expired, the script runs
   `aws sso login` with the auth URL visible and clickable in the terminal.

3. Non-SSO profiles with bad creds get a warning but Claude still
   starts. Non-Bedrock projects are unaffected.

Note: existing containers need a rebuild to pick up the new mount path.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-04 11:41:42 -08:00
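The wrapper logic from point 2 can be sketched as below, with the AWS and Claude commands passed in as parameters so the control flow is visible on its own; the real script hardcodes `aws sts get-caller-identity`, `aws sso login`, and the claude invocation, and this function name is invented for illustration:

```shell
# Sketch of the validate-then-launch flow described in the commit.
# $1: credential check command, $2: login command, $3: main command.
launch_with_valid_creds() {
  check_cmd=$1; login_cmd=$2; main_cmd=$3
  if sh -c "$check_cmd" >/dev/null 2>&1; then
    echo "credentials valid"
  else
    # Expired SSO session: refresh interactively before launching,
    # so the auth URL appears in the terminal instead of Claude spinning.
    echo "session expired, refreshing..."
    sh -c "$login_cmd"
  fi
  sh -c "$main_cmd"
}
```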
574fca633a fix: sync build artifacts to GitHub instead of empty source archives
All checks were successful
Build App / build-macos (push) Successful in 2m22s
Build App / build-windows (push) Successful in 3m21s
Build App / build-linux (push) Successful in 4m37s
Build App / sync-to-github (push) Successful in 11s
Move GitHub release sync into build-app.yml as a final sync-to-github
job that runs after all 3 platform builds complete. This eliminates the
race condition where sync-release.yml triggered before artifacts were
uploaded to Gitea. The old sync-release.yml is changed to manual-only.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-04 10:57:27 -08:00
e07c0e6150 fix: use SHA-256 for container fingerprints instead of DefaultHasher
All checks were successful
Build App / build-macos (push) Successful in 2m23s
Build App / build-windows (push) Successful in 3m17s
Build App / build-linux (push) Successful in 4m30s
Sync Release to GitHub / sync-release (release) Successful in 2s
DefaultHasher is not stable across Rust compiler versions or binary
rebuilds, causing unnecessary container recreations on every app update.
Replace with SHA-256 for deterministic, cross-build-stable fingerprints.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-04 10:42:06 -08:00
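The Rust change swaps DefaultHasher for a SHA-256 crate; the same idea shown with coreutils — a fingerprint derived from SHA-256 is deterministic across binaries and compiler versions, whereas DefaultHasher's output is not:

```shell
# Stable content fingerprint: identical input always yields an identical
# digest, so container recreation only triggers on real config changes.
fingerprint() {
  printf '%s' "$1" | sha256sum | cut -d' ' -f1
}
```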
20a07c84f2 feat: upgrade MCP to Docker-based architecture (Beta)
All checks were successful
Build App / build-macos (push) Successful in 2m21s
Build App / build-windows (push) Successful in 3m50s
Build App / build-linux (push) Successful in 5m28s
Sync Release to GitHub / sync-release (release) Successful in 2s
Each MCP server can now run as its own Docker container on a dedicated
per-project bridge network, enabling proper isolation and lifecycle
management. SSE transport is removed (deprecated per MCP spec) with
backward-compatible serde alias. Docker socket access is auto-enabled
when stdio+Docker MCP servers are configured.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-04 10:21:05 -08:00
625d48a6ed feat: add MCP server support with global library and per-project toggles
All checks were successful
Build App / build-macos (push) Successful in 2m20s
Build App / build-windows (push) Successful in 3m21s
Build App / build-linux (push) Successful in 5m8s
Build Container / build-container (push) Successful in 1m4s
Sync Release to GitHub / sync-release (release) Successful in 2s
Add Model Context Protocol (MCP) server configuration support. Users can
define MCP servers globally (new sidebar tab) and enable them per-project.
Enabled servers are injected into containers as MCP_SERVERS_JSON env var
and merged into ~/.claude.json by the entrypoint.

Backend: McpServer model, McpStore (JSON + atomic writes), 4 CRUD commands,
container injection with fingerprint-based recreation detection.
Frontend: MCP sidebar tab, McpPanel/McpServerCard components, useMcpServers
hook, per-project MCP checkboxes in ProjectCard config.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-04 08:57:12 -08:00
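The entrypoint merge this commit describes might look roughly like the following jq-based sketch. The env var name and target file follow the commit message, but the key layout of `~/.claude.json` and the function name are assumptions:

```shell
# Merge MCP_SERVERS_JSON (a JSON object of server definitions) into the
# mcpServers key of a Claude config file, preserving existing content.
merge_mcp_servers() {
  cfg=$1
  [ -f "$cfg" ] || echo '{}' > "$cfg"
  [ -n "$MCP_SERVERS_JSON" ] || MCP_SERVERS_JSON='{}'
  jq --argjson servers "$MCP_SERVERS_JSON" \
    '.mcpServers = (.mcpServers // {}) + $servers' \
    "$cfg" > "$cfg.tmp" && mv "$cfg.tmp" "$cfg"
}
```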
2ddc705925 feat: show container progress in modal and add terminal jump-to-bottom button
All checks were successful
Build App / build-macos (push) Successful in 2m21s
Build App / build-windows (push) Successful in 3m19s
Build App / build-linux (push) Successful in 4m31s
Sync Release to GitHub / sync-release (release) Successful in 3s
Show container start/stop/rebuild progress as a modal popup instead of
inline text that was never visible. Add optimistic status updates so the
status dot turns yellow immediately. Also add a "Jump to Current" button
in the terminal when scrolled away from the bottom.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-04 08:22:35 -08:00
1aced2d860 feat: add progress feedback during slow container starts
All checks were successful
Build App / build-macos (push) Successful in 2m20s
Build App / build-windows (push) Successful in 3m17s
Build App / build-linux (push) Successful in 7m23s
Sync Release to GitHub / sync-release (release) Successful in 2s
Emit container-progress events from Rust at key milestones (checking
image, saving state, recreating, starting, stopping) and display them
in ProjectCard instead of the static "starting.../stopping..." text.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-04 07:43:01 -08:00
652e451afe fix: prevent projects from getting stuck in starting/stopping state
All checks were successful
Build App / build-macos (push) Successful in 2m21s
Build App / build-linux (push) Successful in 5m46s
Sync Release to GitHub / sync-release (release) Successful in 1s
Build App / build-windows (push) Successful in 6m35s
Reconcile stale transient statuses on app startup, add Force Stop button
for transient states, and harden stop_project_container error handling so
Docker failures don't leave projects permanently stuck.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-04 07:12:49 -08:00
eb86aa95b7 fix: persist full container state across stop/start and config-change recreation
All checks were successful
Build App / build-macos (push) Successful in 2m25s
Build App / build-windows (push) Successful in 2m29s
Build App / build-linux (push) Successful in 4m34s
Sync Release to GitHub / sync-release (release) Successful in 1s
- Add home volume (triple-c-home-{id}) for /home/claude to persist
  .claude.json, .local, and other user-level state across restarts
- Add docker commit before recreation: when container_needs_recreation()
  triggers, snapshot the container to preserve system-level changes
  (apt/pip/npm installs), then create the new container from that snapshot
- On Reset/removal: delete snapshot image + both volumes for clean slate
- Remove commit from stop_project_container (stop/start preserves the
  writable layer naturally; no commit needed)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-03 16:16:19 -08:00
3228e6cdd7 fix: Docker status never updates after Docker starts
All checks were successful
Build App / build-macos (push) Successful in 2m22s
Build App / build-windows (push) Successful in 3m22s
Build App / build-linux (push) Successful in 5m56s
Sync Release to GitHub / sync-release (release) Successful in 1s
Replace OnceLock with Mutex<Option<Docker>> in the Rust backend so
failed Docker connections are retried instead of cached permanently.
Add frontend polling (every 5s) when Docker is initially unavailable,
stopping once detected.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-03 14:46:59 -08:00
3344ce1cbf fix: prevent spurious container recreation on every start
All checks were successful
Build App / build-macos (push) Successful in 2m22s
Build App / build-windows (push) Successful in 4m1s
Build App / build-linux (push) Successful in 5m6s
Sync Release to GitHub / sync-release (release) Successful in 1s
The CLAUDE_INSTRUCTIONS env var was computed differently during container
creation (with port mapping docs + scheduler instructions appended) vs
the recreation check (bare merge only). This caused
container_needs_recreation() to always return true, triggering a full
recreate on every stop/start cycle.

Extract build_claude_instructions() helper used by both code paths so
the expected value always matches what was set at creation time.

Also add TODO.md noting planned tauri-plugin-updater integration for
seamless in-app updates on all platforms.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-03 14:22:25 -08:00
d642cc64de fix: use browser_download_url for Gitea asset downloads
The API endpoint /releases/assets/{id} returns JSON metadata, not the
binary file. Use the browser_download_url from the asset object instead.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-02 10:09:41 -08:00
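The fix amounts to reading the download URL out of the asset object instead of constructing an API path. A minimal sketch (the sample JSON and function name here are made up for demonstration):

```shell
# Resolve an asset's direct download URL from release JSON. Hitting
# /releases/assets/{id} returns JSON metadata, not the binary, so the
# browser_download_url field is the one to follow.
asset_url() {
  printf '%s' "$1" | jq -r --arg n "$2" \
    '.assets[] | select(.name == $n) | .browser_download_url'
}
```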
e3502876eb rename Triple-C.md to README.md
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-02 10:07:16 -08:00
4f41f0d98b feat: add Gitea actions to sync releases to GitHub
Add two new workflows:
- sync-release.yml: automatically mirrors releases (with assets) to GitHub when published on Gitea
- backfill-releases.yml: manual workflow to bulk-sync all existing Gitea releases to GitHub

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-02 10:06:29 -08:00
c9dc232fc4 fix: remove Node.js from actual path location on Act runner
Some checks failed
Build App / build-windows (push) Successful in 3m40s
Build App / build-linux (push) Successful in 7m4s
Build App / build-macos (push) Failing after 11m34s
The Act runner has Node 18 at /opt/acttoolcache/node/18.20.3/x64/bin/,
not at /usr/local/bin/. Use $(dirname "$(which node)") to find and
remove the actual binary location before installing Node 22.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 18:08:46 -08:00
2d4fce935f fix: remove old Node.js before installing v22 on Linux runner
Some checks failed
Build App / build-linux (push) Failing after 2m34s
Build App / build-windows (push) Successful in 3m40s
Build App / build-macos (push) Has been cancelled
The Act runner has Node 18 at /usr/local/bin/node which takes
precedence over the apt-installed /usr/bin/node. Even after
running nodesource setup and apt-get install, the old Node 18
binary remained in the PATH. Now removes old binaries and uses
hash -r to force path re-lookup. Also removes package-lock.json
before npm install to ensure correct platform-specific bindings.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 18:03:31 -08:00
e739f6aaff fix: check Node.js version, not just presence, in CI
Some checks failed
Build App / build-macos (push) Successful in 2m23s
Build App / build-linux (push) Failing after 3m38s
Build App / build-windows (push) Successful in 4m1s
The Act runner has Node.js v18 pre-installed, so the check
`command -v node` passes and skips installing v22. Node 18 is
too old for dependencies like vitest, jsdom, and tailwindcss/oxide.
Now checks the major version and upgrades if < 22.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 17:54:36 -08:00
550159fc63 Fix native binding error: use npm install instead of npm ci
Some checks failed
Build App / build-linux (push) Failing after 2m25s
Build App / build-macos (push) Successful in 2m34s
Build App / build-windows (push) Successful in 4m8s
@tailwindcss/oxide has platform-specific native bindings. The
package-lock.json was generated on a different platform, so npm ci
installs the wrong native binary. Switching to rm -rf node_modules
+ npm install lets npm resolve the correct platform-specific
optional dependency (e.g., @tailwindcss/oxide-linux-x64-gnu on
Linux, oxide-darwin-arm64 on macOS).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 17:44:03 -08:00
e3c874bc75 Fix cargo PATH: use explicit export in every step that needs it
Some checks failed
Build App / build-macos (push) Successful in 2m22s
Build App / build-windows (push) Successful in 4m1s
Build App / build-linux (push) Failing after 1m30s
The Gitea Act runner's Docker container does not reliably support
$GITHUB_PATH or sourcing ~/.cargo/env across steps. Both mechanisms
failed because the runner spawns a fresh shell for each step.

Adopted the same pattern that already works for the Windows job:
explicitly set PATH at the top of every step that calls cargo or
npx tauri. This is the most portable approach across all runner
environments (Act Docker containers, bare metal macOS, Windows).

Build history shows Linux succeeded through run#46 (JS-based
actions) and failed from run#49 onward (shell-based installs).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 17:36:02 -08:00
6cae0e7feb Source cargo env before using rustup in install step
Some checks failed
Build App / build-macos (push) Successful in 2m22s
Build App / build-linux (push) Has been cancelled
Build App / build-windows (push) Has been cancelled
GITHUB_PATH only takes effect in subsequent steps, but rustup/rustc/cargo
are called within the same step. Adding `. "$HOME/.cargo/env"` immediately
after install puts cargo/rustup in PATH for the remainder of the step.
Fixed in both Linux and macOS jobs.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 17:32:28 -08:00
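The distinction this commit relies on can be demonstrated in isolation — writing to the `$GITHUB_PATH` file changes nothing in the current shell, while exporting `PATH` (or sourcing `~/.cargo/env`) takes effect immediately. The mktemp file below stands in for the runner-provided path file:

```shell
# GITHUB_PATH is a file the runner reads *between* steps; appending to it
# cannot affect the step that is currently executing.
GITHUB_PATH=$(mktemp)
echo "$HOME/.cargo/bin" >> "$GITHUB_PATH"   # visible only to later steps
export PATH="$HOME/.cargo/bin:$PATH"        # visible right now
```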
b566446b75 Trigger multi-arch container build for ARM64 support
All checks were successful
Build Container / build-container (push) Successful in 8m40s
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 17:31:13 -08:00
601a2db3cf Move Node.js install before checkout in Linux build
Some checks failed
Build App / build-macos (push) Failing after 6s
Build App / build-windows (push) Successful in 3m38s
Build App / build-linux (push) Failing after 3m56s
The Gitea Linux runner also lacks Node.js, causing actions/checkout@v4
(a JS action) to fail with "node: executable file not found in PATH".
Same fix as the macOS job: install Node.js via shell before any
JS-based actions run.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 17:28:02 -08:00
b795e27251 Fix Linux build PATH and add ARM64 container support
Some checks failed
Build App / build-linux (push) Failing after 2s
Build App / build-macos (push) Failing after 11s
Build App / build-windows (push) Successful in 3m38s
Linux app build: cargo was not in PATH for subsequent steps after
shell-based install. Fixed by adding $HOME/.cargo/bin to GITHUB_PATH
(persists across steps) and setting it in the job-level env. Also
removed the now-unnecessary per-step PATH override in the macOS job.

Container build: added QEMU setup and platforms: linux/amd64,linux/arm64
to produce a multi-arch manifest. The Dockerfile already uses
arch-aware commands (dpkg --print-architecture, uname -m) so it
builds natively on both architectures. This fixes the "no matching
manifest for linux/arm64/v8" error on Apple Silicon Macs.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 17:16:45 -08:00
19d4cbce27 Use shell-based check-before-install for all build jobs
Some checks failed
Build App / build-macos (push) Successful in 2m34s
Build App / build-windows (push) Successful in 2m33s
Build App / build-linux (push) Failing after 3m40s
Replace JS-based GitHub Actions (dtolnay/rust-toolchain,
actions/setup-node) in the Linux job with shell commands that
check if Rust and Node.js are already present before installing.
All three jobs (Linux, macOS, Windows) now use the same pattern:
skip installation if the tool is already available on the runner.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 17:03:17 -08:00
946ea03956 Fix macOS CI build and add macOS build instructions
Some checks failed
Build App / build-windows (push) Has started running
Build App / build-linux (push) Has been cancelled
Build App / build-macos (push) Has been cancelled
The macOS Gitea runner lacks Node.js, causing actions/checkout@v4
(a JS action) to fail with "Cannot find: node in PATH". Fixed by
installing Node.js via Homebrew before checkout and replacing all
JS-based actions (setup-node, rust-toolchain, rust-cache) with
shell equivalents.

Also adds macOS section to BUILDING.md covering Xcode CLI tools,
universal binary targets, and Gatekeeper bypass instructions.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 17:00:48 -08:00
ba4cb4176d Add macOS build job to app build workflow
Some checks failed
Build App / build-macos (push) Failing after 48s
Build App / build-windows (push) Successful in 3m9s
Build App / build-linux (push) Has been cancelled
Builds a universal binary (aarch64 + x86_64) targeting both Apple
Silicon and Intel Macs. Produces .dmg and .app.tar.gz artifacts,
uploaded to a separate Gitea release tagged with -mac suffix.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 16:57:35 -08:00
4b56610ff5 Add CLAUDE.md and HOW-TO-USE.md documentation
CLAUDE.md provides guidance for Claude Code instances working in this
repo (build commands, architecture overview, key conventions).

HOW-TO-USE.md is a user-facing guide covering prerequisites, setup,
all application features, and troubleshooting.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 15:59:53 -08:00
db51abb970 Add image paste support for xterm.js terminal
All checks were successful
Build App / build-linux (push) Successful in 2m41s
Build App / build-windows (push) Successful in 3m56s
Intercept clipboard paste events containing images in the terminal,
upload them into the Docker container via bollard's tar upload API,
and inject the resulting file path into terminal stdin so Claude Code
can reference the image.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 10:52:08 -08:00
52 changed files with 3615 additions and 152 deletions

View File

@@ -0,0 +1,84 @@
name: Backfill Releases to GitHub
on:
  workflow_dispatch:
jobs:
  backfill:
    runs-on: ubuntu-latest
    steps:
      - name: Backfill all Gitea releases to GitHub
        env:
          GH_PAT: ${{ secrets.GH_PAT }}
          GITEA_TOKEN: ${{ secrets.REGISTRY_TOKEN }}
          GITEA_API: https://repo.anhonesthost.net/api/v1
          GITEA_REPO: cybercovellc/triple-c
          GITHUB_REPO: shadowdao/triple-c
        run: |
          set -e
          echo "==> Fetching releases from Gitea..."
          RELEASES=$(curl -sf \
            -H "Authorization: token $GITEA_TOKEN" \
            "$GITEA_API/repos/$GITEA_REPO/releases?limit=50")
          echo "$RELEASES" | jq -c '.[]' | while read release; do
            TAG=$(echo "$release" | jq -r '.tag_name')
            NAME=$(echo "$release" | jq -r '.name')
            BODY=$(echo "$release" | jq -r '.body')
            IS_PRERELEASE=$(echo "$release" | jq -r '.prerelease')
            IS_DRAFT=$(echo "$release" | jq -r '.draft')
            EXISTS=$(curl -sf \
              -H "Authorization: Bearer $GH_PAT" \
              -H "Accept: application/vnd.github+json" \
              "https://api.github.com/repos/$GITHUB_REPO/releases/tags/$TAG" \
              -o /dev/null -w "%{http_code}" || true)
            if [ "$EXISTS" = "200" ]; then
              echo "==> Skipping $TAG (already exists on GitHub)"
              continue
            fi
            echo "==> Creating release $TAG..."
            RESPONSE=$(curl -sf -X POST \
              -H "Authorization: Bearer $GH_PAT" \
              -H "Accept: application/vnd.github+json" \
              -H "Content-Type: application/json" \
              https://api.github.com/repos/$GITHUB_REPO/releases \
              -d "{
                \"tag_name\": \"$TAG\",
                \"name\": \"$NAME\",
                \"body\": $(echo "$BODY" | jq -Rs .),
                \"draft\": $IS_DRAFT,
                \"prerelease\": $IS_PRERELEASE
              }")
            UPLOAD_URL=$(echo "$RESPONSE" | jq -r '.upload_url' | sed 's/{?name,label}//')
            echo "$release" | jq -c '.assets[]?' | while read asset; do
              ASSET_NAME=$(echo "$asset" | jq -r '.name')
              ASSET_ID=$(echo "$asset" | jq -r '.id')
              echo "  ==> Downloading $ASSET_NAME..."
              DOWNLOAD_URL=$(echo "$asset" | jq -r '.browser_download_url')
              curl -sfL -o "/tmp/$ASSET_NAME" \
                -H "Authorization: token $GITEA_TOKEN" \
                "$DOWNLOAD_URL"
              echo "  ==> Uploading $ASSET_NAME to GitHub..."
              ENCODED_NAME=$(python3 -c "import urllib.parse, sys; print(urllib.parse.quote(sys.argv[1]))" "$ASSET_NAME")
              curl -sf -X POST \
                -H "Authorization: Bearer $GH_PAT" \
                -H "Accept: application/vnd.github+json" \
                -H "Content-Type: application/octet-stream" \
                --data-binary "@/tmp/$ASSET_NAME" \
                "$UPLOAD_URL?name=$ENCODED_NAME"
              echo "  Uploaded: $ASSET_NAME"
            done
            echo "==> Done: $TAG"
          done
          echo "==> Backfill complete."

View File

@@ -19,7 +19,35 @@ env:
 jobs:
   build-linux:
     runs-on: ubuntu-latest
+    outputs:
+      version: ${{ steps.version.outputs.VERSION }}
     steps:
+      - name: Install Node.js 22
+        run: |
+          NEED_INSTALL=false
+          if command -v node >/dev/null 2>&1; then
+            NODE_MAJOR=$(node --version | sed 's/v\([0-9]*\).*/\1/')
+            OLD_NODE_DIR=$(dirname "$(which node)")
+            echo "Found Node.js $(node --version) at $(which node) (major: ${NODE_MAJOR})"
+            if [ "$NODE_MAJOR" -lt 22 ]; then
+              echo "Node.js ${NODE_MAJOR} is too old, removing before installing 22..."
+              sudo rm -f "${OLD_NODE_DIR}/node" "${OLD_NODE_DIR}/npm" "${OLD_NODE_DIR}/npx" "${OLD_NODE_DIR}/corepack"
+              hash -r
+              NEED_INSTALL=true
+            fi
+          else
+            echo "Node.js not found, installing 22..."
+            NEED_INSTALL=true
+          fi
+          if [ "$NEED_INSTALL" = true ]; then
+            curl -fsSL https://deb.nodesource.com/setup_22.x | sudo -E bash -
+            sudo apt-get install -y nodejs
+            hash -r
+          fi
+          echo "Node.js at: $(which node)"
+          node --version
+          npm --version
       - name: Checkout
         uses: actions/checkout@v4
         with:
@@ -61,29 +89,35 @@ jobs:
           xdg-utils
       - name: Install Rust stable
-        uses: dtolnay/rust-toolchain@stable
-      - name: Rust cache
-        uses: swatinem/rust-cache@v2
-        with:
-          workspaces: "./app/src-tauri -> target"
-      - name: Install Node.js
-        uses: actions/setup-node@v4
-        with:
-          node-version: "22"
+        run: |
+          if command -v rustup >/dev/null 2>&1; then
+            echo "Rust already installed: $(rustc --version)"
+            rustup update stable
+            rustup default stable
+          else
+            curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y --default-toolchain stable
+          fi
+          export PATH="$HOME/.cargo/bin:$PATH"
+          rustc --version
+          cargo --version
       - name: Install frontend dependencies
         working-directory: ./app
-        run: npm ci
+        run: |
+          rm -rf node_modules package-lock.json
+          npm install
       - name: Install Tauri CLI
         working-directory: ./app
-        run: npx tauri --version || npm install @tauri-apps/cli
+        run: |
+          export PATH="$HOME/.cargo/bin:$PATH"
+          npx tauri --version || npm install @tauri-apps/cli
       - name: Build Tauri app
         working-directory: ./app
-        run: npx tauri build
+        run: |
+          export PATH="$HOME/.cargo/bin:$PATH"
+          npx tauri build
       - name: Collect artifacts
         run: |
@@ -119,6 +153,116 @@ jobs:
               "${GITEA_URL}/api/v1/repos/${REPO}/releases/${RELEASE_ID}/assets?name=$(unknown)"
           done
+  build-macos:
+    runs-on: macos-latest
+    steps:
+      - name: Install Node.js 22
+        run: |
+          NEED_INSTALL=false
+          if command -v node >/dev/null 2>&1; then
+            NODE_MAJOR=$(node --version | sed 's/v\([0-9]*\).*/\1/')
+            echo "Found Node.js $(node --version) (major: ${NODE_MAJOR})"
+            if [ "$NODE_MAJOR" -lt 22 ]; then
+              echo "Node.js ${NODE_MAJOR} is too old, upgrading to 22..."
+              NEED_INSTALL=true
+            fi
+          else
+            echo "Node.js not found, installing 22..."
+            NEED_INSTALL=true
+          fi
+          if [ "$NEED_INSTALL" = true ]; then
+            brew install node@22
+            brew link --overwrite node@22
+          fi
+          node --version
+          npm --version
+      - name: Checkout
+        uses: actions/checkout@v4
+        with:
+          fetch-depth: 0
+      - name: Compute version
+        id: version
+        run: |
+          COMMIT_COUNT=$(git rev-list --count HEAD)
+          VERSION="0.1.${COMMIT_COUNT}"
+          echo "VERSION=${VERSION}" >> $GITHUB_OUTPUT
+          echo "Computed version: ${VERSION}"
+      - name: Set app version
+        run: |
+          VERSION="${{ steps.version.outputs.VERSION }}"
+          sed -i '' "s/\"version\": \".*\"/\"version\": \"${VERSION}\"/" app/src-tauri/tauri.conf.json
+          sed -i '' "s/\"version\": \".*\"/\"version\": \"${VERSION}\"/" app/package.json
+          sed -i '' "s/^version = \".*\"/version = \"${VERSION}\"/" app/src-tauri/Cargo.toml
+          echo "Patched version to ${VERSION}"
+      - name: Install Rust stable
+        run: |
+          if command -v rustup >/dev/null 2>&1; then
+            echo "Rust already installed: $(rustc --version)"
+            rustup update stable
+            rustup default stable
+          else
+            curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y --default-toolchain stable
+          fi
+          export PATH="$HOME/.cargo/bin:$PATH"
+          rustup target add aarch64-apple-darwin x86_64-apple-darwin
+          rustc --version
+          cargo --version
+      - name: Install frontend dependencies
+        working-directory: ./app
+        run: |
+          rm -rf node_modules
+          npm install
+      - name: Install Tauri CLI
+        working-directory: ./app
+        run: |
+          export PATH="$HOME/.cargo/bin:$PATH"
+          npx tauri --version || npm install @tauri-apps/cli
+      - name: Build Tauri app (universal)
+        working-directory: ./app
+        run: |
+          export PATH="$HOME/.cargo/bin:$PATH"
+          npx tauri build --target universal-apple-darwin
+      - name: Collect artifacts
+        run: |
+          mkdir -p artifacts
+          cp app/src-tauri/target/universal-apple-darwin/release/bundle/dmg/*.dmg artifacts/ 2>/dev/null || true
+          cp app/src-tauri/target/universal-apple-darwin/release/bundle/macos/*.app.tar.gz artifacts/ 2>/dev/null || true
+          ls -la artifacts/
+      - name: Upload to Gitea release
+        if: gitea.event_name == 'push'
+        env:
+          TOKEN: ${{ secrets.REGISTRY_TOKEN }}
+        run: |
+          TAG="v${{ steps.version.outputs.VERSION }}-mac"
+          # Create release
+          curl -s -X POST \
+            -H "Authorization: token ${TOKEN}" \
+            -H "Content-Type: application/json" \
+            -d "{\"tag_name\": \"${TAG}\", \"name\": \"Triple-C v${{ steps.version.outputs.VERSION }} (macOS)\", \"body\": \"Automated build from commit ${{ gitea.sha }}\"}" \
+            "${GITEA_URL}/api/v1/repos/${REPO}/releases" > release.json
+          RELEASE_ID=$(cat release.json | grep -o '"id":[0-9]*' | head -1 | grep -o '[0-9]*')
+          echo "Release ID: ${RELEASE_ID}"
+          # Upload each artifact
+          for file in artifacts/*; do
+            [ -f "$file" ] || continue
+            filename=$(basename "$file")
+            echo "Uploading $(unknown)..."
+            curl -s -X POST \
+              -H "Authorization: token ${TOKEN}" \
+              -H "Content-Type: application/octet-stream" \
+              --data-binary "@${file}" \
+              "${GITEA_URL}/api/v1/repos/${REPO}/releases/${RELEASE_ID}/assets?name=$(unknown)"
+          done
   build-windows:
     runs-on: windows-latest
     defaults:
@@ -232,3 +376,96 @@ jobs:
       echo Uploading %%~nxf...
       curl -s -X POST -H "Authorization: token %TOKEN%" -H "Content-Type: application/octet-stream" --data-binary "@%%f" "%GITEA_URL%/api/v1/repos/%REPO%/releases/%RELEASE_ID%/assets?name=%%~nxf"
     )
+  sync-to-github:
+    runs-on: ubuntu-latest
+    needs: [build-linux, build-macos, build-windows]
+    if: gitea.event_name == 'push'
+    env:
+      GH_PAT: ${{ secrets.GH_PAT }}
+      GITHUB_REPO: shadowdao/triple-c
+    steps:
+      - name: Download artifacts from Gitea releases
+        env:
+          TOKEN: ${{ secrets.REGISTRY_TOKEN }}
+          VERSION: ${{ needs.build-linux.outputs.version }}
+        run: |
+          set -e
+          mkdir -p artifacts
+          # Download assets from all 3 platform releases
+          for TAG_SUFFIX in "" "-mac" "-win"; do
+            TAG="v${VERSION}${TAG_SUFFIX}"
+            echo "==> Fetching assets for release ${TAG}..."
+            RELEASE_JSON=$(curl -sf \
+              -H "Authorization: token ${TOKEN}" \
+              "${GITEA_URL}/api/v1/repos/${REPO}/releases/tags/${TAG}" 2>/dev/null || echo "{}")
+            echo "$RELEASE_JSON" | jq -r '.assets[]? | "\(.name) \(.browser_download_url)"' | while read -r NAME URL; do
+              [ -z "$NAME" ] && continue
+              echo "  Downloading ${NAME}..."
+              curl -sfL \
+                -H "Authorization: token ${TOKEN}" \
+                -o "artifacts/${NAME}" \
+                "$URL"
+            done
+          done
+          echo "==> All downloaded artifacts:"
+          ls -la artifacts/
+      - name: Create GitHub release and upload artifacts
+        env:
+          VERSION: ${{ needs.build-linux.outputs.version }}
+          COMMIT_SHA: ${{ gitea.sha }}
+        run: |
+          set -e
+          TAG="v${VERSION}"
+          echo "==> Creating unified release ${TAG} on GitHub..."
+          # Delete existing release if present (idempotent re-runs)
+          EXISTING=$(curl -sf \
+            -H "Authorization: Bearer ${GH_PAT}" \
+            -H "Accept: application/vnd.github+json" \
+            "https://api.github.com/repos/${GITHUB_REPO}/releases/tags/${TAG}" 2>/dev/null || echo "{}")
+          EXISTING_ID=$(echo "$EXISTING" | jq -r '.id // empty')
+          if [ -n "$EXISTING_ID" ]; then
+            echo "  Deleting existing GitHub release ${TAG} (id: ${EXISTING_ID})..."
+            curl -sf -X DELETE \
+              -H "Authorization: Bearer ${GH_PAT}" \
+              -H "Accept: application/vnd.github+json" \
+              "https://api.github.com/repos/${GITHUB_REPO}/releases/${EXISTING_ID}"
+          fi
+          RESPONSE=$(curl -sf -X POST \
+            -H "Authorization: Bearer ${GH_PAT}" \
+            -H "Accept: application/vnd.github+json" \
+            -H "Content-Type: application/json" \
+            "https://api.github.com/repos/${GITHUB_REPO}/releases" \
+            -d "{
+              \"tag_name\": \"${TAG}\",
+              \"name\": \"Triple-C ${TAG}\",
+              \"body\": \"Automated build from commit ${COMMIT_SHA}\n\nIncludes Linux, macOS, and Windows artifacts.\",
+              \"draft\": false,
+              \"prerelease\": false
+            }")
+          UPLOAD_URL=$(echo "$RESPONSE" | jq -r '.upload_url' | sed 's/{?name,label}//')
+          echo "==> Upload URL: ${UPLOAD_URL}"
+          for file in artifacts/*; do
+            [ -f "$file" ] || continue
+            FILENAME=$(basename "$file")
+            MIME="application/octet-stream"
+            echo "==> Uploading ${FILENAME}..."
+            curl -sf -X POST \
+              -H "Authorization: Bearer ${GH_PAT}" \
+              -H "Accept: application/vnd.github+json" \
+              -H "Content-Type: ${MIME}" \
+              --data-binary "@${file}" \
+              "${UPLOAD_URL}?name=$(python3 -c "import urllib.parse, sys; print(urllib.parse.quote(sys.argv[1]))" "${FILENAME}")"
+          done
+          echo "==> GitHub release sync complete."


@@ -21,6 +21,9 @@ jobs:
- name: Checkout
uses: actions/checkout@v4
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
@@ -36,6 +39,7 @@ jobs:
with:
context: ./container
file: ./container/Dockerfile
platforms: linux/amd64,linux/arm64
push: ${{ gitea.event_name == 'push' }}
tags: |
${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:latest


@@ -0,0 +1,59 @@
name: Sync Release to GitHub
on:
workflow_dispatch:
jobs:
sync-release:
runs-on: ubuntu-latest
steps:
- name: Mirror release to GitHub
env:
GH_PAT: ${{ secrets.GH_PAT }}
GITHUB_REPO: shadowdao/triple-c
RELEASE_TAG: ${{ gitea.event.release.tag_name }}
RELEASE_NAME: ${{ gitea.event.release.name }}
RELEASE_BODY: ${{ gitea.event.release.body }}
IS_PRERELEASE: ${{ gitea.event.release.prerelease }}
IS_DRAFT: ${{ gitea.event.release.draft }}
run: |
set -e
echo "==> Creating release $RELEASE_TAG on GitHub..."
RESPONSE=$(curl -sf -X POST \
-H "Authorization: Bearer $GH_PAT" \
-H "Accept: application/vnd.github+json" \
-H "Content-Type: application/json" \
https://api.github.com/repos/$GITHUB_REPO/releases \
-d "{
\"tag_name\": \"$RELEASE_TAG\",
\"name\": \"$RELEASE_NAME\",
\"body\": $(echo "$RELEASE_BODY" | jq -Rs .),
\"draft\": $IS_DRAFT,
\"prerelease\": $IS_PRERELEASE
}")
UPLOAD_URL=$(echo "$RESPONSE" | jq -r '.upload_url' | sed 's/{?name,label}//')
echo "Release created. Upload URL: $UPLOAD_URL"
echo '${{ toJSON(gitea.event.release.assets) }}' | jq -c '.[]' | while read -r asset; do
ASSET_NAME=$(echo "$asset" | jq -r '.name')
ASSET_URL=$(echo "$asset" | jq -r '.browser_download_url')
echo "==> Downloading asset: $ASSET_NAME"
curl -sfL -o "/tmp/$ASSET_NAME" "$ASSET_URL"
echo "==> Uploading $ASSET_NAME to GitHub..."
ENCODED_NAME=$(python3 -c "import urllib.parse, sys; print(urllib.parse.quote(sys.argv[1]))" "$ASSET_NAME")
curl -sf -X POST \
-H "Authorization: Bearer $GH_PAT" \
-H "Accept: application/vnd.github+json" \
-H "Content-Type: application/octet-stream" \
--data-binary "@/tmp/$ASSET_NAME" \
"$UPLOAD_URL?name=$ENCODED_NAME"
echo " Uploaded: $ASSET_NAME"
done
echo "==> Release sync complete."


@@ -1,6 +1,6 @@
# Building Triple-C
Triple-C is a Tauri v2 desktop application with a React/TypeScript frontend and a Rust backend. This guide covers building the app from source on Linux, macOS, and Windows.
## Prerequisites (All Platforms)
@@ -79,6 +79,57 @@ Build artifacts are located in `app/src-tauri/target/release/bundle/`:
| Debian pkg | `deb/*.deb` |
| RPM pkg | `rpm/*.rpm` |
## macOS
### 1. Install prerequisites
- **Xcode Command Line Tools** — required for the C/C++ toolchain and system headers:
```bash
xcode-select --install
```
No additional system libraries are needed — macOS includes WebKit natively.
### 2. Install Rust targets (universal binary)
To build a universal binary that runs on both Apple Silicon and Intel Macs:
```bash
rustup target add aarch64-apple-darwin x86_64-apple-darwin
```
### 3. Install frontend dependencies
```bash
cd app
npm ci
```
### 4. Build
For a universal binary (recommended for distribution):
```bash
npx tauri build --target universal-apple-darwin
```
For the current architecture only (faster, for local development):
```bash
npx tauri build
```
Build artifacts are located in `app/src-tauri/target/universal-apple-darwin/release/bundle/` (or `target/release/bundle/` for single-arch builds):
| Format | Path |
|--------|------|
| DMG | `dmg/*.dmg` |
| macOS App | `macos/*.app` |
| macOS App (compressed) | `macos/*.app.tar.gz` |
> **Note:** The app is not signed or notarized. On first launch, macOS Gatekeeper may block it. Right-click the app and select "Open" to bypass, or remove the quarantine attribute: `xattr -cr /Applications/Triple-C.app`
## Windows
### 1. Install prerequisites

CLAUDE.md

@@ -0,0 +1,115 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Project Overview
Triple-C (Claude-Code-Container) is a Tauri v2 desktop application that sandboxes Claude Code inside Docker containers. It has three main parts: a React/TypeScript frontend, a Rust backend, and a Docker container image definition.
## Build & Development Commands
All frontend/tauri commands run from the `app/` directory:
```bash
cd app
npm ci # Install dependencies (required first time)
npx tauri dev # Launch app in dev mode with hot reload (Vite on port 1420)
npx tauri build # Production build (outputs to src-tauri/target/release/bundle/)
npm run build # Frontend-only build (tsc + vite)
npm run test # Run Vitest once
npm run test:watch # Run Vitest in watch mode
```
Rust backend is compiled automatically by `tauri dev`/`tauri build`. To check Rust independently:
```bash
cd app/src-tauri
cargo check # Type-check without full build
cargo build # Build Rust backend only
```
Container image:
```bash
docker build -t triple-c-sandbox ./container
```
### Linux Build Dependencies (Ubuntu/Debian)
```bash
sudo apt-get install -y libgtk-3-dev libwebkit2gtk-4.1-dev libayatana-appindicator3-dev librsvg2-dev libsoup-3.0-dev patchelf libssl-dev pkg-config build-essential
```
## Architecture
### Two-Process Model (Tauri IPC)
- **React frontend** (`app/src/`) renders UI in the OS webview
- **Rust backend** (`app/src-tauri/src/`) handles Docker API, credential storage, and terminal I/O
- Communication uses two patterns:
- `invoke()` — request/response for discrete operations (CRUD, start/stop containers)
- `emit()`/`listen()` — event streaming for continuous data (terminal I/O)
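The `invoke()` side of this split is typically wrapped in typed helpers, as `lib/tauri-commands.ts` does. A minimal sketch of that pattern, with Tauri's `invoke` stubbed out so it runs standalone (the command names and `Project` shape here are illustrative, not the app's actual IPC surface):

```typescript
// Stand-in for Tauri's invoke(); in the app this would come from
// `import { invoke } from "@tauri-apps/api/core"`.
type InvokeFn = (cmd: string, args?: Record<string, unknown>) => Promise<unknown>;

interface Project {
  id: string;
  name: string;
}

// One typed function per Tauri command, so callers never touch raw strings.
function makeCommands(invoke: InvokeFn) {
  return {
    listProjects: () => invoke("list_projects") as Promise<Project[]>,
    startContainer: (projectId: string) =>
      invoke("start_container", { projectId }) as Promise<void>,
  };
}

// Fake backend for demonstration purposes only.
const fakeInvoke: InvokeFn = async (cmd) =>
  cmd === "list_projects" ? [{ id: "p1", name: "demo" }] : undefined;

const commands = makeCommands(fakeInvoke);
commands.listProjects().then((ps) => console.log(ps[0].name)); // "demo"
```

Keeping the raw command strings in one module is what lets the TypeScript types and the Rust handler signatures be compared in a single place.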
### Terminal I/O Flow
```
User keystroke → xterm.js onData() → invoke("terminal_input") → mpsc channel → docker exec stdin
docker exec stdout → tokio task → emit("terminal-output-{sessionId}") → listen() → xterm.js write()
```
### Frontend Structure (`app/src/`)
- **`store/appState.ts`** — Single Zustand store for all app state (projects, sessions, UI)
- **`hooks/`** — All Tauri IPC calls are encapsulated in hooks (`useTerminal`, `useProjects`, `useDocker`, `useSettings`)
- **`lib/tauri-commands.ts`** — Typed `invoke()` wrappers; TypeScript types in `lib/types.ts` must match Rust models
- **`components/terminal/TerminalView.tsx`** — xterm.js integration with WebGL rendering, URL detection for OAuth flow
- **`components/layout/`** — TopBar (tabs + status), Sidebar (project list), StatusBar
- **`components/projects/`** — ProjectCard, ProjectList, AddProjectDialog
- **`components/settings/`** — Settings panels for API keys, Docker, AWS
### Backend Structure (`app/src-tauri/src/`)
- **`commands/`** — Tauri command handlers (docker, project, settings, terminal). These are the IPC entry points called by `invoke()`.
- **`docker/`** — Docker API layer using bollard:
- `client.rs` — Singleton Docker connection via `OnceLock`
- `container.rs` — Container lifecycle (create, start, stop, remove, inspect)
- `exec.rs` — PTY exec sessions with bidirectional stdin/stdout streaming
- `image.rs` — Image build/pull with progress streaming
- **`models/`** — Serde structs (`Project`, `AuthMode`, `BedrockConfig`, `ContainerInfo`, `AppSettings`). These define the IPC contract with the frontend.
- **`storage/`** — Persistence: `projects_store.rs` (JSON file with atomic writes), `secure.rs` (OS keychain via `keyring` crate), `settings_store.rs`
### Container (`container/`)
- **`Dockerfile`** — Ubuntu 24.04 base with Claude Code, Node.js 22, Python 3.12, Rust, Docker CLI, git, gh, AWS CLI v2, ripgrep, pnpm, uv, ruff pre-installed
- **`entrypoint.sh`** — UID/GID remapping to match host user, SSH key setup, git config, docker socket permissions, then `sleep infinity`
- **`triple-c-scheduler`** — Bash-based scheduled task system for recurring Claude Code invocations
### Container Lifecycle
Containers use a **stop/start** model (not create/destroy). Installed packages persist across stops. The `.claude` config dir uses a named Docker volume (`triple-c-claude-config-{projectId}`) so OAuth tokens survive even container resets.
### Authentication
Per-project, independently configured:
- **Anthropic (OAuth)** — `claude login` in terminal, token persists in config volume
- **AWS Bedrock** — Static keys, profile, or bearer token injected as env vars
## Styling
- **Tailwind CSS v4** with the Vite plugin (`@tailwindcss/vite`). No separate tailwind config file.
- All colors use CSS custom properties in `index.css` `:root` (e.g., `--bg-primary`, `--text-secondary`, `--accent`)
- `color-scheme: dark` is set on `:root` for native dark-mode controls
- **Do not** add a global `* { padding: 0 }` reset — Tailwind v4 uses CSS `@layer`, and unlayered CSS overrides all layered utilities
## Key Conventions
- Frontend types in `lib/types.ts` must stay in sync with Rust structs in `models/`
- Tauri commands are registered in `lib.rs` via `.invoke_handler(tauri::generate_handler![...])`
- Tauri v2 permissions are declared in `capabilities/default.json` — new IPC commands need permission grants there
- The `projects.json` file uses atomic writes (write to `.tmp`, then `rename()`). Corrupted files are backed up to `.bak`.
- Cross-platform paths: Docker socket is `/var/run/docker.sock` on Linux/macOS, `//./pipe/docker_engine` on Windows
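The atomic-write convention for `projects.json` can be sketched in a few lines. The real implementation lives in Rust (`projects_store.rs`); this TypeScript version with Node's `fs` just illustrates the write-to-temp-then-rename technique:

```typescript
import * as fs from "node:fs";
import * as path from "node:path";
import * as os from "node:os";

// Write to a sibling .tmp file, then rename() over the target. Because
// rename within one filesystem is atomic, readers never observe a
// half-written JSON file.
function atomicWriteJson(filePath: string, data: unknown): void {
  const tmp = filePath + ".tmp";
  fs.writeFileSync(tmp, JSON.stringify(data, null, 2));
  fs.renameSync(tmp, filePath); // atomically replaces any old file
}

const target = path.join(os.tmpdir(), "projects.json");
atomicWriteJson(target, { projects: [] });
console.log(fs.readFileSync(target, "utf8").includes("projects")); // true
```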
## Testing
Frontend tests use Vitest with jsdom environment and React Testing Library. Setup file at `src/test/setup.ts`. Run a single test file:
```bash
cd app
npx vitest run src/path/to/test.test.ts
```

HOW-TO-USE.md

@@ -0,0 +1,397 @@
# How to Use Triple-C
Triple-C (Claude-Code-Container) is a desktop application that runs Claude Code inside isolated Docker containers. Each project gets its own sandboxed environment with bind-mounted directories, so Claude only has access to the files you explicitly provide.
---
## Prerequisites
### Docker
Triple-C requires a running Docker daemon. Install one of the following:
| Platform | Option | Link |
|----------|--------|------|
| **Windows** | Docker Desktop | https://docs.docker.com/desktop/install/windows-install/ |
| **macOS** | Docker Desktop | https://docs.docker.com/desktop/install/mac-install/ |
| **Linux** | Docker Engine | https://docs.docker.com/engine/install/ |
| **Linux** | Docker Desktop (alternative) | https://docs.docker.com/desktop/install/linux/ |
After installation, verify Docker is running:
```bash
docker info
```
> **Windows note:** Docker Desktop must be running before launching Triple-C. The app communicates with Docker through the named pipe at `//./pipe/docker_engine`.
> **Linux note:** Your user must have permission to access the Docker socket (`/var/run/docker.sock`). Either add your user to the `docker` group (`sudo usermod -aG docker $USER`, then log out and back in) or run Docker in rootless mode.
### Claude Code Account
You need access to Claude Code through one of:
- **Anthropic account** — Sign up at https://claude.ai and use `claude login` (OAuth) inside the terminal
- **AWS Bedrock** — An AWS account with Bedrock access and Claude models enabled
---
## First Launch
### 1. Get the Container Image
When you first open Triple-C, go to the **Settings** tab in the sidebar. Under **Docker**, you'll see:
- **Docker Status** — Should show "Connected" (green). If it shows "Not Available", make sure Docker is running.
- **Image Status** — Will show "Not Found" on first launch.
Choose an **Image Source**:
| Source | Description | When to Use |
|--------|-------------|-------------|
| **Registry** | Pulls the pre-built image from `repo.anhonesthost.net` | Fastest setup — recommended for most users |
| **Local Build** | Builds the image locally from the embedded Dockerfile | If you can't reach the registry, or want a custom build |
| **Custom** | Use any Docker image you specify | Advanced — bring your own sandbox image |
Click **Pull Image** (for Registry/Custom) or **Build Image** (for Local Build). A progress log will stream below the button. When complete, the status changes to "Ready" (green).
### 2. Create Your First Project
Switch to the **Projects** tab in the sidebar and click the **+** button.
1. **Project Name** — Give it a meaningful name (e.g., "my-web-app").
2. **Folders** — Click **Browse** to select a directory on your host machine. This directory will be mounted into the container at `/workspace/<folder-name>`. You can add multiple folders with the **+** button at the bottom of the folder list.
3. Click **Add Project**.
### 3. Start the Container
Select your project in the sidebar and click **Start**. The status dot changes from gray (stopped) to orange (starting) to green (running).
### 4. Open a Terminal
Click the **Terminal** button (highlighted in accent color) to open an interactive terminal session. A new tab appears in the top bar and an xterm.js terminal loads in the main area.
Claude Code launches automatically with `--dangerously-skip-permissions` inside the sandboxed container.
### 5. Authenticate
**Anthropic (OAuth) — default:**
1. Type `claude login` or `/login` in the terminal.
2. Claude prints an OAuth URL. Triple-C detects long URLs and shows a clickable toast at the top of the terminal — click **Open** to open it in your browser.
3. Complete the login in your browser. The token is saved and persists across container stops and resets.
**AWS Bedrock:**
1. Stop the container first (settings can only be changed while stopped).
2. In the project card, switch the auth mode to **Bedrock**.
3. Expand the **Config** panel and fill in your AWS credentials (see [AWS Bedrock Configuration](#aws-bedrock-configuration) below).
4. Start the container again.
---
## The Interface
```
┌─────────────────────────────────────────────────────┐
│ TopBar [ Terminal Tabs ] Docker ● Image ●│
├────────────┬────────────────────────────────────────┤
│ Sidebar │ │
│ │ Terminal View │
│ Projects │ (xterm.js) │
│ Settings │ │
│ │ │
├────────────┴────────────────────────────────────────┤
│ StatusBar X projects · X running · X terminals │
└─────────────────────────────────────────────────────┘
```
- **TopBar** — Terminal tabs for switching between sessions. Status dots on the right show Docker connection (green = connected) and image availability (green = ready).
- **Sidebar** — Toggle between the **Projects** list and **Settings** panel.
- **Terminal View** — Interactive terminal powered by xterm.js with WebGL rendering.
- **StatusBar** — Counts of total projects, running containers, and open terminal sessions.
---
## Project Management
### Project Status
Each project shows a colored status dot:
| Color | Status | Meaning |
|-------|--------|---------|
| Gray | Stopped | Container is not running |
| Orange | Starting / Stopping | Container is transitioning |
| Green | Running | Container is active, ready for terminals |
| Red | Error | Something went wrong (check error message) |
### Project Actions
Select a project in the sidebar to see its action buttons:
| Button | When Available | What It Does |
|--------|---------------|--------------|
| **Start** | Stopped | Creates (if needed) and starts the container |
| **Stop** | Running | Stops the container but preserves its state |
| **Terminal** | Running | Opens a new terminal session in this container |
| **Reset** | Stopped | Destroys and recreates the container from scratch |
| **Config** | Always | Toggles the configuration panel |
| **Remove** | Stopped | Deletes the project and its container (with confirmation) |
### Container Lifecycle
Containers use a **stop/start** model. When you stop a container, everything inside it is preserved — installed packages, modified files, downloaded tools. Starting it again resumes where you left off.
**Reset** removes the container and creates a fresh one. However, your Claude Code configuration (including OAuth tokens from `claude login`) is stored in a separate Docker volume and survives resets.
Only **Remove** deletes everything, including the config volume and any stored credentials.
---
## Project Configuration
Click **Config** on a selected project to expand the configuration panel. Settings can only be changed when the container is **stopped** (an orange warning box appears if the container is running).
### Mounted Folders
Each project mounts one or more host directories into the container. The mount appears at `/workspace/<mount-name>` inside the container.
- Click **Browse** ("...") to change the host path
- Edit the mount name to control where it appears inside `/workspace/`
- Click **+** to add more folders, or **x** to remove one
- Mount names must be unique and use only letters, numbers, dashes, underscores, and dots
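The mount-name rule above reduces to a single character-class check. A sketch of it (the app's actual validator may differ in details such as length limits):

```typescript
// Letters, numbers, dashes, underscores, and dots only; must be non-empty.
const MOUNT_NAME = /^[A-Za-z0-9._-]+$/;

function isValidMountName(name: string): boolean {
  return MOUNT_NAME.test(name);
}

console.log(isValidMountName("my-project_v1.2")); // true
console.log(isValidMountName("bad/name"));        // false
```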
### SSH Keys
Specify the path to your SSH key directory (typically `~/.ssh`). Keys are mounted read-only and copied into the container with correct permissions. This enables `git clone` via SSH inside the container.
### Git Configuration
- **Git Name / Email** — Sets `git config user.name` and `user.email` inside the container.
- **Git HTTPS Token** — A personal access token (e.g., from GitHub) for HTTPS git operations. Stored securely in your OS keychain — never written to disk in plaintext.
### Allow Container Spawning
When enabled, the host Docker socket is mounted into the container so Claude Code can create sibling containers (e.g., for running databases, test environments). This is **off by default** for security.
> Toggling this requires stopping and restarting the container to take effect.
### Environment Variables
Click **Edit** to open the environment variables modal. Add key-value pairs that will be injected into the container. Per-project variables override global variables with the same key.
> Reserved prefixes (`ANTHROPIC_`, `AWS_`, `GIT_`, `HOST_`, `CLAUDE_`, `TRIPLE_C_`) are filtered out to prevent conflicts with internal variables.
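The merge-and-filter rules (per-project wins on conflicts, reserved prefixes dropped) can be sketched as one small function. This is an illustration of the described behavior, not the app's actual code:

```typescript
const RESERVED = ["ANTHROPIC_", "AWS_", "GIT_", "HOST_", "CLAUDE_", "TRIPLE_C_"];

type EnvMap = Record<string, string>;

function mergeEnv(globalVars: EnvMap, perProject: EnvMap): EnvMap {
  // Spread order makes per-project values override global ones.
  const merged = { ...globalVars, ...perProject };
  // Drop any key that starts with a reserved internal prefix.
  return Object.fromEntries(
    Object.entries(merged).filter(
      ([key]) => !RESERVED.some((prefix) => key.startsWith(prefix))
    )
  );
}

const result = mergeEnv(
  { NODE_ENV: "production", AWS_REGION: "us-east-1" },
  { NODE_ENV: "development", CLAUDE_SECRET: "x" }
);
console.log(result); // only NODE_ENV survives, set to "development"
```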
### Port Mappings
Click **Edit** to map host ports to container ports. This is useful when Claude Code starts a web server or other service inside the container and you want to access it from your host browser.
Each mapping specifies:
- **Host Port** — The port on your machine (1-65535)
- **Container Port** — The port inside the container (1-65535)
- **Protocol** — TCP (default) or UDP
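A sketch of the validation these fields imply (both ports must be integers in 1-65535; the shape of the mapping object is assumed for illustration):

```typescript
interface PortMapping {
  hostPort: number;
  containerPort: number;
  protocol: "tcp" | "udp";
}

function isValidMapping(m: PortMapping): boolean {
  // Valid TCP/UDP port numbers are integers from 1 through 65535.
  const inRange = (p: number) => Number.isInteger(p) && p >= 1 && p <= 65535;
  return inRange(m.hostPort) && inRange(m.containerPort);
}

console.log(isValidMapping({ hostPort: 8080, containerPort: 3000, protocol: "tcp" })); // true
console.log(isValidMapping({ hostPort: 0, containerPort: 70000, protocol: "udp" }));   // false
```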
### Claude Instructions
Click **Edit** to write per-project instructions for Claude Code. These are written to `~/.claude/CLAUDE.md` inside the container and provide project-specific context. If you also have global instructions (in Settings), the global instructions come first, followed by the per-project instructions.
---
## AWS Bedrock Configuration
To use Claude via AWS Bedrock instead of Anthropic's API, switch the auth mode to **Bedrock** on the project card.
### Authentication Methods
| Method | Fields | Use Case |
|--------|--------|----------|
| **Keys** | Access Key ID, Secret Access Key, Session Token (optional) | Direct credentials — simplest setup |
| **Profile** | AWS Profile name | Uses `~/.aws/config` and `~/.aws/credentials` on the host |
| **Token** | Bearer Token | Temporary bearer token authentication |
### Additional Bedrock Settings
- **AWS Region** — Required. The region where your Bedrock models are deployed (e.g., `us-east-1`).
- **Model ID** — Optional. Override the default Claude model (e.g., `anthropic.claude-sonnet-4-20250514-v1:0`).
### Global AWS Defaults
In **Settings > AWS Configuration**, you can set defaults that apply to all Bedrock projects:
- **AWS Config Path** — Path to your `~/.aws` directory. Click **Detect** to auto-find it.
- **Default Profile** — Select from profiles found in your AWS config.
- **Default Region** — Fallback region for projects that don't specify one.
Per-project settings always override these global defaults.
---
## Settings
Access global settings via the **Settings** tab in the sidebar.
### Docker Settings
- **Docker Status** — Connection status to the Docker daemon.
- **Image Source** — Where to get the sandbox container image (Registry, Local Build, or Custom).
- **Pull / Build Image** — Download or build the image. Progress streams in real time.
- **Refresh** — Re-check Docker and image status.
### Container Timezone
Set the timezone for all containers (IANA format, e.g., `America/New_York`, `Europe/London`, `UTC`). Auto-detected from your host on first launch. This affects scheduled task timing inside containers.
### Global Claude Instructions
Instructions applied to **all** projects. Written to `~/.claude/CLAUDE.md` in every container, before any per-project instructions.
### Global Environment Variables
Environment variables applied to **all** project containers. Per-project variables with the same key take precedence.
### Updates
- **Current Version** — The installed version of Triple-C.
- **Auto-check** — Toggle automatic update checks (every 24 hours).
- **Check now** — Manually check for updates.
When an update is available, a pulsing **Update** button appears in the top bar. Click it to see release notes and download links.
---
## Terminal Features
### Multiple Sessions
You can open multiple terminal sessions (even for the same project). Each session gets its own tab in the top bar. Click a tab to switch, or click the **x** on a tab to close it.
### URL Detection
When Claude Code prints a long URL (e.g., during `claude login`), Triple-C detects it and shows a toast notification at the top of the terminal with an **Open** button. Clicking it opens the URL in your default browser. The toast auto-dismisses after 30 seconds.
Shorter URLs in terminal output are also clickable directly.
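A heuristic sketch of this kind of detection: scan each output chunk for http(s) URLs and flag any past a length threshold. The real detector in `TerminalView.tsx` may use different rules and a different threshold:

```typescript
// Match http(s) URLs up to the next whitespace or quote character.
const URL_PATTERN = /https?:\/\/[^\s"'<>]+/g;

function findLongUrls(chunk: string, minLength = 60): string[] {
  return (chunk.match(URL_PATTERN) ?? []).filter((u) => u.length >= minLength);
}

const output =
  "Open https://example.com/oauth/authorize?client_id=abc123&state=" +
  "x".repeat(40) +
  " to log in";
console.log(findLongUrls(output).length); // 1
```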
### Image Paste
You can paste images from your clipboard into the terminal (Ctrl+V / Cmd+V). The image is uploaded to the container and the file path is injected into the terminal input so Claude Code can reference it.
### Terminal Rendering
The terminal uses WebGL for hardware-accelerated rendering of the active tab. Inactive tabs fall back to canvas rendering to conserve GPU resources. The terminal automatically resizes when you resize the window.
---
## Scheduled Tasks (Inside the Container)
Once inside a running container terminal, you can set up recurring or one-time tasks using `triple-c-scheduler`. Tasks run as separate Claude Code sessions.
### Create a Recurring Task
```bash
triple-c-scheduler add --name "daily-review" --schedule "0 9 * * *" --prompt "Review open issues and summarize"
```
### Create a One-Time Task
```bash
triple-c-scheduler add --name "migrate-db" --at "2026-03-05 14:00" --prompt "Run database migrations"
```
One-time tasks automatically remove themselves after execution.
### Manage Tasks
```bash
triple-c-scheduler list # List all tasks
triple-c-scheduler enable --id abc123 # Enable a task
triple-c-scheduler disable --id abc123 # Disable a task
triple-c-scheduler remove --id abc123 # Delete a task
triple-c-scheduler run --id abc123 # Trigger a task immediately
triple-c-scheduler logs --id abc123 # View logs for a task
triple-c-scheduler logs --tail 20 # View last 20 log entries (all tasks)
triple-c-scheduler notifications # View completion notifications
triple-c-scheduler notifications --clear # Clear notifications
```
### Cron Schedule Format
Standard 5-field cron: `minute hour day-of-month month day-of-week`
| Example | Meaning |
|---------|---------|
| `*/30 * * * *` | Every 30 minutes |
| `0 9 * * 1-5` | 9:00 AM on weekdays |
| `0 */2 * * *` | Every 2 hours |
| `0 0 1 * *` | Midnight on the 1st of each month |
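The table above can be checked against a minimal matcher. This sketch supports `*`, `*/n`, `a-b` ranges, and comma lists, which covers the examples; it is not a full cron implementation (no names, no step-in-range, and day-of-month/day-of-week are ANDed rather than ORed):

```typescript
function fieldMatches(field: string, value: number): boolean {
  return field.split(",").some((part) => {
    if (part === "*") return true;
    const step = part.match(/^\*\/(\d+)$/);         // "*/n" → every n units
    if (step) return value % Number(step[1]) === 0;
    const range = part.match(/^(\d+)-(\d+)$/);      // "a-b" → inclusive range
    if (range) return value >= Number(range[1]) && value <= Number(range[2]);
    return Number(part) === value;                   // literal value
  });
}

function cronMatches(expr: string, date: Date): boolean {
  const [min, hour, dom, mon, dow] = expr.split(/\s+/);
  return (
    fieldMatches(min, date.getMinutes()) &&
    fieldMatches(hour, date.getHours()) &&
    fieldMatches(dom, date.getDate()) &&
    fieldMatches(mon, date.getMonth() + 1) &&
    fieldMatches(dow, date.getDay())
  );
}

console.log(cronMatches("0 9 * * 1-5", new Date(2026, 2, 2, 9, 0))); // true (2026-03-02 is a Monday)
```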
### Working Directory
By default, tasks run in `/workspace`. Use `--working-dir` to specify a different directory:
```bash
triple-c-scheduler add --name "test" --schedule "0 */6 * * *" --prompt "Run tests" --working-dir /workspace/my-project
```
---
## What's Inside the Container
The sandbox container (Ubuntu 24.04) comes pre-installed with:
| Tool | Version | Purpose |
|------|---------|---------|
| Claude Code | Latest | AI coding assistant (the tool being sandboxed) |
| Node.js | 22 LTS | JavaScript/TypeScript development |
| pnpm | Latest | Fast Node.js package manager |
| Python | 3.12 | Python development |
| uv | Latest | Fast Python package manager |
| ruff | Latest | Python linter/formatter |
| Rust | Stable | Rust development (via rustup) |
| Docker CLI | Latest | Container management (when spawning is enabled) |
| git | Latest | Version control |
| GitHub CLI (gh) | Latest | GitHub integration |
| AWS CLI | v2 | AWS services and Bedrock |
| ripgrep | Latest | Fast code search |
| build-essential | — | C/C++ compiler toolchain |
| openssh-client | — | SSH for git and remote access |
You can install additional tools at runtime with `sudo apt install`, `pip install`, `npm install -g`, etc. Installed packages persist across container stops (but not across resets).
---
## Troubleshooting
### Docker is "Not Available"
- **Is Docker running?** Start Docker Desktop or the Docker daemon (`sudo systemctl start docker`).
- **Permissions?** On Linux, ensure your user is in the `docker` group or the socket is accessible.
- **Custom socket path?** If your Docker socket is not at the default location, set it in Settings. The app expects `/var/run/docker.sock` on Linux/macOS or `//./pipe/docker_engine` on Windows.
### Image is "Not Found"
- Click **Pull Image** or **Build Image** in Settings > Docker.
- If pulling fails, check your network connection and whether you can reach the registry.
- Try switching to **Local Build** as an alternative.
### Container Won't Start
- Check that the Docker image is "Ready" in Settings.
- Verify that the mounted folder paths exist on your host.
- Look at the error message displayed in red below the project card.
### OAuth Login URL Not Opening
- Triple-C detects long URLs printed by `claude login` and shows a toast with an **Open** button.
- If the toast doesn't appear, try scrolling up in the terminal — the URL may have already been printed.
- You can also manually copy the URL from the terminal output and paste it into your browser.
### File Permission Issues
- Triple-C automatically remaps the container user's UID/GID to match your host user, so files created inside the container should have the correct ownership on your host.
- If you see permission errors, try resetting the container (stop, then click **Reset**).
### Settings Won't Save
- Most project settings can only be changed when the container is **stopped**. Stop the container first, make your changes, then start it again.
- Some changes (like toggling Docker access or changing mounted folders) trigger an automatic container recreation on the next start.

TODO.md

@@ -0,0 +1,60 @@
# TODO / Future Improvements
## In-App Auto-Update via `tauri-plugin-updater`
**Priority:** High
**Status:** Planned
Currently the app detects available updates via the Gitea API (`check_for_updates` command) but cannot apply them. Users must manually download and install the new version. On macOS and Linux this is a poor experience compared to Windows (where NSIS handles upgrades cleanly).
### Recommended approach: `tauri-plugin-updater`
Full in-app auto-update: detects, downloads, verifies, and applies updates seamlessly on all platforms. The user clicks "Update" and the app restarts with the new version.
### Requirements
1. **Generate a Tauri update signing key pair** (this is Tauri's own Ed25519 key, not OS code signing):
```bash
npx @tauri-apps/cli signer generate -w ~/.tauri/triple-c.key
```
Set `TAURI_SIGNING_PRIVATE_KEY` and `TAURI_SIGNING_PRIVATE_KEY_PASSWORD` in CI.
2. **Add `tauri-plugin-updater`** to Rust and JS dependencies.
3. **Create an update endpoint** that returns Tauri's expected JSON format:
```json
{
"version": "v0.1.100",
"notes": "Changelog here",
"pub_date": "2026-03-01T00:00:00Z",
"platforms": {
"darwin-x86_64": { "signature": "...", "url": "https://..." },
"darwin-aarch64": { "signature": "...", "url": "https://..." },
"linux-x86_64": { "signature": "...", "url": "https://..." },
"windows-x86_64": { "signature": "...", "url": "https://..." }
}
}
```
This could be a static JSON file uploaded alongside release assets, or a small API that reads from Gitea releases and reformats.
4. **Configure the updater** in `tauri.conf.json`:
```json
"plugins": {
"updater": {
"endpoints": ["https://repo.anhonesthost.net/...update-endpoint..."],
"pubkey": "<public key from step 1>"
}
}
```
5. **Add frontend UI** for the update prompt (replace or enhance the existing update check flow).
6. **Update CI pipeline** to:
- Sign bundles with the Tauri key during build
- Upload `.sig` files alongside installers
- Generate/upload the update endpoint JSON
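The "small API that reads from Gitea releases and reformats" option from step 3 can be sketched as a pure mapping from release assets to the updater's platform keys. The filename patterns below are assumptions about artifact naming and would need adjusting to the real bundle names:

```typescript
interface Asset {
  name: string;
  url: string;
  signature: string; // contents of the matching .sig file
}

// Hypothetical filename → Tauri platform-key rules.
const PLATFORM_RULES: Array<[RegExp, string]> = [
  [/\.AppImage$/, "linux-x86_64"],
  [/x64.*\.app\.tar\.gz$/, "darwin-x86_64"],
  [/aarch64.*\.app\.tar\.gz$/, "darwin-aarch64"],
  [/\.(msi|exe)$/, "windows-x86_64"],
];

function buildUpdaterManifest(version: string, notes: string, assets: Asset[]) {
  const platforms: Record<string, { signature: string; url: string }> = {};
  for (const asset of assets) {
    const rule = PLATFORM_RULES.find(([re]) => re.test(asset.name));
    if (rule) platforms[rule[1]] = { signature: asset.signature, url: asset.url };
  }
  return { version, notes, pub_date: new Date().toISOString(), platforms };
}
```

Serialized to JSON and uploaded next to the release assets, the return value is the static-file variant of the endpoint.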
### References
- https://v2.tauri.app/plugin/updater/
- Existing update check code: `app/src-tauri/src/commands/update_commands.rs`
- Existing models: `app/src-tauri/src/models/update_info.rs`


@@ -0,0 +1,17 @@
class AudioCaptureProcessor extends AudioWorkletProcessor {
process(inputs, outputs, parameters) {
const input = inputs[0];
if (input && input.length > 0 && input[0].length > 0) {
const samples = input[0]; // Float32Array, mono channel
const int16 = new Int16Array(samples.length);
for (let i = 0; i < samples.length; i++) {
const s = Math.max(-1, Math.min(1, samples[i]));
int16[i] = s < 0 ? s * 0x8000 : s * 0x7FFF;
}
this.port.postMessage(int16.buffer, [int16.buffer]);
}
return true;
}
}
registerProcessor('audio-capture-processor', AudioCaptureProcessor);
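The clamp-and-scale conversion in the worklet can be exercised in isolation. The same logic, extracted as a pure function (asymmetric scaling: negative samples scale by 0x8000, positive by 0x7FFF, so both -1.0 and 1.0 land on valid int16 extremes):

```typescript
// Float32 PCM in [-1, 1] → signed 16-bit PCM, mirroring the worklet's loop.
function floatTo16BitPCM(samples: Float32Array): Int16Array {
  const out = new Int16Array(samples.length);
  for (let i = 0; i < samples.length; i++) {
    const s = Math.max(-1, Math.min(1, samples[i])); // clamp out-of-range input
    out[i] = s < 0 ? s * 0x8000 : s * 0x7fff;
  }
  return out;
}

const pcm = floatTo16BitPCM(new Float32Array([-1, 0, 1, 2]));
console.log(Array.from(pcm)); // [-32768, 0, 32767, 32767]
```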


@@ -4681,6 +4681,7 @@ dependencies = [
"reqwest 0.12.28", "reqwest 0.12.28",
"serde", "serde",
"serde_json", "serde_json",
"sha2",
"tar", "tar",
"tauri", "tauri",
"tauri-build", "tauri-build",


@@ -30,6 +30,7 @@ fern = { version = "0.7", features = ["date-based"] }
tar = "0.4" tar = "0.4"
reqwest = { version = "0.12", default-features = false, features = ["json", "rustls-tls"] } reqwest = { version = "0.12", default-features = false, features = ["json", "rustls-tls"] }
iana-time-zone = "0.1" iana-time-zone = "0.1"
sha2 = "0.10"
[build-dependencies] [build-dependencies]
tauri-build = { version = "2", features = [] } tauri-build = { version = "2", features = [] }

View File

@@ -0,0 +1,38 @@
use tauri::State;
use crate::models::McpServer;
use crate::AppState;
#[tauri::command]
pub async fn list_mcp_servers(state: State<'_, AppState>) -> Result<Vec<McpServer>, String> {
Ok(state.mcp_store.list())
}
#[tauri::command]
pub async fn add_mcp_server(
name: String,
state: State<'_, AppState>,
) -> Result<McpServer, String> {
let name = name.trim().to_string();
if name.is_empty() {
return Err("MCP server name cannot be empty.".to_string());
}
let server = McpServer::new(name);
state.mcp_store.add(server)
}
#[tauri::command]
pub async fn update_mcp_server(
server: McpServer,
state: State<'_, AppState>,
) -> Result<McpServer, String> {
state.mcp_store.update(server)
}
#[tauri::command]
pub async fn remove_mcp_server(
server_id: String,
state: State<'_, AppState>,
) -> Result<(), String> {
state.mcp_store.remove(&server_id)
}

View File

@@ -1,4 +1,5 @@
 pub mod docker_commands;
+pub mod mcp_commands;
 pub mod project_commands;
 pub mod settings_commands;
 pub mod terminal_commands;

View File

@@ -1,10 +1,20 @@
-use tauri::State;
+use tauri::{Emitter, State};
use crate::docker;
-use crate::models::{container_config, AuthMode, Project, ProjectPath, ProjectStatus};
+use crate::models::{container_config, AuthMode, McpServer, Project, ProjectPath, ProjectStatus};
use crate::storage::secure;
use crate::AppState;
fn emit_progress(app_handle: &tauri::AppHandle, project_id: &str, message: &str) {
let _ = app_handle.emit(
"container-progress",
serde_json::json!({
"project_id": project_id,
"message": message,
}),
);
}
/// Extract secret fields from a project and store them in the OS keychain.
fn store_secrets_for_project(project: &Project) -> Result<(), String> {
if let Some(ref token) = project.git_token {
@@ -43,6 +53,19 @@ fn load_secrets_for_project(project: &mut Project) {
}
}
/// Resolve enabled MCP servers and filter to Docker-only ones.
fn resolve_mcp_servers(project: &Project, state: &AppState) -> (Vec<McpServer>, Vec<McpServer>) {
let all_mcp_servers = state.mcp_store.list();
let enabled_mcp: Vec<McpServer> = project.enabled_mcp_servers.iter()
.filter_map(|id| all_mcp_servers.iter().find(|s| &s.id == id).cloned())
.collect();
let docker_mcp: Vec<McpServer> = enabled_mcp.iter()
.filter(|s| s.is_docker())
.cloned()
.collect();
(enabled_mcp, docker_mcp)
}
#[tauri::command]
pub async fn list_projects(state: State<'_, AppState>) -> Result<Vec<Project>, String> {
Ok(state.projects_store.list())
@@ -81,12 +104,31 @@ pub async fn remove_project(
state: State<'_, AppState>,
) -> Result<(), String> {
// Stop and remove container if it exists
-if let Some(project) = state.projects_store.get(&project_id) {
+if let Some(ref project) = state.projects_store.get(&project_id) {
if let Some(ref container_id) = project.container_id {
state.exec_manager.close_sessions_for_container(container_id).await;
let _ = docker::stop_container(container_id).await;
let _ = docker::remove_container(container_id).await;
}
// Remove MCP containers and network
let (_enabled_mcp, docker_mcp) = resolve_mcp_servers(project, &state);
if !docker_mcp.is_empty() {
if let Err(e) = docker::remove_mcp_containers(&docker_mcp).await {
log::warn!("Failed to remove MCP containers for project {}: {}", project_id, e);
}
}
if let Err(e) = docker::remove_project_network(&project.id).await {
log::warn!("Failed to remove project network for project {}: {}", project_id, e);
}
// Clean up the snapshot image + volumes
if let Err(e) = docker::remove_snapshot_image(project).await {
log::warn!("Failed to remove snapshot image for project {}: {}", project_id, e);
}
if let Err(e) = docker::remove_project_volumes(project).await {
log::warn!("Failed to remove project volumes for project {}: {}", project_id, e);
}
}
// Clean up keychain secrets for this project
@@ -109,6 +151,7 @@ pub async fn update_project(
#[tauri::command]
pub async fn start_project_container(
project_id: String,
app_handle: tauri::AppHandle,
state: State<'_, AppState>,
) -> Result<Project, String> {
let mut project = state
@@ -124,6 +167,9 @@ pub async fn start_project_container(
let settings = state.settings_store.get();
let image_name = container_config::resolve_image_name(&settings.image_source, &settings.custom_image_name);
// Resolve enabled MCP servers for this project
let (enabled_mcp, docker_mcp) = resolve_mcp_servers(&project, &state);
// Validate auth mode requirements
if project.auth_mode == AuthMode::Bedrock {
let bedrock = project.bedrock_config.as_ref()
@@ -140,6 +186,7 @@ pub async fn start_project_container(
// Wrap container operations so that any failure resets status to Stopped.
let result: Result<String, String> = async {
// Ensure image exists
emit_progress(&app_handle, &project_id, "Checking image...");
if !docker::image_exists(&image_name).await? {
return Err(format!("Docker image '{}' not found. Please pull or build the image first.", image_name));
}
@@ -153,48 +200,93 @@ pub async fn start_project_container(
// AWS config path from global settings
let aws_config_path = settings.global_aws.aws_config_path.clone();
-// Check for existing container
+// Set up Docker network and MCP containers if needed
let network_name = if !docker_mcp.is_empty() {
emit_progress(&app_handle, &project_id, "Setting up MCP network...");
let net = docker::ensure_project_network(&project.id).await?;
emit_progress(&app_handle, &project_id, "Starting MCP containers...");
docker::start_mcp_containers(&docker_mcp, &net).await?;
Some(net)
} else {
None
};
let container_id = if let Some(existing_id) = docker::find_existing_container(&project).await? {
-let needs_recreation = docker::container_needs_recreation(
-&existing_id,
-&project,
-settings.global_claude_instructions.as_deref(),
-&settings.global_custom_env_vars,
-settings.timezone.as_deref(),
-)
-.await
-.unwrap_or(false);
-if needs_recreation {
-log::info!("Container config changed, recreating container for project {}", project.id);
+// Check if config changed — if so, snapshot + recreate
+let needs_recreate = docker::container_needs_recreation(
+&existing_id,
+&project,
+settings.global_claude_instructions.as_deref(),
+&settings.global_custom_env_vars,
+settings.timezone.as_deref(),
+&enabled_mcp,
+).await.unwrap_or(false);
+if needs_recreate {
+log::info!("Container config changed for project {} — committing snapshot and recreating", project.id);
// Snapshot the filesystem before destroying
emit_progress(&app_handle, &project_id, "Saving container state...");
if let Err(e) = docker::commit_container_snapshot(&existing_id, &project).await {
log::warn!("Failed to snapshot container before recreation: {}", e);
}
emit_progress(&app_handle, &project_id, "Recreating container...");
let _ = docker::stop_container(&existing_id).await;
docker::remove_container(&existing_id).await?;
// Create from snapshot image (preserves system-level changes)
let snapshot_image = docker::get_snapshot_image_name(&project);
let create_image = if docker::image_exists(&snapshot_image).await.unwrap_or(false) {
snapshot_image
} else {
image_name.clone()
};
let new_id = docker::create_container(
&project,
&docker_socket,
-&image_name,
+&create_image,
aws_config_path.as_deref(),
&settings.global_aws,
settings.global_claude_instructions.as_deref(),
&settings.global_custom_env_vars,
settings.timezone.as_deref(),
+&enabled_mcp,
+network_name.as_deref(),
).await?;
emit_progress(&app_handle, &project_id, "Starting container...");
docker::start_container(&new_id).await?;
new_id
} else {
emit_progress(&app_handle, &project_id, "Starting container...");
docker::start_container(&existing_id).await?;
existing_id
}
} else {
// Container doesn't exist (first start, or Docker pruned it).
// Check for a snapshot image first — it preserves system-level
// changes (apt/pip/npm installs) from the previous session.
let snapshot_image = docker::get_snapshot_image_name(&project);
let create_image = if docker::image_exists(&snapshot_image).await.unwrap_or(false) {
log::info!("Creating container from snapshot image for project {}", project.id);
snapshot_image
} else {
image_name.clone()
};
emit_progress(&app_handle, &project_id, "Creating container...");
let new_id = docker::create_container(
&project,
&docker_socket,
-&image_name,
+&create_image,
aws_config_path.as_deref(),
&settings.global_aws,
settings.global_claude_instructions.as_deref(),
&settings.global_custom_env_vars,
settings.timezone.as_deref(),
+&enabled_mcp,
+network_name.as_deref(),
).await?;
emit_progress(&app_handle, &project_id, "Starting container...");
docker::start_container(&new_id).await?;
new_id
};
@@ -222,6 +314,7 @@ pub async fn start_project_container(
#[tauri::command]
pub async fn stop_project_container(
project_id: String,
app_handle: tauri::AppHandle,
state: State<'_, AppState>,
) -> Result<(), String> {
let project = state
@@ -229,22 +322,35 @@ pub async fn stop_project_container(
.get(&project_id)
.ok_or_else(|| format!("Project {} not found", project_id))?;
-if let Some(ref container_id) = project.container_id {
-state.projects_store.update_status(&project_id, ProjectStatus::Stopping)?;
+state.projects_store.update_status(&project_id, ProjectStatus::Stopping)?;
+if let Some(ref container_id) = project.container_id {
// Close exec sessions for this project
emit_progress(&app_handle, &project_id, "Stopping container...");
state.exec_manager.close_sessions_for_container(container_id).await;
-docker::stop_container(container_id).await?;
-state.projects_store.update_status(&project_id, ProjectStatus::Stopped)?;
+if let Err(e) = docker::stop_container(container_id).await {
+log::warn!("Docker stop failed for container {} (project {}): {} — resetting to Stopped anyway", container_id, project_id, e);
+}
}
// Stop MCP containers (best-effort)
let (_enabled_mcp, docker_mcp) = resolve_mcp_servers(&project, &state);
if !docker_mcp.is_empty() {
emit_progress(&app_handle, &project_id, "Stopping MCP containers...");
if let Err(e) = docker::stop_mcp_containers(&docker_mcp).await {
log::warn!("Failed to stop MCP containers for project {}: {}", project_id, e);
}
}
state.projects_store.update_status(&project_id, ProjectStatus::Stopped)?;
Ok(())
}
#[tauri::command]
pub async fn rebuild_project_container(
project_id: String,
app_handle: tauri::AppHandle,
state: State<'_, AppState>,
) -> Result<Project, String> {
let project = state
@@ -260,8 +366,24 @@ pub async fn rebuild_project_container(
state.projects_store.set_container_id(&project_id, None)?;
}
// Remove MCP containers before rebuild
let (_enabled_mcp, docker_mcp) = resolve_mcp_servers(&project, &state);
if !docker_mcp.is_empty() {
if let Err(e) = docker::remove_mcp_containers(&docker_mcp).await {
log::warn!("Failed to remove MCP containers for project {}: {}", project_id, e);
}
}
// Remove snapshot image + volumes so Reset creates from the clean base image
if let Err(e) = docker::remove_snapshot_image(&project).await {
log::warn!("Failed to remove snapshot image for project {}: {}", project_id, e);
}
if let Err(e) = docker::remove_project_volumes(&project).await {
log::warn!("Failed to remove project volumes for project {}: {}", project_id, e);
}
// Start fresh
-start_project_container(project_id, state).await
+start_project_container(project_id, app_handle, state).await
}
fn default_docker_socket() -> String {

View File

@@ -1,7 +1,74 @@
use tauri::{AppHandle, Emitter, State};
+use crate::models::{AuthMode, BedrockAuthMethod, Project};
use crate::AppState;
/// Build the command to run in the container terminal.
///
/// For Bedrock Profile projects, wraps `claude` in a bash script that validates
/// the AWS session first. If the SSO session is expired, runs `aws sso login`
/// so the user can re-authenticate (the URL is clickable via xterm.js WebLinksAddon).
fn build_terminal_cmd(project: &Project, state: &AppState) -> Vec<String> {
let is_bedrock_profile = project.auth_mode == AuthMode::Bedrock
&& project
.bedrock_config
.as_ref()
.map(|b| b.auth_method == BedrockAuthMethod::Profile)
.unwrap_or(false);
if !is_bedrock_profile {
return vec![
"claude".to_string(),
"--dangerously-skip-permissions".to_string(),
];
}
// Resolve AWS profile: project-level → global settings → "default"
let profile = project
.bedrock_config
.as_ref()
.and_then(|b| b.aws_profile.clone())
.or_else(|| state.settings_store.get().global_aws.aws_profile.clone())
.unwrap_or_else(|| "default".to_string());
// Build a bash wrapper that validates credentials, re-auths if needed,
// then exec's into claude.
let script = format!(
r#"
echo "Validating AWS session for profile '{profile}'..."
if aws sts get-caller-identity --profile '{profile}' >/dev/null 2>&1; then
echo "AWS session valid."
else
echo "AWS session expired or invalid."
# Check if this profile uses SSO (has sso_start_url configured)
if aws configure get sso_start_url --profile '{profile}' >/dev/null 2>&1; then
echo "Starting SSO login — click the URL below to authenticate:"
echo ""
aws sso login --profile '{profile}'
if [ $? -ne 0 ]; then
echo ""
echo "SSO login failed or was cancelled. Starting Claude anyway..."
echo "You may see authentication errors."
echo ""
fi
else
echo "Profile '{profile}' does not use SSO. Check your AWS credentials."
echo "Starting Claude anyway..."
echo ""
fi
fi
exec claude --dangerously-skip-permissions
"#,
profile = profile
);
vec![
"bash".to_string(),
"-c".to_string(),
script,
]
}
#[tauri::command]
pub async fn open_terminal_session(
project_id: String,
@@ -19,10 +86,7 @@ pub async fn open_terminal_session(
.as_ref()
.ok_or_else(|| "Container not running".to_string())?;
-let cmd = vec![
-"claude".to_string(),
-"--dangerously-skip-permissions".to_string(),
-];
+let cmd = build_terminal_cmd(&project, &state);
let output_event = format!("terminal-output-{}", session_id);
let exit_event = format!("terminal-exit-{}", session_id);
@@ -69,6 +133,80 @@ pub async fn close_terminal_session(
session_id: String,
state: State<'_, AppState>,
) -> Result<(), String> {
// Close audio bridge if it exists
let audio_session_id = format!("audio-{}", session_id);
state.exec_manager.close_session(&audio_session_id).await;
// Close terminal session
state.exec_manager.close_session(&session_id).await;
Ok(())
}
#[tauri::command]
pub async fn paste_image_to_terminal(
session_id: String,
image_data: Vec<u8>,
state: State<'_, AppState>,
) -> Result<String, String> {
let container_id = state.exec_manager.get_container_id(&session_id).await?;
let timestamp = std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.unwrap_or_default()
.as_millis();
let file_name = format!("clipboard_{}.png", timestamp);
state
.exec_manager
.write_file_to_container(&container_id, &file_name, &image_data)
.await
}
#[tauri::command]
pub async fn start_audio_bridge(
session_id: String,
state: State<'_, AppState>,
) -> Result<(), String> {
// Get container_id from the terminal session
let container_id = state.exec_manager.get_container_id(&session_id).await?;
// Create audio bridge exec session with ID "audio-{session_id}"
// The loop handles reconnection when the FIFO reader (fake rec) is killed and restarted
let audio_session_id = format!("audio-{}", session_id);
let cmd = vec![
"bash".to_string(),
"-c".to_string(),
"FIFO=/tmp/triple-c-audio-input; [ -p \"$FIFO\" ] || mkfifo \"$FIFO\"; trap '' PIPE; while true; do cat > \"$FIFO\" 2>/dev/null; sleep 0.1; done".to_string(),
];
state
.exec_manager
.create_session_with_tty(
&container_id,
&audio_session_id,
cmd,
false,
|_data| { /* ignore output from the audio bridge */ },
Box::new(|| { /* no exit handler needed */ }),
)
.await
}
#[tauri::command]
pub async fn send_audio_data(
session_id: String,
data: Vec<u8>,
state: State<'_, AppState>,
) -> Result<(), String> {
let audio_session_id = format!("audio-{}", session_id);
state.exec_manager.send_input(&audio_session_id, data).await
}
#[tauri::command]
pub async fn stop_audio_bridge(
session_id: String,
state: State<'_, AppState>,
) -> Result<(), String> {
let audio_session_id = format!("audio-{}", session_id);
state.exec_manager.close_session(&audio_session_id).await;
Ok(())
}

View File

@@ -1,23 +1,28 @@
use bollard::Docker;
-use std::sync::OnceLock;
+use std::sync::Mutex;

-static DOCKER: OnceLock<Result<Docker, String>> = OnceLock::new();
+static DOCKER: Mutex<Option<Docker>> = Mutex::new(None);

-pub fn get_docker() -> Result<&'static Docker, String> {
-let result = DOCKER.get_or_init(|| {
-Docker::connect_with_local_defaults()
-.map_err(|e| format!("Failed to connect to Docker daemon: {}", e))
-});
-match result {
-Ok(docker) => Ok(docker),
-Err(e) => Err(e.clone()),
-}
-}
+pub fn get_docker() -> Result<Docker, String> {
+let mut guard = DOCKER.lock().map_err(|e| format!("Lock poisoned: {}", e))?;
+if let Some(docker) = guard.as_ref() {
+return Ok(docker.clone());
+}
+let docker = Docker::connect_with_local_defaults()
+.map_err(|e| format!("Failed to connect to Docker daemon: {}", e))?;
+guard.replace(docker.clone());
+Ok(docker)
+}
pub async fn check_docker_available() -> Result<bool, String> {
let docker = get_docker()?;
match docker.ping().await {
Ok(_) => Ok(true),
-Err(e) => Err(format!("Docker daemon not responding: {}", e)),
+Err(_) => {
// Connection object exists but daemon not responding — clear cache
let mut guard = DOCKER.lock().map_err(|e| format!("Lock poisoned: {}", e))?;
*guard = None;
Ok(false)
}
}
}

View File

@@ -2,13 +2,13 @@ use bollard::container::{
Config, CreateContainerOptions, ListContainersOptions, RemoveContainerOptions,
StartContainerOptions, StopContainerOptions,
};
use bollard::image::{CommitContainerOptions, RemoveImageOptions};
use bollard::models::{ContainerSummary, HostConfig, Mount, MountTypeEnum, PortBinding};
use std::collections::HashMap;
-use std::collections::hash_map::DefaultHasher;
-use std::hash::{Hash, Hasher};
+use sha2::{Sha256, Digest};
use super::client::get_docker;
-use crate::models::{AuthMode, BedrockAuthMethod, ContainerInfo, EnvVar, GlobalAwsSettings, PortMapping, Project, ProjectPath};
+use crate::models::{AuthMode, BedrockAuthMethod, ContainerInfo, EnvVar, GlobalAwsSettings, McpServer, McpTransportType, PortMapping, Project, ProjectPath};

const SCHEDULER_INSTRUCTIONS: &str = r#"## Scheduled Tasks
@@ -40,6 +40,42 @@ After tasks run, check notifications with `triple-c-scheduler notifications` and
### Timezone
Scheduled times use the container's configured timezone (check with `date`). If no timezone is configured, UTC is used."#;
/// Build the full CLAUDE_INSTRUCTIONS value by merging global + project
/// instructions, appending port mapping docs, and appending scheduler docs.
/// Used by both create_container() and container_needs_recreation() to ensure
/// the same value is produced in both paths.
fn build_claude_instructions(
global_instructions: Option<&str>,
project_instructions: Option<&str>,
port_mappings: &[PortMapping],
) -> Option<String> {
let mut combined = merge_claude_instructions(global_instructions, project_instructions);
if !port_mappings.is_empty() {
let mut port_lines: Vec<String> = Vec::new();
port_lines.push("## Available Port Mappings".to_string());
port_lines.push("The following ports are mapped from the host to this container. Use these container ports when starting services that need to be accessible from the host:".to_string());
for pm in port_mappings {
port_lines.push(format!(
"- Host port {} -> Container port {} ({})",
pm.host_port, pm.container_port, pm.protocol
));
}
let port_info = port_lines.join("\n");
combined = Some(match combined {
Some(existing) => format!("{}\n\n{}", existing, port_info),
None => port_info,
});
}
combined = Some(match combined {
Some(existing) => format!("{}\n\n{}", existing, SCHEDULER_INSTRUCTIONS),
None => SCHEDULER_INSTRUCTIONS.to_string(),
});
combined
}
/// Compute a fingerprint string for the custom environment variables.
/// Sorted alphabetically so order changes do not cause spurious recreation.
fn compute_env_fingerprint(custom_env_vars: &[EnvVar]) -> String {
@@ -92,20 +128,28 @@ fn merge_claude_instructions(
}
}
/// Hash a string with SHA-256 and return the hex digest.
fn sha256_hex(input: &str) -> String {
let mut hasher = Sha256::new();
hasher.update(input.as_bytes());
format!("{:x}", hasher.finalize())
}
/// Compute a fingerprint for the Bedrock configuration so we can detect changes.
fn compute_bedrock_fingerprint(project: &Project) -> String {
if let Some(ref bedrock) = project.bedrock_config {
-let mut hasher = DefaultHasher::new();
-format!("{:?}", bedrock.auth_method).hash(&mut hasher);
-bedrock.aws_region.hash(&mut hasher);
-bedrock.aws_access_key_id.hash(&mut hasher);
-bedrock.aws_secret_access_key.hash(&mut hasher);
-bedrock.aws_session_token.hash(&mut hasher);
-bedrock.aws_profile.hash(&mut hasher);
-bedrock.aws_bearer_token.hash(&mut hasher);
-bedrock.model_id.hash(&mut hasher);
-bedrock.disable_prompt_caching.hash(&mut hasher);
-format!("{:x}", hasher.finish())
+let parts = vec![
+format!("{:?}", bedrock.auth_method),
+bedrock.aws_region.clone(),
+bedrock.aws_access_key_id.as_deref().unwrap_or("").to_string(),
+bedrock.aws_secret_access_key.as_deref().unwrap_or("").to_string(),
+bedrock.aws_session_token.as_deref().unwrap_or("").to_string(),
+bedrock.aws_profile.as_deref().unwrap_or("").to_string(),
+bedrock.aws_bearer_token.as_deref().unwrap_or("").to_string(),
+bedrock.model_id.as_deref().unwrap_or("").to_string(),
+format!("{}", bedrock.disable_prompt_caching),
+];
+sha256_hex(&parts.join("|"))
} else {
String::new()
}
@@ -120,9 +164,7 @@ fn compute_paths_fingerprint(paths: &[ProjectPath]) -> String {
.collect();
parts.sort();
let joined = parts.join(",");
-let mut hasher = DefaultHasher::new();
-joined.hash(&mut hasher);
-format!("{:x}", hasher.finish())
+sha256_hex(&joined)
}
/// Compute a fingerprint for port mappings so we can detect changes.
@@ -134,9 +176,84 @@ fn compute_ports_fingerprint(port_mappings: &[PortMapping]) -> String {
.collect();
parts.sort();
let joined = parts.join(",");
-let mut hasher = DefaultHasher::new();
-joined.hash(&mut hasher);
-format!("{:x}", hasher.finish())
+sha256_hex(&joined)
}
/// Build the JSON value for MCP servers config to be injected into ~/.claude.json.
/// Produces `{"mcpServers": {"name": {"type": "stdio", ...}, ...}}`.
///
/// Handles 4 modes:
/// - Stdio+Docker: `docker exec -i <mcp-container-name> <command> ...args`
/// - Stdio+Manual: `<command> ...args` (existing behavior)
/// - HTTP+Docker: `streamableHttp` URL pointing to `http://<mcp-container-name>:<port>/mcp`
/// - HTTP+Manual: `streamableHttp` with user-provided URL + headers
fn build_mcp_servers_json(servers: &[McpServer]) -> String {
let mut mcp_map = serde_json::Map::new();
for server in servers {
let mut entry = serde_json::Map::new();
match server.transport_type {
McpTransportType::Stdio => {
entry.insert("type".to_string(), serde_json::json!("stdio"));
if server.is_docker() {
// Stdio+Docker: use `docker exec` to communicate with MCP container
entry.insert("command".to_string(), serde_json::json!("docker"));
let mut args = vec![
"exec".to_string(),
"-i".to_string(),
server.mcp_container_name(),
];
if let Some(ref cmd) = server.command {
args.push(cmd.clone());
}
args.extend(server.args.iter().cloned());
entry.insert("args".to_string(), serde_json::json!(args));
} else {
// Stdio+Manual: existing behavior
if let Some(ref cmd) = server.command {
entry.insert("command".to_string(), serde_json::json!(cmd));
}
if !server.args.is_empty() {
entry.insert("args".to_string(), serde_json::json!(server.args));
}
}
if !server.env.is_empty() {
entry.insert("env".to_string(), serde_json::json!(server.env));
}
}
McpTransportType::Http => {
entry.insert("type".to_string(), serde_json::json!("streamableHttp"));
if server.is_docker() {
// HTTP+Docker: point to MCP container by name on the shared network
let url = format!(
"http://{}:{}/mcp",
server.mcp_container_name(),
server.effective_container_port()
);
entry.insert("url".to_string(), serde_json::json!(url));
} else {
// HTTP+Manual: user-provided URL + headers
if let Some(ref url) = server.url {
entry.insert("url".to_string(), serde_json::json!(url));
}
if !server.headers.is_empty() {
entry.insert("headers".to_string(), serde_json::json!(server.headers));
}
}
}
}
mcp_map.insert(server.name.clone(), serde_json::Value::Object(entry));
}
let wrapper = serde_json::json!({ "mcpServers": mcp_map });
serde_json::to_string(&wrapper).unwrap_or_default()
}
/// Compute a fingerprint for MCP server configuration so we can detect changes.
fn compute_mcp_fingerprint(servers: &[McpServer]) -> String {
if servers.is_empty() {
return String::new();
}
let json = build_mcp_servers_json(servers);
sha256_hex(&json)
}
pub async fn find_existing_container(project: &Project) -> Result<Option<String>, String> {
@@ -178,6 +295,8 @@ pub async fn create_container(
global_claude_instructions: Option<&str>,
global_custom_env_vars: &[EnvVar],
timezone: Option<&str>,
mcp_servers: &[McpServer],
network_name: Option<&str>,
) -> Result<String, String> {
let docker = get_docker()?;
let container_name = project.container_name();
@@ -307,38 +426,23 @@ pub async fn create_container(
}
}
-// Claude instructions (global + per-project, plus port mapping info)
-let mut combined_instructions = merge_claude_instructions(
+// Claude instructions (global + per-project, plus port mapping info + scheduler docs)
+let combined_instructions = build_claude_instructions(
global_claude_instructions,
project.claude_instructions.as_deref(),
+&project.port_mappings,
);
-if !project.port_mappings.is_empty() {
-let mut port_lines: Vec<String> = Vec::new();
-port_lines.push("## Available Port Mappings".to_string());
-port_lines.push("The following ports are mapped from the host to this container. Use these container ports when starting services that need to be accessible from the host:".to_string());
-for pm in &project.port_mappings {
-port_lines.push(format!(
-"- Host port {} -> Container port {} ({})",
-pm.host_port, pm.container_port, pm.protocol
-));
-}
-let port_info = port_lines.join("\n");
-combined_instructions = Some(match combined_instructions {
-Some(existing) => format!("{}\n\n{}", existing, port_info),
-None => port_info,
-});
-}
-// Scheduler instructions (always appended so all containers get scheduling docs)
-let scheduler_docs = SCHEDULER_INSTRUCTIONS;
-combined_instructions = Some(match combined_instructions {
-Some(existing) => format!("{}\n\n{}", existing, scheduler_docs),
-None => scheduler_docs.to_string(),
-});
if let Some(ref instructions) = combined_instructions {
env_vars.push(format!("CLAUDE_INSTRUCTIONS={}", instructions));
}
// MCP servers config
if !mcp_servers.is_empty() {
let mcp_json = build_mcp_servers_json(mcp_servers);
env_vars.push(format!("MCP_SERVERS_JSON={}", mcp_json));
}
let mut mounts: Vec<Mount> = Vec::new();
// Project directories -> /workspace/{mount_name}
@@ -352,7 +456,19 @@ pub async fn create_container(
});
}
// Named volume for claude config persistence
// Named volume for the entire home directory — preserves ~/.claude.json,
// ~/.local (pip/npm globals), and any other user-level state across
// container stop/start cycles.
mounts.push(Mount {
target: Some("/home/claude".to_string()),
source: Some(format!("triple-c-home-{}", project.id)),
typ: Some(MountTypeEnum::VOLUME),
read_only: Some(false),
..Default::default()
});
// Named volume for claude config persistence — mounted as a nested volume
// inside the home volume; Docker gives the more-specific mount precedence.
mounts.push(Mount {
target: Some("/home/claude/.claude".to_string()),
source: Some(format!("triple-c-claude-config-{}", project.id)),
@@ -392,7 +508,7 @@ pub async fn create_container(
if let Some(ref aws_path) = aws_dir {
if aws_path.exists() {
mounts.push(Mount {
target: Some("/home/claude/.aws".to_string()),
target: Some("/tmp/.host-aws".to_string()),
source: Some(aws_path.to_string_lossy().to_string()),
typ: Some(MountTypeEnum::BIND),
read_only: Some(true),
@@ -402,8 +518,12 @@ pub async fn create_container(
}
}
// Docker socket (only if allowed)
// Docker socket (if allowed, or auto-enabled for stdio+Docker MCP servers)
if project.allow_docker_access {
let needs_docker_for_mcp = any_stdio_docker_mcp(mcp_servers);
if project.allow_docker_access || needs_docker_for_mcp {
if needs_docker_for_mcp && !project.allow_docker_access {
log::info!("Auto-enabling Docker socket access for stdio+Docker MCP servers");
}
// On Windows, the named pipe (//./pipe/docker_engine) cannot be
// bind-mounted into a Linux container. Docker Desktop exposes the
// daemon socket as /var/run/docker.sock for container mounts.
@@ -446,11 +566,14 @@ pub async fn create_container(
labels.insert("triple-c.ports-fingerprint".to_string(), compute_ports_fingerprint(&project.port_mappings));
labels.insert("triple-c.image".to_string(), image_name.to_string());
labels.insert("triple-c.timezone".to_string(), timezone.unwrap_or("").to_string());
labels.insert("triple-c.mcp-fingerprint".to_string(), compute_mcp_fingerprint(mcp_servers));
let host_config = HostConfig {
mounts: Some(mounts),
port_bindings: if port_bindings.is_empty() { None } else { Some(port_bindings) },
init: Some(true),
// Connect to project network if specified (for MCP container communication)
network_mode: network_name.map(|n| n.to_string()),
..Default::default()
};
@@ -523,6 +646,83 @@ pub async fn remove_container(container_id: &str) -> Result<(), String> {
.map_err(|e| format!("Failed to remove container: {}", e))
}
/// Return the snapshot image name for a project.
pub fn get_snapshot_image_name(project: &Project) -> String {
format!("triple-c-snapshot-{}:latest", project.id)
}
/// Commit the container's filesystem to a snapshot image so that system-level
/// changes (apt/pip/npm installs, ~/.claude.json, etc.) survive container
/// removal. The Config is left empty so that secrets injected as env vars are
/// NOT baked into the image.
pub async fn commit_container_snapshot(container_id: &str, project: &Project) -> Result<(), String> {
let docker = get_docker()?;
let image_name = get_snapshot_image_name(project);
// Parse repo:tag
let (repo, tag) = match image_name.rsplit_once(':') {
Some((r, t)) => (r.to_string(), t.to_string()),
None => (image_name.clone(), "latest".to_string()),
};
let options = CommitContainerOptions {
container: container_id.to_string(),
repo: repo.clone(),
tag: tag.clone(),
pause: true,
..Default::default()
};
// Empty config — no env vars / cmd baked in
let config = Config::<String> {
..Default::default()
};
docker
.commit_container(options, config)
.await
.map_err(|e| format!("Failed to commit container snapshot: {}", e))?;
log::info!("Committed container {} as snapshot {}:{}", container_id, repo, tag);
Ok(())
}
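The `repo:tag` split in `commit_container_snapshot` can be isolated as a tiny helper. This is a sketch of the same parsing, not code from the diff; the hypothetical function name is mine. `rsplit_once(':')` splits at the last colon, so references that carry both a registry port and an explicit tag still parse correctly:

```rust
// Split an image reference into (repo, tag), defaulting the tag to "latest".
// rsplit_once uses the LAST colon as the separator, so "registry:5000/img:v1"
// keeps the port colon inside the repo part.
fn parse_image_ref(image_name: &str) -> (String, String) {
    match image_name.rsplit_once(':') {
        Some((r, t)) => (r.to_string(), t.to_string()),
        None => (image_name.to_string(), "latest".to_string()),
    }
}
```

For the snapshot names generated here (`triple-c-snapshot-{id}:latest`) the tag is always present, so the `None` arm is only a safety net.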
/// Remove the snapshot image for a project (used on Reset / project removal).
pub async fn remove_snapshot_image(project: &Project) -> Result<(), String> {
let docker = get_docker()?;
let image_name = get_snapshot_image_name(project);
docker
.remove_image(
&image_name,
Some(RemoveImageOptions {
force: true,
noprune: false,
}),
None,
)
.await
.map_err(|e| format!("Failed to remove snapshot image {}: {}", image_name, e))?;
log::info!("Removed snapshot image {}", image_name);
Ok(())
}
/// Remove both named volumes for a project (used on Reset / project removal).
pub async fn remove_project_volumes(project: &Project) -> Result<(), String> {
let docker = get_docker()?;
for vol in [
format!("triple-c-home-{}", project.id),
format!("triple-c-claude-config-{}", project.id),
] {
match docker.remove_volume(&vol, None).await {
Ok(_) => log::info!("Removed volume {}", vol),
Err(e) => log::warn!("Failed to remove volume {} (may not exist): {}", vol, e),
}
}
Ok(())
}
/// Check whether the existing container's configuration still matches the
/// current project settings. Returns `true` when the container must be
/// recreated (mounts or env vars differ).
@@ -532,6 +732,7 @@ pub async fn container_needs_recreation(
global_claude_instructions: Option<&str>,
global_custom_env_vars: &[EnvVar],
timezone: Option<&str>,
mcp_servers: &[McpServer],
) -> Result<bool, String> {
let docker = get_docker()?;
let info = docker
@@ -685,9 +886,10 @@ pub async fn container_needs_recreation(
}
// ── Claude instructions ───────────────────────────────────────────────
let expected_instructions = merge_claude_instructions(
let expected_instructions = build_claude_instructions(
global_claude_instructions,
project.claude_instructions.as_deref(),
&project.port_mappings,
);
let container_instructions = get_env("CLAUDE_INSTRUCTIONS");
if container_instructions.as_deref() != expected_instructions.as_deref() {
@@ -695,6 +897,14 @@ pub async fn container_needs_recreation(
return Ok(true);
}
// ── MCP servers fingerprint ─────────────────────────────────────────
let expected_mcp_fp = compute_mcp_fingerprint(mcp_servers);
let container_mcp_fp = get_label("triple-c.mcp-fingerprint").unwrap_or_default();
if container_mcp_fp != expected_mcp_fp {
log::info!("MCP servers fingerprint mismatch (container={:?}, expected={:?})", container_mcp_fp, expected_mcp_fp);
return Ok(true);
}
Ok(false)
}
@@ -753,3 +963,178 @@ pub async fn list_sibling_containers() -> Result<Vec<ContainerSummary>, String>
Ok(siblings)
}
// ── MCP Container Lifecycle ─────────────────────────────────────────────
/// Returns true if any MCP server uses stdio transport with Docker.
pub fn any_stdio_docker_mcp(servers: &[McpServer]) -> bool {
servers.iter().any(|s| s.is_docker() && s.transport_type == McpTransportType::Stdio)
}
/// Returns true if any MCP server uses Docker.
pub fn any_docker_mcp(servers: &[McpServer]) -> bool {
servers.iter().any(|s| s.is_docker())
}
/// Find an existing MCP container by its expected name.
pub async fn find_mcp_container(server: &McpServer) -> Result<Option<String>, String> {
let docker = get_docker()?;
let container_name = server.mcp_container_name();
let filters: HashMap<String, Vec<String>> = HashMap::from([
("name".to_string(), vec![container_name.clone()]),
]);
let containers: Vec<ContainerSummary> = docker
.list_containers(Some(ListContainersOptions {
all: true,
filters,
..Default::default()
}))
.await
.map_err(|e| format!("Failed to list MCP containers: {}", e))?;
let expected = format!("/{}", container_name);
for c in &containers {
if let Some(names) = &c.names {
if names.iter().any(|n| n == &expected) {
return Ok(c.id.clone());
}
}
}
Ok(None)
}
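`find_mcp_container` compares against `format!("/{}", container_name)` because Docker's list API reports names with a leading slash and the `name` filter does substring matching, so an exact check is still needed. A minimal sketch of that comparison (the helper name is illustrative):

```rust
// Docker reports container names as "/my-container"; the name filter can
// also return partial matches, so compare exactly against "/{name}".
fn matches_exact(names: &[String], container_name: &str) -> bool {
    let expected = format!("/{}", container_name);
    names.iter().any(|n| n == &expected)
}
```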
/// Create a Docker container for an MCP server.
pub async fn create_mcp_container(
server: &McpServer,
network_name: &str,
) -> Result<String, String> {
let docker = get_docker()?;
let container_name = server.mcp_container_name();
let image = server
.docker_image
.as_ref()
.ok_or_else(|| format!("MCP server '{}' has no docker_image", server.name))?;
let mut env_vars: Vec<String> = Vec::new();
for (k, v) in &server.env {
env_vars.push(format!("{}={}", k, v));
}
// Build command + args as Cmd
let mut cmd: Vec<String> = Vec::new();
if let Some(ref command) = server.command {
cmd.push(command.clone());
}
cmd.extend(server.args.iter().cloned());
let mut labels = HashMap::new();
labels.insert("triple-c.managed".to_string(), "true".to_string());
labels.insert("triple-c.mcp-server".to_string(), server.id.clone());
let host_config = HostConfig {
network_mode: Some(network_name.to_string()),
..Default::default()
};
let config = Config {
image: Some(image.clone()),
env: if env_vars.is_empty() { None } else { Some(env_vars) },
cmd: if cmd.is_empty() { None } else { Some(cmd) },
labels: Some(labels),
host_config: Some(host_config),
..Default::default()
};
let options = CreateContainerOptions {
name: container_name.clone(),
..Default::default()
};
let response = docker
.create_container(Some(options), config)
.await
.map_err(|e| format!("Failed to create MCP container '{}': {}", container_name, e))?;
log::info!(
"Created MCP container {} (image: {}) on network {}",
container_name,
image,
network_name
);
Ok(response.id)
}
/// Start all Docker-based MCP server containers. Finds or creates each one.
pub async fn start_mcp_containers(
servers: &[McpServer],
network_name: &str,
) -> Result<(), String> {
for server in servers {
if !server.is_docker() {
continue;
}
let container_id = if let Some(existing_id) = find_mcp_container(server).await? {
log::debug!("Found existing MCP container for '{}'", server.name);
existing_id
} else {
create_mcp_container(server, network_name).await?
};
// Start the container (ignore already-started errors)
if let Err(e) = start_container(&container_id).await {
let err_str = e.to_string();
if err_str.contains("already started") || err_str.contains("304") {
log::debug!("MCP container '{}' already running", server.name);
} else {
return Err(format!(
"Failed to start MCP container '{}': {}",
server.name, e
));
}
}
log::info!("MCP container '{}' started", server.name);
}
Ok(())
}
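The start loop above treats "already started" as success by inspecting the error text, since bollard surfaces the daemon's 304 Not Modified in the message. A sketch of that classification; the matched substrings mirror the code above, not a documented API contract:

```rust
// Treat "already started" / HTTP 304 error strings as a benign no-op when
// starting an MCP container that is already running.
fn is_already_running_err(err: &str) -> bool {
    err.contains("already started") || err.contains("304")
}
```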
/// Stop all Docker-based MCP server containers (best-effort).
pub async fn stop_mcp_containers(servers: &[McpServer]) -> Result<(), String> {
for server in servers {
if !server.is_docker() {
continue;
}
if let Ok(Some(container_id)) = find_mcp_container(server).await {
if let Err(e) = stop_container(&container_id).await {
log::warn!("Failed to stop MCP container '{}': {}", server.name, e);
} else {
log::info!("Stopped MCP container '{}'", server.name);
}
}
}
Ok(())
}
/// Stop and remove all Docker-based MCP server containers (best-effort).
pub async fn remove_mcp_containers(servers: &[McpServer]) -> Result<(), String> {
for server in servers {
if !server.is_docker() {
continue;
}
if let Ok(Some(container_id)) = find_mcp_container(server).await {
let _ = stop_container(&container_id).await;
if let Err(e) = remove_container(&container_id).await {
log::warn!("Failed to remove MCP container '{}': {}", server.name, e);
} else {
log::info!("Removed MCP container '{}'", server.name);
}
}
}
Ok(())
}

View File

@@ -1,3 +1,4 @@
use bollard::container::UploadToContainerOptions;
use bollard::exec::{CreateExecOptions, ResizeExecOptions, StartExecResults};
use futures_util::StreamExt;
use std::collections::HashMap;
@@ -59,6 +60,22 @@ impl ExecSessionManager {
on_output: F,
on_exit: Box<dyn FnOnce() + Send>,
) -> Result<(), String>
where
F: Fn(Vec<u8>) + Send + 'static,
{
self.create_session_with_tty(container_id, session_id, cmd, true, on_output, on_exit)
.await
}
pub async fn create_session_with_tty<F>(
&self,
container_id: &str,
session_id: &str,
cmd: Vec<String>,
tty: bool,
on_output: F,
on_exit: Box<dyn FnOnce() + Send>,
) -> Result<(), String>
where
F: Fn(Vec<u8>) + Send + 'static,
{
@@ -71,7 +88,7 @@ impl ExecSessionManager {
attach_stdin: Some(true),
attach_stdout: Some(true),
attach_stderr: Some(true),
tty: Some(true),
tty: Some(tty),
cmd: Some(cmd),
user: Some("claude".to_string()),
working_dir: Some("/workspace".to_string()),
@@ -212,4 +229,51 @@ impl ExecSessionManager {
session.shutdown();
}
}
pub async fn get_container_id(&self, session_id: &str) -> Result<String, String> {
let sessions = self.sessions.lock().await;
let session = sessions
.get(session_id)
.ok_or_else(|| format!("Session {} not found", session_id))?;
Ok(session.container_id.clone())
}
pub async fn write_file_to_container(
&self,
container_id: &str,
file_name: &str,
data: &[u8],
) -> Result<String, String> {
let docker = get_docker()?;
// Build a tar archive in memory containing the file
let mut tar_buf = Vec::new();
{
let mut builder = tar::Builder::new(&mut tar_buf);
let mut header = tar::Header::new_gnu();
header.set_size(data.len() as u64);
header.set_mode(0o644);
header.set_cksum();
builder
.append_data(&mut header, file_name, data)
.map_err(|e| format!("Failed to create tar entry: {}", e))?;
builder
.finish()
.map_err(|e| format!("Failed to finalize tar: {}", e))?;
}
docker
.upload_to_container(
container_id,
Some(UploadToContainerOptions {
path: "/tmp".to_string(),
..Default::default()
}),
tar_buf.into(),
)
.await
.map_err(|e| format!("Failed to upload file to container: {}", e))?;
Ok(format!("/tmp/{}", file_name))
}
}

View File

@@ -2,8 +2,10 @@ pub mod client;
pub mod container;
pub mod image;
pub mod exec;
pub mod network;
pub use client::*;
pub use container::*;
pub use image::*;
pub use exec::*;
pub use network::*;

View File

@@ -0,0 +1,128 @@
use bollard::network::{CreateNetworkOptions, InspectNetworkOptions};
use std::collections::HashMap;
use super::client::get_docker;
/// Network name for a project's MCP containers.
fn project_network_name(project_id: &str) -> String {
format!("triple-c-net-{}", project_id)
}
/// Ensure a Docker bridge network exists for the project.
/// Returns the network name.
pub async fn ensure_project_network(project_id: &str) -> Result<String, String> {
let docker = get_docker()?;
let network_name = project_network_name(project_id);
// Check if network already exists
match docker
.inspect_network(&network_name, None::<InspectNetworkOptions<String>>)
.await
{
Ok(_) => {
log::debug!("Network {} already exists", network_name);
return Ok(network_name);
}
Err(_) => {
// Network doesn't exist, create it
}
}
let options = CreateNetworkOptions {
name: network_name.clone(),
driver: "bridge".to_string(),
labels: HashMap::from([
("triple-c.managed".to_string(), "true".to_string()),
("triple-c.project-id".to_string(), project_id.to_string()),
]),
..Default::default()
};
docker
.create_network(options)
.await
.map_err(|e| format!("Failed to create network {}: {}", network_name, e))?;
log::info!("Created Docker network {}", network_name);
Ok(network_name)
}
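`ensure_project_network` is idempotent: inspect first, create only on a miss, return the name either way. The same pattern can be sketched against an in-memory stand-in for the daemon (the helper and the `HashSet` stand-in are illustrative, not the bollard API):

```rust
use std::collections::HashSet;

// Inspect-then-create: only create the network when it does not already
// exist, and return the deterministic per-project name in both cases.
fn ensure_network(existing: &mut HashSet<String>, project_id: &str) -> String {
    let name = format!("triple-c-net-{}", project_id);
    if !existing.contains(&name) {
        existing.insert(name.clone()); // stands in for create_network
    }
    name
}
```

Note this sketch has a check-then-act race that the real code tolerates too; a second create attempt simply fails against the daemon and could be handled the same way as the 304 case for container starts.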
/// Connect a container to the project network.
pub async fn connect_container_to_network(
container_id: &str,
network_name: &str,
) -> Result<(), String> {
let docker = get_docker()?;
let config = bollard::network::ConnectNetworkOptions {
container: container_id.to_string(),
..Default::default()
};
docker
.connect_network(network_name, config)
.await
.map_err(|e| {
format!(
"Failed to connect container {} to network {}: {}",
container_id, network_name, e
)
})?;
log::debug!(
"Connected container {} to network {}",
container_id,
network_name
);
Ok(())
}
/// Remove the project network (best-effort). Disconnects all containers first.
pub async fn remove_project_network(project_id: &str) -> Result<(), String> {
let docker = get_docker()?;
let network_name = project_network_name(project_id);
// Inspect to get connected containers
let info = match docker
.inspect_network(&network_name, None::<InspectNetworkOptions<String>>)
.await
{
Ok(info) => info,
Err(_) => {
log::debug!(
"Network {} not found, nothing to remove",
network_name
);
return Ok(());
}
};
// Disconnect all containers
if let Some(containers) = info.containers {
for (container_id, _) in containers {
let disconnect_opts = bollard::network::DisconnectNetworkOptions {
container: container_id.clone(),
force: true,
};
if let Err(e) = docker
.disconnect_network(&network_name, disconnect_opts)
.await
{
log::warn!(
"Failed to disconnect container {} from network {}: {}",
container_id,
network_name,
e
);
}
}
}
// Remove the network
match docker.remove_network(&network_name).await {
Ok(_) => log::info!("Removed Docker network {}", network_name),
Err(e) => log::warn!("Failed to remove network {}: {}", network_name, e),
}
Ok(())
}

View File

@@ -7,11 +7,13 @@ mod storage;
use docker::exec::ExecSessionManager;
use storage::projects_store::ProjectsStore;
use storage::settings_store::SettingsStore;
use storage::mcp_store::McpStore;
use tauri::Manager;
pub struct AppState {
pub projects_store: ProjectsStore,
pub settings_store: SettingsStore,
pub mcp_store: McpStore,
pub exec_manager: ExecSessionManager,
}
@@ -32,6 +34,13 @@ pub fn run() {
panic!("Failed to initialize settings store: {}", e);
}
};
let mcp_store = match McpStore::new() {
Ok(s) => s,
Err(e) => {
log::error!("Failed to initialize MCP store: {}", e);
panic!("Failed to initialize MCP store: {}", e);
}
};
tauri::Builder::default()
.plugin(tauri_plugin_store::Builder::default().build())
@@ -40,6 +49,7 @@ pub fn run() {
.manage(AppState {
projects_store,
settings_store,
mcp_store,
exec_manager: ExecSessionManager::new(),
})
.setup(|app| {
@@ -90,6 +100,15 @@ pub fn run() {
commands::terminal_commands::terminal_input,
commands::terminal_commands::terminal_resize,
commands::terminal_commands::close_terminal_session,
commands::terminal_commands::paste_image_to_terminal,
commands::terminal_commands::start_audio_bridge,
commands::terminal_commands::send_audio_data,
commands::terminal_commands::stop_audio_bridge,
// MCP
commands::mcp_commands::list_mcp_servers,
commands::mcp_commands::add_mcp_server,
commands::mcp_commands::update_mcp_server,
commands::mcp_commands::remove_mcp_server,
// Updates
commands::update_commands::get_app_version,
commands::update_commands::check_for_updates,

View File

@@ -70,6 +70,8 @@ pub struct AppSettings {
pub dismissed_update_version: Option<String>,
#[serde(default)]
pub timezone: Option<String>,
#[serde(default)]
pub default_microphone: Option<String>,
}
impl Default for AppSettings {
@@ -87,6 +89,7 @@ impl Default for AppSettings {
auto_check_updates: true,
dismissed_update_version: None,
timezone: None,
default_microphone: None,
}
}
}

View File

@@ -0,0 +1,70 @@
use serde::{Deserialize, Serialize};
use std::collections::HashMap;
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
#[serde(rename_all = "snake_case")]
pub enum McpTransportType {
Stdio,
#[serde(alias = "sse")]
Http,
}
impl Default for McpTransportType {
fn default() -> Self {
Self::Stdio
}
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct McpServer {
pub id: String,
pub name: String,
#[serde(default)]
pub transport_type: McpTransportType,
pub command: Option<String>,
#[serde(default)]
pub args: Vec<String>,
#[serde(default)]
pub env: HashMap<String, String>,
pub url: Option<String>,
#[serde(default)]
pub headers: HashMap<String, String>,
#[serde(default)]
pub docker_image: Option<String>,
#[serde(default)]
pub container_port: Option<u16>,
pub created_at: String,
pub updated_at: String,
}
impl McpServer {
pub fn new(name: String) -> Self {
let now = chrono::Utc::now().to_rfc3339();
Self {
id: uuid::Uuid::new_v4().to_string(),
name,
transport_type: McpTransportType::default(),
command: None,
args: Vec::new(),
env: HashMap::new(),
url: None,
headers: HashMap::new(),
docker_image: None,
container_port: None,
created_at: now.clone(),
updated_at: now,
}
}
pub fn is_docker(&self) -> bool {
self.docker_image.is_some()
}
pub fn mcp_container_name(&self) -> String {
format!("triple-c-mcp-{}", self.id)
}
pub fn effective_container_port(&self) -> u16 {
self.container_port.unwrap_or(3000)
}
}
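Two details of the `McpServer` model above are easy to miss: `docker_image` doubles as the "is this a Docker-backed server" flag, and `container_port` falls back to 3000. A reduced, std-only sketch of just those two behaviors (the trimmed struct is illustrative, not the full model):

```rust
// Reduced stand-in for McpServer showing the Option-driven defaults:
// presence of docker_image decides is_docker, and container_port
// defaults to 3000 via unwrap_or.
struct McpServerLite {
    docker_image: Option<String>,
    container_port: Option<u16>,
}

impl McpServerLite {
    fn is_docker(&self) -> bool {
        self.docker_image.is_some()
    }
    fn effective_container_port(&self) -> u16 {
        self.container_port.unwrap_or(3000)
    }
}
```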

View File

@@ -2,8 +2,10 @@ pub mod project;
pub mod container_config;
pub mod app_settings;
pub mod update_info;
pub mod mcp_server;
pub use project::*;
pub use container_config::*;
pub use app_settings::*;
pub use update_info::*;
pub use mcp_server::*;

View File

@@ -35,7 +35,7 @@ pub struct Project {
pub bedrock_config: Option<BedrockConfig>,
pub allow_docker_access: bool,
pub ssh_key_path: Option<String>,
#[serde(skip_serializing)]
#[serde(skip_serializing, default)]
pub git_token: Option<String>,
pub git_user_name: Option<String>,
pub git_user_email: Option<String>,
@@ -45,6 +45,8 @@ pub struct Project {
pub port_mappings: Vec<PortMapping>,
#[serde(default)]
pub claude_instructions: Option<String>,
#[serde(default)]
pub enabled_mcp_servers: Vec<String>,
pub created_at: String,
pub updated_at: String,
}
@@ -98,14 +100,14 @@ impl Default for BedrockAuthMethod {
pub struct BedrockConfig {
pub auth_method: BedrockAuthMethod,
pub aws_region: String,
#[serde(skip_serializing)]
#[serde(skip_serializing, default)]
pub aws_access_key_id: Option<String>,
#[serde(skip_serializing)]
#[serde(skip_serializing, default)]
pub aws_secret_access_key: Option<String>,
#[serde(skip_serializing)]
#[serde(skip_serializing, default)]
pub aws_session_token: Option<String>,
pub aws_profile: Option<String>,
#[serde(skip_serializing)]
#[serde(skip_serializing, default)]
pub aws_bearer_token: Option<String>,
pub model_id: Option<String>,
pub disable_prompt_caching: bool,
@@ -130,6 +132,7 @@ impl Project {
custom_env_vars: Vec::new(),
port_mappings: Vec::new(),
claude_instructions: None,
enabled_mcp_servers: Vec::new(),
created_at: now.clone(),
updated_at: now,
}

View File

@@ -0,0 +1,106 @@
use std::fs;
use std::path::PathBuf;
use std::sync::Mutex;
use crate::models::McpServer;
pub struct McpStore {
servers: Mutex<Vec<McpServer>>,
file_path: PathBuf,
}
impl McpStore {
pub fn new() -> Result<Self, String> {
let data_dir = dirs::data_dir()
.ok_or_else(|| "Could not determine data directory. Set XDG_DATA_HOME on Linux.".to_string())?
.join("triple-c");
fs::create_dir_all(&data_dir).ok();
let file_path = data_dir.join("mcp_servers.json");
let servers = if file_path.exists() {
match fs::read_to_string(&file_path) {
Ok(data) => {
match serde_json::from_str::<Vec<McpServer>>(&data) {
Ok(parsed) => parsed,
Err(e) => {
log::error!("Failed to parse mcp_servers.json: {}. Starting with empty list.", e);
let backup = file_path.with_extension("json.bak");
if let Err(be) = fs::copy(&file_path, &backup) {
log::error!("Failed to back up corrupted mcp_servers.json: {}", be);
}
Vec::new()
}
}
}
Err(e) => {
log::error!("Failed to read mcp_servers.json: {}", e);
Vec::new()
}
}
} else {
Vec::new()
};
Ok(Self {
servers: Mutex::new(servers),
file_path,
})
}
fn lock(&self) -> std::sync::MutexGuard<'_, Vec<McpServer>> {
self.servers.lock().unwrap_or_else(|e| e.into_inner())
}
fn save(&self, servers: &[McpServer]) -> Result<(), String> {
let data = serde_json::to_string_pretty(servers)
.map_err(|e| format!("Failed to serialize MCP servers: {}", e))?;
// Atomic write: write to temp file, then rename
let tmp_path = self.file_path.with_extension("json.tmp");
fs::write(&tmp_path, data)
.map_err(|e| format!("Failed to write temp MCP servers file: {}", e))?;
fs::rename(&tmp_path, &self.file_path)
.map_err(|e| format!("Failed to rename MCP servers file: {}", e))?;
Ok(())
}
pub fn list(&self) -> Vec<McpServer> {
self.lock().clone()
}
pub fn get(&self, id: &str) -> Option<McpServer> {
self.lock().iter().find(|s| s.id == id).cloned()
}
pub fn add(&self, server: McpServer) -> Result<McpServer, String> {
let mut servers = self.lock();
let cloned = server.clone();
servers.push(server);
self.save(&servers)?;
Ok(cloned)
}
pub fn update(&self, updated: McpServer) -> Result<McpServer, String> {
let mut servers = self.lock();
if let Some(s) = servers.iter_mut().find(|s| s.id == updated.id) {
*s = updated.clone();
self.save(&servers)?;
Ok(updated)
} else {
Err(format!("MCP server {} not found", updated.id))
}
}
pub fn remove(&self, id: &str) -> Result<(), String> {
let mut servers = self.lock();
let initial_len = servers.len();
servers.retain(|s| s.id != id);
if servers.len() == initial_len {
return Err(format!("MCP server {} not found", id));
}
self.save(&servers)?;
Ok(())
}
}
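`McpStore::save` uses the write-temp-then-rename pattern: on the same filesystem a rename atomically replaces the target, so readers never observe a half-written `mcp_servers.json`. A self-contained sketch of the same pattern (the helper name and paths are illustrative):

```rust
use std::fs;
use std::path::Path;

// Atomic-ish persist: write the full payload to a sibling temp file, then
// rename over the real file. Rename is atomic on POSIX filesystems;
// on Windows it is best-effort (fails if the target is open).
fn atomic_write(path: &Path, data: &str) -> std::io::Result<()> {
    let tmp = path.with_extension("json.tmp");
    fs::write(&tmp, data)?; // a crash here leaves only a stale .tmp behind
    fs::rename(&tmp, path)
}
```

The `lock()` helper above is also worth noting: `unwrap_or_else(|e| e.into_inner())` deliberately recovers from a poisoned mutex instead of panicking, since the stored `Vec` stays usable even if a writer panicked mid-operation.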

View File

@@ -1,7 +1,9 @@
pub mod projects_store;
pub mod secure;
pub mod settings_store;
pub mod mcp_store;
pub use projects_store::*;
pub use secure::*;
pub use settings_store::*;
pub use mcp_store::*;

View File

@@ -70,17 +70,38 @@ impl ProjectsStore {
(Vec::new(), false)
};
// Reconcile stale transient statuses: on a cold app start no Docker
// operations can be in flight, so Starting/Stopping are always stale.
let mut projects = projects;
let mut needs_save = needs_save;
for p in projects.iter_mut() {
match p.status {
crate::models::ProjectStatus::Starting | crate::models::ProjectStatus::Stopping => {
log::warn!(
"Reconciling stale '{}' status for project '{}' ({}) → Stopped",
serde_json::to_string(&p.status).unwrap_or_default().trim_matches('"'),
p.name,
p.id
);
p.status = crate::models::ProjectStatus::Stopped;
p.updated_at = chrono::Utc::now().to_rfc3339();
needs_save = true;
}
_ => {}
}
}
let store = Self {
projects: Mutex::new(projects),
file_path,
};
// Persist migrated format back to disk
// Persist migrated/reconciled format back to disk
if needs_save {
log::info!("Migrated projects.json from single-path to multi-path format");
log::info!("Saving reconciled/migrated projects.json to disk");
let projects = store.lock();
if let Err(e) = store.save(&projects) {
log::error!("Failed to save migrated projects: {}", e);
log::error!("Failed to save projects: {}", e);
}
}
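The reconciliation pass in this hunk rests on one invariant: on a cold app start no Docker operation can be in flight, so a persisted `Starting` or `Stopping` status is always stale. The rule itself reduces to a two-arm match; a sketch, with the extra enum variants assumed (the diff only names `Starting`, `Stopping`, and `Stopped`):

```rust
#[derive(Debug, PartialEq)]
enum ProjectStatus {
    Stopped,
    Starting,
    Running,
    Stopping,
}

// Transient statuses cannot survive a restart: map them back to Stopped,
// leave every other status untouched.
fn reconcile(status: ProjectStatus) -> ProjectStatus {
    match status {
        ProjectStatus::Starting | ProjectStatus::Stopping => ProjectStatus::Stopped,
        other => other,
    }
}
```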

View File

@@ -7,13 +7,15 @@ import TerminalView from "./components/terminal/TerminalView";
import { useDocker } from "./hooks/useDocker";
import { useSettings } from "./hooks/useSettings";
import { useProjects } from "./hooks/useProjects";
import { useMcpServers } from "./hooks/useMcpServers";
import { useUpdates } from "./hooks/useUpdates";
import { useAppState } from "./store/appState";
export default function App() {
const { checkDocker, checkImage } = useDocker();
const { checkDocker, checkImage, startDockerPolling } = useDocker();
const { loadSettings } = useSettings();
const { refresh } = useProjects();
const { refresh: refreshMcp } = useMcpServers();
const { loadVersion, checkForUpdates, startPeriodicCheck } = useUpdates();
const { sessions, activeSessionId } = useAppState(
useShallow(s => ({ sessions: s.sessions, activeSessionId: s.activeSessionId }))
@@ -22,10 +24,16 @@ export default function App() {
// Initialize on mount
useEffect(() => {
loadSettings();
let stopPolling: (() => void) | undefined;
checkDocker().then((available) => { checkDocker().then((available) => {
if (available) checkImage(); if (available) {
checkImage();
} else {
stopPolling = startDockerPolling();
}
}); });
refresh(); refresh();
refreshMcp();
// Update detection // Update detection
loadVersion(); loadVersion();
@@ -34,6 +42,7 @@ export default function App() {
return () => { return () => {
clearTimeout(updateTimer); clearTimeout(updateTimer);
cleanup?.(); cleanup?.();
stopPolling?.();
}; };
}, []); // eslint-disable-line react-hooks/exhaustive-deps }, []); // eslint-disable-line react-hooks/exhaustive-deps
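The `startDockerPolling` implementation lives in `hooks/useDocker` and is not part of this diff. As a rough sketch of the contract App.tsx relies on — a poller that retries until Docker comes up, then runs a callback once, and returns a cancel function for the effect cleanup — it could look like this (function names and the 5-second interval are assumptions):

```typescript
// Hypothetical sketch of the polling helper behind startDockerPolling —
// the real hook is not shown in this diff and may differ.
function startPolling(
  check: () => Promise<boolean>, // e.g. the checkDocker invoke
  onAvailable: () => void,       // e.g. checkImage
  intervalMs = 5000,
): () => void {
  const id = setInterval(async () => {
    if (await check()) {
      clearInterval(id); // stop polling once Docker is detected
      onAvailable();
    }
  }, intervalMs);
  // The caller keeps this to cancel polling on unmount (stopPolling?.())
  return () => clearInterval(id);
}
```

Returning the canceller (rather than exposing the interval id) is what lets the mount effect above call `stopPolling?.()` in its cleanup without knowing how the polling works.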

View File

@@ -19,6 +19,9 @@ vi.mock("../projects/ProjectList", () => ({
vi.mock("../settings/SettingsPanel", () => ({
default: () => <div data-testid="settings-panel">SettingsPanel</div>,
}));
vi.mock("../mcp/McpPanel", () => ({
default: () => <div data-testid="mcp-panel">McpPanel</div>,
}));
describe("Sidebar", () => {
beforeEach(() => {

View File

@@ -1,6 +1,7 @@
import { useShallow } from "zustand/react/shallow";
import { useAppState } from "../../store/appState";
import ProjectList from "../projects/ProjectList";
import McpPanel from "../mcp/McpPanel";
import SettingsPanel from "../settings/SettingsPanel";
export default function Sidebar() {
@@ -8,35 +9,37 @@ export default function Sidebar() {
useShallow(s => ({ sidebarView: s.sidebarView, setSidebarView: s.setSidebarView }))
);
const tabCls = (view: typeof sidebarView) =>
`flex-1 px-3 py-2 text-sm font-medium transition-colors ${
sidebarView === view
? "text-[var(--accent)] border-b-2 border-[var(--accent)]"
: "text-[var(--text-secondary)] hover:text-[var(--text-primary)]"
}`;
return (
<div className="flex flex-col h-full w-[25%] min-w-56 max-w-80 bg-[var(--bg-secondary)] border border-[var(--border-color)] rounded-lg overflow-hidden">
{/* Nav tabs */}
<div className="flex border-b border-[var(--border-color)]">
<button onClick={() => setSidebarView("projects")} className={tabCls("projects")}>
Projects
</button>
<button onClick={() => setSidebarView("mcp")} className={tabCls("mcp")}>
MCP <span className="text-[0.6rem] px-1 py-0.5 rounded bg-yellow-500/20 text-yellow-400 ml-0.5">Beta</span>
</button>
<button onClick={() => setSidebarView("settings")} className={tabCls("settings")}>
Settings
</button>
</div>
{/* Content */}
<div className="flex-1 overflow-y-auto overflow-x-hidden p-1 min-w-0">
{sidebarView === "projects" ? (
<ProjectList />
) : sidebarView === "mcp" ? (
<McpPanel />
) : (
<SettingsPanel />
)}
</div>
</div>
);

View File

@@ -0,0 +1,79 @@
import { useState, useEffect } from "react";
import { useMcpServers } from "../../hooks/useMcpServers";
import McpServerCard from "./McpServerCard";
export default function McpPanel() {
const { mcpServers, refresh, add, update, remove } = useMcpServers();
const [newName, setNewName] = useState("");
const [error, setError] = useState<string | null>(null);
useEffect(() => {
refresh();
}, []); // eslint-disable-line react-hooks/exhaustive-deps
const handleAdd = async () => {
const name = newName.trim();
if (!name) return;
setError(null);
try {
await add(name);
setNewName("");
} catch (e) {
setError(String(e));
}
};
return (
<div className="space-y-3 p-2">
<div>
<h2 className="text-sm font-semibold text-[var(--text-primary)]">
MCP Servers{" "}
<span className="text-xs px-1.5 py-0.5 rounded bg-yellow-500/20 text-yellow-400">Beta</span>
</h2>
<p className="text-xs text-[var(--text-secondary)] mt-0.5">
Define MCP servers globally, then enable them per-project.
</p>
</div>
{/* Add new server */}
<div className="flex gap-1">
<input
value={newName}
onChange={(e) => setNewName(e.target.value)}
onKeyDown={(e) => { if (e.key === "Enter") handleAdd(); }}
placeholder="Server name..."
className="flex-1 px-2 py-1 bg-[var(--bg-primary)] border border-[var(--border-color)] rounded text-xs text-[var(--text-primary)] focus:outline-none focus:border-[var(--accent)]"
/>
<button
onClick={handleAdd}
disabled={!newName.trim()}
className="px-3 py-1 text-xs bg-[var(--accent)] text-white rounded hover:bg-[var(--accent-hover)] disabled:opacity-50 transition-colors"
>
Add
</button>
</div>
{error && (
<div className="text-xs text-[var(--error)]">{error}</div>
)}
{/* Server list */}
<div className="space-y-2">
{mcpServers.length === 0 ? (
<p className="text-xs text-[var(--text-secondary)] italic">
No MCP servers configured.
</p>
) : (
mcpServers.map((server) => (
<McpServerCard
key={server.id}
server={server}
onUpdate={update}
onRemove={remove}
/>
))
)}
</div>
</div>
);
}
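McpPanel and McpServerCard both consume the `McpServer` type from `src/lib/types`, which this diff does not touch. Inferred purely from the fields the card edits, the shape is roughly as follows — a sketch only; the real definition mirrors the Rust serde model and may differ:

```typescript
// Inferred sketch of the McpServer shape used above; the actual type in
// src/lib/types.ts is not shown in this diff.
type McpTransportType = "stdio" | "http";

interface McpServer {
  id: string;
  name: string;
  transport_type: McpTransportType;
  // Stdio transport
  command: string | null;
  args: string[];
  env: Record<string, string>;
  // HTTP transport (manual mode only; Docker mode auto-generates the URL)
  url: string | null;
  headers: Record<string, string>;
  // Docker mode: a non-null docker_image switches the card to "Docker"
  docker_image: string | null;
  container_port: number | null;
}

const example: McpServer = {
  id: "fs-1",
  name: "filesystem",
  transport_type: "stdio",
  command: "npx",
  args: ["-y", "@modelcontextprotocol/server-filesystem", "/workspace"],
  env: {},
  url: null,
  headers: {},
  docker_image: null, // null → "Manual" badge in McpServerCard
  container_port: null,
};
```

Note that `isDocker` in the card is derived (`!!docker_image`) rather than stored, which is why clearing the image field flips the server back to manual mode.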

View File

@@ -0,0 +1,323 @@
import { useState, useEffect } from "react";
import type { McpServer, McpTransportType } from "../../lib/types";
interface Props {
server: McpServer;
onUpdate: (server: McpServer) => Promise<McpServer | void>;
onRemove: (id: string) => Promise<void>;
}
export default function McpServerCard({ server, onUpdate, onRemove }: Props) {
const [expanded, setExpanded] = useState(false);
const [name, setName] = useState(server.name);
const [transportType, setTransportType] = useState<McpTransportType>(server.transport_type);
const [command, setCommand] = useState(server.command ?? "");
const [args, setArgs] = useState(server.args.join(" "));
const [envPairs, setEnvPairs] = useState<[string, string][]>(Object.entries(server.env));
const [url, setUrl] = useState(server.url ?? "");
const [headerPairs, setHeaderPairs] = useState<[string, string][]>(Object.entries(server.headers));
const [dockerImage, setDockerImage] = useState(server.docker_image ?? "");
const [containerPort, setContainerPort] = useState(server.container_port?.toString() ?? "3000");
useEffect(() => {
setName(server.name);
setTransportType(server.transport_type);
setCommand(server.command ?? "");
setArgs(server.args.join(" "));
setEnvPairs(Object.entries(server.env));
setUrl(server.url ?? "");
setHeaderPairs(Object.entries(server.headers));
setDockerImage(server.docker_image ?? "");
setContainerPort(server.container_port?.toString() ?? "3000");
}, [server]);
const saveServer = async (patch: Partial<McpServer>) => {
try {
await onUpdate({ ...server, ...patch });
} catch (err) {
console.error("Failed to update MCP server:", err);
}
};
const handleNameBlur = () => {
if (name !== server.name) saveServer({ name });
};
const handleTransportChange = (t: McpTransportType) => {
setTransportType(t);
saveServer({ transport_type: t });
};
const handleCommandBlur = () => {
saveServer({ command: command || null });
};
const handleArgsBlur = () => {
const parsed = args.trim() ? args.trim().split(/\s+/) : [];
saveServer({ args: parsed });
};
const handleUrlBlur = () => {
saveServer({ url: url || null });
};
const handleDockerImageBlur = () => {
saveServer({ docker_image: dockerImage || null });
};
const handleContainerPortBlur = () => {
const port = parseInt(containerPort, 10);
saveServer({ container_port: isNaN(port) ? null : port });
};
const saveEnv = (pairs: [string, string][]) => {
const env: Record<string, string> = {};
for (const [k, v] of pairs) {
if (k.trim()) env[k.trim()] = v;
}
saveServer({ env });
};
const saveHeaders = (pairs: [string, string][]) => {
const headers: Record<string, string> = {};
for (const [k, v] of pairs) {
if (k.trim()) headers[k.trim()] = v;
}
saveServer({ headers });
};
const inputCls = "w-full px-2 py-1 bg-[var(--bg-primary)] border border-[var(--border-color)] rounded text-xs text-[var(--text-primary)] focus:outline-none focus:border-[var(--accent)]";
const isDocker = !!dockerImage;
const transportBadge = {
stdio: "Stdio",
http: "HTTP",
}[transportType];
const modeBadge = isDocker ? "Docker" : "Manual";
return (
<div className="border border-[var(--border-color)] rounded bg-[var(--bg-primary)]">
{/* Header */}
<div className="flex items-center gap-2 px-3 py-2">
<button
onClick={() => setExpanded(!expanded)}
className="flex-1 flex items-center gap-2 text-left min-w-0"
>
<span className="text-xs text-[var(--text-secondary)]">{expanded ? "\u25BC" : "\u25B6"}</span>
<span className="text-sm font-medium truncate">{server.name}</span>
<span className="text-xs px-1.5 py-0.5 rounded bg-[var(--bg-secondary)] text-[var(--text-secondary)]">
{transportBadge}
</span>
<span className={`text-xs px-1.5 py-0.5 rounded ${isDocker ? "bg-blue-500/20 text-blue-400" : "bg-[var(--bg-secondary)] text-[var(--text-secondary)]"}`}>
{modeBadge}
</span>
</button>
<button
onClick={() => { if (confirm(`Remove MCP server "${server.name}"?`)) onRemove(server.id); }}
className="text-xs px-2 py-0.5 text-[var(--error)] hover:bg-[var(--bg-secondary)] rounded transition-colors"
>
Remove
</button>
</div>
{/* Expanded config */}
{expanded && (
<div className="px-3 pb-3 space-y-2 border-t border-[var(--border-color)] pt-2">
{/* Name */}
<div>
<label className="block text-xs text-[var(--text-secondary)] mb-0.5">Name</label>
<input
value={name}
onChange={(e) => setName(e.target.value)}
onBlur={handleNameBlur}
className={inputCls}
/>
</div>
{/* Docker Image (primary field — determines Docker vs Manual mode) */}
<div>
<label className="block text-xs text-[var(--text-secondary)] mb-0.5">Docker Image</label>
<input
value={dockerImage}
onChange={(e) => setDockerImage(e.target.value)}
onBlur={handleDockerImageBlur}
placeholder="e.g. mcp/filesystem:latest (leave empty for manual mode)"
className={inputCls}
/>
<p className="text-xs text-[var(--text-secondary)] mt-0.5 opacity-60">
Set a Docker image to run this MCP server as a container. Leave empty for manual mode.
</p>
</div>
{/* Transport type */}
<div>
<label className="block text-xs text-[var(--text-secondary)] mb-0.5">Transport</label>
<div className="flex items-center gap-1">
{(["stdio", "http"] as McpTransportType[]).map((t) => (
<button
key={t}
onClick={() => handleTransportChange(t)}
className={`px-2 py-0.5 text-xs rounded transition-colors ${
transportType === t
? "bg-[var(--accent)] text-white"
: "text-[var(--text-secondary)] hover:text-[var(--text-primary)] hover:bg-[var(--bg-secondary)]"
}`}
>
{t === "stdio" ? "Stdio" : "HTTP"}
</button>
))}
</div>
</div>
{/* Container Port (HTTP+Docker only) */}
{transportType === "http" && isDocker && (
<div>
<label className="block text-xs text-[var(--text-secondary)] mb-0.5">Container Port</label>
<input
value={containerPort}
onChange={(e) => setContainerPort(e.target.value)}
onBlur={handleContainerPortBlur}
placeholder="3000"
className={inputCls}
/>
<p className="text-xs text-[var(--text-secondary)] mt-0.5 opacity-60">
Port inside the MCP container (default: 3000)
</p>
</div>
)}
{/* Stdio fields */}
{transportType === "stdio" && (
<>
<div>
<label className="block text-xs text-[var(--text-secondary)] mb-0.5">Command</label>
<input
value={command}
onChange={(e) => setCommand(e.target.value)}
onBlur={handleCommandBlur}
placeholder={isDocker ? "Command inside container" : "npx"}
className={inputCls}
/>
</div>
<div>
<label className="block text-xs text-[var(--text-secondary)] mb-0.5">Arguments (space-separated)</label>
<input
value={args}
onChange={(e) => setArgs(e.target.value)}
onBlur={handleArgsBlur}
placeholder="-y @modelcontextprotocol/server-filesystem /path"
className={inputCls}
/>
</div>
<KeyValueEditor
label="Environment Variables"
pairs={envPairs}
onChange={(pairs) => { setEnvPairs(pairs); }}
onSave={saveEnv}
/>
</>
)}
{/* HTTP fields (only for manual mode — Docker mode auto-generates URL) */}
{transportType === "http" && !isDocker && (
<>
<div>
<label className="block text-xs text-[var(--text-secondary)] mb-0.5">URL</label>
<input
value={url}
onChange={(e) => setUrl(e.target.value)}
onBlur={handleUrlBlur}
placeholder="http://localhost:3000/mcp"
className={inputCls}
/>
</div>
<KeyValueEditor
label="Headers"
pairs={headerPairs}
onChange={(pairs) => { setHeaderPairs(pairs); }}
onSave={saveHeaders}
/>
</>
)}
{/* Environment variables for HTTP+Docker */}
{transportType === "http" && isDocker && (
<KeyValueEditor
label="Environment Variables"
pairs={envPairs}
onChange={(pairs) => { setEnvPairs(pairs); }}
onSave={saveEnv}
/>
)}
</div>
)}
</div>
);
}
function KeyValueEditor({
label,
pairs,
onChange,
onSave,
}: {
label: string;
pairs: [string, string][];
onChange: (pairs: [string, string][]) => void;
onSave: (pairs: [string, string][]) => void;
}) {
const inputCls = "flex-1 min-w-0 px-2 py-1 bg-[var(--bg-primary)] border border-[var(--border-color)] rounded text-xs text-[var(--text-primary)] focus:outline-none focus:border-[var(--accent)]";
return (
<div>
<label className="block text-xs text-[var(--text-secondary)] mb-0.5">{label}</label>
{pairs.map(([key, value], i) => (
<div key={i} className="flex gap-1 items-center mb-1">
<input
value={key}
onChange={(e) => {
const updated = [...pairs] as [string, string][];
updated[i] = [e.target.value, value];
onChange(updated);
}}
onBlur={() => onSave(pairs)}
placeholder="KEY"
className={inputCls}
/>
<span className="text-xs text-[var(--text-secondary)]">=</span>
<input
value={value}
onChange={(e) => {
const updated = [...pairs] as [string, string][];
updated[i] = [key, e.target.value];
onChange(updated);
}}
onBlur={() => onSave(pairs)}
placeholder="value"
className={inputCls}
/>
<button
onClick={() => {
const updated = pairs.filter((_, j) => j !== i);
onChange(updated);
onSave(updated);
}}
className="flex-shrink-0 px-1.5 py-1 text-xs text-[var(--error)] hover:bg-[var(--bg-secondary)] rounded transition-colors"
>
x
</button>
</div>
))}
<button
onClick={() => {
onChange([...pairs, ["", ""]]);
}}
className="text-xs text-[var(--accent)] hover:text-[var(--accent-hover)] transition-colors"
>
+ Add
</button>
</div>
);
}
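The blur handlers in McpServerCard normalize free-text input before saving. Restated as standalone helpers (names are illustrative, not from the app), the two rules worth noting are: arguments split on runs of whitespace with no shell-style quoting, and key-value pairs with blank keys are silently dropped while keys are trimmed:

```typescript
// Standalone restatement of handleArgsBlur and saveEnv/saveHeaders above.
// Helper names are illustrative only.
function parseArgs(input: string): string[] {
  // Split on whitespace runs; there is no shell-style quoting, so
  // --path 'a b' becomes three tokens, not two.
  return input.trim() ? input.trim().split(/\s+/) : [];
}

function pairsToRecord(pairs: [string, string][]): Record<string, string> {
  const out: Record<string, string> = {};
  for (const [k, v] of pairs) {
    if (k.trim()) out[k.trim()] = v; // blank keys dropped, keys trimmed
  }
  return out;
}
```

Duplicate keys resolve last-writer-wins, since later pairs overwrite earlier entries in the record.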

View File

@@ -0,0 +1,109 @@
import { useEffect, useRef, useCallback } from "react";
interface Props {
projectName: string;
operation: "starting" | "stopping" | "resetting";
progressMsg: string | null;
error: string | null;
completed: boolean;
onForceStop: () => void;
onClose: () => void;
}
const operationLabels: Record<string, string> = {
starting: "Starting",
stopping: "Stopping",
resetting: "Resetting",
};
export default function ContainerProgressModal({
projectName,
operation,
progressMsg,
error,
completed,
onForceStop,
onClose,
}: Props) {
const overlayRef = useRef<HTMLDivElement>(null);
// Auto-close on success after 800ms
useEffect(() => {
if (completed && !error) {
const timer = setTimeout(onClose, 800);
return () => clearTimeout(timer);
}
}, [completed, error, onClose]);
// Escape to close (only when completed or error)
useEffect(() => {
const handleKeyDown = (e: KeyboardEvent) => {
if (e.key === "Escape" && (completed || error)) onClose();
};
document.addEventListener("keydown", handleKeyDown);
return () => document.removeEventListener("keydown", handleKeyDown);
}, [completed, error, onClose]);
const handleOverlayClick = useCallback(
(e: React.MouseEvent<HTMLDivElement>) => {
if (e.target === overlayRef.current && (completed || error)) onClose();
},
[completed, error, onClose],
);
const inProgress = !completed && !error;
return (
<div
ref={overlayRef}
onClick={handleOverlayClick}
className="fixed inset-0 bg-black/50 flex items-center justify-center z-50"
>
<div className="bg-[var(--bg-secondary)] border border-[var(--border-color)] rounded-lg p-6 w-80 shadow-xl text-center">
<h3 className="text-sm font-semibold mb-4">
{operationLabels[operation]} &ldquo;{projectName}&rdquo;
</h3>
{/* Spinner / checkmark / error icon */}
<div className="flex justify-center mb-3">
{error ? (
<span className="text-3xl text-[var(--error)]">✕</span>
) : completed ? (
<span className="text-3xl text-[var(--success)]">✓</span>
) : (
<div className="w-8 h-8 border-2 border-[var(--accent)] border-t-transparent rounded-full animate-spin" />
)}
</div>
{/* Progress message */}
<p className="text-xs text-[var(--text-secondary)] min-h-[1.25rem] mb-4">
{error
? <span className="text-[var(--error)]">{error}</span>
: completed
? "Done!"
: progressMsg ?? `${operationLabels[operation]}...`}
</p>
{/* Buttons */}
<div className="flex justify-center gap-2">
{inProgress && (
<button
onClick={(e) => { e.stopPropagation(); onForceStop(); }}
className="px-3 py-1.5 text-xs text-[var(--error)] border border-[var(--error)]/30 rounded hover:bg-[var(--error)]/10 transition-colors"
>
Force Stop
</button>
)}
{(completed || error) && (
<button
onClick={(e) => { e.stopPropagation(); onClose(); }}
className="px-3 py-1.5 text-xs text-[var(--text-secondary)] hover:text-[var(--text-primary)] border border-[var(--border-color)] rounded transition-colors"
>
Close
</button>
)}
</div>
</div>
</div>
);
}

View File

@@ -31,6 +31,16 @@ vi.mock("../../hooks/useTerminal", () => ({
}),
}));
vi.mock("../../hooks/useMcpServers", () => ({
useMcpServers: () => ({
mcpServers: [],
refresh: vi.fn(),
add: vi.fn(),
update: vi.fn(),
remove: vi.fn(),
}),
}));
let mockSelectedProjectId: string | null = null;
vi.mock("../../store/appState", () => ({
useAppState: vi.fn((selector) =>
@@ -55,7 +65,9 @@ const mockProject: Project = {
git_user_name: null,
git_user_email: null,
custom_env_vars: [],
port_mappings: [],
claude_instructions: null,
enabled_mcp_servers: [],
created_at: "2026-01-01T00:00:00Z",
updated_at: "2026-01-01T00:00:00Z",
};

View File

@@ -1,12 +1,15 @@
import { useState, useEffect } from "react";
import { open } from "@tauri-apps/plugin-dialog";
import { listen } from "@tauri-apps/api/event";
import type { Project, ProjectPath, AuthMode, BedrockConfig, BedrockAuthMethod } from "../../lib/types";
import { useProjects } from "../../hooks/useProjects";
import { useMcpServers } from "../../hooks/useMcpServers";
import { useTerminal } from "../../hooks/useTerminal";
import { useAppState } from "../../store/appState";
import EnvVarsModal from "./EnvVarsModal";
import PortMappingsModal from "./PortMappingsModal";
import ClaudeInstructionsModal from "./ClaudeInstructionsModal";
import ContainerProgressModal from "./ContainerProgressModal";
interface Props {
project: Project;
@@ -16,6 +19,7 @@ export default function ProjectCard({ project }: Props) {
const selectedProjectId = useAppState(s => s.selectedProjectId);
const setSelectedProject = useAppState(s => s.setSelectedProject);
const { start, stop, rebuild, remove, update } = useProjects();
const { mcpServers } = useMcpServers();
const { open: openTerminal } = useTerminal();
const [loading, setLoading] = useState(false);
const [error, setError] = useState<string | null>(null);
@@ -23,6 +27,12 @@ export default function ProjectCard({ project }: Props) {
const [showEnvVarsModal, setShowEnvVarsModal] = useState(false);
const [showPortMappingsModal, setShowPortMappingsModal] = useState(false);
const [showClaudeInstructionsModal, setShowClaudeInstructionsModal] = useState(false);
const [progressMsg, setProgressMsg] = useState<string | null>(null);
const [activeOperation, setActiveOperation] = useState<"starting" | "stopping" | "resetting" | null>(null);
const [operationCompleted, setOperationCompleted] = useState(false);
const [showRemoveConfirm, setShowRemoveConfirm] = useState(false);
const [isEditingName, setIsEditingName] = useState(false);
const [editName, setEditName] = useState(project.name);
const isSelected = selectedProjectId === project.id;
const isStopped = project.status === "stopped" || project.status === "error";
@@ -47,6 +57,7 @@ export default function ProjectCard({ project }: Props) {
// Sync local state when project prop changes (e.g., after save or external update)
useEffect(() => {
setEditName(project.name);
setPaths(project.paths ?? []);
setSshKeyPath(project.ssh_key_path ?? "");
setGitName(project.git_user_name ?? "");
@@ -64,9 +75,38 @@ export default function ProjectCard({ project }: Props) {
setBedrockModelId(project.bedrock_config?.model_id ?? "");
}, [project]);
// Listen for container progress events
useEffect(() => {
const unlisten = listen<{ project_id: string; message: string }>(
"container-progress",
(event) => {
if (event.payload.project_id === project.id) {
setProgressMsg(event.payload.message);
}
}
);
return () => { unlisten.then((f) => f()); };
}, [project.id]);
// Mark operation completed when status settles
useEffect(() => {
if (project.status === "running" || project.status === "stopped" || project.status === "error") {
if (activeOperation) {
setOperationCompleted(true);
}
// Clear progress if no modal is managing it
if (!activeOperation) {
setProgressMsg(null);
}
}
}, [project.status, activeOperation]);
const handleStart = async () => {
setLoading(true);
setError(null);
setProgressMsg(null);
setOperationCompleted(false);
setActiveOperation("starting");
try {
await start(project.id);
} catch (e) {
@@ -79,6 +119,9 @@ export default function ProjectCard({ project }: Props) {
const handleStop = async () => {
setLoading(true);
setError(null);
setProgressMsg(null);
setOperationCompleted(false);
setActiveOperation("stopping");
try {
await stop(project.id);
} catch (e) {
@@ -96,6 +139,21 @@ export default function ProjectCard({ project }: Props) {
}
};
const handleForceStop = async () => {
try {
await stop(project.id);
} catch (e) {
setError(String(e));
}
};
const closeModal = () => {
setActiveOperation(null);
setOperationCompleted(false);
setProgressMsg(null);
setError(null);
};
const defaultBedrockConfig: BedrockConfig = {
auth_method: "static_credentials",
aws_region: "us-east-1",
@@ -255,7 +313,40 @@ export default function ProjectCard({ project }: Props) {
>
<div className="flex items-center gap-2">
<span className={`w-2 h-2 rounded-full flex-shrink-0 ${statusColor}`} />
{isEditingName ? (
<input
autoFocus
value={editName}
onChange={(e) => setEditName(e.target.value)}
onBlur={async () => {
setIsEditingName(false);
const trimmed = editName.trim();
if (trimmed && trimmed !== project.name) {
try {
await update({ ...project, name: trimmed });
} catch (err) {
console.error("Failed to rename project:", err);
setEditName(project.name);
}
} else {
setEditName(project.name);
}
}}
onKeyDown={(e) => {
if (e.key === "Enter") (e.target as HTMLInputElement).blur();
if (e.key === "Escape") { setEditName(project.name); setIsEditingName(false); }
}}
onClick={(e) => e.stopPropagation()}
className="text-sm font-medium flex-1 min-w-0 px-1 py-0 bg-[var(--bg-primary)] border border-[var(--accent)] rounded text-[var(--text-primary)] focus:outline-none"
/>
) : (
<span
className="text-sm font-medium truncate flex-1 cursor-text"
onDoubleClick={(e) => { e.stopPropagation(); setIsEditingName(true); }}
>
{project.name}
</span>
)}
</div>
<div className="mt-0.5 ml-4 space-y-0.5">
{project.paths.map((pp, i) => (
@@ -302,6 +393,10 @@ export default function ProjectCard({ project }: Props) {
<ActionButton
onClick={async () => {
setLoading(true);
setError(null);
setProgressMsg(null);
setOperationCompleted(false);
setActiveOperation("resetting");
try { await rebuild(project.id); } catch (e) { setError(String(e)); }
setLoading(false);
}}
@@ -315,25 +410,46 @@ export default function ProjectCard({ project }: Props) {
<ActionButton onClick={handleOpenTerminal} disabled={loading} label="Terminal" accent />
</>
) : (
<>
<span className="text-xs text-[var(--text-secondary)]">
{progressMsg ?? `${project.status}...`}
</span>
<ActionButton onClick={handleStop} disabled={loading} label="Force Stop" danger />
</>
)}
<ActionButton
onClick={(e) => { e?.stopPropagation?.(); setShowConfig(!showConfig); }}
disabled={false}
label={showConfig ? "Hide" : "Config"}
/>
{showRemoveConfirm ? (
<span className="inline-flex items-center gap-1 text-xs">
<span className="text-[var(--text-secondary)]">Remove?</span>
<button
onClick={async (e) => {
e.stopPropagation();
setShowRemoveConfirm(false);
await remove(project.id);
}}
className="px-1.5 py-0.5 rounded text-white bg-[var(--error)] hover:opacity-80 transition-colors"
>
Yes
</button>
<button
onClick={(e) => { e.stopPropagation(); setShowRemoveConfirm(false); }}
className="px-1.5 py-0.5 rounded text-[var(--text-secondary)] hover:text-[var(--text-primary)] hover:bg-[var(--bg-primary)] transition-colors"
>
No
</button>
</span>
) : (
<ActionButton
onClick={() => setShowRemoveConfirm(true)}
disabled={loading}
label="Remove"
danger
/>
)}
</div>
{/* Config panel */}
@@ -554,6 +670,49 @@ export default function ProjectCard({ project }: Props) {
</button>
</div>
{/* MCP Servers */}
{mcpServers.length > 0 && (
<div>
<label className="block text-xs text-[var(--text-secondary)] mb-1">MCP Servers</label>
<div className="space-y-1">
{mcpServers.map((server) => {
const enabled = project.enabled_mcp_servers.includes(server.id);
const isDocker = !!server.docker_image;
return (
<label key={server.id} className="flex items-center gap-2 cursor-pointer">
<input
type="checkbox"
checked={enabled}
disabled={!isStopped}
onChange={async () => {
const updated = enabled
? project.enabled_mcp_servers.filter((id) => id !== server.id)
: [...project.enabled_mcp_servers, server.id];
try {
await update({ ...project, enabled_mcp_servers: updated });
} catch (err) {
console.error("Failed to update MCP servers:", err);
}
}}
className="rounded border-[var(--border-color)] disabled:opacity-50"
/>
<span className="text-xs text-[var(--text-primary)]">{server.name}</span>
<span className="text-xs text-[var(--text-secondary)]">({server.transport_type})</span>
<span className={`text-xs px-1 py-0.5 rounded ${isDocker ? "bg-blue-500/20 text-blue-400" : "bg-[var(--bg-secondary)] text-[var(--text-secondary)]"}`}>
{isDocker ? "Docker" : "Manual"}
</span>
</label>
);
})}
</div>
{mcpServers.some((s) => s.docker_image && s.transport_type === "stdio" && project.enabled_mcp_servers.includes(s.id)) && (
<p className="text-xs text-[var(--text-secondary)] mt-1 opacity-70">
Docker access will be auto-enabled for stdio+Docker MCP servers.
</p>
)}
</div>
)}
{/* Bedrock config */}
{project.auth_mode === "bedrock" && (() => {
const bc = project.bedrock_config ?? defaultBedrockConfig;
@@ -722,6 +881,18 @@ export default function ProjectCard({ project }: Props) {
onClose={() => setShowClaudeInstructionsModal(false)}
/>
)}
{activeOperation && (
<ContainerProgressModal
projectName={project.name}
operation={activeOperation}
progressMsg={progressMsg}
error={error}
completed={operationCompleted}
onForceStop={handleForceStop}
onClose={closeModal}
/>
)}
</div>
);
}
@@ -753,3 +924,4 @@ function ActionButton({
</button>
);
}

View File

@@ -0,0 +1,101 @@
import { useState, useEffect, useCallback } from "react";
import { useSettings } from "../../hooks/useSettings";
interface AudioDevice {
deviceId: string;
label: string;
}
export default function MicrophoneSettings() {
const { appSettings, saveSettings } = useSettings();
const [devices, setDevices] = useState<AudioDevice[]>([]);
const [selected, setSelected] = useState(appSettings?.default_microphone ?? "");
const [loading, setLoading] = useState(false);
const [permissionNeeded, setPermissionNeeded] = useState(false);
// Sync local state when appSettings change
useEffect(() => {
setSelected(appSettings?.default_microphone ?? "");
}, [appSettings?.default_microphone]);
const enumerateDevices = useCallback(async () => {
setLoading(true);
setPermissionNeeded(false);
try {
// Request mic permission first so device labels are available
const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
stream.getTracks().forEach((t) => t.stop());
const allDevices = await navigator.mediaDevices.enumerateDevices();
const mics = allDevices
.filter((d) => d.kind === "audioinput")
.map((d) => ({
deviceId: d.deviceId,
label: d.label || `Microphone (${d.deviceId.slice(0, 8)}...)`,
}));
setDevices(mics);
} catch {
setPermissionNeeded(true);
} finally {
setLoading(false);
}
}, []);
// Enumerate devices on mount
useEffect(() => {
enumerateDevices();
}, [enumerateDevices]);
const handleChange = async (deviceId: string) => {
setSelected(deviceId);
if (appSettings) {
await saveSettings({ ...appSettings, default_microphone: deviceId || null });
}
};
return (
<div>
<label className="block text-sm font-medium mb-1">Microphone</label>
<p className="text-xs text-[var(--text-secondary)] mb-1.5">
Audio input device for Claude Code voice mode (/voice)
</p>
{permissionNeeded ? (
<div className="flex items-center gap-2">
<span className="text-xs text-[var(--text-secondary)]">
Microphone permission required
</span>
<button
onClick={enumerateDevices}
className="text-xs px-2 py-0.5 text-[var(--accent)] hover:text-[var(--accent-hover)] hover:bg-[var(--bg-primary)] rounded transition-colors"
>
Grant Access
</button>
</div>
) : (
<div className="flex items-center gap-2">
<select
value={selected}
onChange={(e) => handleChange(e.target.value)}
disabled={loading}
className="flex-1 px-2 py-1 text-sm bg-[var(--bg-primary)] border border-[var(--border-color)] rounded focus:outline-none focus:border-[var(--accent)]"
>
<option value="">System Default</option>
{devices.map((d) => (
<option key={d.deviceId} value={d.deviceId}>
{d.label}
</option>
))}
</select>
<button
onClick={enumerateDevices}
disabled={loading}
title="Refresh microphone list"
className="text-xs px-2 py-1 text-[var(--text-secondary)] hover:text-[var(--text-primary)] hover:bg-[var(--bg-primary)] rounded transition-colors disabled:opacity-50"
>
{loading ? "..." : "Refresh"}
</button>
</div>
)}
</div>
);
}


@@ -21,9 +21,11 @@ export default function TerminalView({ sessionId, active }: Props) {
const fitRef = useRef<FitAddon | null>(null);
const webglRef = useRef<WebglAddon | null>(null);
const detectorRef = useRef<UrlDetector | null>(null);
const { sendInput, pasteImage, resize, onOutput, onExit } = useTerminal();
const [detectedUrl, setDetectedUrl] = useState<string | null>(null);
const [imagePasteMsg, setImagePasteMsg] = useState<string | null>(null);
const [isAtBottom, setIsAtBottom] = useState(true);
useEffect(() => {
if (!containerRef.current) return;
@@ -80,11 +82,70 @@ export default function TerminalView({ sessionId, active }: Props) {
// Send initial size
resize(sessionId, term.cols, term.rows);
// Handle OSC 52 clipboard write sequences from programs inside the container.
// When a program (e.g. Claude Code) copies text via xclip/xsel/pbcopy, the
// container's shim emits an OSC 52 escape sequence which xterm.js routes here.
const osc52Disposable = term.parser.registerOscHandler(52, (data) => {
const idx = data.indexOf(";");
if (idx === -1) return false;
const payload = data.substring(idx + 1);
if (payload === "?") return false; // clipboard read request, not supported
try {
const decoded = atob(payload);
navigator.clipboard.writeText(decoded).catch((e) =>
console.error("OSC 52 clipboard write failed:", e),
);
} catch (e) {
console.error("OSC 52 decode failed:", e);
}
return true;
});
// Handle user input -> backend
const inputDisposable = term.onData((data) => {
sendInput(sessionId, data);
});
// Track scroll position to show "Jump to Current" button
const scrollDisposable = term.onScroll(() => {
const buf = term.buffer.active;
setIsAtBottom(buf.viewportY >= buf.baseY);
});
// Handle image paste: intercept paste events with image data,
// upload to the container, and inject the file path into terminal input.
const handlePaste = (e: ClipboardEvent) => {
const items = e.clipboardData?.items;
if (!items) return;
for (const item of Array.from(items)) {
if (item.type.startsWith("image/")) {
e.preventDefault();
e.stopPropagation();
const blob = item.getAsFile();
if (!blob) return;
blob.arrayBuffer().then(async (buf) => {
try {
setImagePasteMsg("Uploading image...");
const data = new Uint8Array(buf);
const filePath = await pasteImage(sessionId, data);
// Inject the file path into terminal stdin
sendInput(sessionId, filePath);
setImagePasteMsg(`Image saved to ${filePath}`);
} catch (err) {
console.error("Image paste failed:", err);
setImagePasteMsg("Image paste failed");
}
});
return; // Only handle the first image
}
}
};
containerRef.current.addEventListener("paste", handlePaste, { capture: true });
// Handle backend output -> terminal
let aborted = false;
@@ -128,7 +189,10 @@ export default function TerminalView({ sessionId, active }: Props) {
aborted = true;
detector.dispose();
detectorRef.current = null;
osc52Disposable.dispose();
inputDisposable.dispose();
scrollDisposable.dispose();
containerRef.current?.removeEventListener("paste", handlePaste, { capture: true });
outputPromise.then((fn) => fn?.());
exitPromise.then((fn) => fn?.());
if (resizeRafId !== null) cancelAnimationFrame(resizeRafId);
@@ -179,6 +243,13 @@ export default function TerminalView({ sessionId, active }: Props) {
return () => clearTimeout(timer);
}, [detectedUrl]);
// Auto-dismiss image paste message after 3 seconds
useEffect(() => {
if (!imagePasteMsg) return;
const timer = setTimeout(() => setImagePasteMsg(null), 3_000);
return () => clearTimeout(timer);
}, [imagePasteMsg]);
const handleOpenUrl = useCallback(() => {
if (detectedUrl) {
openUrl(detectedUrl).catch((e) =>
@@ -188,6 +259,11 @@ export default function TerminalView({ sessionId, active }: Props) {
}
}, [detectedUrl]);
const handleScrollToBottom = useCallback(() => {
termRef.current?.scrollToBottom();
setIsAtBottom(true);
}, []);
return (
<div
ref={terminalContainerRef}
@@ -200,6 +276,22 @@ export default function TerminalView({ sessionId, active }: Props) {
onDismiss={() => setDetectedUrl(null)}
/>
)}
{imagePasteMsg && (
<div
className="absolute top-2 left-1/2 -translate-x-1/2 z-50 px-3 py-1.5 rounded-md text-xs font-medium bg-[#1f2937] text-[#e6edf3] border border-[#30363d] shadow-lg"
onClick={() => setImagePasteMsg(null)}
>
{imagePasteMsg}
</div>
)}
{!isAtBottom && (
<button
onClick={handleScrollToBottom}
className="absolute bottom-4 right-4 z-50 px-3 py-1.5 rounded-md text-xs font-medium bg-[#1f2937] text-[#58a6ff] border border-[#30363d] shadow-lg hover:bg-[#2d3748] transition-colors cursor-pointer"
>
Jump to Current
</button>
)}
<div
ref={containerRef}
className="w-full h-full"


@@ -0,0 +1,59 @@
import { describe, it, expect, vi, beforeEach } from "vitest";
/**
* Tests the OSC 52 clipboard parsing logic used in TerminalView.
* Extracted here to validate the decode/write path independently.
*/
// Mirrors the handler registered in TerminalView.tsx
function handleOsc52(data: string): string | null {
const idx = data.indexOf(";");
if (idx === -1) return null;
const payload = data.substring(idx + 1);
if (payload === "?") return null;
try {
return atob(payload);
} catch {
return null;
}
}
describe("OSC 52 clipboard handler", () => {
it("decodes a valid clipboard write sequence", () => {
// "c;BASE64" where BASE64 encodes "https://example.com"
const encoded = btoa("https://example.com");
const result = handleOsc52(`c;${encoded}`);
expect(result).toBe("https://example.com");
});
it("decodes multi-line content", () => {
const text = "line1\nline2\nline3";
const encoded = btoa(text);
const result = handleOsc52(`c;${encoded}`);
expect(result).toBe(text);
});
it("handles primary selection target (p)", () => {
const encoded = btoa("selected text");
const result = handleOsc52(`p;${encoded}`);
expect(result).toBe("selected text");
});
it("returns null for clipboard read request (?)", () => {
expect(handleOsc52("c;?")).toBe(null);
});
it("returns null for missing semicolon", () => {
expect(handleOsc52("invalid")).toBe(null);
});
it("returns null for invalid base64", () => {
expect(handleOsc52("c;!!!not-base64!!!")).toBe(null);
});
it("handles empty payload after selection target", () => {
// btoa("") = ""
const result = handleOsc52("c;");
expect(result).toBe("");
});
});


@@ -1,4 +1,4 @@
import { useCallback, useRef } from "react";
import { useShallow } from "zustand/react/shallow";
import { listen } from "@tauri-apps/api/event";
import { useAppState } from "../store/appState";
@@ -59,6 +59,39 @@ export function useDocker() {
[setImageExists],
);
const pollingRef = useRef<ReturnType<typeof setInterval> | null>(null);
const startDockerPolling = useCallback(() => {
// Don't start if already polling
if (pollingRef.current) return () => {};
const interval = setInterval(async () => {
try {
const available = await commands.checkDocker();
if (available) {
clearInterval(interval);
pollingRef.current = null;
setDockerAvailable(true);
// Also check image once Docker is available
try {
const exists = await commands.checkImageExists();
setImageExists(exists);
} catch {
setImageExists(false);
}
}
} catch {
// Still not available, keep polling
}
}, 5000);
pollingRef.current = interval;
return () => {
clearInterval(interval);
pollingRef.current = null;
};
}, [setDockerAvailable, setImageExists]);
const pullImage = useCallback(
async (imageName: string, onProgress?: (msg: string) => void) => {
const unlisten = onProgress
@@ -84,5 +117,6 @@ export function useDocker() {
checkImage,
buildImage,
pullImage,
startDockerPolling,
};
}


@@ -0,0 +1,55 @@
import { useCallback } from "react";
import { useShallow } from "zustand/react/shallow";
import { useAppState } from "../store/appState";
import * as commands from "../lib/tauri-commands";
import type { McpServer } from "../lib/types";
export function useMcpServers() {
const {
mcpServers,
setMcpServers,
updateMcpServerInList,
removeMcpServerFromList,
} = useAppState(
useShallow(s => ({
mcpServers: s.mcpServers,
setMcpServers: s.setMcpServers,
updateMcpServerInList: s.updateMcpServerInList,
removeMcpServerFromList: s.removeMcpServerFromList,
}))
);
const refresh = useCallback(async () => {
const list = await commands.listMcpServers();
setMcpServers(list);
}, [setMcpServers]);
const add = useCallback(
async (name: string) => {
const server = await commands.addMcpServer(name);
const list = await commands.listMcpServers();
setMcpServers(list);
return server;
},
[setMcpServers],
);
const update = useCallback(
async (server: McpServer) => {
const updated = await commands.updateMcpServer(server);
updateMcpServerInList(updated);
return updated;
},
[updateMcpServerInList],
);
const remove = useCallback(
async (id: string) => {
await commands.removeMcpServer(id);
removeMcpServerFromList(id);
},
[removeMcpServerFromList],
);
return { mcpServers, refresh, add, update, remove };
}


@@ -50,31 +50,45 @@ export function useProjects() {
[removeProjectFromList],
);
const setOptimisticStatus = useCallback(
(id: string, status: "starting" | "stopping") => {
const { projects } = useAppState.getState();
const project = projects.find((p) => p.id === id);
if (project) {
updateProjectInList({ ...project, status });
}
},
[updateProjectInList],
);
const start = useCallback(
async (id: string) => {
setOptimisticStatus(id, "starting");
const updated = await commands.startProjectContainer(id);
updateProjectInList(updated);
return updated;
},
[updateProjectInList, setOptimisticStatus],
);
const stop = useCallback(
async (id: string) => {
setOptimisticStatus(id, "stopping");
await commands.stopProjectContainer(id);
const list = await commands.listProjects();
setProjects(list);
},
[setProjects, setOptimisticStatus],
);
const rebuild = useCallback(
async (id: string) => {
setOptimisticStatus(id, "starting");
const updated = await commands.rebuildProjectContainer(id);
updateProjectInList(updated);
return updated;
},
[updateProjectInList, setOptimisticStatus],
);
const update = useCallback(


@@ -49,6 +49,14 @@ export function useTerminal() {
[],
);
const pasteImage = useCallback(
async (sessionId: string, imageData: Uint8Array) => {
const bytes = Array.from(imageData);
return commands.pasteImageToTerminal(sessionId, bytes);
},
[],
);
const onOutput = useCallback(
(sessionId: string, callback: (data: Uint8Array) => void) => {
const eventName = `terminal-output-${sessionId}`;
@@ -76,6 +84,7 @@ export function useTerminal() {
open,
close,
sendInput,
pasteImage,
resize,
onOutput,
onExit,

app/src/hooks/useVoice.ts Normal file

@@ -0,0 +1,103 @@
import { useCallback, useRef, useState } from "react";
import * as commands from "../lib/tauri-commands";
type VoiceState = "inactive" | "starting" | "active" | "error";
export function useVoice(sessionId: string, deviceId?: string | null) {
const [state, setState] = useState<VoiceState>("inactive");
const [error, setError] = useState<string | null>(null);
const audioContextRef = useRef<AudioContext | null>(null);
const streamRef = useRef<MediaStream | null>(null);
const workletRef = useRef<AudioWorkletNode | null>(null);
const start = useCallback(async () => {
if (state === "active" || state === "starting") return;
setState("starting");
setError(null);
try {
// 1. Start the audio bridge in the container (creates FIFO writer)
await commands.startAudioBridge(sessionId);
// 2. Get microphone access (use specific device if configured)
const audioConstraints: MediaTrackConstraints = {
channelCount: 1,
echoCancellation: true,
noiseSuppression: true,
autoGainControl: true,
};
if (deviceId) {
audioConstraints.deviceId = { exact: deviceId };
}
const stream = await navigator.mediaDevices.getUserMedia({
audio: audioConstraints,
});
streamRef.current = stream;
// 3. Create AudioContext at 16kHz (browser handles resampling)
const audioContext = new AudioContext({ sampleRate: 16000 });
audioContextRef.current = audioContext;
// 4. Load AudioWorklet processor
await audioContext.audioWorklet.addModule("/audio-capture-processor.js");
// 5. Connect: mic → worklet → (silent) destination
const source = audioContext.createMediaStreamSource(stream);
const processor = new AudioWorkletNode(audioContext, "audio-capture-processor");
workletRef.current = processor;
// 6. Handle PCM chunks from the worklet
processor.port.onmessage = (event: MessageEvent<ArrayBuffer>) => {
const bytes = Array.from(new Uint8Array(event.data));
commands.sendAudioData(sessionId, bytes).catch(() => {
// Audio bridge may have been closed — ignore send errors
});
};
source.connect(processor);
processor.connect(audioContext.destination);
setState("active");
} catch (e) {
const msg = e instanceof Error ? e.message : String(e);
setError(msg);
setState("error");
// Clean up on failure
await commands.stopAudioBridge(sessionId).catch(() => {});
}
}, [sessionId, state, deviceId]);
const stop = useCallback(async () => {
// Tear down audio pipeline
workletRef.current?.disconnect();
workletRef.current = null;
if (audioContextRef.current) {
await audioContextRef.current.close().catch(() => {});
audioContextRef.current = null;
}
if (streamRef.current) {
streamRef.current.getTracks().forEach((t) => t.stop());
streamRef.current = null;
}
// Stop the container-side audio bridge
await commands.stopAudioBridge(sessionId).catch(() => {});
setState("inactive");
setError(null);
}, [sessionId]);
const toggle = useCallback(async () => {
if (state === "active") {
await stop();
} else {
await start();
}
}, [state, start, stop]);
return { state, error, start, stop, toggle };
}
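The hook loads `/audio-capture-processor.js`, which is not part of this diff. Inside such a worklet, the Float32 samples produced by the Web Audio graph have to be converted to 16-bit PCM before being posted to the host. A minimal sketch of that conversion (an assumption about the worklet's behavior, not code from this repo):

```typescript
// Converts Float32 samples (range [-1, 1]) into 16-bit little-endian PCM,
// the raw format a FIFO-backed fake microphone would expect. Positive
// samples scale by 0x7fff and negative ones by 0x8000 so both endpoints
// map onto the full signed 16-bit range without overflow.
function floatTo16BitPCM(samples: Float32Array): Uint8Array {
  const view = new DataView(new ArrayBuffer(samples.length * 2));
  for (let i = 0; i < samples.length; i++) {
    const s = Math.max(-1, Math.min(1, samples[i]));
    view.setInt16(i * 2, s < 0 ? s * 0x8000 : s * 0x7fff, true);
  }
  return new Uint8Array(view.buffer);
}
```

In the real worklet this buffer would be transferred via `port.postMessage`, which is where the `MessageEvent<ArrayBuffer>` handled in `useVoice` comes from.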


@@ -1,5 +1,5 @@
import { invoke } from "@tauri-apps/api/core";
import type { Project, ProjectPath, ContainerInfo, SiblingContainer, AppSettings, UpdateInfo, McpServer } from "./types";
// Docker
export const checkDocker = () => invoke<boolean>("check_docker");
@@ -47,6 +47,23 @@ export const terminalResize = (sessionId: string, cols: number, rows: number) =>
invoke<void>("terminal_resize", { sessionId, cols, rows });
export const closeTerminalSession = (sessionId: string) =>
invoke<void>("close_terminal_session", { sessionId });
export const pasteImageToTerminal = (sessionId: string, imageData: number[]) =>
invoke<string>("paste_image_to_terminal", { sessionId, imageData });
export const startAudioBridge = (sessionId: string) =>
invoke<void>("start_audio_bridge", { sessionId });
export const sendAudioData = (sessionId: string, data: number[]) =>
invoke<void>("send_audio_data", { sessionId, data });
export const stopAudioBridge = (sessionId: string) =>
invoke<void>("stop_audio_bridge", { sessionId });
// MCP Servers
export const listMcpServers = () => invoke<McpServer[]>("list_mcp_servers");
export const addMcpServer = (name: string) =>
invoke<McpServer>("add_mcp_server", { name });
export const updateMcpServer = (server: McpServer) =>
invoke<McpServer>("update_mcp_server", { server });
export const removeMcpServer = (serverId: string) =>
invoke<void>("remove_mcp_server", { serverId });
// Updates
export const getAppVersion = () => invoke<string>("get_app_version");


@@ -30,6 +30,7 @@ export interface Project {
custom_env_vars: EnvVar[];
port_mappings: PortMapping[];
claude_instructions: string | null;
enabled_mcp_servers: string[];
created_at: string;
updated_at: string;
}
@@ -99,6 +100,7 @@ export interface AppSettings {
auto_check_updates: boolean;
dismissed_update_version: string | null;
timezone: string | null;
default_microphone: string | null;
}
export interface UpdateInfo {
@@ -115,3 +117,20 @@ export interface ReleaseAsset {
browser_download_url: string;
size: number;
}
export type McpTransportType = "stdio" | "http";
export interface McpServer {
id: string;
name: string;
transport_type: McpTransportType;
command: string | null;
args: string[];
env: Record<string, string>;
url: string | null;
headers: Record<string, string>;
docker_image: string | null;
container_port: number | null;
created_at: string;
updated_at: string;
}
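A `stdio` server populates `command`/`args`/`env` while an `http` server populates `url`/`headers`; the remaining fields are null or empty for the other variant. Hypothetical example values (names, commands, and URLs invented for illustration; the interface is copied from `types.ts`):

```typescript
type McpTransportType = "stdio" | "http";

interface McpServer {
  id: string;
  name: string;
  transport_type: McpTransportType;
  command: string | null;
  args: string[];
  env: Record<string, string>;
  url: string | null;
  headers: Record<string, string>;
  docker_image: string | null;
  container_port: number | null;
  created_at: string;
  updated_at: string;
}

// stdio transport: launched as a subprocess inside the container.
const stdioExample: McpServer = {
  id: "1", name: "filesystem", transport_type: "stdio",
  command: "npx", args: ["-y", "@modelcontextprotocol/server-filesystem"],
  env: {}, url: null, headers: {},
  docker_image: null, container_port: null,
  created_at: "2026-03-05T00:00:00Z", updated_at: "2026-03-05T00:00:00Z",
};

// http transport: reached over the network, optionally with auth headers.
const httpExample: McpServer = {
  id: "2", name: "remote-api", transport_type: "http",
  command: null, args: [], env: {},
  url: "https://mcp.example.invalid/sse", headers: { Authorization: "Bearer ..." },
  docker_image: null, container_port: null,
  created_at: "2026-03-05T00:00:00Z", updated_at: "2026-03-05T00:00:00Z",
};
```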


@@ -1,5 +1,5 @@
import { create } from "zustand";
import type { Project, TerminalSession, AppSettings, UpdateInfo, McpServer } from "../lib/types";
interface AppState {
// Projects
@@ -17,9 +17,15 @@ interface AppState {
removeSession: (id: string) => void;
setActiveSession: (id: string | null) => void;
// MCP servers
mcpServers: McpServer[];
setMcpServers: (servers: McpServer[]) => void;
updateMcpServerInList: (server: McpServer) => void;
removeMcpServerFromList: (id: string) => void;
// UI state
sidebarView: "projects" | "mcp" | "settings";
setSidebarView: (view: "projects" | "mcp" | "settings") => void;
dockerAvailable: boolean | null;
setDockerAvailable: (available: boolean | null) => void;
imageExists: boolean | null;
@@ -75,6 +81,20 @@ export const useAppState = create<AppState>((set) => ({
}),
setActiveSession: (id) => set({ activeSessionId: id }),
// MCP servers
mcpServers: [],
setMcpServers: (servers) => set({ mcpServers: servers }),
updateMcpServerInList: (server) =>
set((state) => ({
mcpServers: state.mcpServers.map((s) =>
s.id === server.id ? server : s,
),
})),
removeMcpServerFromList: (id) =>
set((state) => ({
mcpServers: state.mcpServers.filter((s) => s.id !== id),
})),
// UI state
sidebarView: "projects",
setSidebarView: (view) => set({ sidebarView: view }),


@@ -1,5 +1,6 @@
FROM ubuntu:24.04
# Multi-arch: builds for linux/amd64 and linux/arm64 (Apple Silicon)
# Avoid interactive prompts during package install
ENV DEBIAN_FRONTEND=noninteractive
@@ -100,6 +101,24 @@ WORKDIR /workspace
# ── Switch back to root for entrypoint (handles UID/GID remapping) ─────────
USER root
# ── OSC 52 clipboard support ─────────────────────────────────────────────
# Provides xclip/xsel/pbcopy shims that emit OSC 52 escape sequences,
# allowing programs inside the container to copy to the host clipboard.
COPY osc52-clipboard /usr/local/bin/osc52-clipboard
RUN chmod +x /usr/local/bin/osc52-clipboard \
&& ln -sf /usr/local/bin/osc52-clipboard /usr/local/bin/xclip \
&& ln -sf /usr/local/bin/osc52-clipboard /usr/local/bin/xsel \
&& ln -sf /usr/local/bin/osc52-clipboard /usr/local/bin/pbcopy
# ── Audio capture shim (voice mode) ────────────────────────────────────────
# Provides fake rec/arecord that read PCM from a FIFO instead of a real mic,
# allowing Claude Code voice mode to work inside the container.
COPY audio-shim /usr/local/bin/audio-shim
RUN chmod +x /usr/local/bin/audio-shim \
&& ln -sf /usr/local/bin/audio-shim /usr/local/bin/rec \
&& ln -sf /usr/local/bin/audio-shim /usr/local/bin/arecord
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh
COPY triple-c-scheduler /usr/local/bin/triple-c-scheduler

container/audio-shim Normal file

@@ -0,0 +1,16 @@
#!/bin/bash
# Audio capture shim for Triple-C voice mode.
# Claude Code spawns `rec` or `arecord` to capture mic audio.
# Inside Docker there is no mic, so this shim reads PCM data from a
# FIFO that the Tauri host app writes to, and outputs it on stdout.
FIFO=/tmp/triple-c-audio-input
# Create the FIFO if it doesn't already exist
[ -p "$FIFO" ] || mkfifo "$FIFO" 2>/dev/null
# Clean exit on SIGTERM (Claude Code sends this when recording stops)
trap 'exit 0' TERM INT
# Stream PCM from the FIFO to stdout until we get a signal or EOF
cat "$FIFO"


@@ -73,6 +73,19 @@ su -s /bin/bash claude -c '
sort -u -o /home/claude/.ssh/known_hosts /home/claude/.ssh/known_hosts
'
# ── AWS config setup ──────────────────────────────────────────────────────────
# Host AWS dir is mounted read-only at /tmp/.host-aws.
# Copy to /home/claude/.aws so AWS CLI can write to sso/cache and cli/cache.
if [ -d /tmp/.host-aws ]; then
rm -rf /home/claude/.aws
cp -a /tmp/.host-aws /home/claude/.aws
chown -R claude:claude /home/claude/.aws
chmod 700 /home/claude/.aws
# Ensure writable cache directories exist
mkdir -p /home/claude/.aws/sso/cache /home/claude/.aws/cli/cache
chown -R claude:claude /home/claude/.aws/sso /home/claude/.aws/cli
fi
# ── Git credential helper (for HTTPS token) ─────────────────────────────────
if [ -n "$GIT_TOKEN" ]; then
CRED_FILE="/home/claude/.git-credentials"
@@ -103,6 +116,27 @@ if [ -n "$CLAUDE_INSTRUCTIONS" ]; then
unset CLAUDE_INSTRUCTIONS
fi
# ── MCP server configuration ────────────────────────────────────────────────
# Merge MCP server config into ~/.claude.json (preserves existing keys like
# OAuth tokens). Creates the file if it doesn't exist.
if [ -n "$MCP_SERVERS_JSON" ]; then
CLAUDE_JSON="/home/claude/.claude.json"
if [ -f "$CLAUDE_JSON" ]; then
# Merge: existing config + MCP config (MCP keys override on conflict)
MERGED=$(jq -s '.[0] * .[1]' "$CLAUDE_JSON" <(printf '%s' "$MCP_SERVERS_JSON") 2>/dev/null)
if [ -n "$MERGED" ]; then
printf '%s\n' "$MERGED" > "$CLAUDE_JSON"
else
echo "entrypoint: warning — failed to merge MCP config into $CLAUDE_JSON"
fi
else
printf '%s\n' "$MCP_SERVERS_JSON" > "$CLAUDE_JSON"
fi
chown claude:claude "$CLAUDE_JSON"
chmod 600 "$CLAUDE_JSON"
unset MCP_SERVERS_JSON
fi
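jq's `*` operator merges objects recursively: keys present only in the existing `~/.claude.json` (e.g. OAuth tokens) survive, and on conflict the MCP config's value wins unless both sides are objects, in which case the merge recurses. A TypeScript sketch of the same semantics (for illustration only; the container uses jq):

```typescript
// Recursive merge matching jq's `.[0] * .[1]`: when both values are plain
// objects the merge recurses, otherwise the right-hand value replaces the left.
type Json = null | boolean | number | string | Json[] | { [k: string]: Json };

function isObj(v: Json): v is { [k: string]: Json } {
  return typeof v === "object" && v !== null && !Array.isArray(v);
}

function jqMerge(a: Json, b: Json): Json {
  if (isObj(a) && isObj(b)) {
    const out: { [k: string]: Json } = { ...a };
    for (const [k, v] of Object.entries(b)) {
      out[k] = k in a ? jqMerge(a[k], v) : v;
    }
    return out;
  }
  return b;
}
```

This is why the entrypoint can inject `mcpServers` without clobbering unrelated top-level keys already stored in the file.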
# ── Docker socket permissions ────────────────────────────────────────────────
if [ -S /var/run/docker.sock ]; then
DOCKER_GID=$(stat -c '%g' /var/run/docker.sock)

container/osc52-clipboard Normal file

@@ -0,0 +1,26 @@
#!/bin/bash
# OSC 52 clipboard provider — sends clipboard data to the host system clipboard
# via OSC 52 terminal escape sequences. Installed as xclip/xsel/pbcopy so that
# programs inside the container (e.g. Claude Code) can copy to clipboard.
#
# Supports common invocations:
# echo "text" | xclip -selection clipboard
# echo "text" | xsel --clipboard --input
# echo "text" | pbcopy
#
# Paste/output requests exit silently (not supported via OSC 52).
# Detect paste/output mode — exit silently since we can't read the host clipboard
for arg in "$@"; do
case "$arg" in
-o|--output) exit 0 ;;
esac
done
# Read all input from stdin
data=$(cat)
[ -z "$data" ] && exit 0
# Base64 encode and write OSC 52 escape sequence to the controlling terminal
encoded=$(printf '%s' "$data" | base64 | tr -d '\n')
printf '\033]52;c;%s\a' "$encoded" > /dev/tty 2>/dev/null
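The sequence this shim writes (`ESC ] 52 ; c ; BASE64 BEL`) is exactly what the OSC 52 handler registered in TerminalView.tsx receives as `c;BASE64`. A round-trip sketch of the two ends, using the `btoa`/`atob` globals to stand in for `base64(1)` in the shim and the handler's `atob` (illustrative, not code from the repo):

```typescript
// Shim side: wrap base64-encoded text in an OSC 52 clipboard-write sequence.
function encodeOsc52(text: string): string {
  return `\x1b]52;c;${btoa(text)}\x07`;
}

// Host side: mirror of the TerminalView handler. Strip the OSC prefix and
// BEL terminator (xterm.js does this before invoking the handler), split on
// the first ";", reject read requests ("?"), and base64-decode the payload.
function decodeOsc52(sequence: string): string | null {
  const data = sequence.slice("\x1b]52;".length, -1); // e.g. "c;BASE64"
  const idx = data.indexOf(";");
  if (idx === -1) return null;
  const payload = data.substring(idx + 1);
  if (payload === "?") return null; // clipboard read request, not supported
  return atob(payload);
}
```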