Compare commits


11 Commits

652e451afe fix: prevent projects from getting stuck in starting/stopping state
All checks were successful
Build App / build-macos (push) Successful in 2m21s
Build App / build-linux (push) Successful in 5m46s
Sync Release to GitHub / sync-release (release) Successful in 1s
Build App / build-windows (push) Successful in 6m35s
Reconcile stale transient statuses on app startup, add Force Stop button
for transient states, and harden stop_project_container error handling so
Docker failures don't leave projects permanently stuck.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-04 07:12:49 -08:00
eb86aa95b7 fix: persist full container state across stop/start and config-change recreation
All checks were successful
Build App / build-macos (push) Successful in 2m25s
Build App / build-windows (push) Successful in 2m29s
Build App / build-linux (push) Successful in 4m34s
Sync Release to GitHub / sync-release (release) Successful in 1s
- Add home volume (triple-c-home-{id}) for /home/claude to persist
  .claude.json, .local, and other user-level state across restarts
- Add docker commit before recreation: when container_needs_recreation()
  triggers, snapshot the container to preserve system-level changes
  (apt/pip/npm installs), then create the new container from that snapshot
  (see the docker-CLI sketch after this list)
- On Reset/removal: delete snapshot image + both volumes for clean slate
- Remove commit from stop_project_container (stop/start preserves the
  writable layer naturally; no commit needed)
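
For reference, the snapshot-and-recreate flow expressed with the plain docker CLI. This is an illustrative sketch, not the app's code path: the app drives Docker through bollard, and the container name here is a hypothetical placeholder, while the image and volume names follow the conventions visible in the diffs below.

```bash
PROJECT_ID="<project-id>"                          # placeholder
CONTAINER="triple-c-${PROJECT_ID}"                 # hypothetical container name
SNAPSHOT="triple-c-snapshot-${PROJECT_ID}:latest"

# Snapshot the filesystem (docker commit pauses the container by default),
# preserving apt/pip/npm installs in a reusable image.
docker commit "$CONTAINER" "$SNAPSHOT"

# Destroy the old container and recreate it from the snapshot, re-attaching
# the named volumes so user-level state carries over as well.
docker stop "$CONTAINER" && docker rm "$CONTAINER"
docker run -d --name "$CONTAINER" \
  -v "triple-c-home-${PROJECT_ID}:/home/claude" \
  -v "triple-c-claude-config-${PROJECT_ID}:/home/claude/.claude" \
  "$SNAPSHOT"

# On Reset/removal: delete the snapshot and both volumes for a clean slate.
docker rmi -f "$SNAPSHOT"
docker volume rm "triple-c-home-${PROJECT_ID}" "triple-c-claude-config-${PROJECT_ID}"
```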

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-03 16:16:19 -08:00
3228e6cdd7 fix: Docker status never updates after Docker starts
All checks were successful
Build App / build-macos (push) Successful in 2m22s
Build App / build-windows (push) Successful in 3m22s
Build App / build-linux (push) Successful in 5m56s
Sync Release to GitHub / sync-release (release) Successful in 1s
Replace OnceLock with Mutex<Option<Docker>> in the Rust backend so
failed Docker connections are retried instead of cached permanently.
Add frontend polling (every 5s) when Docker is initially unavailable,
stopping once detected.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-03 14:46:59 -08:00
3344ce1cbf fix: prevent spurious container recreation on every start
All checks were successful
Build App / build-macos (push) Successful in 2m22s
Build App / build-windows (push) Successful in 4m1s
Build App / build-linux (push) Successful in 5m6s
Sync Release to GitHub / sync-release (release) Successful in 1s
The CLAUDE_INSTRUCTIONS env var was computed differently during container
creation (with port mapping docs + scheduler instructions appended) vs
the recreation check (bare merge only). This caused
container_needs_recreation() to always return true, triggering a full
recreate on every stop/start cycle.

Extract build_claude_instructions() helper used by both code paths so
the expected value always matches what was set at creation time.
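
To see the comparison by hand, inspect the env value the container was actually created with and compare it against the freshly computed one (a debugging sketch, assuming `jq` is installed):

```bash
# Print the CLAUDE_INSTRUCTIONS value baked into an existing container.
# If it differs from the value the app would compute today,
# container_needs_recreation() reports true.
docker inspect "<container-id>" \
  | jq -r '.[0].Config.Env[] | select(startswith("CLAUDE_INSTRUCTIONS="))'
```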

Also add TODO.md noting planned tauri-plugin-updater integration for
seamless in-app updates on all platforms.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-03 14:22:25 -08:00
d642cc64de fix: use browser_download_url for Gitea asset downloads
The API endpoint /releases/assets/{id} returns JSON metadata, not the
binary file. Use the browser_download_url from the asset object instead.
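
To illustrate the distinction (a sketch against standard Gitea v1 API routes, reusing the `GITEA_API`/`GITEA_REPO`/`GITEA_TOKEN` variables defined in the workflows below):

```bash
# The assets in the release JSON are *metadata* objects, not file contents:
curl -sf -H "Authorization: token $GITEA_TOKEN" \
  "$GITEA_API/repos/$GITEA_REPO/releases/latest" \
  | jq '.assets[] | {name, id, browser_download_url}'

# The actual binary lives at browser_download_url; fetch that instead:
URL=$(curl -sf -H "Authorization: token $GITEA_TOKEN" \
  "$GITEA_API/repos/$GITEA_REPO/releases/latest" \
  | jq -r '.assets[0].browser_download_url')
curl -sfL -o asset.bin -H "Authorization: token $GITEA_TOKEN" "$URL"
```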

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-02 10:09:41 -08:00
e3502876eb rename Triple-C.md to README.md
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-02 10:07:16 -08:00
4f41f0d98b feat: add Gitea actions to sync releases to GitHub
Add two new workflows:
- sync-release.yml: automatically mirrors releases (with assets) to GitHub when published on Gitea
- backfill-releases.yml: manual workflow to bulk-sync all existing Gitea releases to GitHub

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-02 10:06:29 -08:00
c9dc232fc4 fix: remove Node.js from its actual path location on the Act runner
Some checks failed
Build App / build-windows (push) Successful in 3m40s
Build App / build-linux (push) Successful in 7m4s
Build App / build-macos (push) Failing after 11m34s
The Act runner has Node 18 at /opt/acttoolcache/node/18.20.3/x64/bin/,
not at /usr/local/bin/. Use $(dirname "$(which node)") to find and
remove the actual binary location before installing Node 22.
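
The core of the fix, distilled (the full workflow step appears in the build diff below; the example paths come from this and the previous commit):

```bash
# List every node on PATH; the Act toolcache copy shadows the apt install:
which -a node
#   e.g. /opt/acttoolcache/node/18.20.3/x64/bin/node   <- must be removed
#        /usr/bin/node                                 <- nodesource/apt-installed

# Remove whichever binary currently wins PATH resolution, wherever it lives:
sudo rm -f "$(dirname "$(which node)")/node"
hash -r   # make the shell forget the cached location
```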

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 18:08:46 -08:00
2d4fce935f fix: remove old Node.js before installing v22 on Linux runner
Some checks failed
Build App / build-linux (push) Failing after 2m34s
Build App / build-windows (push) Successful in 3m40s
Build App / build-macos (push) Has been cancelled
The Act runner has Node 18 at /usr/local/bin/node which takes
precedence over the apt-installed /usr/bin/node. Even after
running nodesource setup and apt-get install, the old Node 18
binary remained in the PATH. Now removes old binaries and uses
hash -r to force path re-lookup. Also removes package-lock.json
before npm install to ensure correct platform-specific bindings.
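
Why `hash -r` matters, as a minimal shell session (sketched for bash, which caches resolved command paths per session):

```bash
node --version      # v18.x, resolved via /usr/local/bin/node and cached
sudo rm -f /usr/local/bin/node /usr/local/bin/npm /usr/local/bin/npx
type node           # the shell may still report the stale hashed path
hash -r             # flush the command-location cache
type node           # now resolves to /usr/bin/node (the apt-installed v22)
```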

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 18:03:31 -08:00
e739f6aaff fix: check Node.js version, not just presence, in CI
Some checks failed
Build App / build-macos (push) Successful in 2m23s
Build App / build-linux (push) Failing after 3m38s
Build App / build-windows (push) Successful in 4m1s
The Act runner has Node.js v18 pre-installed, so the check
`command -v node` passes and skips installing v22. Node 18 is
too old for dependencies like vitest, jsdom, and tailwindcss/oxide.
Now checks the major version and upgrades if < 22.
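
Distilled, the check goes from mere presence to a version guard (the full step is in the workflow diff below):

```bash
# Old: any node on PATH skipped the install, even an unusable v18.
command -v node >/dev/null 2>&1 && echo "found, skipping install"

# New: parse the major version and upgrade when it is below 22.
NODE_MAJOR=$(node --version | sed 's/v\([0-9]*\).*/\1/')
if [ "${NODE_MAJOR:-0}" -lt 22 ]; then
  echo "Node.js ${NODE_MAJOR} is too old, upgrading to 22..."
fi
```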

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 17:54:36 -08:00
550159fc63 Fix native binding error: use npm install instead of npm ci
Some checks failed
Build App / build-linux (push) Failing after 2m25s
Build App / build-macos (push) Successful in 2m34s
Build App / build-windows (push) Successful in 4m8s
@tailwindcss/oxide has platform-specific native bindings. The
package-lock.json was generated on a different platform, so npm ci
installs the wrong native binary. Switching to rm -rf node_modules
+ npm install lets npm resolve the correct platform-specific
optional dependency (e.g., @tailwindcss/oxide-linux-x64-gnu on
Linux, oxide-darwin-arm64 on macOS).
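
In practice the difference looks like this (a sketch; the final line is a smoke test that assumes the package's entry point loads its native binding on require):

```bash
# npm ci replays package-lock.json exactly, including the platform-specific
# optional dependency recorded on whatever machine generated the lockfile.
# A fresh install lets npm resolve the right binding for the current host:
rm -rf node_modules
npm install

# Smoke test: requiring the package loads the native binding or throws.
node -e "require('@tailwindcss/oxide'); console.log('native binding OK')"
```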

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-01 17:44:03 -08:00
12 changed files with 519 additions and 73 deletions

backfill-releases.yml (new file)

@@ -0,0 +1,84 @@
name: Backfill Releases to GitHub

on:
  workflow_dispatch:

jobs:
  backfill:
    runs-on: ubuntu-latest
    steps:
      - name: Backfill all Gitea releases to GitHub
        env:
          GH_PAT: ${{ secrets.GH_PAT }}
          GITEA_TOKEN: ${{ secrets.REGISTRY_TOKEN }}
          GITEA_API: https://repo.anhonesthost.net/api/v1
          GITEA_REPO: cybercovellc/triple-c
          GITHUB_REPO: shadowdao/triple-c
        run: |
          set -e
          echo "==> Fetching releases from Gitea..."
          RELEASES=$(curl -sf \
            -H "Authorization: token $GITEA_TOKEN" \
            "$GITEA_API/repos/$GITEA_REPO/releases?limit=50")

          echo "$RELEASES" | jq -c '.[]' | while read release; do
            TAG=$(echo "$release" | jq -r '.tag_name')
            NAME=$(echo "$release" | jq -r '.name')
            BODY=$(echo "$release" | jq -r '.body')
            IS_PRERELEASE=$(echo "$release" | jq -r '.prerelease')
            IS_DRAFT=$(echo "$release" | jq -r '.draft')

            EXISTS=$(curl -sf \
              -H "Authorization: Bearer $GH_PAT" \
              -H "Accept: application/vnd.github+json" \
              "https://api.github.com/repos/$GITHUB_REPO/releases/tags/$TAG" \
              -o /dev/null -w "%{http_code}" || true)
            if [ "$EXISTS" = "200" ]; then
              echo "==> Skipping $TAG (already exists on GitHub)"
              continue
            fi

            echo "==> Creating release $TAG..."
            RESPONSE=$(curl -sf -X POST \
              -H "Authorization: Bearer $GH_PAT" \
              -H "Accept: application/vnd.github+json" \
              -H "Content-Type: application/json" \
              https://api.github.com/repos/$GITHUB_REPO/releases \
              -d "{
                \"tag_name\": \"$TAG\",
                \"name\": \"$NAME\",
                \"body\": $(echo "$BODY" | jq -Rs .),
                \"draft\": $IS_DRAFT,
                \"prerelease\": $IS_PRERELEASE
              }")
            UPLOAD_URL=$(echo "$RESPONSE" | jq -r '.upload_url' | sed 's/{?name,label}//')

            echo "$release" | jq -c '.assets[]?' | while read asset; do
              ASSET_NAME=$(echo "$asset" | jq -r '.name')
              ASSET_ID=$(echo "$asset" | jq -r '.id')
              echo "  ==> Downloading $ASSET_NAME..."
              DOWNLOAD_URL=$(echo "$asset" | jq -r '.browser_download_url')
              curl -sfL -o "/tmp/$ASSET_NAME" \
                -H "Authorization: token $GITEA_TOKEN" \
                "$DOWNLOAD_URL"

              echo "  ==> Uploading $ASSET_NAME to GitHub..."
              ENCODED_NAME=$(python3 -c "import urllib.parse, sys; print(urllib.parse.quote(sys.argv[1]))" "$ASSET_NAME")
              curl -sf -X POST \
                -H "Authorization: Bearer $GH_PAT" \
                -H "Accept: application/vnd.github+json" \
                -H "Content-Type: application/octet-stream" \
                --data-binary "@/tmp/$ASSET_NAME" \
                "$UPLOAD_URL?name=$ENCODED_NAME"
              echo "  Uploaded: $ASSET_NAME"
            done
            echo "==> Done: $TAG"
          done
          echo "==> Backfill complete."

Build App workflow

@@ -20,15 +20,29 @@ jobs:
   build-linux:
     runs-on: ubuntu-latest
     steps:
-      - name: Install Node.js
+      - name: Install Node.js 22
         run: |
+          NEED_INSTALL=false
           if command -v node >/dev/null 2>&1; then
-            echo "Node.js already installed: $(node --version)"
+            NODE_MAJOR=$(node --version | sed 's/v\([0-9]*\).*/\1/')
+            OLD_NODE_DIR=$(dirname "$(which node)")
+            echo "Found Node.js $(node --version) at $(which node) (major: ${NODE_MAJOR})"
+            if [ "$NODE_MAJOR" -lt 22 ]; then
+              echo "Node.js ${NODE_MAJOR} is too old, removing before installing 22..."
+              sudo rm -f "${OLD_NODE_DIR}/node" "${OLD_NODE_DIR}/npm" "${OLD_NODE_DIR}/npx" "${OLD_NODE_DIR}/corepack"
+              hash -r
+              NEED_INSTALL=true
+            fi
           else
-            echo "Installing Node.js 22..."
+            echo "Node.js not found, installing 22..."
+            NEED_INSTALL=true
           fi
+          if [ "$NEED_INSTALL" = true ]; then
             curl -fsSL https://deb.nodesource.com/setup_22.x | sudo -E bash -
             sudo apt-get install -y nodejs
+            hash -r
+          fi
+          echo "Node.js at: $(which node)"
           node --version
           npm --version
@@ -87,7 +101,9 @@ jobs:
       - name: Install frontend dependencies
         working-directory: ./app
-        run: npm ci
+        run: |
+          rm -rf node_modules package-lock.json
+          npm install

       - name: Install Tauri CLI
         working-directory: ./app
@@ -138,12 +154,21 @@ jobs:
   build-macos:
     runs-on: macos-latest
     steps:
-      - name: Install Node.js
+      - name: Install Node.js 22
         run: |
+          NEED_INSTALL=false
           if command -v node >/dev/null 2>&1; then
-            echo "Node.js already installed: $(node --version)"
+            NODE_MAJOR=$(node --version | sed 's/v\([0-9]*\).*/\1/')
+            echo "Found Node.js $(node --version) (major: ${NODE_MAJOR})"
+            if [ "$NODE_MAJOR" -lt 22 ]; then
+              echo "Node.js ${NODE_MAJOR} is too old, upgrading to 22..."
+              NEED_INSTALL=true
+            fi
           else
-            echo "Installing Node.js 22 via Homebrew..."
+            echo "Node.js not found, installing 22..."
+            NEED_INSTALL=true
           fi
+          if [ "$NEED_INSTALL" = true ]; then
             brew install node@22
             brew link --overwrite node@22
+          fi
@@ -187,7 +212,9 @@ jobs:
       - name: Install frontend dependencies
         working-directory: ./app
-        run: npm ci
+        run: |
+          rm -rf node_modules
+          npm install

       - name: Install Tauri CLI
         working-directory: ./app

sync-release.yml (new file)

@@ -0,0 +1,60 @@
name: Sync Release to GitHub

on:
  release:
    types: [published]

jobs:
  sync-release:
    runs-on: ubuntu-latest
    steps:
      - name: Mirror release to GitHub
        env:
          GH_PAT: ${{ secrets.GH_PAT }}
          GITHUB_REPO: shadowdao/triple-c
          RELEASE_TAG: ${{ gitea.event.release.tag_name }}
          RELEASE_NAME: ${{ gitea.event.release.name }}
          RELEASE_BODY: ${{ gitea.event.release.body }}
          IS_PRERELEASE: ${{ gitea.event.release.prerelease }}
          IS_DRAFT: ${{ gitea.event.release.draft }}
        run: |
          set -e
          echo "==> Creating release $RELEASE_TAG on GitHub..."
          RESPONSE=$(curl -sf -X POST \
            -H "Authorization: Bearer $GH_PAT" \
            -H "Accept: application/vnd.github+json" \
            -H "Content-Type: application/json" \
            https://api.github.com/repos/$GITHUB_REPO/releases \
            -d "{
              \"tag_name\": \"$RELEASE_TAG\",
              \"name\": \"$RELEASE_NAME\",
              \"body\": $(echo "$RELEASE_BODY" | jq -Rs .),
              \"draft\": $IS_DRAFT,
              \"prerelease\": $IS_PRERELEASE
            }")
          UPLOAD_URL=$(echo "$RESPONSE" | jq -r '.upload_url' | sed 's/{?name,label}//')
          echo "Release created. Upload URL: $UPLOAD_URL"

          echo '${{ toJSON(gitea.event.release.assets) }}' | jq -c '.[]' | while read asset; do
            ASSET_NAME=$(echo "$asset" | jq -r '.name')
            ASSET_URL=$(echo "$asset" | jq -r '.browser_download_url')
            echo "==> Downloading asset: $ASSET_NAME"
            curl -sfL -o "/tmp/$ASSET_NAME" "$ASSET_URL"

            echo "==> Uploading $ASSET_NAME to GitHub..."
            ENCODED_NAME=$(python3 -c "import urllib.parse, sys; print(urllib.parse.quote(sys.argv[1]))" "$ASSET_NAME")
            curl -sf -X POST \
              -H "Authorization: Bearer $GH_PAT" \
              -H "Accept: application/vnd.github+json" \
              -H "Content-Type: application/octet-stream" \
              --data-binary "@/tmp/$ASSET_NAME" \
              "$UPLOAD_URL?name=$ENCODED_NAME"
            echo "  Uploaded: $ASSET_NAME"
          done
          echo "==> Release sync complete."

TODO.md (new file, 60 lines)

@@ -0,0 +1,60 @@
# TODO / Future Improvements
## In-App Auto-Update via `tauri-plugin-updater`
**Priority:** High
**Status:** Planned
Currently the app detects available updates via the Gitea API (`check_for_updates` command) but cannot apply them. Users must manually download and install the new version. On macOS and Linux this is a poor experience compared to Windows (where NSIS handles upgrades cleanly).
### Recommended approach: `tauri-plugin-updater`
Full in-app auto-update: detects, downloads, verifies, and applies updates seamlessly on all platforms. The user clicks "Update" and the app restarts with the new version.
### Requirements
1. **Generate a Tauri update signing key pair** (this is Tauri's own Ed25519 key, not OS code signing):
```bash
npx @tauri-apps/cli signer generate -w ~/.tauri/triple-c.key
```
Set `TAURI_SIGNING_PRIVATE_KEY` and `TAURI_SIGNING_PRIVATE_KEY_PASSWORD` in CI.
2. **Add `tauri-plugin-updater`** to Rust and JS dependencies.
3. **Create an update endpoint** that returns Tauri's expected JSON format:
```json
{
  "version": "v0.1.100",
  "notes": "Changelog here",
  "pub_date": "2026-03-01T00:00:00Z",
  "platforms": {
    "darwin-x86_64": { "signature": "...", "url": "https://..." },
    "darwin-aarch64": { "signature": "...", "url": "https://..." },
    "linux-x86_64": { "signature": "...", "url": "https://..." },
    "windows-x86_64": { "signature": "...", "url": "https://..." }
  }
}
```
This could be a static JSON file uploaded alongside release assets, or a small API that reads from Gitea releases and reformats.
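
As a sketch of the generated-file option: read the newest Gitea release and emit the updater JSON. The asset-name pattern and the presence of uploaded `.sig` files are assumptions here (they depend on the CI changes in step 6), not existing conventions in this repo; only the Linux platform is shown, the others are analogous.

```bash
#!/usr/bin/env bash
# Sketch: emit tauri-plugin-updater JSON from the latest Gitea release.
set -euo pipefail
API="https://repo.anhonesthost.net/api/v1/repos/cybercovellc/triple-c"
REL=$(curl -sf "$API/releases/latest")

# Find the installer and its detached signature by a (hypothetical)
# asset-name pattern, e.g. "amd64.AppImage".
entry() {
  local pat="$1" url sig_url
  url=$(echo "$REL" | jq -r --arg p "$pat" \
    '.assets[] | select(.name | test($p)) | select(.name | endswith(".sig") | not) | .browser_download_url' | head -n1)
  sig_url=$(echo "$REL" | jq -r --arg p "$pat" \
    '.assets[] | select(.name | test($p)) | select(.name | endswith(".sig")) | .browser_download_url' | head -n1)
  jq -n --arg url "$url" --arg sig "$(curl -sfL "$sig_url")" '{signature: $sig, url: $url}'
}

jq -n \
  --arg version "$(echo "$REL" | jq -r .tag_name)" \
  --arg notes   "$(echo "$REL" | jq -r .body)" \
  --arg pub     "$(echo "$REL" | jq -r .published_at)" \
  --argjson linux "$(entry 'amd64.AppImage')" \
  '{version: $version, notes: $notes, pub_date: $pub,
    platforms: {"linux-x86_64": $linux}}'
```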
4. **Configure the updater** in `tauri.conf.json`:
```json
"plugins": {
"updater": {
"endpoints": ["https://repo.anhonesthost.net/...update-endpoint..."],
"pubkey": "<public key from step 1>"
}
}
```
5. **Add frontend UI** for the update prompt (replace or enhance the existing update check flow).
6. **Update CI pipeline** to:
- Sign bundles with the Tauri key during build
- Upload `.sig` files alongside installers
- Generate/upload the update endpoint JSON
### References
- https://v2.tauri.app/plugin/updater/
- Existing update check code: `app/src-tauri/src/commands/update_commands.rs`
- Existing models: `app/src-tauri/src/models/update_info.rs`

Project container commands (Rust)

@@ -81,12 +81,19 @@ pub async fn remove_project(
     state: State<'_, AppState>,
 ) -> Result<(), String> {
     // Stop and remove container if it exists
-    if let Some(project) = state.projects_store.get(&project_id) {
+    if let Some(ref project) = state.projects_store.get(&project_id) {
         if let Some(ref container_id) = project.container_id {
             state.exec_manager.close_sessions_for_container(container_id).await;
             let _ = docker::stop_container(container_id).await;
             let _ = docker::remove_container(container_id).await;
         }
+
+        // Clean up the snapshot image + volumes
+        if let Err(e) = docker::remove_snapshot_image(project).await {
+            log::warn!("Failed to remove snapshot image for project {}: {}", project_id, e);
+        }
+        if let Err(e) = docker::remove_project_volumes(project).await {
+            log::warn!("Failed to remove project volumes for project {}: {}", project_id, e);
+        }
     }

     // Clean up keychain secrets for this project
// Clean up keychain secrets for this project
@@ -153,25 +160,37 @@ pub async fn start_project_container(
     // AWS config path from global settings
     let aws_config_path = settings.global_aws.aws_config_path.clone();

     // Check for existing container
     let container_id = if let Some(existing_id) = docker::find_existing_container(&project).await? {
-        let needs_recreation = docker::container_needs_recreation(
+        // Check if config changed — if so, snapshot + recreate
+        let needs_recreate = docker::container_needs_recreation(
             &existing_id,
             &project,
             settings.global_claude_instructions.as_deref(),
             &settings.global_custom_env_vars,
             settings.timezone.as_deref(),
-        )
-        .await
-        .unwrap_or(false);
+        ).await.unwrap_or(false);

-        if needs_recreation {
-            log::info!("Container config changed, recreating container for project {}", project.id);
+        if needs_recreate {
+            log::info!("Container config changed for project {} — committing snapshot and recreating", project.id);
+
+            // Snapshot the filesystem before destroying
+            if let Err(e) = docker::commit_container_snapshot(&existing_id, &project).await {
+                log::warn!("Failed to snapshot container before recreation: {}", e);
+            }

             let _ = docker::stop_container(&existing_id).await;
             docker::remove_container(&existing_id).await?;

+            // Create from snapshot image (preserves system-level changes)
+            let snapshot_image = docker::get_snapshot_image_name(&project);
+            let create_image = if docker::image_exists(&snapshot_image).await.unwrap_or(false) {
+                snapshot_image
+            } else {
+                image_name.clone()
+            };
+
             let new_id = docker::create_container(
                 &project,
                 &docker_socket,
-                &image_name,
+                &create_image,
                 aws_config_path.as_deref(),
                 &settings.global_aws,
                 settings.global_claude_instructions.as_deref(),
@@ -185,10 +204,21 @@ pub async fn start_project_container(
             existing_id
         }
     } else {
+        // Container doesn't exist (first start, or Docker pruned it).
+        // Check for a snapshot image first — it preserves system-level
+        // changes (apt/pip/npm installs) from the previous session.
+        let snapshot_image = docker::get_snapshot_image_name(&project);
+        let create_image = if docker::image_exists(&snapshot_image).await.unwrap_or(false) {
+            log::info!("Creating container from snapshot image for project {}", project.id);
+            snapshot_image
+        } else {
+            image_name.clone()
+        };
+
         let new_id = docker::create_container(
             &project,
             &docker_socket,
-            &image_name,
+            &create_image,
             aws_config_path.as_deref(),
             &settings.global_aws,
             settings.global_claude_instructions.as_deref(),
@@ -229,16 +259,18 @@ pub async fn stop_project_container(
         .get(&project_id)
         .ok_or_else(|| format!("Project {} not found", project_id))?;

+    state.projects_store.update_status(&project_id, ProjectStatus::Stopping)?;
+
     if let Some(ref container_id) = project.container_id {
-        state.projects_store.update_status(&project_id, ProjectStatus::Stopping)?;
         // Close exec sessions for this project
         state.exec_manager.close_sessions_for_container(container_id).await;
-        docker::stop_container(container_id).await?;
-        state.projects_store.update_status(&project_id, ProjectStatus::Stopped)?;
+        if let Err(e) = docker::stop_container(container_id).await {
+            log::warn!("Docker stop failed for container {} (project {}): {} — resetting to Stopped anyway", container_id, project_id, e);
+        }
     }

+    state.projects_store.update_status(&project_id, ProjectStatus::Stopped)?;
     Ok(())
 }
@@ -260,6 +292,14 @@ pub async fn rebuild_project_container(
         state.projects_store.set_container_id(&project_id, None)?;
     }

+    // Remove snapshot image + volumes so Reset creates from the clean base image
+    if let Err(e) = docker::remove_snapshot_image(&project).await {
+        log::warn!("Failed to remove snapshot image for project {}: {}", project_id, e);
+    }
+    if let Err(e) = docker::remove_project_volumes(&project).await {
+        log::warn!("Failed to remove project volumes for project {}: {}", project_id, e);
+    }
+
     // Start fresh
     start_project_container(project_id, state).await
 }

Docker connection module (Rust)

@@ -1,23 +1,28 @@
 use bollard::Docker;
-use std::sync::OnceLock;
+use std::sync::Mutex;

-static DOCKER: OnceLock<Result<Docker, String>> = OnceLock::new();
+static DOCKER: Mutex<Option<Docker>> = Mutex::new(None);

-pub fn get_docker() -> Result<&'static Docker, String> {
-    let result = DOCKER.get_or_init(|| {
-        Docker::connect_with_local_defaults()
-            .map_err(|e| format!("Failed to connect to Docker daemon: {}", e))
-    });
-    match result {
-        Ok(docker) => Ok(docker),
-        Err(e) => Err(e.clone()),
+pub fn get_docker() -> Result<Docker, String> {
+    let mut guard = DOCKER.lock().map_err(|e| format!("Lock poisoned: {}", e))?;
+    if let Some(docker) = guard.as_ref() {
+        return Ok(docker.clone());
     }
+
+    let docker = Docker::connect_with_local_defaults()
+        .map_err(|e| format!("Failed to connect to Docker daemon: {}", e))?;
+    guard.replace(docker.clone());
+    Ok(docker)
 }

 pub async fn check_docker_available() -> Result<bool, String> {
     let docker = get_docker()?;
     match docker.ping().await {
         Ok(_) => Ok(true),
-        Err(e) => Err(format!("Docker daemon not responding: {}", e)),
+        Err(_) => {
+            // Connection object exists but daemon not responding — clear cache
+            let mut guard = DOCKER.lock().map_err(|e| format!("Lock poisoned: {}", e))?;
+            *guard = None;
+            Ok(false)
+        }
     }
 }

Docker container management (Rust)

@@ -2,6 +2,7 @@ use bollard::container::{
     Config, CreateContainerOptions, ListContainersOptions, RemoveContainerOptions,
     StartContainerOptions, StopContainerOptions,
 };
+use bollard::image::{CommitContainerOptions, RemoveImageOptions};
 use bollard::models::{ContainerSummary, HostConfig, Mount, MountTypeEnum, PortBinding};
 use std::collections::HashMap;
 use std::collections::hash_map::DefaultHasher;
@@ -40,6 +41,42 @@ After tasks run, check notifications with `triple-c-scheduler notifications` and
 ### Timezone
 Scheduled times use the container's configured timezone (check with `date`). If no timezone is configured, UTC is used."#;

+/// Build the full CLAUDE_INSTRUCTIONS value by merging global + project
+/// instructions, appending port mapping docs, and appending scheduler docs.
+/// Used by both create_container() and container_needs_recreation() to ensure
+/// the same value is produced in both paths.
+fn build_claude_instructions(
+    global_instructions: Option<&str>,
+    project_instructions: Option<&str>,
+    port_mappings: &[PortMapping],
+) -> Option<String> {
+    let mut combined = merge_claude_instructions(global_instructions, project_instructions);
+
+    if !port_mappings.is_empty() {
+        let mut port_lines: Vec<String> = Vec::new();
+        port_lines.push("## Available Port Mappings".to_string());
+        port_lines.push("The following ports are mapped from the host to this container. Use these container ports when starting services that need to be accessible from the host:".to_string());
+        for pm in port_mappings {
+            port_lines.push(format!(
+                "- Host port {} -> Container port {} ({})",
+                pm.host_port, pm.container_port, pm.protocol
+            ));
+        }
+        let port_info = port_lines.join("\n");
+        combined = Some(match combined {
+            Some(existing) => format!("{}\n\n{}", existing, port_info),
+            None => port_info,
+        });
+    }
+
+    combined = Some(match combined {
+        Some(existing) => format!("{}\n\n{}", existing, SCHEDULER_INSTRUCTIONS),
+        None => SCHEDULER_INSTRUCTIONS.to_string(),
+    });
+
+    combined
+}
+
 /// Compute a fingerprint string for the custom environment variables.
 /// Sorted alphabetically so order changes do not cause spurious recreation.
 fn compute_env_fingerprint(custom_env_vars: &[EnvVar]) -> String {
@@ -307,33 +344,12 @@ pub async fn create_container(
         }
     }

-    // Claude instructions (global + per-project, plus port mapping info)
-    let mut combined_instructions = merge_claude_instructions(
+    // Claude instructions (global + per-project, plus port mapping info + scheduler docs)
+    let combined_instructions = build_claude_instructions(
         global_claude_instructions,
         project.claude_instructions.as_deref(),
+        &project.port_mappings,
     );
-    if !project.port_mappings.is_empty() {
-        let mut port_lines: Vec<String> = Vec::new();
-        port_lines.push("## Available Port Mappings".to_string());
-        port_lines.push("The following ports are mapped from the host to this container. Use these container ports when starting services that need to be accessible from the host:".to_string());
-        for pm in &project.port_mappings {
-            port_lines.push(format!(
-                "- Host port {} -> Container port {} ({})",
-                pm.host_port, pm.container_port, pm.protocol
-            ));
-        }
-        let port_info = port_lines.join("\n");
-        combined_instructions = Some(match combined_instructions {
-            Some(existing) => format!("{}\n\n{}", existing, port_info),
-            None => port_info,
-        });
-    }
-
-    // Scheduler instructions (always appended so all containers get scheduling docs)
-    let scheduler_docs = SCHEDULER_INSTRUCTIONS;
-    combined_instructions = Some(match combined_instructions {
-        Some(existing) => format!("{}\n\n{}", existing, scheduler_docs),
-        None => scheduler_docs.to_string(),
-    });

     if let Some(ref instructions) = combined_instructions {
         env_vars.push(format!("CLAUDE_INSTRUCTIONS={}", instructions));
@@ -352,7 +368,19 @@ pub async fn create_container(
         });
     }

-    // Named volume for claude config persistence
+    // Named volume for the entire home directory — preserves ~/.claude.json,
+    // ~/.local (pip/npm globals), and any other user-level state across
+    // container stop/start cycles.
+    mounts.push(Mount {
+        target: Some("/home/claude".to_string()),
+        source: Some(format!("triple-c-home-{}", project.id)),
+        typ: Some(MountTypeEnum::VOLUME),
+        read_only: Some(false),
+        ..Default::default()
+    });
+
+    // Named volume for claude config persistence — mounted as a nested volume
+    // inside the home volume; Docker gives the more-specific mount precedence.
     mounts.push(Mount {
         target: Some("/home/claude/.claude".to_string()),
         source: Some(format!("triple-c-claude-config-{}", project.id)),
@@ -523,6 +551,83 @@ pub async fn remove_container(container_id: &str) -> Result<(), String> {
         .map_err(|e| format!("Failed to remove container: {}", e))
 }

+/// Return the snapshot image name for a project.
+pub fn get_snapshot_image_name(project: &Project) -> String {
+    format!("triple-c-snapshot-{}:latest", project.id)
+}
+
+/// Commit the container's filesystem to a snapshot image so that system-level
+/// changes (apt/pip/npm installs, ~/.claude.json, etc.) survive container
+/// removal. The Config is left empty so that secrets injected as env vars are
+/// NOT baked into the image.
+pub async fn commit_container_snapshot(container_id: &str, project: &Project) -> Result<(), String> {
+    let docker = get_docker()?;
+    let image_name = get_snapshot_image_name(project);
+
+    // Parse repo:tag
+    let (repo, tag) = match image_name.rsplit_once(':') {
+        Some((r, t)) => (r.to_string(), t.to_string()),
+        None => (image_name.clone(), "latest".to_string()),
+    };
+
+    let options = CommitContainerOptions {
+        container: container_id.to_string(),
+        repo: repo.clone(),
+        tag: tag.clone(),
+        pause: true,
+        ..Default::default()
+    };
+
+    // Empty config — no env vars / cmd baked in
+    let config = Config::<String> {
+        ..Default::default()
+    };
+
+    docker
+        .commit_container(options, config)
+        .await
+        .map_err(|e| format!("Failed to commit container snapshot: {}", e))?;
+
+    log::info!("Committed container {} as snapshot {}:{}", container_id, repo, tag);
+    Ok(())
+}
+
+/// Remove the snapshot image for a project (used on Reset / project removal).
+pub async fn remove_snapshot_image(project: &Project) -> Result<(), String> {
+    let docker = get_docker()?;
+    let image_name = get_snapshot_image_name(project);
+
+    docker
+        .remove_image(
+            &image_name,
+            Some(RemoveImageOptions {
+                force: true,
+                noprune: false,
+            }),
+            None,
+        )
+        .await
+        .map_err(|e| format!("Failed to remove snapshot image {}: {}", image_name, e))?;
+
+    log::info!("Removed snapshot image {}", image_name);
+    Ok(())
+}
+
+/// Remove both named volumes for a project (used on Reset / project removal).
+pub async fn remove_project_volumes(project: &Project) -> Result<(), String> {
+    let docker = get_docker()?;
+    for vol in [
+        format!("triple-c-home-{}", project.id),
+        format!("triple-c-claude-config-{}", project.id),
+    ] {
+        match docker.remove_volume(&vol, None).await {
+            Ok(_) => log::info!("Removed volume {}", vol),
+            Err(e) => log::warn!("Failed to remove volume {} (may not exist): {}", vol, e),
+        }
+    }
+    Ok(())
+}
+
 /// Check whether the existing container's configuration still matches the
 /// current project settings. Returns `true` when the container must be
 /// recreated (mounts or env vars differ).
@@ -685,9 +790,10 @@ pub async fn container_needs_recreation(
     }

     // ── Claude instructions ───────────────────────────────────────────────
-    let expected_instructions = merge_claude_instructions(
+    let expected_instructions = build_claude_instructions(
         global_claude_instructions,
         project.claude_instructions.as_deref(),
+        &project.port_mappings,
     );

     let container_instructions = get_env("CLAUDE_INSTRUCTIONS");
     if container_instructions.as_deref() != expected_instructions.as_deref() {

ProjectsStore (Rust)

@@ -70,17 +70,38 @@ impl ProjectsStore {
         (Vec::new(), false)
     };

+    // Reconcile stale transient statuses: on a cold app start no Docker
+    // operations can be in flight, so Starting/Stopping are always stale.
+    let mut projects = projects;
+    let mut needs_save = needs_save;
+    for p in projects.iter_mut() {
+        match p.status {
+            crate::models::ProjectStatus::Starting | crate::models::ProjectStatus::Stopping => {
+                log::warn!(
+                    "Reconciling stale '{}' status for project '{}' ({}) → Stopped",
+                    serde_json::to_string(&p.status).unwrap_or_default().trim_matches('"'),
+                    p.name,
+                    p.id
+                );
+                p.status = crate::models::ProjectStatus::Stopped;
+                p.updated_at = chrono::Utc::now().to_rfc3339();
+                needs_save = true;
+            }
+            _ => {}
+        }
+    }
+
     let store = Self {
         projects: Mutex::new(projects),
         file_path,
     };

-    // Persist migrated format back to disk
+    // Persist migrated/reconciled format back to disk
     if needs_save {
-        log::info!("Migrated projects.json from single-path to multi-path format");
+        log::info!("Saving reconciled/migrated projects.json to disk");
         let projects = store.lock();
         if let Err(e) = store.save(&projects) {
-            log::error!("Failed to save migrated projects: {}", e);
+            log::error!("Failed to save projects: {}", e);
         }
     }

App component (TypeScript)

@@ -11,7 +11,7 @@ import { useUpdates } from "./hooks/useUpdates";
 import { useAppState } from "./store/appState";

 export default function App() {
-  const { checkDocker, checkImage } = useDocker();
+  const { checkDocker, checkImage, startDockerPolling } = useDocker();
   const { loadSettings } = useSettings();
   const { refresh } = useProjects();
   const { loadVersion, checkForUpdates, startPeriodicCheck } = useUpdates();
@@ -22,8 +22,13 @@ export default function App() {
   // Initialize on mount
   useEffect(() => {
     loadSettings();
+    let stopPolling: (() => void) | undefined;
     checkDocker().then((available) => {
-      if (available) checkImage();
+      if (available) {
+        checkImage();
+      } else {
+        stopPolling = startDockerPolling();
+      }
     });
     refresh();
@@ -34,6 +39,7 @@ export default function App() {
     return () => {
       clearTimeout(updateTimer);
       cleanup?.();
+      stopPolling?.();
     };
   }, []); // eslint-disable-line react-hooks/exhaustive-deps

ProjectCard component (TypeScript)

@@ -315,9 +315,12 @@ export default function ProjectCard({ project }: Props) {
             <ActionButton onClick={handleOpenTerminal} disabled={loading} label="Terminal" accent />
           </>
         ) : (
+          <>
             <span className="text-xs text-[var(--text-secondary)]">
               {project.status}...
             </span>
+            <ActionButton onClick={handleStop} disabled={loading} label="Force Stop" danger />
+          </>
         )}
         <ActionButton
           onClick={(e) => { e?.stopPropagation?.(); setShowConfig(!showConfig); }}

useDocker hook (TypeScript)

@@ -1,4 +1,4 @@
-import { useCallback } from "react";
+import { useCallback, useRef } from "react";
 import { useShallow } from "zustand/react/shallow";
 import { listen } from "@tauri-apps/api/event";
 import { useAppState } from "../store/appState";
@@ -59,6 +59,39 @@ export function useDocker() {
     [setImageExists],
   );

+  const pollingRef = useRef<ReturnType<typeof setInterval> | null>(null);
+
+  const startDockerPolling = useCallback(() => {
+    // Don't start if already polling
+    if (pollingRef.current) return () => {};
+
+    const interval = setInterval(async () => {
+      try {
+        const available = await commands.checkDocker();
+        if (available) {
+          clearInterval(interval);
+          pollingRef.current = null;
+          setDockerAvailable(true);
+          // Also check image once Docker is available
+          try {
+            const exists = await commands.checkImageExists();
+            setImageExists(exists);
+          } catch {
+            setImageExists(false);
+          }
+        }
+      } catch {
+        // Still not available, keep polling
+      }
+    }, 5000);
+
+    pollingRef.current = interval;
+
+    return () => {
+      clearInterval(interval);
+      pollingRef.current = null;
+    };
+  }, [setDockerAvailable, setImageExists]);
+
   const pullImage = useCallback(
     async (imageName: string, onProgress?: (msg: string) => void) => {
       const unlisten = onProgress
@@ -84,5 +117,6 @@ export function useDocker() {
     checkImage,
     buildImage,
     pullImage,
+    startDockerPolling,
   };
 }