# Compare commits

**6 commits** · `v0.1.92-ma...v0.1.98`

| SHA1 |
|---|
| `b6fd8a557e` |
| `93deab68a7` |
| `2dce2993cc` |
| `e482452ffd` |
| `8c710fc7bf` |
| `b7585420ef` |
```diff
@@ -10,6 +10,7 @@ on:
     branches: [main]
     paths:
       - "app/**"
       - ".gitea/workflows/build-app.yml"
   workflow_dispatch:

 env:
```
```diff
@@ -5,10 +5,12 @@ on:
     branches: [main]
     paths:
       - "container/**"
       - ".gitea/workflows/build.yml"
   pull_request:
     branches: [main]
     paths:
       - "container/**"
       - ".gitea/workflows/build.yml"

 env:
   REGISTRY: repo.anhonesthost.net
```
```diff
@@ -72,7 +72,7 @@ docker exec stdout → tokio task → emit("terminal-output-{sessionId}") → li
 - `container.rs` — Container lifecycle (create, start, stop, remove, inspect)
 - `exec.rs` — PTY exec sessions with bidirectional stdin/stdout streaming
 - `image.rs` — Image build/pull with progress streaming
-- **`models/`** — Serde structs (`Project`, `AuthMode`, `BedrockConfig`, `ContainerInfo`, `AppSettings`). These define the IPC contract with the frontend.
+- **`models/`** — Serde structs (`Project`, `AuthMode`, `BedrockConfig`, `OllamaConfig`, `LiteLlmConfig`, `ContainerInfo`, `AppSettings`). These define the IPC contract with the frontend.
 - **`storage/`** — Persistence: `projects_store.rs` (JSON file with atomic writes), `secure.rs` (OS keychain via `keyring` crate), `settings_store.rs`

 ### Container (`container/`)

@@ -90,6 +90,8 @@ Containers use a **stop/start** model (not create/destroy). Installed packages p
 Per-project, independently configured:
 - **Anthropic (OAuth)** — `claude login` in terminal, token persists in config volume
 - **AWS Bedrock** — Static keys, profile, or bearer token injected as env vars
+- **Ollama** — Connect to a local or remote Ollama server via `ANTHROPIC_BASE_URL` (e.g., `http://host.docker.internal:11434`)
+- **LiteLLM** — Connect through a LiteLLM proxy gateway via `ANTHROPIC_BASE_URL` + `ANTHROPIC_AUTH_TOKEN` to access 100+ model providers

 ## Styling
```
### HOW-TO-USE.md (253)
```diff
@@ -33,6 +33,8 @@ You need access to Claude Code through one of:

 - **Anthropic account** — Sign up at https://claude.ai and use `claude login` (OAuth) inside the terminal
 - **AWS Bedrock** — An AWS account with Bedrock access and Claude models enabled
+- **Ollama** — A local or remote Ollama server running an Anthropic-compatible model (best-effort support)
+- **LiteLLM** — A LiteLLM proxy gateway providing access to 100+ model providers (best-effort support)

 ---
```
```diff
@@ -65,11 +67,11 @@ Switch to the **Projects** tab in the sidebar and click the **+** button.

 ### 3. Start the Container

-Select your project in the sidebar and click **Start**. The status dot changes from gray (stopped) to orange (starting) to green (running).
+Select your project in the sidebar and click **Start**. A progress modal appears showing real-time status as the container starts. The status dot changes from gray (stopped) to orange (starting) to green (running). The modal auto-closes on success.

 ### 4. Open a Terminal

-Click the **Terminal** button (highlighted in accent color) to open an interactive terminal session. A new tab appears in the top bar and an xterm.js terminal loads in the main area.
+Click the **Terminal** button to open an interactive terminal session. A new tab appears in the top bar and an xterm.js terminal loads in the main area.

 Claude Code launches automatically with `--dangerously-skip-permissions` inside the sandboxed container.
```
```diff
@@ -88,6 +90,20 @@ Claude Code launches automatically with `--dangerously-skip-permissions` inside
 3. Expand the **Config** panel and fill in your AWS credentials (see [AWS Bedrock Configuration](#aws-bedrock-configuration) below).
 4. Start the container again.

+**Ollama:**
+
+1. Stop the container first (settings can only be changed while stopped).
+2. In the project card, switch the auth mode to **Ollama**.
+3. Expand the **Config** panel and set the base URL of your Ollama server (defaults to `http://host.docker.internal:11434` for a local instance). Optionally set a model ID.
+4. Start the container again.
+
+**LiteLLM:**
+
+1. Stop the container first (settings can only be changed while stopped).
+2. In the project card, switch the auth mode to **LiteLLM**.
+3. Expand the **Config** panel and set the base URL of your LiteLLM proxy (defaults to `http://host.docker.internal:4000`). Optionally set an API key and model ID.
+4. Start the container again.

 ---

 ## The Interface
```
````diff
@@ -99,16 +115,16 @@ Claude Code launches automatically with `--dangerously-skip-permissions` inside
 │ Sidebar    │                                        │
 │            │          Terminal View                 │
 │  Projects  │           (xterm.js)                   │
 │  MCP       │                                        │
 │  Settings  │                                        │
 │            │                                        │
 ├────────────┴────────────────────────────────────────┤
 │ StatusBar   X projects · X running · X terminals    │
 └─────────────────────────────────────────────────────┘
 ```

-- **TopBar** — Terminal tabs for switching between sessions. Status dots on the right show Docker connection (green = connected) and image availability (green = ready).
-- **Sidebar** — Toggle between the **Projects** list and **Settings** panel.
-- **Terminal View** — Interactive terminal powered by xterm.js with WebGL rendering.
+- **TopBar** — Terminal tabs for switching between sessions. Bash shell tabs show a "(bash)" suffix. Status dots on the right show Docker connection (green = connected) and image availability (green = ready).
+- **Sidebar** — Toggle between the **Projects** list, **MCP** server configuration, and **Settings** panel.
+- **Terminal View** — Interactive terminal powered by xterm.js with WebGL rendering. Includes a **Jump to Current** button that appears when you scroll up, so you can quickly return to the latest output.
 - **StatusBar** — Counts of total projects, running containers, and open terminal sessions.

 ---
````
```diff
@@ -134,11 +150,17 @@ Select a project in the sidebar to see its action buttons:
 |--------|---------------|--------------|
 | **Start** | Stopped | Creates (if needed) and starts the container |
 | **Stop** | Running | Stops the container but preserves its state |
-| **Terminal** | Running | Opens a new terminal session in this container |
+| **Terminal** | Running | Opens a new Claude Code terminal session |
+| **Shell** | Running | Opens a bash login shell in the container (no Claude Code) |
+| **Files** | Running | Opens the file manager to browse, download, and upload files |
 | **Reset** | Stopped | Destroys and recreates the container from scratch |
 | **Config** | Always | Toggles the configuration panel |
 | **Remove** | Stopped | Deletes the project and its container (with confirmation) |

 ### Renaming a Project

 Double-click the project name in the sidebar to rename it inline. Press **Enter** to confirm or **Escape** to cancel.

 ### Container Lifecycle

 Containers use a **stop/start** model. When you stop a container, everything inside it is preserved — installed packages, modified files, downloaded tools. Starting it again resumes where you left off.
```
```diff
@@ -147,6 +169,10 @@ Containers use a **stop/start** model. When you stop a container, everything ins

 Only **Remove** deletes everything, including the config volume and any stored credentials.

+### Container Progress Feedback
+
+When starting, stopping, or resetting a container, a progress modal shows real-time status messages (e.g., "Setting up MCP network...", "Starting MCP containers...", "Creating container..."). If an error occurs, the modal displays the error with a **Close** button. A **Force Stop** option is available if the operation stalls. The modal auto-closes on success.
+
 ---

 ## Project Configuration
```
```diff
@@ -177,6 +203,19 @@ When enabled, the host Docker socket is mounted into the container so Claude Cod

 > Toggling this requires stopping and restarting the container to take effect.

+### Mission Control
+
+Toggle **Mission Control** to integrate [Flight Control](https://github.com/msieurthenardier/mission-control) — an AI-first development methodology — into the project. When enabled:
+
+- The Flight Control repository is automatically cloned into the container
+- Flight Control skills are installed to Claude Code's skill directory (`~/.claude/skills/`)
+- Project instructions are appended with Flight Control workflow guidance
+- The repository is symlinked at `/workspace/mission-control`
+
+Available skills include `/mission`, `/flight`, `/leg`, `/agentic-workflow`, `/flight-debrief`, `/mission-debrief`, `/daily-briefing`, and `/init-project`.
+
+> This setting can only be changed when the container is stopped. Toggling it triggers a container recreation on the next start.

 ### Environment Variables

 Click **Edit** to open the environment variables modal. Add key-value pairs that will be injected into the container. Per-project variables override global variables with the same key.
```
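The override rule can be sketched in plain shell (the variable names here are hypothetical, not Triple-C's actual keys):

```shell
# Sketch: per-project values win over global values with the same key.
global_env="LOG_LEVEL=info REGION=us-east-1"
project_env="LOG_LEVEL=debug"

# Apply global defaults first, then project overrides on top.
for kv in $global_env $project_env; do
  export "$kv"
done

echo "$LOG_LEVEL"   # debug: the project value wins
```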
```diff
@@ -188,8 +227,8 @@ Click **Edit** to open the environment variables modal. Add key-value pairs that
 Click **Edit** to map host ports to container ports. This is useful when Claude Code starts a web server or other service inside the container and you want to access it from your host browser.

 Each mapping specifies:
-- **Host Port** — The port on your machine (1–65535)
-- **Container Port** — The port inside the container (1–65535)
+- **Host Port** — The port on your machine (1-65535)
+- **Container Port** — The port inside the container (1-65535)
 - **Protocol** — TCP (default) or UDP
```
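A mapping corresponds to a Docker publish flag; a minimal sketch of how the three fields combine (the values are hypothetical):

```shell
# Sketch: a port mapping's three fields assemble into a docker -p flag.
host_port=8080
container_port=3000
protocol=udp   # TCP is the default; UDP must be stated explicitly

flag="-p ${host_port}:${container_port}/${protocol}"
echo "docker run ${flag} ..."   # docker run -p 8080:3000/udp ...
```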
```diff
 ### Claude Instructions

@@ -198,6 +237,128 @@ Click **Edit** to write per-project instructions for Claude Code. These are writ

 ---

+## MCP Servers (Beta)
+
+Triple-C supports [Model Context Protocol (MCP)](https://modelcontextprotocol.io/) servers, which extend Claude Code with access to external tools and data sources. MCP servers are configured in a **global library** and **enabled per-project**.
+
+### How It Works
+
+There are two dimensions to MCP server configuration:
+
+| | **Manual** (no Docker image) | **Docker** (Docker image specified) |
+|---|---|---|
+| **Stdio** | Command runs inside the project container | Command runs in a separate MCP container via `docker exec` |
+| **HTTP** | Connects to a URL you provide | Runs in a separate container, reached by hostname on a shared Docker network |
+
+**Docker images are pulled automatically** if not already present when the project starts.
+
+### Accessing MCP Configuration
+
+Click the **MCP** tab in the sidebar to open the MCP server library. This is where you define all available MCP servers.
+
+### Adding an MCP Server
+
+1. Type a name in the input field and click **Add**.
+2. Expand the server card and configure it.
+
+The key decision is whether to set a **Docker Image**:
+- **With Docker image** — The MCP server runs in its own isolated container. Best for servers that need specific dependencies or system-level packages.
+- **Without Docker image** (manual) — The command runs directly inside your project container. Best for lightweight npx-based servers that just need Node.js.
+
+Then choose the **Transport Type**:
+- **Stdio** — The MCP server communicates over stdin/stdout. This is the most common type.
+- **HTTP** — The MCP server exposes an HTTP endpoint (streamable HTTP transport).
+
+### Configuration Examples
+
+#### Example 1: Filesystem Server (Stdio, Manual)
+
+A simple npx-based server that runs inside the project container. No Docker image needed since Node.js is already installed.
+
+| Field | Value |
+|-------|-------|
+| **Docker Image** | *(empty)* |
+| **Transport** | Stdio |
+| **Command** | `npx` |
+| **Arguments** | `-y @modelcontextprotocol/server-filesystem /workspace` |
+
+This gives Claude Code access to browse and read files via MCP. The command runs directly inside the project container using the pre-installed Node.js.
+
+#### Example 2: GitHub Server (Stdio, Manual)
+
+Another npx-based server, with an environment variable for authentication.
+
+| Field | Value |
+|-------|-------|
+| **Docker Image** | *(empty)* |
+| **Transport** | Stdio |
+| **Command** | `npx` |
+| **Arguments** | `-y @modelcontextprotocol/server-github` |
+| **Environment Variables** | `GITHUB_PERSONAL_ACCESS_TOKEN` = `ghp_your_token` |
+
+#### Example 3: Custom MCP Server (HTTP, Docker)
+
+An MCP server packaged as a Docker image that exposes an HTTP endpoint.
+
+| Field | Value |
+|-------|-------|
+| **Docker Image** | `myregistry/my-mcp-server:latest` |
+| **Transport** | HTTP |
+| **Container Port** | `8080` |
+| **Environment Variables** | `API_KEY` = `your_key` |
+
+Triple-C will:
+1. Pull the image automatically if not present
+2. Start the container on the project's bridge network
+3. Configure Claude Code to reach it at `http://triple-c-mcp-{id}:8080/mcp`
+
+The hostname is the MCP container's name on the Docker network — **not** `localhost`.
```
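The endpoint in step 3 is plain string assembly, sketched below (the server id is hypothetical):

```shell
# Sketch: HTTP MCP endpoint = container hostname + container port on the
# project's bridge network. Docker DNS resolves the hostname, not localhost.
server_id="abc123"
container_port=8080

endpoint="http://triple-c-mcp-${server_id}:${container_port}/mcp"
echo "$endpoint"   # http://triple-c-mcp-abc123:8080/mcp
```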
```diff
+#### Example 4: Database Server (Stdio, Docker)
+
+An MCP server that needs its own runtime environment, communicating over stdio.
+
+| Field | Value |
+|-------|-------|
+| **Docker Image** | `mcp/postgres-server:latest` |
+| **Transport** | Stdio |
+| **Command** | `node` |
+| **Arguments** | `dist/index.js` |
+| **Environment Variables** | `DATABASE_URL` = `postgresql://user:pass@host:5432/db` |
+
+Triple-C will:
+1. Pull the image and start it on the project network
+2. Configure Claude Code to communicate via `docker exec -i triple-c-mcp-{id} node dist/index.js`
+3. Automatically enable Docker socket access on the project container (required for `docker exec`)
```
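The `docker exec` bridge in step 2 can be sketched the same way (the id `db1` is hypothetical):

```shell
# Sketch: how a stdio MCP command is addressed through docker exec.
# -i keeps stdin open so the MCP protocol can stream over stdin/stdout.
mcp_id="db1"
command="node dist/index.js"

exec_cmd="docker exec -i triple-c-mcp-${mcp_id} ${command}"
echo "$exec_cmd"   # docker exec -i triple-c-mcp-db1 node dist/index.js
```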
```diff
+### Enabling MCP Servers Per-Project
+
+In a project's configuration panel (click **Config**), the **MCP Servers** section shows checkboxes for all globally defined servers. Toggle each server on or off for that project. Changes take effect on the next container start.
+
+### How Docker-Based MCP Works
+
+When a project with Docker-based MCP servers starts:
+
+1. Missing Docker images are **automatically pulled** (progress shown in the progress modal)
+2. A dedicated **bridge network** is created for the project (`triple-c-net-{projectId}`)
+3. Each enabled Docker MCP server gets its own container on that network
+4. The main project container is connected to the same network
+5. MCP server configuration is written to `~/.claude.json` inside the container
+
+**Networking**: Docker-based MCP containers are reached by their container name as a hostname (e.g., `triple-c-mcp-{serverId}`), not by `localhost`. Docker DNS resolves these names automatically on the shared bridge network.
+
+**Stdio + Docker**: The project container uses `docker exec` to communicate with the MCP container over stdin/stdout. This automatically enables Docker socket access on the project container.
+
+**HTTP + Docker**: The project container connects to the MCP container's HTTP endpoint using the container hostname and port (e.g., `http://triple-c-mcp-{serverId}:3000/mcp`).
+
+**Manual (no Docker image)**: Stdio commands run directly inside the project container. HTTP URLs connect to wherever you point them (could be an external service or something running on the host).
+
+### Configuration Change Detection
+
+MCP server configuration is tracked via SHA-256 fingerprints stored as Docker labels. If you add, remove, or modify MCP servers for a project, the container is automatically recreated on the next start to apply the new configuration. The container filesystem is snapshotted first, so installed packages are preserved.
```
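The detection step can be sketched with standard tools; only the use of SHA-256 fingerprints comes from the text above, the config strings are illustrative:

```shell
# Sketch: detect MCP config changes by comparing SHA-256 fingerprints.
old_config='{"servers":["filesystem"]}'
new_config='{"servers":["filesystem","github"]}'

old_fp=$(printf '%s' "$old_config" | sha256sum | cut -d' ' -f1)
new_fp=$(printf '%s' "$new_config" | sha256sum | cut -d' ' -f1)

# A differing fingerprint means the stored label is stale: recreate.
if [ "$old_fp" != "$new_fp" ]; then
  echo "config changed: recreate container"
fi
```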
```diff
 ---

 ## AWS Bedrock Configuration

 To use Claude via AWS Bedrock instead of Anthropic's API, switch the auth mode to **Bedrock** on the project card.
```
```diff
@@ -227,6 +388,41 @@ Per-project settings always override these global defaults.

 ---

+## Ollama Configuration
+
+To use Claude Code with a local or remote Ollama server, switch the auth mode to **Ollama** on the project card.
+
+### Settings
+
+- **Base URL** — The URL of your Ollama server. Defaults to `http://host.docker.internal:11434`, which reaches a locally running Ollama instance from inside the container. For a remote server, use its IP or hostname (e.g., `http://192.168.1.100:11434`).
+- **Model ID** — Optional. Override the model to use (e.g., `qwen3.5:27b`).
+
+### How It Works
+
+Triple-C sets `ANTHROPIC_BASE_URL` to point Claude Code at your Ollama server instead of Anthropic's API. The `ANTHROPIC_AUTH_TOKEN` is set to `ollama` (required by Claude Code but not used for actual authentication).
+
+> **Note:** Ollama support is best-effort. Claude Code is designed for Anthropic models, so some features (tool use, extended thinking, prompt caching, etc.) may not work as expected with non-Anthropic models.
```
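In shell terms, Ollama mode amounts to roughly this environment (a sketch; both values are stated above):

```shell
# Sketch of the env vars Triple-C injects for Ollama mode.
export ANTHROPIC_BASE_URL="http://host.docker.internal:11434"
export ANTHROPIC_AUTH_TOKEN="ollama"   # required by Claude Code, not checked by Ollama

echo "$ANTHROPIC_BASE_URL"
```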
```diff
+
+---
+
+## LiteLLM Configuration
+
+To use Claude Code through a [LiteLLM](https://docs.litellm.ai/) proxy gateway, switch the auth mode to **LiteLLM** on the project card. LiteLLM supports 100+ model providers (OpenAI, Gemini, Anthropic, and more) through a single proxy.
+
+### Settings
+
+- **Base URL** — The URL of your LiteLLM proxy. Defaults to `http://host.docker.internal:4000` for a locally running proxy.
+- **API Key** — Optional. The API key for your LiteLLM proxy, if authentication is required. Stored securely in your OS keychain.
+- **Model ID** — Optional. Override the model to use.
+
+### How It Works
+
+Triple-C sets `ANTHROPIC_BASE_URL` to point Claude Code at your LiteLLM proxy. If an API key is provided, it is set as `ANTHROPIC_AUTH_TOKEN`.
+
+> **Note:** LiteLLM support is best-effort. Claude Code is designed for Anthropic models, so some features (tool use, extended thinking, prompt caching, etc.) may not work as expected when routing to non-Anthropic models through the proxy.

 ---

 ## Settings

 Access global settings via the **Settings** tab in the sidebar.
```
```diff
@@ -264,7 +460,11 @@ When an update is available, a pulsing **Update** button appears in the top bar.

 ### Multiple Sessions

-You can open multiple terminal sessions (even for the same project). Each session gets its own tab in the top bar. Click a tab to switch, or click the **x** on a tab to close it.
+You can open multiple terminal sessions (even for the same project). Each session gets its own tab in the top bar. Click a tab to switch, or click the **x** on a tab to close it. Tabs show the project name, with a "(bash)" suffix for shell sessions.
+
+### Bash Shell Sessions
+
+In addition to Claude Code terminals, you can open a plain **bash login shell** in any running container by clicking the **Shell** button. This is useful for manual inspection, package installation, debugging, or running commands that don't need Claude Code.

 ### URL Detection
```
```diff
@@ -272,9 +472,28 @@ When Claude Code prints a long URL (e.g., during `claude login`), Triple-C detec

 Shorter URLs in terminal output are also clickable directly.

+### Clipboard Support (OSC 52)
+
+Programs inside the container can copy text to your host clipboard. When a container program uses `xclip`, `xsel`, or `pbcopy`, the text is transparently forwarded to your host clipboard via OSC 52 escape sequences. No additional configuration is required — this works out of the box.
```
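The forwarding mechanism is just an escape sequence wrapping base64 text. A minimal sketch (not Triple-C's actual shim code):

```shell
# Minimal OSC 52 sketch: a supporting terminal copies the base64 payload
# to the host clipboard when it sees this escape sequence.
text="hello"
payload=$(printf '%s' "$text" | base64)
osc52=$(printf '\033]52;c;%s\a' "$payload")

printf '%s\n' "$payload"   # aGVsbG8=
```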
```diff
 ### Image Paste

-You can paste images from your clipboard into the terminal (Ctrl+V / Cmd+V). The image is uploaded to the container and the file path is injected into the terminal input so Claude Code can reference it.
+You can paste images from your clipboard into the terminal (Ctrl+V / Cmd+V). The image is uploaded to the container as `/tmp/clipboard_<timestamp>.png` and the file path is injected into the terminal input so Claude Code can reference it. A toast notification confirms the upload.
+
+### Jump to Current
+
+When you scroll up in the terminal to review previous output, a **Jump to Current** button appears in the bottom-right corner. Click it to scroll back to the latest output.
+
+### File Manager
+
+Click the **Files** button on a running project to open the file manager modal. You can:
+
+- **Browse** the container filesystem starting from `/workspace`, with breadcrumb navigation
+- **Download** any file to your host machine via the download button on each file entry
+- **Upload** files from your host into the current container directory
+- **Refresh** the directory listing at any time
+
+The file manager shows file names, sizes, and modification dates.

 ### Terminal Rendering
```
```diff
@@ -356,6 +575,8 @@ The sandbox container (Ubuntu 24.04) comes pre-installed with:
 | build-essential | — | C/C++ compiler toolchain |
 | openssh-client | — | SSH for git and remote access |

+The container also includes **clipboard shims** (`xclip`, `xsel`, `pbcopy`) that forward copy operations to the host via OSC 52, and an **audio shim** (`rec`, `arecord`) for future voice mode support.
+
 You can install additional tools at runtime with `sudo apt install`, `pip install`, `npm install -g`, etc. Installed packages persist across container stops (but not across resets).

 ---
```
```diff
@@ -378,7 +599,7 @@ You can install additional tools at runtime with `sudo apt install`, `pip instal

 - Check that the Docker image is "Ready" in Settings.
 - Verify that the mounted folder paths exist on your host.
-- Look at the error message displayed in red below the project card.
+- Look at the error message displayed in the progress modal.

 ### OAuth Login URL Not Opening

@@ -394,4 +615,10 @@ You can install additional tools at runtime with `sudo apt install`, `pip instal

 ### Settings Won't Save

 - Most project settings can only be changed when the container is **stopped**. Stop the container first, make your changes, then start it again.
-- Some changes (like toggling Docker access or changing mounted folders) trigger an automatic container recreation on the next start.
+- Some changes (like toggling Docker access, Mission Control, or changing mounted folders) trigger an automatic container recreation on the next start.
+
+### MCP Containers Not Starting
+
+- Ensure the Docker image for the MCP server exists (pull it first if needed).
+- Check that Docker socket access is available (stdio + Docker MCP servers auto-enable this).
+- Try resetting the project container to force a clean recreation.
```
### README.md (76)
```diff
@@ -27,10 +27,10 @@ Triple-C is a cross-platform desktop application that sandboxes Claude Code insi
 ### Container Lifecycle

 1. **Create**: New container created with bind mounts, env vars, and labels
-2. **Start**: Container started, entrypoint remaps UID/GID, sets up SSH, configures Docker group
-3. **Terminal**: `docker exec` launches Claude Code with a PTY
-4. **Stop**: Container halted (filesystem persists in named volume)
-5. **Restart**: Existing container restarted; recreated if settings changed (e.g., Docker access toggled)
+2. **Start**: Container started, entrypoint remaps UID/GID, sets up SSH, configures Docker group, sets up MCP servers
+3. **Terminal**: `docker exec` launches Claude Code (or bash shell) with a PTY
+4. **Stop**: Container halted (filesystem persists in named volume); MCP containers stopped
+5. **Restart**: Existing container restarted; recreated if settings changed (detected via SHA-256 fingerprint)
 6. **Reset**: Container removed and recreated from scratch (named volume preserved)
```
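Step 3 corresponds to an invocation along these lines (a sketch; the container name is an assumption, not the exact command Triple-C runs):

```shell
# Sketch of the PTY exec in lifecycle step 3 (names are illustrative).
# -it allocates a TTY and keeps stdin open for the interactive session.
container="triple-c-myproject"
cmd="docker exec -it ${container} claude --dangerously-skip-permissions"
echo "$cmd"
```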
```diff
 ### Mounts

@@ -41,14 +41,18 @@ Triple-C is a cross-platform desktop application that sandboxes Claude Code insi
 | `/home/claude/.claude` | `triple-c-claude-config-{projectId}` | Named Volume | Persists across container recreation |
 | `/tmp/.host-ssh` | SSH key directory | Bind | Read-only; entrypoint copies to `~/.ssh` |
 | `/home/claude/.aws` | AWS config directory | Bind | Read-only; for Bedrock auth |
-| `/var/run/docker.sock` | Host Docker socket | Bind | Only if "Allow container spawning" is ON |
+| `/var/run/docker.sock` | Host Docker socket | Bind | If "Allow container spawning" is ON, or auto-enabled by stdio+Docker MCP servers |

 ### Authentication Modes

 Each project can independently use one of:

 - **Anthropic** (OAuth): User runs `claude login` inside the terminal on first use. Token persisted in the config volume across restarts and resets.
-- **AWS Bedrock**: Per-project AWS credentials (static keys, profile, or bearer token).
+- **AWS Bedrock**: Per-project AWS credentials (static keys, profile, or bearer token). SSO sessions are validated before launching Claude for Profile auth.
+- **Ollama**: Connect to a local or remote Ollama server via `ANTHROPIC_BASE_URL` (e.g., `http://host.docker.internal:11434`). Optional model override.
+- **LiteLLM**: Connect through a LiteLLM proxy gateway via `ANTHROPIC_BASE_URL` + `ANTHROPIC_AUTH_TOKEN` to access 100+ model providers. API key stored securely in OS keychain.
+
+> **Note:** Ollama and LiteLLM support is best-effort. Claude Code is designed for Anthropic models, so some features (tool use, extended thinking, prompt caching, etc.) may not work as expected with non-Anthropic models behind these backends.
```
```diff
 ### Container Spawning (Sibling Containers)

@@ -56,6 +60,31 @@ When "Allow container spawning" is enabled per-project, the host Docker socket i

 If the Docker access setting is toggled after a container already exists, the container is automatically recreated on next start to apply the mount change. The named config volume (keyed by project ID) is preserved across recreation.

+### MCP Server Architecture
+
+Triple-C supports [Model Context Protocol (MCP)](https://modelcontextprotocol.io/) servers as a Beta feature. MCP servers extend Claude Code with external tools and data sources.
+
+**Modes**: Each MCP server operates in one of four modes based on transport type and whether a Docker image is specified:
+
+| Mode | Where It Runs | How It Communicates |
+|------|--------------|---------------------|
+| Stdio + Manual | Inside the project container | Direct stdin/stdout (e.g., `npx -y @mcp/server`) |
+| Stdio + Docker | Separate MCP container | `docker exec -i <mcp-container> <command>` from the project container |
+| HTTP + Manual | External / user-provided | Connects to the URL you specify |
+| HTTP + Docker | Separate MCP container | `http://<mcp-container>:<port>/mcp` via Docker DNS on a shared bridge network |
+
+**Key behaviors**:
+- **Global library**: MCP servers are defined globally in the MCP sidebar tab and stored in `mcp_servers.json`
+- **Per-project toggles**: Each project enables/disables individual servers via checkboxes
+- **Auto-pull**: Docker images for MCP servers are pulled automatically if not present when the project starts
+- **Docker networking**: Docker-based MCP containers run on a per-project bridge network (`triple-c-net-{projectId}`), reachable by container name — not localhost
+- **Auto-detection**: Config changes are detected via SHA-256 fingerprints and trigger automatic container recreation
+- **Config injection**: MCP server configuration is written to `~/.claude.json` inside the container via the `MCP_SERVERS_JSON` environment variable, merged by the entrypoint using `jq`
```
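The config-injection behavior can be sketched as follows; the jq filter in the comment is illustrative, only the `MCP_SERVERS_JSON` variable, the `~/.claude.json` path, and the use of `jq` come from the list above:

```shell
# Sketch: entrypoint-style injection of MCP config (payload is illustrative).
MCP_SERVERS_JSON='{"filesystem":{"command":"npx","args":["-y","@modelcontextprotocol/server-filesystem","/workspace"]}}'

# The entrypoint merges this into ~/.claude.json with jq, roughly:
#   jq --argjson mcp "$MCP_SERVERS_JSON" '.mcpServers += $mcp' ~/.claude.json

printf '%s\n' "$MCP_SERVERS_JSON" | grep -o '"command":"npx"'
```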
```diff
+
+### Mission Control Integration
+
+Optional per-project integration with [Flight Control](https://github.com/msieurthenardier/mission-control) — an AI-first development methodology. When enabled, the repo is cloned into the container, skills are installed, and workflow instructions are injected into CLAUDE.md.

 ### Docker Socket Path

 The socket path is OS-aware:
```
@@ -75,17 +104,32 @@ Users can override this in Settings via the global `docker_socket_path` option.
|
||||
 | `app/src/components/layout/StatusBar.tsx` | Running project/terminal counts |
 | `app/src/components/projects/ProjectCard.tsx` | Project config, auth mode, action buttons |
 | `app/src/components/projects/ProjectList.tsx` | Project list in sidebar |
-| `app/src/components/settings/SettingsPanel.tsx` | API key, Docker, AWS settings |
-| `app/src/components/terminal/TerminalView.tsx` | xterm.js terminal with WebGL, URL detection |
-| `app/src/components/terminal/TerminalTabs.tsx` | Tab bar for multiple terminal sessions |
-| `app/src-tauri/src/docker/container.rs` | Container creation, mounts, env vars, inspection |
-| `app/src-tauri/src/docker/exec.rs` | PTY exec sessions for terminal interaction |
+| `app/src/components/projects/FileManagerModal.tsx` | File browser modal (browse, download, upload) |
+| `app/src/components/projects/ContainerProgressModal.tsx` | Real-time container operation progress |
+| `app/src/components/mcp/McpPanel.tsx` | MCP server library (global configuration) |
+| `app/src/components/mcp/McpServerCard.tsx` | Individual MCP server configuration card |
+| `app/src/components/settings/SettingsPanel.tsx` | Docker, AWS, timezone, and global settings |
+| `app/src/components/terminal/TerminalView.tsx` | xterm.js terminal with WebGL, URL detection, OSC 52 clipboard, image paste |
+| `app/src/components/terminal/TerminalTabs.tsx` | Tab bar for multiple terminal sessions (claude + bash) |
+| `app/src/hooks/useTerminal.ts` | Terminal session management (claude and bash modes) |
+| `app/src/hooks/useFileManager.ts` | File manager operations (list, download, upload) |
+| `app/src/hooks/useMcpServers.ts` | MCP server CRUD operations |
+| `app/src/hooks/useVoice.ts` | Voice mode audio capture (currently hidden) |
+| `app/src-tauri/src/docker/container.rs` | Container creation, mounts, env vars, MCP injection, fingerprinting |
+| `app/src-tauri/src/docker/exec.rs` | PTY exec sessions, file upload/download via tar |
+| `app/src-tauri/src/docker/image.rs` | Image building/pulling |
+| `app/src-tauri/src/docker/network.rs` | Per-project bridge networks for MCP containers |
 | `app/src-tauri/src/commands/project_commands.rs` | Start/stop/rebuild Tauri command handlers |
-| `app/src-tauri/src/models/project.rs` | Project struct (auth mode, Docker access, etc.) |
-| `app/src-tauri/src/models/app_settings.rs` | Global settings (image source, Docker socket, AWS) |
-| `container/Dockerfile` | Ubuntu 24.04 sandbox image with Claude Code + dev tools |
-| `container/entrypoint.sh` | UID/GID remap, SSH setup, Docker group config |
+| `app/src-tauri/src/commands/file_commands.rs` | File manager Tauri commands (list, download, upload) |
+| `app/src-tauri/src/commands/mcp_commands.rs` | MCP server CRUD Tauri commands |
+| `app/src-tauri/src/models/project.rs` | Project struct (auth mode, Docker access, MCP servers, Mission Control) |
+| `app/src-tauri/src/models/mcp_server.rs` | MCP server struct (transport, Docker image, env vars) |
+| `app/src-tauri/src/models/app_settings.rs` | Global settings (image source, Docker socket, AWS, microphone) |
+| `app/src-tauri/src/storage/mcp_store.rs` | MCP server persistence (JSON with atomic writes) |
+| `container/Dockerfile` | Ubuntu 24.04 sandbox image with Claude Code + dev tools + clipboard/audio shims |
+| `container/entrypoint.sh` | UID/GID remap, SSH setup, Docker group config, MCP injection, Mission Control setup |
+| `container/osc52-clipboard` | Clipboard shim (xclip/xsel/pbcopy via OSC 52) |
+| `container/audio-shim` | Audio capture shim (rec/arecord via FIFO) for voice mode |

 ## CSS / Styling Notes
@@ -100,4 +144,6 @@ Users can override this in Settings via the global `docker_socket_path` option.

 **Pre-installed tools**: Claude Code, Node.js 22 LTS + pnpm, Python 3.12 + uv + ruff, Rust (stable), Docker CLI, git + gh, AWS CLI v2, ripgrep, openssh-client, build-essential
+
+**Shims**: `xclip`/`xsel`/`pbcopy` (OSC 52 clipboard forwarding), `rec`/`arecord` (audio FIFO for voice mode)

 **Default user**: `claude` (UID/GID 1000, remapped by entrypoint to match host)

74 TECHNICAL.md
@@ -154,13 +154,12 @@ The `.claude` configuration directory uses a **named Docker volume** (`triple-c-

 ### Authentication Modes

-Each project independently chooses one of three authentication methods:
+Each project independently chooses one of two authentication methods:

 | Mode | How It Works | When to Use |
 |------|-------------|-------------|
-| **Login (OAuth)** | User runs `claude login` or `/login` inside the terminal. OAuth URL opens in host browser via the web links addon. Token persists in the `.claude` config volume. | Personal use, interactive sessions |
-| **API Key** | Key stored in OS keychain, injected as `ANTHROPIC_API_KEY` env var at container creation. | Automated workflows, team-shared keys |
-| **AWS Bedrock** | Per-project AWS credentials (static, profile, or bearer token) injected as env vars. `~/.aws` config optionally bind-mounted read-only. | Enterprise environments using Bedrock |
+| **Anthropic (OAuth)** | User runs `claude login` or `/login` inside the terminal. OAuth URL opens in host browser via URL detection. Token persists in the `.claude` config volume. | Default — personal and team use |
+| **AWS Bedrock** | Per-project AWS credentials (static keys, profile, or bearer token) injected as env vars. `~/.aws` config optionally bind-mounted read-only. | Enterprise environments using Bedrock |

 ### UID/GID Remapping
@@ -213,13 +212,26 @@ The `TerminalView` component works around this with a **URL accumulator**:

 ```
 triple-c/
 ├── LICENSE                  # MIT
 ├── README.md                # Architecture overview
 ├── TECHNICAL.md             # This document
 ├── Triple-C.md              # Project overview
 ├── HOW-TO-USE.md            # User guide
 ├── BUILDING.md              # Build instructions
 ├── CLAUDE.md                # Claude Code instructions
 │
 ├── container/
 │   ├── Dockerfile           # Ubuntu 24.04 + all dev tools + Claude Code
-│   └── entrypoint.sh        # UID/GID remap, SSH setup, git config
+│   ├── entrypoint.sh        # UID/GID remap, SSH setup, git config, MCP injection
+│   ├── osc52-clipboard      # Clipboard shim (xclip/xsel/pbcopy via OSC 52)
+│   ├── audio-shim           # Audio capture shim (rec/arecord via FIFO)
+│   ├── triple-c-scheduler   # Bash-based cron task system
+│   └── triple-c-task-runner # Task execution runner for scheduler
 │
 ├── .gitea/
 │   └── workflows/
 │       ├── build-app.yml         # Build Tauri app (Linux/macOS/Windows)
 │       ├── build.yml             # Build container image (multi-arch)
 │       ├── sync-release.yml      # Mirror releases to GitHub
 │       └── backfill-releases.yml # Bulk copy releases to GitHub
 │
 └── app/                     # Tauri v2 desktop application
     ├── package.json         # React, xterm.js, zustand, tailwindcss

@@ -231,22 +243,28 @@ triple-c/
     │   ├── App.tsx          # Top-level layout
     │   ├── index.css        # CSS variables, dark theme, scrollbars
     │   ├── store/
-    │   │   └── appState.ts       # Zustand store (projects, sessions, UI)
+    │   │   └── appState.ts       # Zustand store (projects, sessions, MCP, UI)
     │   ├── hooks/
-    │   │   ├── useDocker.ts      # Docker status, image build
+    │   │   ├── useDocker.ts      # Docker status, image build/pull
+    │   │   ├── useFileManager.ts # File manager operations
+    │   │   ├── useMcpServers.ts  # MCP server CRUD
     │   │   ├── useProjects.ts    # Project CRUD operations
-    │   │   ├── useSettings.ts    # API key, app settings
-    │   │   └── useTerminal.ts    # Terminal I/O, resize, session events
+    │   │   ├── useSettings.ts    # App settings
+    │   │   ├── useTerminal.ts    # Terminal I/O, resize, session events
+    │   │   ├── useUpdates.ts     # App update checking
+    │   │   └── useVoice.ts       # Voice mode audio capture
     │   ├── lib/
     │   │   ├── types.ts          # TypeScript interfaces matching Rust models
     │   │   ├── tauri-commands.ts # Typed invoke() wrappers
     │   │   └── constants.ts      # App-wide constants
     │   └── components/
     │       ├── layout/           # Sidebar, TopBar, StatusBar
-    │       ├── projects/         # ProjectList, ProjectCard, AddProjectDialog
-    │       ├── terminal/         # TerminalView (xterm.js), TerminalTabs
-    │       ├── settings/         # ApiKeyInput, DockerSettings, AwsSettings
-    │       └── containers/       # SiblingContainers
+    │       ├── mcp/              # McpPanel, McpServerCard
+    │       ├── projects/         # ProjectCard, ProjectList, AddProjectDialog,
+    │       │                     #   FileManagerModal, ContainerProgressModal, modals
+    │       ├── settings/         # SettingsPanel, DockerSettings, AwsSettings,
+    │       │                     #   UpdateDialog
+    │       └── terminal/         # TerminalView (xterm.js), TerminalTabs, UrlToast
     │
     └── src-tauri/               # Rust backend
         ├── Cargo.toml           # Rust dependencies

@@ -256,23 +274,31 @@ triple-c/
         └── src/
            ├── lib.rs            # App builder, plugin + command registration
            ├── main.rs           # Entry point
            ├── logging.rs        # Log configuration
            ├── commands/         # Tauri command handlers
-           │   ├── docker_commands.rs
-           │   ├── project_commands.rs
-           │   ├── settings_commands.rs
-           │   └── terminal_commands.rs
+           │   ├── docker_commands.rs   # Docker status, image ops
+           │   ├── file_commands.rs     # File manager (list/download/upload)
+           │   ├── mcp_commands.rs      # MCP server CRUD
+           │   ├── project_commands.rs  # Start/stop/rebuild containers
+           │   ├── settings_commands.rs # Settings CRUD
+           │   ├── terminal_commands.rs # Terminal I/O, resize
+           │   └── update_commands.rs   # App update checking
            ├── docker/           # Docker API layer
            │   ├── client.rs     # bollard singleton connection
-           │   ├── container.rs  # Create, start, stop, remove, inspect
+           │   ├── container.rs  # Create, start, stop, remove, fingerprinting
            │   ├── exec.rs       # PTY exec sessions with bidirectional streaming
-           │   ├── image.rs      # Build from embedded Dockerfile, pull from registry
-           │   └── sibling.rs    # List non-Triple-C containers
+           │   ├── image.rs      # Build from Dockerfile, pull from registry
+           │   └── network.rs    # Per-project bridge networks for MCP
            ├── models/           # Data structures
            │   ├── project.rs    # Project, AuthMode, BedrockConfig
-           │   └── container_config.rs
+           │   ├── mcp_server.rs        # MCP server configuration
+           │   ├── app_settings.rs      # Global settings (image source, AWS, etc.)
+           │   ├── container_config.rs  # Image name resolution
+           │   └── update_info.rs       # Update metadata
            └── storage/          # Persistence
               ├── projects_store.rs     # JSON file with atomic writes
-              ├── settings_store.rs     # App settings
+              ├── mcp_store.rs          # MCP server persistence
+              ├── settings_store.rs     # App settings (Tauri plugin-store)
               └── secure.rs             # OS keychain via keyring
 ```
@@ -1,7 +1,7 @@
 {
   "name": "triple-c",
   "private": true,
-  "version": "0.1.0",
+  "version": "0.2.0",
   "type": "module",
   "scripts": {
     "dev": "vite",

2 app/src-tauri/Cargo.lock (generated)
@@ -4668,7 +4668,7 @@ dependencies = [

 [[package]]
 name = "triple-c"
-version = "0.1.0"
+version = "0.2.0"
 dependencies = [
  "bollard",
  "chrono",

@@ -1,6 +1,6 @@
 [package]
 name = "triple-c"
-version = "0.1.0"
+version = "0.2.0"
 edition = "2021"

 [lib]

30 app/src-tauri/src/commands/aws_commands.rs (new file)
@@ -0,0 +1,30 @@
+use tauri::State;
+use crate::AppState;
+
+#[tauri::command]
+pub async fn aws_sso_refresh(
+    project_id: String,
+    state: State<'_, AppState>,
+) -> Result<(), String> {
+    let project = state.projects_store.get(&project_id)
+        .ok_or_else(|| format!("Project {} not found", project_id))?;
+
+    let profile = project.bedrock_config.as_ref()
+        .and_then(|b| b.aws_profile.clone())
+        .or_else(|| state.settings_store.get().global_aws.aws_profile.clone())
+        .unwrap_or_else(|| "default".to_string());
+
+    log::info!("Running host-side AWS SSO login for profile '{}'", profile);
+
+    let status = tokio::process::Command::new("aws")
+        .args(["sso", "login", "--profile", &profile])
+        .status()
+        .await
+        .map_err(|e| format!("Failed to run aws sso login: {}", e))?;
+
+    if !status.success() {
+        return Err("SSO login failed or was cancelled".to_string());
+    }
+
+    Ok(())
+}
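The new `aws_sso_refresh` command shells out to the AWS CLI and maps a non-zero exit status to an error. A synchronous sketch of the same status-checking pattern using `std::process` (the real handler uses `tokio::process`; the program and arguments are parameterized here purely for illustration):

```rust
use std::process::Command;

// Run an external login command and convert its exit status into a Result,
// mirroring the aws_sso_refresh handler above (which invokes
// `aws sso login --profile <profile>` asynchronously).
fn run_login(program: &str, args: &[&str]) -> Result<(), String> {
    let status = Command::new(program)
        .args(args)
        .status()
        .map_err(|e| format!("Failed to run {}: {}", program, e))?;
    if !status.success() {
        return Err("Login failed or was cancelled".to_string());
    }
    Ok(())
}
```

With `program = "aws"` and `args = ["sso", "login", "--profile", profile]` this reproduces the handler's behavior minus the async runtime.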
@@ -1,3 +1,4 @@
+pub mod aws_commands;
 pub mod docker_commands;
 pub mod file_commands;
 pub mod mcp_commands;
@@ -34,6 +34,11 @@ fn store_secrets_for_project(project: &Project) -> Result<(), String> {
             secure::store_project_secret(&project.id, "aws-bearer-token", v)?;
         }
     }
+    if let Some(ref litellm) = project.litellm_config {
+        if let Some(ref v) = litellm.api_key {
+            secure::store_project_secret(&project.id, "litellm-api-key", v)?;
+        }
+    }
     Ok(())
 }

@@ -51,6 +56,10 @@ fn load_secrets_for_project(project: &mut Project) {
         bedrock.aws_bearer_token = secure::get_project_secret(&project.id, "aws-bearer-token")
             .unwrap_or(None);
     }
+    if let Some(ref mut litellm) = project.litellm_config {
+        litellm.api_key = secure::get_project_secret(&project.id, "litellm-api-key")
+            .unwrap_or(None);
+    }
 }
 /// Resolve enabled MCP servers and filter to Docker-only ones.

@@ -180,6 +189,22 @@ pub async fn start_project_container(
         }
     }

+    if project.auth_mode == AuthMode::Ollama {
+        let ollama = project.ollama_config.as_ref()
+            .ok_or_else(|| "Ollama auth mode selected but no Ollama configuration found.".to_string())?;
+        if ollama.base_url.is_empty() {
+            return Err("Ollama base URL is required.".to_string());
+        }
+    }
+
+    if project.auth_mode == AuthMode::LiteLlm {
+        let litellm = project.litellm_config.as_ref()
+            .ok_or_else(|| "LiteLLM auth mode selected but no LiteLLM configuration found.".to_string())?;
+        if litellm.base_url.is_empty() {
+            return Err("LiteLLM base URL is required.".to_string());
+        }
+    }
+
     // Update status to starting
     state.projects_store.update_status(&project_id, ProjectStatus::Starting)?;
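These checks fail fast, before any Docker work begins: a selected auth mode without its config, or with an empty base URL, aborts the start. A minimal sketch of that validation (struct shape assumed from the `models/` definitions elsewhere in this diff):

```rust
// Stand-in for models::OllamaConfig; only the field used by validation.
struct OllamaConfig {
    base_url: String,
}

// Mirrors the pre-start check: missing config or empty base URL is an error.
fn validate_ollama(cfg: Option<&OllamaConfig>) -> Result<(), String> {
    let cfg = cfg.ok_or_else(|| {
        "Ollama auth mode selected but no Ollama configuration found.".to_string()
    })?;
    if cfg.base_url.is_empty() {
        return Err("Ollama base URL is required.".to_string());
    }
    Ok(())
}
```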
@@ -202,6 +227,28 @@ pub async fn start_project_container(

     // Set up Docker network and MCP containers if needed
     let network_name = if !docker_mcp.is_empty() {
+        // Pull any missing MCP Docker images before starting containers
+        for server in &docker_mcp {
+            if let Some(ref image) = server.docker_image {
+                if !docker::image_exists(image).await.unwrap_or(false) {
+                    emit_progress(
+                        &app_handle,
+                        &project_id,
+                        &format!("Pulling MCP image for '{}'...", server.name),
+                    );
+                    let image_clone = image.clone();
+                    let app_clone = app_handle.clone();
+                    let pid_clone = project_id.clone();
+                    let sname = server.name.clone();
+                    docker::pull_image(&image_clone, move |msg| {
+                        emit_progress(&app_clone, &pid_clone, &format!("[{}] {}", sname, msg));
+                    }).await.map_err(|e| {
+                        format!("Failed to pull MCP image '{}' for '{}': {}", image, server.name, e)
+                    })?;
+                }
+            }
+        }
+
         emit_progress(&app_handle, &project_id, "Setting up MCP network...");
         let net = docker::ensure_project_network(&project.id).await?;
         emit_progress(&app_handle, &project_id, "Starting MCP containers...");
@@ -386,6 +433,46 @@ pub async fn rebuild_project_container(
     start_project_container(project_id, app_handle, state).await
 }

+/// Reconcile project statuses against actual Docker container state.
+/// Called by the frontend after Docker is confirmed available. Projects
+/// marked as Running whose containers are no longer running get reset
+/// to Stopped.
+#[tauri::command]
+pub async fn reconcile_project_statuses(
+    state: State<'_, AppState>,
+) -> Result<Vec<Project>, String> {
+    let projects = state.projects_store.list();
+
+    for project in &projects {
+        if project.status != ProjectStatus::Running && project.status != ProjectStatus::Error {
+            continue;
+        }
+
+        let is_running = if let Some(ref container_id) = project.container_id {
+            docker::is_container_running(container_id).await.unwrap_or(false)
+        } else {
+            false
+        };
+
+        if is_running {
+            log::info!(
+                "Project '{}' ({}) container is still running — keeping Running status",
+                project.name,
+                project.id
+            );
+        } else {
+            log::info!(
+                "Project '{}' ({}) container is not running — setting to Stopped",
+                project.name,
+                project.id
+            );
+            let _ = state.projects_store.update_status(&project.id, ProjectStatus::Stopped);
+        }
+    }
+
+    Ok(state.projects_store.list())
+}
+
 fn default_docker_socket() -> String {
     if cfg!(target_os = "windows") {
         "//./pipe/docker_engine".to_string()
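The decision rule inside `reconcile_project_statuses` can be isolated as a pure function, which makes it easy to see: only Running/Error projects are checked against Docker, and only a dead (or missing) container demotes them to Stopped. A sketch:

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
enum ProjectStatus {
    Stopped,
    Starting,
    Running,
    Stopping,
    Error,
}

// Pure core of the reconciliation loop: statuses other than Running/Error
// are left untouched; Running/Error with no live container becomes Stopped.
fn reconcile(status: ProjectStatus, container_running: bool) -> ProjectStatus {
    match status {
        ProjectStatus::Running | ProjectStatus::Error if !container_running => {
            ProjectStatus::Stopped
        }
        other => other,
    }
}
```

The Tauri command wraps this with the actual `is_container_running` probe and persists the result via the projects store.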
@@ -40,11 +40,12 @@ if aws sts get-caller-identity --profile '{profile}' >/dev/null 2>&1; then
     echo "AWS session valid."
 else
     echo "AWS session expired or invalid."
-    # Check if this profile uses SSO (has sso_start_url configured)
-    if aws configure get sso_start_url --profile '{profile}' >/dev/null 2>&1; then
-        echo "Starting SSO login — click the URL below to authenticate:"
+    # Check if this profile uses SSO (has sso_start_url or sso_session configured)
+    if aws configure get sso_start_url --profile '{profile}' >/dev/null 2>&1 || \
+       aws configure get sso_session --profile '{profile}' >/dev/null 2>&1; then
+        echo "Starting SSO login..."
         echo ""
-        aws sso login --profile '{profile}'
+        triple-c-sso-refresh
         if [ $? -ne 0 ]; then
             echo ""
             echo "SSO login failed or was cancelled. Starting Claude anyway..."
@@ -47,8 +47,8 @@ The `/workspace/mission-control/` directory contains **Flight Control** — an A

 ### How It Works

 - **Mission Control is a tool, not a project.** It provides skills and methodology for managing other projects.
-- All Flight Control skills live in `/workspace/mission-control/.claude/skills/`
-- The projects registry at `/workspace/mission-control/projects.md` lists all active projects
+- All Flight Control skills are installed as personal skills in `~/.claude/skills/` and are automatically available as `/slash-commands`
+- The methodology docs and project registry live in `/workspace/mission-control/`

 ### When to Use
@@ -231,6 +231,33 @@ fn compute_bedrock_fingerprint(project: &Project) -> String {
     }
 }

+/// Compute a fingerprint for the Ollama configuration so we can detect changes.
+fn compute_ollama_fingerprint(project: &Project) -> String {
+    if let Some(ref ollama) = project.ollama_config {
+        let parts = vec![
+            ollama.base_url.clone(),
+            ollama.model_id.as_deref().unwrap_or("").to_string(),
+        ];
+        sha256_hex(&parts.join("|"))
+    } else {
+        String::new()
+    }
+}
+
+/// Compute a fingerprint for the LiteLLM configuration so we can detect changes.
+fn compute_litellm_fingerprint(project: &Project) -> String {
+    if let Some(ref litellm) = project.litellm_config {
+        let parts = vec![
+            litellm.base_url.clone(),
+            litellm.api_key.as_deref().unwrap_or("").to_string(),
+            litellm.model_id.as_deref().unwrap_or("").to_string(),
+        ];
+        sha256_hex(&parts.join("|"))
+    } else {
+        String::new()
+    }
+}
+
 /// Compute a fingerprint for the project paths so we can detect changes.
 /// Sorted by mount_name so order changes don't cause spurious recreation.
 fn compute_paths_fingerprint(paths: &[ProjectPath]) -> String {
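Both new fingerprint helpers follow the same recipe: join the config fields with `|` and hash the result, so any field change produces a different container label. A sketch using a std hasher as a stand-in for the real `sha256_hex` (any stable digest works for change detection; SHA-256 is what the code actually uses):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in for sha256_hex: a stable hex digest of the input string.
fn digest_hex(s: &str) -> String {
    let mut h = DefaultHasher::new();
    s.hash(&mut h);
    format!("{:016x}", h.finish())
}

// Stand-in for models::OllamaConfig.
struct OllamaConfig {
    base_url: String,
    model_id: Option<String>,
}

// Mirrors compute_ollama_fingerprint: join fields with "|" and hash,
// with a missing model_id contributing the empty string.
fn fingerprint(cfg: &OllamaConfig) -> String {
    let parts = [
        cfg.base_url.clone(),
        cfg.model_id.as_deref().unwrap_or("").to_string(),
    ];
    digest_hex(&parts.join("|"))
}
```

Calling `fingerprint` twice on the same config yields the same digest; changing the base URL or model produces a different one, which is what drives container recreation.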
@@ -459,6 +486,7 @@ pub async fn create_container(
             if let Some(p) = profile {
                 env_vars.push(format!("AWS_PROFILE={}", p));
             }
+            env_vars.push("AWS_SSO_AUTH_REFRESH_CMD=triple-c-sso-refresh".to_string());
         }
         BedrockAuthMethod::BearerToken => {
             if let Some(ref token) = bedrock.aws_bearer_token {
@@ -477,6 +505,30 @@ pub async fn create_container(
         }
     }

+    // Ollama configuration
+    if project.auth_mode == AuthMode::Ollama {
+        if let Some(ref ollama) = project.ollama_config {
+            env_vars.push(format!("ANTHROPIC_BASE_URL={}", ollama.base_url));
+            env_vars.push("ANTHROPIC_AUTH_TOKEN=ollama".to_string());
+            if let Some(ref model) = ollama.model_id {
+                env_vars.push(format!("ANTHROPIC_MODEL={}", model));
+            }
+        }
+    }
+
+    // LiteLLM configuration
+    if project.auth_mode == AuthMode::LiteLlm {
+        if let Some(ref litellm) = project.litellm_config {
+            env_vars.push(format!("ANTHROPIC_BASE_URL={}", litellm.base_url));
+            if let Some(ref key) = litellm.api_key {
+                env_vars.push(format!("ANTHROPIC_AUTH_TOKEN={}", key));
+            }
+            if let Some(ref model) = litellm.model_id {
+                env_vars.push(format!("ANTHROPIC_MODEL={}", model));
+            }
+        }
+    }
+
     // Custom environment variables (global + per-project, project overrides global for same key)
     let merged_env = merge_custom_env_vars(global_custom_env_vars, &project.custom_env_vars);
     let reserved_prefixes = ["ANTHROPIC_", "AWS_", "GIT_", "HOST_", "CLAUDE_", "TRIPLE_C_"];
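The custom env-var handling noted in the trailing context (project overrides global for the same key, reserved prefixes blocked) can be sketched as follows. `merge_env` and its signature are illustrative, not the real `merge_custom_env_vars`, whose definition is not shown in this diff:

```rust
use std::collections::BTreeMap;

// Merge global and per-project custom env vars into KEY=VALUE strings.
// Project entries are inserted last, so they override global entries with
// the same key; keys matching a reserved prefix are dropped so user vars
// cannot clobber auth/config vars like ANTHROPIC_BASE_URL.
fn merge_env(
    global: &[(String, String)],
    project: &[(String, String)],
    reserved: &[&str],
) -> Vec<String> {
    let mut map: BTreeMap<String, String> = BTreeMap::new();
    for (k, v) in global.iter().chain(project.iter()) {
        map.insert(k.clone(), v.clone());
    }
    map.into_iter()
        .filter(|(k, _)| !reserved.iter().any(|p| k.starts_with(p)))
        .map(|(k, v)| format!("{}={}", k, v))
        .collect()
}
```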
@@ -645,6 +697,8 @@ pub async fn create_container(
     labels.insert("triple-c.auth-mode".to_string(), format!("{:?}", project.auth_mode));
     labels.insert("triple-c.paths-fingerprint".to_string(), compute_paths_fingerprint(&project.paths));
     labels.insert("triple-c.bedrock-fingerprint".to_string(), compute_bedrock_fingerprint(project));
+    labels.insert("triple-c.ollama-fingerprint".to_string(), compute_ollama_fingerprint(project));
+    labels.insert("triple-c.litellm-fingerprint".to_string(), compute_litellm_fingerprint(project));
     labels.insert("triple-c.ports-fingerprint".to_string(), compute_ports_fingerprint(&project.port_mappings));
     labels.insert("triple-c.image".to_string(), image_name.to_string());
     labels.insert("triple-c.timezone".to_string(), timezone.unwrap_or("").to_string());
@@ -884,6 +938,22 @@ pub async fn container_needs_recreation(
         return Ok(true);
     }

+    // ── Ollama config fingerprint ────────────────────────────────────────
+    let expected_ollama_fp = compute_ollama_fingerprint(project);
+    let container_ollama_fp = get_label("triple-c.ollama-fingerprint").unwrap_or_default();
+    if container_ollama_fp != expected_ollama_fp {
+        log::info!("Ollama config mismatch");
+        return Ok(true);
+    }
+
+    // ── LiteLLM config fingerprint ───────────────────────────────────────
+    let expected_litellm_fp = compute_litellm_fingerprint(project);
+    let container_litellm_fp = get_label("triple-c.litellm-fingerprint").unwrap_or_default();
+    if container_litellm_fp != expected_litellm_fp {
+        log::info!("LiteLLM config mismatch");
+        return Ok(true);
+    }
+
     // ── Image ────────────────────────────────────────────────────────────
     // The image label is set at creation time; if the user changed the
     // configured image we need to recreate. We only compare when the
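Each fingerprint check compares a container label against the freshly computed value, with a missing label reading as the empty string via `unwrap_or_default()`. Since a project without an Ollama/LiteLLM config also fingerprints to the empty string, containers created before these labels existed are not spuriously recreated. A sketch of that comparison:

```rust
use std::collections::HashMap;

// Compare a container label against the expected fingerprint. A label that
// was never set reads as "", which only mismatches when the project
// actually has a config whose fingerprint is non-empty.
fn label_mismatch(labels: &HashMap<String, String>, key: &str, expected: &str) -> bool {
    labels.get(key).map(String::as_str).unwrap_or_default() != expected
}
```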
@@ -1031,6 +1101,16 @@ pub async fn get_container_info(project: &Project) -> Result<Option<ContainerInf
         }
     }

+/// Check whether a Docker container is currently running.
+/// Returns false if the container doesn't exist or Docker is unavailable.
+pub async fn is_container_running(container_id: &str) -> Result<bool, String> {
+    let docker = get_docker()?;
+    match docker.inspect_container(container_id, None).await {
+        Ok(info) => Ok(info.state.and_then(|s| s.running).unwrap_or(false)),
+        Err(_) => Ok(false),
+    }
+}
+
 pub async fn list_sibling_containers() -> Result<Vec<ContainerSummary>, String> {
     let docker = get_docker()?;

@@ -1063,11 +1143,6 @@ pub fn any_stdio_docker_mcp(servers: &[McpServer]) -> bool {
     servers.iter().any(|s| s.is_docker() && s.transport_type == McpTransportType::Stdio)
 }

-/// Returns true if any MCP server uses Docker.
-pub fn any_docker_mcp(servers: &[McpServer]) -> bool {
-    servers.iter().any(|s| s.is_docker())
-}
-
 /// Find an existing MCP container by its expected name.
 pub async fn find_mcp_container(server: &McpServer) -> Result<Option<String>, String> {
     let docker = get_docker()?;
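The inspect response nests the running flag two `Option`s deep, and the new helper collapses it with `and_then`/`unwrap_or`. A sketch with stand-in types for bollard's inspect response (field shapes assumed to match what the helper reads):

```rust
// Stand-ins for the relevant slice of bollard's container inspect response.
struct ContainerState {
    running: Option<bool>,
}
struct InspectInfo {
    state: Option<ContainerState>,
}

// Any missing layer (no state, no running flag) collapses to false, the
// same conservative default is_container_running uses for inspect errors.
fn is_running(info: &InspectInfo) -> bool {
    info.state.as_ref().and_then(|s| s.running).unwrap_or(false)
}
```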
@@ -22,6 +22,7 @@ impl ExecSession {
             .map_err(|e| format!("Failed to send input: {}", e))
     }

+    #[allow(dead_code)]
     pub async fn resize(&self, cols: u16, rows: u16) -> Result<(), String> {
         let docker = get_docker()?;
         docker

@@ -4,8 +4,13 @@ pub mod image;
 pub mod exec;
 pub mod network;

+#[allow(unused_imports)]
 pub use client::*;
+#[allow(unused_imports)]
 pub use container::*;
+#[allow(unused_imports)]
 pub use image::*;
+#[allow(unused_imports)]
 pub use exec::*;
+#[allow(unused_imports)]
 pub use network::*;
@@ -48,6 +48,7 @@ pub async fn ensure_project_network(project_id: &str) -> Result<String, String>
 }

 /// Connect a container to the project network.
+#[allow(dead_code)]
 pub async fn connect_container_to_network(
     container_id: &str,
     network_name: &str,
@@ -88,6 +88,7 @@ pub fn run() {
             commands::project_commands::start_project_container,
             commands::project_commands::stop_project_container,
             commands::project_commands::rebuild_project_container,
+            commands::project_commands::reconcile_project_statuses,
             // Settings
             commands::settings_commands::get_settings,
             commands::settings_commands::update_settings,

@@ -113,6 +114,8 @@ pub fn run() {
             commands::mcp_commands::add_mcp_server,
             commands::mcp_commands::update_mcp_server,
             commands::mcp_commands::remove_mcp_server,
+            // AWS
+            commands::aws_commands::aws_sso_refresh,
             // Updates
             commands::update_commands::get_app_version,
             commands::update_commands::check_for_updates,
@@ -33,6 +33,8 @@ pub struct Project {
     pub status: ProjectStatus,
     pub auth_mode: AuthMode,
     pub bedrock_config: Option<BedrockConfig>,
+    pub ollama_config: Option<OllamaConfig>,
+    pub litellm_config: Option<LiteLlmConfig>,
     pub allow_docker_access: bool,
     #[serde(default)]
     pub mission_control_enabled: bool,
@@ -74,6 +76,9 @@ pub enum AuthMode {
     #[serde(alias = "login", alias = "api_key")]
     Anthropic,
     Bedrock,
+    Ollama,
+    #[serde(alias = "litellm")]
+    LiteLlm,
 }

 impl Default for AuthMode {
@@ -115,6 +120,29 @@ pub struct BedrockConfig {
     pub disable_prompt_caching: bool,
 }

+/// Ollama configuration for a project.
+/// Ollama exposes an Anthropic-compatible API endpoint.
+#[derive(Debug, Clone, Serialize, Deserialize)]
+pub struct OllamaConfig {
+    /// The base URL of the Ollama server (e.g., "http://host.docker.internal:11434" or "http://192.168.1.100:11434")
+    pub base_url: String,
+    /// Optional model override (e.g., "qwen3.5:27b")
+    pub model_id: Option<String>,
+}
+
+/// LiteLLM gateway configuration for a project.
+/// LiteLLM translates Anthropic API calls to 100+ model providers.
+#[derive(Debug, Clone, Serialize, Deserialize)]
+pub struct LiteLlmConfig {
+    /// The base URL of the LiteLLM proxy (e.g., "http://host.docker.internal:4000" or "https://litellm.example.com")
+    pub base_url: String,
+    /// API key for the LiteLLM proxy
+    #[serde(skip_serializing, default)]
+    pub api_key: Option<String>,
+    /// Optional model override
+    pub model_id: Option<String>,
+}
+
 impl Project {
     pub fn new(name: String, paths: Vec<ProjectPath>) -> Self {
         let now = chrono::Utc::now().to_rfc3339();
@@ -126,6 +154,8 @@ impl Project {
             status: ProjectStatus::Stopped,
             auth_mode: AuthMode::default(),
             bedrock_config: None,
+            ollama_config: None,
+            litellm_config: None,
             allow_docker_access: false,
             mission_control_enabled: false,
             ssh_key_path: None,
@@ -3,7 +3,11 @@ pub mod secure;
 pub mod settings_store;
 pub mod mcp_store;

+#[allow(unused_imports)]
 pub use projects_store::*;
+#[allow(unused_imports)]
 pub use secure::*;
+#[allow(unused_imports)]
 pub use settings_store::*;
+#[allow(unused_imports)]
 pub use mcp_store::*;
@@ -72,6 +72,8 @@ impl ProjectsStore {

         // Reconcile stale transient statuses: on a cold app start no Docker
         // operations can be in flight, so Starting/Stopping are always stale.
+        // Running/Error are left as-is and reconciled against Docker later
+        // via the reconcile_project_statuses command.
         let mut projects = projects;
         let mut needs_save = needs_save;
         for p in projects.iter_mut() {
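Both `projects_store.rs` and `mcp_store.rs` are described as "JSON with atomic writes". The usual pattern behind that phrase is to write a temp file in the same directory, flush it, then rename over the target, so a crash can never leave a half-written store file. A sketch (the store's actual helper names are not shown in this diff):

```rust
use std::fs;
use std::io::Write;
use std::path::Path;

// Write `data` to `path` atomically: a rename on the same filesystem either
// fully replaces the old file or leaves it untouched.
fn atomic_write(path: &Path, data: &[u8]) -> std::io::Result<()> {
    let tmp = path.with_extension("tmp");
    {
        let mut f = fs::File::create(&tmp)?;
        f.write_all(data)?;
        f.sync_all()?; // flush contents to disk before the rename
    }
    fs::rename(&tmp, path)
}
```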
@@ -1,7 +1,7 @@
 {
   "$schema": "https://raw.githubusercontent.com/tauri-apps/tauri/dev/crates/tauri-cli/schema.json",
   "productName": "Triple-C",
-  "version": "0.1.0",
+  "version": "0.2.0",
   "identifier": "com.triple-c.desktop",
   "build": {
     "beforeDevCommand": "npm run dev",
@@ -10,6 +10,7 @@ import { useProjects } from "./hooks/useProjects";
 import { useMcpServers } from "./hooks/useMcpServers";
 import { useUpdates } from "./hooks/useUpdates";
 import { useAppState } from "./store/appState";
+import { reconcileProjectStatuses } from "./lib/tauri-commands";

 export default function App() {
   const { checkDocker, checkImage, startDockerPolling } = useDocker();

@@ -17,8 +18,8 @@ export default function App() {
   const { refresh } = useProjects();
   const { refresh: refreshMcp } = useMcpServers();
   const { loadVersion, checkForUpdates, startPeriodicCheck } = useUpdates();
-  const { sessions, activeSessionId } = useAppState(
-    useShallow(s => ({ sessions: s.sessions, activeSessionId: s.activeSessionId }))
+  const { sessions, activeSessionId, setProjects } = useAppState(
+    useShallow(s => ({ sessions: s.sessions, activeSessionId: s.activeSessionId, setProjects: s.setProjects }))
   );

   // Initialize on mount

@@ -28,6 +29,14 @@ export default function App() {
     checkDocker().then((available) => {
       if (available) {
         checkImage();
+        // Reconcile project statuses against actual Docker container state,
+        // then refresh the project list so the UI reflects reality.
+        reconcileProjectStatuses().then((projects) => {
+          setProjects(projects);
+        }).catch(() => {
+          // If reconciliation fails (e.g. Docker hiccup), just load from store
+          refresh();
+        });
       } else {
         stopPolling = startDockerPolling();
       }
@@ -147,7 +147,7 @@ export default function McpServerCard({ server, onUpdate, onRemove }: Props) {
          className={inputCls}
        />
        <p className="text-xs text-[var(--text-secondary)] mt-0.5 opacity-60">
-         Set a Docker image to run this MCP server as a container. Leave empty for manual mode.
+         Set a Docker image to run this MCP server in its own container. Leave empty to run commands inside the project container. Images are pulled automatically if not present.
        </p>
      </div>

@@ -171,6 +171,14 @@ export default function McpServerCard({ server, onUpdate, onRemove }: Props) {
        </div>
      </div>

      {/* Mode description */}
      <p className="text-xs text-[var(--text-secondary)] opacity-60">
        {transportType === "stdio" && isDocker && "Runs via docker exec in a separate MCP container."}
        {transportType === "stdio" && !isDocker && "Runs inside the project container (e.g. npx commands)."}
        {transportType === "http" && isDocker && "Runs in a separate container, reached by hostname on the project network."}
        {transportType === "http" && !isDocker && "Connects to an MCP server at the URL you specify."}
      </p>

      {/* Container Port (HTTP+Docker only) */}
      {transportType === "http" && isDocker && (
        <div>
@@ -183,7 +191,7 @@ export default function McpServerCard({ server, onUpdate, onRemove }: Props) {
            className={inputCls}
          />
          <p className="text-xs text-[var(--text-secondary)] mt-0.5 opacity-60">
-           Port inside the MCP container (default: 3000)
+           Port the MCP server listens on inside its container. The URL is auto-generated as http://<container>:<port>/mcp on the project network.
          </p>
        </div>
      )}

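The auto-generated URL described in the help text above follows a simple shape. A minimal sketch (the helper name is hypothetical; the actual generation happens in the Rust backend):

```typescript
// Sketch of the URL derivation from the help text above: the MCP container is
// reachable by its container name on the project's Docker network, on the
// configured port (3000 by default). Hypothetical helper, not from the codebase.
function buildMcpUrl(containerName: string, containerPort: number = 3000): string {
  return `http://${containerName}:${containerPort}/mcp`;
}
```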
@@ -1,7 +1,7 @@
import { useState, useEffect } from "react";
import { open } from "@tauri-apps/plugin-dialog";
import { listen } from "@tauri-apps/api/event";
-import type { Project, ProjectPath, AuthMode, BedrockConfig, BedrockAuthMethod } from "../../lib/types";
+import type { Project, ProjectPath, AuthMode, BedrockConfig, BedrockAuthMethod, OllamaConfig, LiteLlmConfig } from "../../lib/types";
import { useProjects } from "../../hooks/useProjects";
import { useMcpServers } from "../../hooks/useMcpServers";
import { useTerminal } from "../../hooks/useTerminal";
@@ -58,6 +58,15 @@ export default function ProjectCard({ project }: Props) {
  const [bedrockBearerToken, setBedrockBearerToken] = useState(project.bedrock_config?.aws_bearer_token ?? "");
  const [bedrockModelId, setBedrockModelId] = useState(project.bedrock_config?.model_id ?? "");

  // Ollama local state
  const [ollamaBaseUrl, setOllamaBaseUrl] = useState(project.ollama_config?.base_url ?? "http://host.docker.internal:11434");
  const [ollamaModelId, setOllamaModelId] = useState(project.ollama_config?.model_id ?? "");

  // LiteLLM local state
  const [litellmBaseUrl, setLitellmBaseUrl] = useState(project.litellm_config?.base_url ?? "http://host.docker.internal:4000");
  const [litellmApiKey, setLitellmApiKey] = useState(project.litellm_config?.api_key ?? "");
  const [litellmModelId, setLitellmModelId] = useState(project.litellm_config?.model_id ?? "");

  // Sync local state when project prop changes (e.g., after save or external update)
  useEffect(() => {
    setEditName(project.name);
@@ -76,6 +85,11 @@ export default function ProjectCard({ project }: Props) {
    setBedrockProfile(project.bedrock_config?.aws_profile ?? "");
    setBedrockBearerToken(project.bedrock_config?.aws_bearer_token ?? "");
    setBedrockModelId(project.bedrock_config?.model_id ?? "");
    setOllamaBaseUrl(project.ollama_config?.base_url ?? "http://host.docker.internal:11434");
    setOllamaModelId(project.ollama_config?.model_id ?? "");
    setLitellmBaseUrl(project.litellm_config?.base_url ?? "http://host.docker.internal:4000");
    setLitellmApiKey(project.litellm_config?.api_key ?? "");
    setLitellmModelId(project.litellm_config?.model_id ?? "");
  }, [project]);

  // Listen for container progress events
@@ -177,12 +191,29 @@ export default function ProjectCard({ project }: Props) {
    disable_prompt_caching: false,
  };

  const defaultOllamaConfig: OllamaConfig = {
    base_url: "http://host.docker.internal:11434",
    model_id: null,
  };

  const defaultLiteLlmConfig: LiteLlmConfig = {
    base_url: "http://host.docker.internal:4000",
    api_key: null,
    model_id: null,
  };

  const handleAuthModeChange = async (mode: AuthMode) => {
    try {
      const updates: Partial<Project> = { auth_mode: mode };
      if (mode === "bedrock" && !project.bedrock_config) {
        updates.bedrock_config = defaultBedrockConfig;
      }
      if (mode === "ollama" && !project.ollama_config) {
        updates.ollama_config = defaultOllamaConfig;
      }
      if (mode === "lit_llm" && !project.litellm_config) {
        updates.litellm_config = defaultLiteLlmConfig;
      }
      await update({ ...project, ...updates });
    } catch (e) {
      setError(String(e));
@@ -305,6 +336,51 @@ export default function ProjectCard({ project }: Props) {
    }
  };

  const handleOllamaBaseUrlBlur = async () => {
    try {
      const current = project.ollama_config ?? defaultOllamaConfig;
      await update({ ...project, ollama_config: { ...current, base_url: ollamaBaseUrl } });
    } catch (err) {
      console.error("Failed to update Ollama base URL:", err);
    }
  };

  const handleOllamaModelIdBlur = async () => {
    try {
      const current = project.ollama_config ?? defaultOllamaConfig;
      await update({ ...project, ollama_config: { ...current, model_id: ollamaModelId || null } });
    } catch (err) {
      console.error("Failed to update Ollama model ID:", err);
    }
  };

  const handleLitellmBaseUrlBlur = async () => {
    try {
      const current = project.litellm_config ?? defaultLiteLlmConfig;
      await update({ ...project, litellm_config: { ...current, base_url: litellmBaseUrl } });
    } catch (err) {
      console.error("Failed to update LiteLLM base URL:", err);
    }
  };

  const handleLitellmApiKeyBlur = async () => {
    try {
      const current = project.litellm_config ?? defaultLiteLlmConfig;
      await update({ ...project, litellm_config: { ...current, api_key: litellmApiKey || null } });
    } catch (err) {
      console.error("Failed to update LiteLLM API key:", err);
    }
  };

  const handleLitellmModelIdBlur = async () => {
    try {
      const current = project.litellm_config ?? defaultLiteLlmConfig;
      await update({ ...project, litellm_config: { ...current, model_id: litellmModelId || null } });
    } catch (err) {
      console.error("Failed to update LiteLLM model ID:", err);
    }
  };

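The six blur handlers above all share one shape: take the stored config (or a default), overwrite a single field, persist, and log on failure. A generic sketch of that pattern (hypothetical helper, not part of the codebase):

```typescript
// Generic form of the blur-save pattern used by the Ollama/LiteLLM handlers:
// merge one field into the current config and hand the result to a save
// callback, swallowing (but logging) errors. Names here are illustrative.
type Saver<C> = (next: C) => Promise<void>;

async function saveConfigField<C extends object, K extends keyof C>(
  current: C,          // stored config, already defaulted by the caller
  key: K,
  value: C[K],
  save: Saver<C>,
): Promise<void> {
  try {
    await save({ ...current, [key]: value } as C);
  } catch (err) {
    console.error(`Failed to update ${String(key)}:`, err);
  }
}
```

Each concrete handler then reduces to one call, e.g. `saveConfigField(project.ollama_config ?? defaultOllamaConfig, "model_id", ollamaModelId || null, persist)`.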
  const statusColor = {
    stopped: "bg-[var(--text-secondary)]",
    starting: "bg-[var(--warning)]",
@@ -395,6 +471,28 @@ export default function ProjectCard({ project }: Props) {
          >
            Bedrock
          </button>
          <button
            onClick={(e) => { e.stopPropagation(); handleAuthModeChange("ollama"); }}
            disabled={!isStopped}
            className={`px-2 py-0.5 rounded transition-colors ${
              project.auth_mode === "ollama"
                ? "bg-[var(--accent)] text-white"
                : "text-[var(--text-secondary)] hover:text-[var(--text-primary)] hover:bg-[var(--bg-primary)]"
            } disabled:opacity-50`}
          >
            Ollama
          </button>
          <button
            onClick={(e) => { e.stopPropagation(); handleAuthModeChange("lit_llm"); }}
            disabled={!isStopped}
            className={`px-2 py-0.5 rounded transition-colors ${
              project.auth_mode === "lit_llm"
                ? "bg-[var(--accent)] text-white"
                : "text-[var(--text-secondary)] hover:text-[var(--text-primary)] hover:bg-[var(--bg-primary)]"
            } disabled:opacity-50`}
          >
            LiteLLM
          </button>
        </div>

        {/* Action buttons */}
@@ -851,6 +949,99 @@ export default function ProjectCard({ project }: Props) {
            </div>
          );
        })()}

        {/* Ollama config */}
        {project.auth_mode === "ollama" && (() => {
          const inputCls = "w-full px-2 py-1 bg-[var(--bg-primary)] border border-[var(--border-color)] rounded text-xs text-[var(--text-primary)] focus:outline-none focus:border-[var(--accent)] disabled:opacity-50";
          return (
            <div className="space-y-2 pt-1 border-t border-[var(--border-color)]">
              <label className="block text-xs font-medium text-[var(--text-primary)]">Ollama</label>
              <p className="text-xs text-[var(--text-secondary)]">
                Connect to an Ollama server running locally or on a remote host.
              </p>

              <div>
                <label className="block text-xs text-[var(--text-secondary)] mb-0.5">Base URL</label>
                <input
                  value={ollamaBaseUrl}
                  onChange={(e) => setOllamaBaseUrl(e.target.value)}
                  onBlur={handleOllamaBaseUrlBlur}
                  placeholder="http://host.docker.internal:11434"
                  disabled={!isStopped}
                  className={inputCls}
                />
                <p className="text-xs text-[var(--text-secondary)] mt-0.5 opacity-70">
                  Use host.docker.internal for the host machine, or an IP/hostname for remote.
                </p>
              </div>

              <div>
                <label className="block text-xs text-[var(--text-secondary)] mb-0.5">Model (optional)</label>
                <input
                  value={ollamaModelId}
                  onChange={(e) => setOllamaModelId(e.target.value)}
                  onBlur={handleOllamaModelIdBlur}
                  placeholder="qwen3.5:27b"
                  disabled={!isStopped}
                  className={inputCls}
                />
              </div>
            </div>
          );
        })()}

        {/* LiteLLM config */}
        {project.auth_mode === "lit_llm" && (() => {
          const inputCls = "w-full px-2 py-1 bg-[var(--bg-primary)] border border-[var(--border-color)] rounded text-xs text-[var(--text-primary)] focus:outline-none focus:border-[var(--accent)] disabled:opacity-50";
          return (
            <div className="space-y-2 pt-1 border-t border-[var(--border-color)]">
              <label className="block text-xs font-medium text-[var(--text-primary)]">LiteLLM Gateway</label>
              <p className="text-xs text-[var(--text-secondary)]">
                Connect through a LiteLLM proxy to use 100+ model providers.
              </p>

              <div>
                <label className="block text-xs text-[var(--text-secondary)] mb-0.5">Base URL</label>
                <input
                  value={litellmBaseUrl}
                  onChange={(e) => setLitellmBaseUrl(e.target.value)}
                  onBlur={handleLitellmBaseUrlBlur}
                  placeholder="http://host.docker.internal:4000"
                  disabled={!isStopped}
                  className={inputCls}
                />
                <p className="text-xs text-[var(--text-secondary)] mt-0.5 opacity-70">
                  Use host.docker.internal for local, or a URL for remote/containerized LiteLLM.
                </p>
              </div>

              <div>
                <label className="block text-xs text-[var(--text-secondary)] mb-0.5">API Key</label>
                <input
                  type="password"
                  value={litellmApiKey}
                  onChange={(e) => setLitellmApiKey(e.target.value)}
                  onBlur={handleLitellmApiKeyBlur}
                  placeholder="sk-..."
                  disabled={!isStopped}
                  className={inputCls}
                />
              </div>

              <div>
                <label className="block text-xs text-[var(--text-secondary)] mb-0.5">Model (optional)</label>
                <input
                  value={litellmModelId}
                  onChange={(e) => setLitellmModelId(e.target.value)}
                  onBlur={handleLitellmModelIdBlur}
                  placeholder="gpt-4o / gemini-pro / etc."
                  disabled={!isStopped}
                  className={inputCls}
                />
              </div>
            </div>
          );
        })()}
      </div>
    )}
  </div>

@@ -6,6 +6,8 @@ import { WebLinksAddon } from "@xterm/addon-web-links";
import { openUrl } from "@tauri-apps/plugin-opener";
import "@xterm/xterm/css/xterm.css";
import { useTerminal } from "../../hooks/useTerminal";
import { useAppState } from "../../store/appState";
import { awsSsoRefresh } from "../../lib/tauri-commands";
import { UrlDetector } from "../../lib/urlDetector";
import UrlToast from "./UrlToast";

@@ -23,6 +25,12 @@ export default function TerminalView({ sessionId, active }: Props) {
  const detectorRef = useRef<UrlDetector | null>(null);
  const { sendInput, pasteImage, resize, onOutput, onExit } = useTerminal();

  const ssoBufferRef = useRef("");
  const ssoTriggeredRef = useRef(false);
  const projectId = useAppState(
    (s) => s.sessions.find((sess) => sess.id === sessionId)?.projectId
  );

  const [detectedUrl, setDetectedUrl] = useState<string | null>(null);
  const [imagePasteMsg, setImagePasteMsg] = useState<string | null>(null);
  const [isAtBottom, setIsAtBottom] = useState(true);
@@ -152,10 +160,30 @@ export default function TerminalView({ sessionId, active }: Props) {
    const detector = new UrlDetector((url) => setDetectedUrl(url));
    detectorRef.current = detector;

    const SSO_MARKER = "###TRIPLE_C_SSO_REFRESH###";
    const textDecoder = new TextDecoder();

    const outputPromise = onOutput(sessionId, (data) => {
      if (aborted) return;
      term.write(data);
      detector.feed(data);

      // Scan for SSO refresh marker in terminal output
      if (!ssoTriggeredRef.current && projectId) {
        const text = textDecoder.decode(data, { stream: true });
        // Combine with overlap from previous chunk to handle marker spanning chunks
        const combined = ssoBufferRef.current + text;
        if (combined.includes(SSO_MARKER)) {
          ssoTriggeredRef.current = true;
          ssoBufferRef.current = "";
          awsSsoRefresh(projectId).catch((e) =>
            console.error("AWS SSO refresh failed:", e)
          );
        } else {
          // Keep last N chars as overlap for next chunk
          ssoBufferRef.current = combined.slice(-SSO_MARKER.length);
        }
      }
    }).then((unlisten) => {
      if (aborted) unlisten();
      return unlisten;
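The overlap-buffer technique in the hunk above, isolated: a detector that fires exactly once even when the marker string is split across streamed output chunks. A standalone sketch mirroring that logic (class name is illustrative):

```typescript
// Marker detection across chunk boundaries, as in the onOutput handler above:
// keep a marker-length tail of the previous chunk so a marker split across
// two chunks is still caught, and latch after the first hit.
class MarkerDetector {
  private buffer = "";
  private triggered = false;

  constructor(private marker: string, private onHit: () => void) {}

  feed(chunk: string): void {
    if (this.triggered) return;
    const combined = this.buffer + chunk;
    if (combined.includes(this.marker)) {
      this.triggered = true;
      this.buffer = "";
      this.onHit();
    } else {
      // A tail of marker.length chars is enough overlap to catch any split.
      this.buffer = combined.slice(-this.marker.length);
    }
  }
}
```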
@@ -189,6 +217,8 @@ export default function TerminalView({ sessionId, active }: Props) {
      aborted = true;
      detector.dispose();
      detectorRef.current = null;
      ssoTriggeredRef.current = false;
      ssoBufferRef.current = "";
      osc52Disposable.dispose();
      inputDisposable.dispose();
      scrollDisposable.dispose();

@@ -24,6 +24,8 @@ export const stopProjectContainer = (projectId: string) =>
  invoke<void>("stop_project_container", { projectId });
export const rebuildProjectContainer = (projectId: string) =>
  invoke<Project>("rebuild_project_container", { projectId });
export const reconcileProjectStatuses = () =>
  invoke<Project[]>("reconcile_project_statuses");

// Settings
export const getSettings = () => invoke<AppSettings>("get_settings");
@@ -38,6 +40,10 @@ export const listAwsProfiles = () =>
export const detectHostTimezone = () =>
  invoke<string>("detect_host_timezone");

// AWS
export const awsSsoRefresh = (projectId: string) =>
  invoke<void>("aws_sso_refresh", { projectId });

// Terminal
export const openTerminalSession = (projectId: string, sessionId: string, sessionType?: string) =>
  invoke<void>("open_terminal_session", { projectId, sessionId, sessionType });

@@ -22,6 +22,8 @@ export interface Project {
  status: ProjectStatus;
  auth_mode: AuthMode;
  bedrock_config: BedrockConfig | null;
  ollama_config: OllamaConfig | null;
  litellm_config: LiteLlmConfig | null;
  allow_docker_access: boolean;
  mission_control_enabled: boolean;
  ssh_key_path: string | null;
@@ -43,7 +45,7 @@ export type ProjectStatus =
  | "stopping"
  | "error";

-export type AuthMode = "anthropic" | "bedrock";
+export type AuthMode = "anthropic" | "bedrock" | "ollama" | "lit_llm";

export type BedrockAuthMethod = "static_credentials" | "profile" | "bearer_token";

@@ -59,6 +61,17 @@ export interface BedrockConfig {
  disable_prompt_caching: boolean;
}

export interface OllamaConfig {
  base_url: string;
  model_id: string | null;
}

export interface LiteLlmConfig {
  base_url: string;
  api_key: string | null;
  model_id: string | null;
}

export interface ContainerInfo {
  container_id: string;
  project_id: string;

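With `AuthMode` widened to four variants, every switch over it benefits from a compile-time exhaustiveness check. A sketch (the label function and its strings are illustrative, not from the codebase):

```typescript
type AuthMode = "anthropic" | "bedrock" | "ollama" | "lit_llm";

// Exhaustive mapping over the widened union: if a fifth AuthMode variant is
// added later, the `never` assignment below becomes a compile error instead
// of a silent fallthrough at runtime.
function authModeLabel(mode: AuthMode): string {
  switch (mode) {
    case "anthropic": return "Anthropic";
    case "bedrock":   return "Bedrock";
    case "ollama":    return "Ollama";
    case "lit_llm":   return "LiteLLM";
    default: {
      const unreachable: never = mode;
      throw new Error(`Unhandled auth mode: ${unreachable}`);
    }
  }
}
```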
@@ -119,6 +119,9 @@ RUN chmod +x /usr/local/bin/audio-shim \
    && ln -sf /usr/local/bin/audio-shim /usr/local/bin/rec \
    && ln -sf /usr/local/bin/audio-shim /usr/local/bin/arecord

COPY triple-c-sso-refresh /usr/local/bin/triple-c-sso-refresh
RUN chmod +x /usr/local/bin/triple-c-sso-refresh

COPY entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh
COPY triple-c-scheduler /usr/local/bin/triple-c-scheduler

@@ -84,6 +84,31 @@ if [ -d /tmp/.host-aws ]; then
  # Ensure writable cache directories exist
  mkdir -p /home/claude/.aws/sso/cache /home/claude/.aws/cli/cache
  chown -R claude:claude /home/claude/.aws/sso /home/claude/.aws/cli

  # Inline sso_session properties into profile sections so AWS SDKs that don't
  # support the sso_session indirection format can resolve sso_region, etc.
  if [ -f /home/claude/.aws/config ]; then
    python3 -c '
import configparser, sys
c = configparser.ConfigParser()
c.read(sys.argv[1])
for sec in c.sections():
    if not sec.startswith("profile ") and sec != "default":
        continue
    session = c.get(sec, "sso_session", fallback=None)
    if not session or c.has_option(sec, "sso_start_url"):
        continue
    ss = f"sso-session {session}"
    if not c.has_section(ss):
        continue
    for key in ("sso_start_url", "sso_region", "sso_registration_scopes"):
        val = c.get(ss, key, fallback=None)
        if val:
            c.set(sec, key, val)
with open(sys.argv[1], "w") as f:
    c.write(f)
' /home/claude/.aws/config 2>/dev/null || true
  fi
fi

# ── Git credential helper (for HTTPS token) ─────────────────────────────────
@@ -131,6 +156,15 @@ if [ "$MISSION_CONTROL_ENABLED" = "1" ]; then
  # Symlink into workspace so Claude sees it at /workspace/mission-control
  ln -sfn "$MC_HOME" "$MC_LINK"
  chown -h claude:claude "$MC_LINK"

  # Install skills to ~/.claude/skills/ so Claude Code discovers them automatically
  if [ -d "$MC_HOME/.claude/skills" ]; then
    mkdir -p /home/claude/.claude/skills
    cp -r "$MC_HOME/.claude/skills/"* /home/claude/.claude/skills/ 2>/dev/null
    chown -R claude:claude /home/claude/.claude/skills
    echo "entrypoint: mission-control skills installed to ~/.claude/skills/"
  fi

  unset MISSION_CONTROL_ENABLED
fi

@@ -155,6 +189,24 @@ if [ -n "$MCP_SERVERS_JSON" ]; then
  unset MCP_SERVERS_JSON
fi

# ── AWS SSO auth refresh command ──────────────────────────────────────────────
# When set, inject awsAuthRefresh into ~/.claude.json so Claude Code calls
# triple-c-sso-refresh when AWS credentials expire mid-session.
if [ -n "$AWS_SSO_AUTH_REFRESH_CMD" ]; then
  CLAUDE_JSON="/home/claude/.claude.json"
  if [ -f "$CLAUDE_JSON" ]; then
    MERGED=$(jq --arg cmd "$AWS_SSO_AUTH_REFRESH_CMD" '.awsAuthRefresh = $cmd' "$CLAUDE_JSON" 2>/dev/null)
    if [ -n "$MERGED" ]; then
      printf '%s\n' "$MERGED" > "$CLAUDE_JSON"
    fi
  else
    printf '{"awsAuthRefresh":"%s"}\n' "$AWS_SSO_AUTH_REFRESH_CMD" > "$CLAUDE_JSON"
  fi
  chown claude:claude "$CLAUDE_JSON"
  chmod 600 "$CLAUDE_JSON"
  unset AWS_SSO_AUTH_REFRESH_CMD
fi

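The jq merge in the entrypoint hunk above is a merge-or-create on one JSON field. The same logic as a pure function, for illustration (the entrypoint uses jq so it works without Node in the container; function name is hypothetical):

```typescript
// Mirror of the entrypoint's jq step: set awsAuthRefresh on an existing
// ~/.claude.json payload, preserving other keys, or create a minimal one
// when the file does not exist yet.
function withAwsAuthRefresh(existingJson: string | null, cmd: string): string {
  const base: Record<string, unknown> = existingJson ? JSON.parse(existingJson) : {};
  base["awsAuthRefresh"] = cmd;
  return JSON.stringify(base);
}
```

Note that `JSON.stringify` escapes quotes in `cmd`, whereas the shell `printf` fallback interpolates it verbatim, so that branch assumes a command without embedded quotes.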
# ── Docker socket permissions ────────────────────────────────────────────────
if [ -S /var/run/docker.sock ]; then
  DOCKER_GID=$(stat -c '%g' /var/run/docker.sock)

container/triple-c-sso-refresh (new executable file, 33 lines)
@@ -0,0 +1,33 @@
#!/bin/bash
# Signal Triple-C to perform host-side AWS SSO login, then sync the result.
CACHE_DIR="$HOME/.aws/sso/cache"
HOST_CACHE="/tmp/.host-aws/sso/cache"
MARKER="/tmp/.sso-refresh-marker"

touch "$MARKER"

# Emit marker for Triple-C app to detect in terminal output
echo "###TRIPLE_C_SSO_REFRESH###"
echo "Waiting for SSO login to complete on host..."

TIMEOUT=120
ELAPSED=0
while [ $ELAPSED -lt $TIMEOUT ]; do
  if [ -d "$HOST_CACHE" ]; then
    NEW=$(find "$HOST_CACHE" -name "*.json" -newer "$MARKER" 2>/dev/null | head -1)
    if [ -n "$NEW" ]; then
      mkdir -p "$CACHE_DIR"
      cp -f "$HOST_CACHE"/*.json "$CACHE_DIR/" 2>/dev/null
      chown -R "$(whoami)" "$CACHE_DIR"
      echo "AWS SSO credentials refreshed successfully."
      rm -f "$MARKER"
      exit 0
    fi
  fi
  sleep 2
  ELAPSED=$((ELAPSED + 2))
done

echo "SSO refresh timed out (${TIMEOUT}s). Please try again."
rm -f "$MARKER"
exit 1
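The wait loop in triple-c-sso-refresh is a poll-with-timeout: check every 2 seconds, up to 120 seconds, for a cache file newer than the marker. The same pattern generalized (illustrative helper only; the container script does this in bash with find(1)):

```typescript
// Poll a predicate every `intervalMs` until it returns true or `timeoutMs`
// elapses, resolving true on success and false on timeout. Same structure as
// the ELAPSED/TIMEOUT loop in the script above.
async function pollUntil(
  check: () => boolean | Promise<boolean>,
  timeoutMs: number,
  intervalMs: number,
): Promise<boolean> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if (await check()) return true;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  return false;
}
```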