Compare commits

...

3 Commits

Author SHA1 Message Date
2dce2993cc Fix AWS SSO for Bedrock profile auth in containers
All checks were successful
Build App / build-macos (push) Successful in 2m29s
Build App / build-windows (push) Successful in 3m56s
Build App / build-linux (push) Successful in 4m42s
Build Container / build-container (push) Successful in 54s
Build App / sync-to-github (push) Successful in 10s
SSO login was broken in containers due to three issues: the sso_session
indirection format not being resolved by Claude Code's AWS SDK, SSO
detection only checking sso_start_url (missing sso_session), and the
OAuth callback port not being accessible from inside the container.

This fix runs SSO login on the host OS (where the browser and ports work
natively) by having the container emit a marker that the Tauri app
detects in terminal output, triggering host-side `aws sso login`. The
entrypoint also inlines sso_session properties into profile sections and
injects awsAuthRefresh into Claude Code config for mid-session refresh.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 12:24:16 -07:00
e482452ffd Expand MCP documentation with mode explanations and concrete examples
- Rewrite HOW-TO-USE.md MCP section with a mode matrix (stdio/http x
  manual/docker), four worked examples (filesystem, GitHub, custom HTTP,
  database), and detailed explanations of networking, auto-pull, and
  config injection
- Update README.md MCP architecture section with a mode table and
  key behaviors including auto-pull and Docker DNS details

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-10 10:19:03 -07:00
8c710fc7bf Auto-pull MCP Docker images and add mode hints to MCP UI
All checks were successful
Build App / build-macos (push) Successful in 2m25s
Build App / build-windows (push) Successful in 3m32s
Build App / build-linux (push) Successful in 5m34s
Build App / sync-to-github (push) Successful in 11s
- Automatically pull missing Docker images for MCP servers before
  starting containers, with progress streamed to the container
  progress modal
- Add contextual mode descriptions to MCP server cards explaining
  where commands run (project container vs separate MCP container)
- Clarify that HTTP+Docker URLs are auto-generated using the
  container hostname on the project network, not localhost

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-10 09:51:47 -07:00
14 changed files with 291 additions and 31 deletions

View File

@@ -225,6 +225,17 @@ Click **Edit** to write per-project instructions for Claude Code. These are writ
Triple-C supports [Model Context Protocol (MCP)](https://modelcontextprotocol.io/) servers, which extend Claude Code with access to external tools and data sources. MCP servers are configured in a **global library** and **enabled per-project**.
### How It Works
There are two dimensions to MCP server configuration:
| | **Manual** (no Docker image) | **Docker** (Docker image specified) |
|---|---|---|
| **Stdio** | Command runs inside the project container | Command runs in a separate MCP container via `docker exec` |
| **HTTP** | Connects to a URL you provide | Runs in a separate container, reached by hostname on a shared Docker network |
**Docker images are pulled automatically** if not already present when the project starts.
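The pull-if-missing decision can be sketched as a tiny helper with injected callbacks (a hypothetical illustration of the behavior described above, not Triple-C's actual code):

```python
def ensure_image(image: str, image_exists, pull_image, report) -> bool:
    """Pull `image` only when it is not already present locally.

    `image_exists`, `pull_image`, and `report` are injected callbacks
    (in practice: a Docker availability check, `docker pull`, and the
    progress modal). Returns True if a pull was performed.
    """
    if image_exists(image):
        return False
    report(f"Pulling MCP image {image}...")
    pull_image(image)
    return True
```

In Triple-C this check runs once per enabled Docker MCP server before the project container starts, with pull progress streamed to the container progress modal.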
### Accessing MCP Configuration
Click the **MCP** tab in the sidebar to open the MCP server library. This is where you define all available MCP servers.
@@ -232,43 +243,103 @@ Click the **MCP** tab in the sidebar to open the MCP server library. This is whe
### Adding an MCP Server
1. Type a name in the input field and click **Add**.
2. Configure the server in its card:
2. Expand the server card and configure it.
| Setting | Description |
|---------|-------------|
| **Docker Image** | Optional. If provided, the server runs as an isolated Docker container. |
| **Transport Type** | **Stdio** (command-line) or **HTTP** (network endpoint) |
The key decision is whether to set a **Docker Image**:
- **With Docker image** — The MCP server runs in its own isolated container. Best for servers that need specific dependencies or system-level packages.
- **Without Docker image** (manual) — The command runs directly inside your project container. Best for lightweight npx-based servers that just need Node.js.
#### Stdio Mode (Manual)
- **Command** — The executable to run (e.g., `npx`)
- **Arguments** — Space-separated arguments
- **Environment Variables** — Key-value pairs passed to the command
Then choose the **Transport Type**:
- **Stdio** — The MCP server communicates over stdin/stdout. This is the most common type.
- **HTTP** — The MCP server exposes an HTTP endpoint (streamable HTTP transport).
#### HTTP Mode (Manual)
- **URL** — The MCP endpoint (e.g., `http://localhost:3000/mcp`)
- **Headers** — Custom HTTP headers
### Configuration Examples
#### Docker Mode
When a Docker image is specified, the server runs as a container on a per-project network:
- **Container Port** — Port the MCP server listens on inside its container (default: 3000)
- **Environment Variables** — Injected into the Docker container
#### Example 1: Filesystem Server (Stdio, Manual)
A simple npx-based server that runs inside the project container. No Docker image needed since Node.js is already installed.
| Field | Value |
|-------|-------|
| **Docker Image** | *(empty)* |
| **Transport** | Stdio |
| **Command** | `npx` |
| **Arguments** | `-y @modelcontextprotocol/server-filesystem /workspace` |
This gives Claude Code access to browse and read files via MCP. The command runs directly inside the project container using the pre-installed Node.js.
#### Example 2: GitHub Server (Stdio, Manual)
Another npx-based server, with an environment variable for authentication.
| Field | Value |
|-------|-------|
| **Docker Image** | *(empty)* |
| **Transport** | Stdio |
| **Command** | `npx` |
| **Arguments** | `-y @modelcontextprotocol/server-github` |
| **Environment Variables** | `GITHUB_PERSONAL_ACCESS_TOKEN` = `ghp_your_token` |
#### Example 3: Custom MCP Server (HTTP, Docker)
An MCP server packaged as a Docker image that exposes an HTTP endpoint.
| Field | Value |
|-------|-------|
| **Docker Image** | `myregistry/my-mcp-server:latest` |
| **Transport** | HTTP |
| **Container Port** | `8080` |
| **Environment Variables** | `API_KEY` = `your_key` |
Triple-C will:
1. Pull the image automatically if not present
2. Start the container on the project's bridge network
3. Configure Claude Code to reach it at `http://triple-c-mcp-{id}:8080/mcp`
The hostname is the MCP container's name on the Docker network — **not** `localhost`.
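The auto-generated endpoint follows a fixed pattern; a minimal sketch (the `triple-c-mcp-` prefix and `/mcp` path come from the docs above, the helper name is hypothetical):

```python
def mcp_http_url(server_id: str, container_port: int = 3000) -> str:
    # Docker DNS resolves the container name on the project's bridge network,
    # so the hostname is the MCP container's name, never localhost.
    return f"http://triple-c-mcp-{server_id}:{container_port}/mcp"
```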
#### Example 4: Database Server (Stdio, Docker)
An MCP server that needs its own runtime environment, communicating over stdio.
| Field | Value |
|-------|-------|
| **Docker Image** | `mcp/postgres-server:latest` |
| **Transport** | Stdio |
| **Command** | `node` |
| **Arguments** | `dist/index.js` |
| **Environment Variables** | `DATABASE_URL` = `postgresql://user:pass@host:5432/db` |
Triple-C will:
1. Pull the image and start it on the project network
2. Configure Claude Code to communicate via `docker exec -i triple-c-mcp-{id} node dist/index.js`
3. Automatically enable Docker socket access on the project container (required for `docker exec`)
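The `docker exec` bridging amounts to wrapping the configured command; a hypothetical sketch of how the argv might be assembled:

```python
def stdio_docker_argv(server_id: str, command: str, args: list[str]) -> list[str]:
    # -i keeps stdin open: Claude Code speaks MCP over the exec'd
    # process's stdin/stdout, proxied through the MCP container.
    return ["docker", "exec", "-i", f"triple-c-mcp-{server_id}", command, *args]
```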
### Enabling MCP Servers Per-Project
In a project's configuration panel, the **MCP Servers** section shows checkboxes for all globally defined servers. Toggle each server on or off for that project. Changes require a container restart.
In a project's configuration panel (click **Config**), the **MCP Servers** section shows checkboxes for all globally defined servers. Toggle each server on or off for that project. Changes take effect on the next container start.
### How Docker-Based MCP Works
When a project with Docker-based MCP servers starts:
1. A dedicated **bridge network** is created for the project (`triple-c-net-{projectId}`)
2. Each enabled Docker MCP server gets its own container on that network
3. The main project container is connected to the same network
4. MCP server configuration is injected into Claude Code's config file
1. Missing Docker images are **automatically pulled** (progress shown in the progress modal)
2. A dedicated **bridge network** is created for the project (`triple-c-net-{projectId}`)
3. Each enabled Docker MCP server gets its own container on that network
4. The main project container is connected to the same network
5. MCP server configuration is written to `~/.claude.json` inside the container
**Stdio + Docker** servers communicate via `docker exec`, which automatically enables Docker socket access on the main container. **HTTP + Docker** servers are reached by hostname on the shared network (e.g., `http://triple-c-mcp-{serverId}:3000/mcp`).
**Networking**: Docker-based MCP containers are reached by their container name as a hostname (e.g., `triple-c-mcp-{serverId}`), not by `localhost`. Docker DNS resolves these names automatically on the shared bridge network.
When MCP configuration changes (servers added/removed/modified), the container is automatically recreated on the next start to apply the new configuration.
**Stdio + Docker**: The project container uses `docker exec` to communicate with the MCP container over stdin/stdout. This automatically enables Docker socket access on the project container.
**HTTP + Docker**: The project container connects to the MCP container's HTTP endpoint using the container hostname and port (e.g., `http://triple-c-mcp-{serverId}:3000/mcp`).
**Manual (no Docker image)**: Stdio commands run directly inside the project container. HTTP URLs connect to wherever you point them, such as an external service or a server running on the host.
### Configuration Change Detection
MCP server configuration is tracked via SHA-256 fingerprints stored as Docker labels. If you add, remove, or modify MCP servers for a project, the container is automatically recreated on the next start to apply the new configuration. The container filesystem is snapshotted first, so installed packages are preserved.
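The fingerprinting idea can be illustrated as follows; only the SHA-256-over-config concept comes from the text above, and the exact canonicalization Triple-C uses is an assumption:

```python
import hashlib
import json

def mcp_fingerprint(servers: list[dict]) -> str:
    # Canonical JSON (sorted keys, no whitespace) so that logically
    # identical configs always produce the same digest.
    canonical = json.dumps(servers, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```

On start, the digest is compared against the value stored as a Docker label; a mismatch triggers the snapshot-and-recreate path.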
---

View File

@@ -60,11 +60,22 @@ If the Docker access setting is toggled after a container already exists, the co
Triple-C supports [Model Context Protocol (MCP)](https://modelcontextprotocol.io/) servers as a Beta feature. MCP servers extend Claude Code with external tools and data sources.
**Modes**: Each MCP server operates in one of four modes based on transport type and whether a Docker image is specified:
| Mode | Where It Runs | How It Communicates |
|------|--------------|---------------------|
| Stdio + Manual | Inside the project container | Direct stdin/stdout (e.g., `npx -y @mcp/server`) |
| Stdio + Docker | Separate MCP container | `docker exec -i <mcp-container> <command>` from the project container |
| HTTP + Manual | External / user-provided | Connects to the URL you specify |
| HTTP + Docker | Separate MCP container | `http://<mcp-container>:<port>/mcp` via Docker DNS on a shared bridge network |
**Key behaviors**:
- **Global library**: MCP servers are defined globally in the MCP sidebar tab and stored in `mcp_servers.json`
- **Per-project toggles**: Each project enables/disables individual servers via checkboxes
- **Docker isolation**: MCP servers can run as isolated Docker containers on a per-project bridge network (`triple-c-net-{projectId}`)
- **Transport types**: Stdio (command-line) and HTTP (network endpoint), each with manual or Docker mode
- **Auto-pull**: Docker images for MCP servers are pulled automatically if not present when the project starts
- **Docker networking**: Docker-based MCP containers run on a per-project bridge network (`triple-c-net-{projectId}`), reachable by container name — not localhost
- **Auto-detection**: Config changes are detected via SHA-256 fingerprints and trigger automatic container recreation
- **Config injection**: MCP server configuration is written to `~/.claude.json` inside the container via the `MCP_SERVERS_JSON` environment variable, merged by the entrypoint using `jq`
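The entrypoint's `jq` merge is roughly equivalent to the following sketch (the `mcpServers` key name and the replace-rather-than-deep-merge semantics are assumptions for illustration):

```python
import json

def merge_mcp_config(existing_text: str, mcp_servers_json: str) -> str:
    # Mirrors something like:
    #   jq --argjson s "$MCP_SERVERS_JSON" '.mcpServers = $s' ~/.claude.json
    # Unrelated keys in the existing config are preserved.
    config = json.loads(existing_text) if existing_text.strip() else {}
    config["mcpServers"] = json.loads(mcp_servers_json)
    return json.dumps(config)
```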
### Mission Control Integration

View File

@@ -0,0 +1,30 @@
use tauri::State;
use crate::AppState;
#[tauri::command]
pub async fn aws_sso_refresh(
project_id: String,
state: State<'_, AppState>,
) -> Result<(), String> {
let project = state.projects_store.get(&project_id)
.ok_or_else(|| format!("Project {} not found", project_id))?;
let profile = project.bedrock_config.as_ref()
.and_then(|b| b.aws_profile.clone())
.or_else(|| state.settings_store.get().global_aws.aws_profile.clone())
.unwrap_or_else(|| "default".to_string());
log::info!("Running host-side AWS SSO login for profile '{}'", profile);
let status = tokio::process::Command::new("aws")
.args(["sso", "login", "--profile", &profile])
.status()
.await
.map_err(|e| format!("Failed to run aws sso login: {}", e))?;
if !status.success() {
return Err("SSO login failed or was cancelled".to_string());
}
Ok(())
}

View File

@@ -1,3 +1,4 @@
pub mod aws_commands;
pub mod docker_commands;
pub mod file_commands;
pub mod mcp_commands;

View File

@@ -202,6 +202,28 @@ pub async fn start_project_container(
// Set up Docker network and MCP containers if needed
let network_name = if !docker_mcp.is_empty() {
// Pull any missing MCP Docker images before starting containers
for server in &docker_mcp {
if let Some(ref image) = server.docker_image {
if !docker::image_exists(image).await.unwrap_or(false) {
emit_progress(
&app_handle,
&project_id,
&format!("Pulling MCP image for '{}'...", server.name),
);
let image_clone = image.clone();
let app_clone = app_handle.clone();
let pid_clone = project_id.clone();
let sname = server.name.clone();
docker::pull_image(&image_clone, move |msg| {
emit_progress(&app_clone, &pid_clone, &format!("[{}] {}", sname, msg));
}).await.map_err(|e| {
format!("Failed to pull MCP image '{}' for '{}': {}", image, server.name, e)
})?;
}
}
}
emit_progress(&app_handle, &project_id, "Setting up MCP network...");
let net = docker::ensure_project_network(&project.id).await?;
emit_progress(&app_handle, &project_id, "Starting MCP containers...");

View File

@@ -40,11 +40,12 @@ if aws sts get-caller-identity --profile '{profile}' >/dev/null 2>&1; then
echo "AWS session valid."
else
echo "AWS session expired or invalid."
# Check if this profile uses SSO (has sso_start_url configured)
if aws configure get sso_start_url --profile '{profile}' >/dev/null 2>&1; then
echo "Starting SSO login — click the URL below to authenticate:"
# Check if this profile uses SSO (has sso_start_url or sso_session configured)
if aws configure get sso_start_url --profile '{profile}' >/dev/null 2>&1 || \
aws configure get sso_session --profile '{profile}' >/dev/null 2>&1; then
echo "Starting SSO login..."
echo ""
aws sso login --profile '{profile}'
triple-c-sso-refresh
if [ $? -ne 0 ]; then
echo ""
echo "SSO login failed or was cancelled. Starting Claude anyway..."

View File

@@ -459,6 +459,7 @@ pub async fn create_container(
if let Some(p) = profile {
env_vars.push(format!("AWS_PROFILE={}", p));
}
env_vars.push("AWS_SSO_AUTH_REFRESH_CMD=triple-c-sso-refresh".to_string());
}
BedrockAuthMethod::BearerToken => {
if let Some(ref token) = bedrock.aws_bearer_token {

View File

@@ -114,6 +114,8 @@ pub fn run() {
commands::mcp_commands::add_mcp_server,
commands::mcp_commands::update_mcp_server,
commands::mcp_commands::remove_mcp_server,
// AWS
commands::aws_commands::aws_sso_refresh,
// Updates
commands::update_commands::get_app_version,
commands::update_commands::check_for_updates,

View File

@@ -147,7 +147,7 @@ export default function McpServerCard({ server, onUpdate, onRemove }: Props) {
className={inputCls}
/>
<p className="text-xs text-[var(--text-secondary)] mt-0.5 opacity-60">
Set a Docker image to run this MCP server as a container. Leave empty for manual mode.
Set a Docker image to run this MCP server in its own container. Leave empty to run commands inside the project container. Images are pulled automatically if not present.
</p>
</div>
@@ -171,6 +171,14 @@ export default function McpServerCard({ server, onUpdate, onRemove }: Props) {
</div>
</div>
{/* Mode description */}
<p className="text-xs text-[var(--text-secondary)] opacity-60">
{transportType === "stdio" && isDocker && "Runs via docker exec in a separate MCP container."}
{transportType === "stdio" && !isDocker && "Runs inside the project container (e.g. npx commands)."}
{transportType === "http" && isDocker && "Runs in a separate container, reached by hostname on the project network."}
{transportType === "http" && !isDocker && "Connects to an MCP server at the URL you specify."}
</p>
{/* Container Port (HTTP+Docker only) */}
{transportType === "http" && isDocker && (
<div>
@@ -183,7 +191,7 @@ export default function McpServerCard({ server, onUpdate, onRemove }: Props) {
className={inputCls}
/>
<p className="text-xs text-[var(--text-secondary)] mt-0.5 opacity-60">
Port inside the MCP container (default: 3000)
Port the MCP server listens on inside its container. The URL is auto-generated as http://&lt;container&gt;:&lt;port&gt;/mcp on the project network.
</p>
</div>
)}

View File

@@ -6,6 +6,8 @@ import { WebLinksAddon } from "@xterm/addon-web-links";
import { openUrl } from "@tauri-apps/plugin-opener";
import "@xterm/xterm/css/xterm.css";
import { useTerminal } from "../../hooks/useTerminal";
import { useAppState } from "../../store/appState";
import { awsSsoRefresh } from "../../lib/tauri-commands";
import { UrlDetector } from "../../lib/urlDetector";
import UrlToast from "./UrlToast";
@@ -23,6 +25,12 @@ export default function TerminalView({ sessionId, active }: Props) {
const detectorRef = useRef<UrlDetector | null>(null);
const { sendInput, pasteImage, resize, onOutput, onExit } = useTerminal();
const ssoBufferRef = useRef("");
const ssoTriggeredRef = useRef(false);
const projectId = useAppState(
(s) => s.sessions.find((sess) => sess.id === sessionId)?.projectId
);
const [detectedUrl, setDetectedUrl] = useState<string | null>(null);
const [imagePasteMsg, setImagePasteMsg] = useState<string | null>(null);
const [isAtBottom, setIsAtBottom] = useState(true);
@@ -152,10 +160,30 @@ export default function TerminalView({ sessionId, active }: Props) {
const detector = new UrlDetector((url) => setDetectedUrl(url));
detectorRef.current = detector;
const SSO_MARKER = "###TRIPLE_C_SSO_REFRESH###";
const textDecoder = new TextDecoder();
const outputPromise = onOutput(sessionId, (data) => {
if (aborted) return;
term.write(data);
detector.feed(data);
// Scan for SSO refresh marker in terminal output
if (!ssoTriggeredRef.current && projectId) {
const text = textDecoder.decode(data, { stream: true });
// Combine with overlap from previous chunk to handle marker spanning chunks
const combined = ssoBufferRef.current + text;
if (combined.includes(SSO_MARKER)) {
ssoTriggeredRef.current = true;
ssoBufferRef.current = "";
awsSsoRefresh(projectId).catch((e) =>
console.error("AWS SSO refresh failed:", e)
);
} else {
// Keep last N chars as overlap for next chunk
ssoBufferRef.current = combined.slice(-SSO_MARKER.length);
}
}
}).then((unlisten) => {
if (aborted) unlisten();
return unlisten;
@@ -189,6 +217,8 @@ export default function TerminalView({ sessionId, active }: Props) {
aborted = true;
detector.dispose();
detectorRef.current = null;
ssoTriggeredRef.current = false;
ssoBufferRef.current = "";
osc52Disposable.dispose();
inputDisposable.dispose();
scrollDisposable.dispose();
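The overlap-buffer scan above (`ssoBufferRef`) can be distilled into a small state machine; a Python sketch of the same technique, for illustration only (assumes a multi-character marker):

```python
class MarkerScanner:
    """Detect a marker that may be split across streamed output chunks."""

    def __init__(self, marker: str):
        self.marker = marker
        self.tail = ""          # overlap carried between chunks
        self.triggered = False  # fire at most once per session

    def feed(self, chunk: str) -> bool:
        if self.triggered:
            return False
        combined = self.tail + chunk
        if self.marker in combined:
            self.triggered = True
            self.tail = ""
            return True
        # Keep just enough of the end to catch a marker that
        # straddles the boundary between this chunk and the next.
        self.tail = combined[-(len(self.marker) - 1):]
        return False
```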

View File

@@ -40,6 +40,10 @@ export const listAwsProfiles = () =>
export const detectHostTimezone = () =>
invoke<string>("detect_host_timezone");
// AWS
export const awsSsoRefresh = (projectId: string) =>
invoke<void>("aws_sso_refresh", { projectId });
// Terminal
export const openTerminalSession = (projectId: string, sessionId: string, sessionType?: string) =>
invoke<void>("open_terminal_session", { projectId, sessionId, sessionType });

View File

@@ -119,6 +119,9 @@ RUN chmod +x /usr/local/bin/audio-shim \
&& ln -sf /usr/local/bin/audio-shim /usr/local/bin/rec \
&& ln -sf /usr/local/bin/audio-shim /usr/local/bin/arecord
COPY triple-c-sso-refresh /usr/local/bin/triple-c-sso-refresh
RUN chmod +x /usr/local/bin/triple-c-sso-refresh
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh
COPY triple-c-scheduler /usr/local/bin/triple-c-scheduler

View File

@@ -84,6 +84,31 @@ if [ -d /tmp/.host-aws ]; then
# Ensure writable cache directories exist
mkdir -p /home/claude/.aws/sso/cache /home/claude/.aws/cli/cache
chown -R claude:claude /home/claude/.aws/sso /home/claude/.aws/cli
# Inline sso_session properties into profile sections so AWS SDKs that don't
# support the sso_session indirection format can resolve sso_region, etc.
if [ -f /home/claude/.aws/config ]; then
python3 -c '
import configparser, sys
c = configparser.ConfigParser()
c.read(sys.argv[1])
for sec in c.sections():
if not sec.startswith("profile ") and sec != "default":
continue
session = c.get(sec, "sso_session", fallback=None)
if not session or c.has_option(sec, "sso_start_url"):
continue
ss = f"sso-session {session}"
if not c.has_section(ss):
continue
for key in ("sso_start_url", "sso_region", "sso_registration_scopes"):
val = c.get(ss, key, fallback=None)
if val:
c.set(sec, key, val)
with open(sys.argv[1], "w") as f:
c.write(f)
' /home/claude/.aws/config 2>/dev/null || true
fi
fi
# ── Git credential helper (for HTTPS token) ─────────────────────────────────
@@ -164,6 +189,24 @@ if [ -n "$MCP_SERVERS_JSON" ]; then
unset MCP_SERVERS_JSON
fi
# ── AWS SSO auth refresh command ──────────────────────────────────────────────
# When set, inject awsAuthRefresh into ~/.claude.json so Claude Code calls
# triple-c-sso-refresh when AWS credentials expire mid-session.
if [ -n "$AWS_SSO_AUTH_REFRESH_CMD" ]; then
CLAUDE_JSON="/home/claude/.claude.json"
if [ -f "$CLAUDE_JSON" ]; then
MERGED=$(jq --arg cmd "$AWS_SSO_AUTH_REFRESH_CMD" '.awsAuthRefresh = $cmd' "$CLAUDE_JSON" 2>/dev/null)
if [ -n "$MERGED" ]; then
printf '%s\n' "$MERGED" > "$CLAUDE_JSON"
fi
else
printf '{"awsAuthRefresh":"%s"}\n' "$AWS_SSO_AUTH_REFRESH_CMD" > "$CLAUDE_JSON"
fi
chown claude:claude "$CLAUDE_JSON"
chmod 600 "$CLAUDE_JSON"
unset AWS_SSO_AUTH_REFRESH_CMD
fi
# ── Docker socket permissions ────────────────────────────────────────────────
if [ -S /var/run/docker.sock ]; then
DOCKER_GID=$(stat -c '%g' /var/run/docker.sock)

container/triple-c-sso-refresh (new executable file, 33 additions)
View File

@@ -0,0 +1,33 @@
#!/bin/bash
# Signal Triple-C to perform host-side AWS SSO login, then sync the result.
CACHE_DIR="$HOME/.aws/sso/cache"
HOST_CACHE="/tmp/.host-aws/sso/cache"
MARKER="/tmp/.sso-refresh-marker"
touch "$MARKER"
# Emit marker for Triple-C app to detect in terminal output
echo "###TRIPLE_C_SSO_REFRESH###"
echo "Waiting for SSO login to complete on host..."
TIMEOUT=120
ELAPSED=0
while [ $ELAPSED -lt $TIMEOUT ]; do
if [ -d "$HOST_CACHE" ]; then
NEW=$(find "$HOST_CACHE" -name "*.json" -newer "$MARKER" 2>/dev/null | head -1)
if [ -n "$NEW" ]; then
mkdir -p "$CACHE_DIR"
cp -f "$HOST_CACHE"/*.json "$CACHE_DIR/" 2>/dev/null
chown -R "$(whoami)" "$CACHE_DIR"
echo "AWS SSO credentials refreshed successfully."
rm -f "$MARKER"
exit 0
fi
fi
sleep 2
ELAPSED=$((ELAPSED + 2))
done
echo "SSO refresh timed out (${TIMEOUT}s). Please try again."
rm -f "$MARKER"
exit 1