Compare commits

...

6 Commits

Author SHA1 Message Date
b46b392a9a Update package-lock.json version to 0.2.0
All checks were successful
Build App / build-windows (push) Successful in 3m21s
Build App / build-linux (push) Successful in 5m2s
Build App / build-macos (push) Successful in 2m32s
Build App / sync-to-github (push) Successful in 13s
The lock file was still at 0.1.0 after the v0.2.0 version bump.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 07:23:08 -07:00
4889dd974f Change auth mode and Bedrock method selectors from button lists to dropdowns
Some checks failed
Build App / build-macos (push) Failing after 1s
Build App / build-windows (push) Successful in 4m4s
Build App / sync-to-github (push) Has been cancelled
Build App / build-linux (push) Has been cancelled
The horizontal button lists overflowed the viewable area of the card. Replace
with <select> dropdowns for both the main auth mode and the Bedrock sub-method.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 07:18:47 -07:00
b6fd8a557e Clean up compiler warnings and document Ollama/LiteLLM backends
All checks were successful
Build App / build-macos (push) Successful in 2m25s
Build App / build-windows (push) Successful in 3m22s
Build App / build-linux (push) Successful in 4m36s
Build App / sync-to-github (push) Successful in 10s
Remove unused `any_docker_mcp()` function, add `#[allow(unused_imports)]`
and `#[allow(dead_code)]` annotations to suppress false-positive warnings.
Update README.md and HOW-TO-USE.md with Ollama and LiteLLM auth backend
documentation including best-effort compatibility notices.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 13:23:50 -07:00
93deab68a7 Add Ollama and LiteLLM backend support (v0.2.0)
All checks were successful
Build App / build-macos (push) Successful in 2m25s
Build App / build-windows (push) Successful in 3m27s
Build App / build-linux (push) Successful in 4m31s
Build App / sync-to-github (push) Successful in 9s
Add two new auth modes for projects alongside Anthropic and Bedrock:
- Ollama: connect to local or remote Ollama servers via ANTHROPIC_BASE_URL
- LiteLLM: connect through a LiteLLM proxy gateway to 100+ model providers

Both modes inject ANTHROPIC_BASE_URL and ANTHROPIC_AUTH_TOKEN env vars into
the container, with optional model override via ANTHROPIC_MODEL. LiteLLM
API keys are stored securely in the OS keychain. Config changes trigger
automatic container recreation via fingerprinting.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 13:05:52 -07:00
2dce2993cc Fix AWS SSO for Bedrock profile auth in containers
All checks were successful
Build App / build-macos (push) Successful in 2m29s
Build App / build-windows (push) Successful in 3m56s
Build App / build-linux (push) Successful in 4m42s
Build Container / build-container (push) Successful in 54s
Build App / sync-to-github (push) Successful in 10s
SSO login was broken in containers due to three issues: the sso_session
indirection format not being resolved by Claude Code's AWS SDK, SSO
detection only checking sso_start_url (missing sso_session), and the
OAuth callback port not being accessible from inside the container.

This fix runs SSO login on the host OS (where the browser and ports work
natively) by having the container emit a marker that the Tauri app
detects in terminal output, triggering host-side `aws sso login`. The
entrypoint also inlines sso_session properties into profile sections and
injects awsAuthRefresh into Claude Code config for mid-session refresh.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 12:24:16 -07:00
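The marker-detection flow this commit describes can be sketched as follows. This is a minimal illustration, not the app's actual code: the marker string and function name are assumptions, since the change set only states that the container emits a marker which the Tauri app spots in the terminal output stream before running host-side `aws sso login`.

```rust
/// Hypothetical marker the container entrypoint would print when it needs
/// the host to run `aws sso login`; the real string is not shown in this
/// change set.
const SSO_MARKER: &str = "TRIPLE-C-SSO-REFRESH-REQUESTED";

/// Returns true if a chunk of streamed terminal output contains the marker.
/// The app would call this from the task that forwards `docker exec` stdout
/// to the UI, then invoke the host-side SSO login command on a match.
fn needs_host_sso_login(chunk: &str) -> bool {
    chunk.lines().any(|line| line.contains(SSO_MARKER))
}

fn main() {
    assert!(needs_host_sso_login(
        "AWS session expired or invalid.\nTRIPLE-C-SSO-REFRESH-REQUESTED\n"
    ));
    assert!(!needs_host_sso_login("AWS session valid.\n"));
    println!("marker detection ok");
}
```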
e482452ffd Expand MCP documentation with mode explanations and concrete examples
- Rewrite HOW-TO-USE.md MCP section with a mode matrix (stdio/http x
  manual/docker), four worked examples (filesystem, GitHub, custom HTTP,
  database), and detailed explanations of networking, auto-pull, and
  config injection
- Update README.md MCP architecture section with a mode table and
  key behaviors including auto-pull and Docker DNS details

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-10 10:19:03 -07:00
26 changed files with 727 additions and 77 deletions


@@ -72,7 +72,7 @@ docker exec stdout → tokio task → emit("terminal-output-{sessionId}") → li
- `container.rs` — Container lifecycle (create, start, stop, remove, inspect)
- `exec.rs` — PTY exec sessions with bidirectional stdin/stdout streaming
- `image.rs` — Image build/pull with progress streaming
- **`models/`** — Serde structs (`Project`, `AuthMode`, `BedrockConfig`, `ContainerInfo`, `AppSettings`). These define the IPC contract with the frontend.
- **`models/`** — Serde structs (`Project`, `AuthMode`, `BedrockConfig`, `OllamaConfig`, `LiteLlmConfig`, `ContainerInfo`, `AppSettings`). These define the IPC contract with the frontend.
- **`storage/`** — Persistence: `projects_store.rs` (JSON file with atomic writes), `secure.rs` (OS keychain via `keyring` crate), `settings_store.rs`
### Container (`container/`)
@@ -90,6 +90,8 @@ Containers use a **stop/start** model (not create/destroy). Installed packages p
Per-project, independently configured:
- **Anthropic (OAuth)** — `claude login` in terminal, token persists in config volume
- **AWS Bedrock** — Static keys, profile, or bearer token injected as env vars
- **Ollama** — Connect to a local or remote Ollama server via `ANTHROPIC_BASE_URL` (e.g., `http://host.docker.internal:11434`)
- **LiteLLM** — Connect through a LiteLLM proxy gateway via `ANTHROPIC_BASE_URL` + `ANTHROPIC_AUTH_TOKEN` to access 100+ model providers
## Styling


@@ -33,6 +33,8 @@ You need access to Claude Code through one of:
- **Anthropic account** — Sign up at https://claude.ai and use `claude login` (OAuth) inside the terminal
- **AWS Bedrock** — An AWS account with Bedrock access and Claude models enabled
- **Ollama** — A local or remote Ollama server running an Anthropic-compatible model (best-effort support)
- **LiteLLM** — A LiteLLM proxy gateway providing access to 100+ model providers (best-effort support)
---
@@ -88,6 +90,20 @@ Claude Code launches automatically with `--dangerously-skip-permissions` inside
3. Expand the **Config** panel and fill in your AWS credentials (see [AWS Bedrock Configuration](#aws-bedrock-configuration) below).
4. Start the container again.
**Ollama:**
1. Stop the container first (settings can only be changed while stopped).
2. In the project card, switch the auth mode to **Ollama**.
3. Expand the **Config** panel and set the base URL of your Ollama server (defaults to `http://host.docker.internal:11434` for a local instance). Optionally set a model ID.
4. Start the container again.
**LiteLLM:**
1. Stop the container first (settings can only be changed while stopped).
2. In the project card, switch the auth mode to **LiteLLM**.
3. Expand the **Config** panel and set the base URL of your LiteLLM proxy (defaults to `http://host.docker.internal:4000`). Optionally set an API key and model ID.
4. Start the container again.
---
## The Interface
@@ -225,6 +241,17 @@ Click **Edit** to write per-project instructions for Claude Code. These are writ
Triple-C supports [Model Context Protocol (MCP)](https://modelcontextprotocol.io/) servers, which extend Claude Code with access to external tools and data sources. MCP servers are configured in a **global library** and **enabled per-project**.
### How It Works
There are two dimensions to MCP server configuration:
| | **Manual** (no Docker image) | **Docker** (Docker image specified) |
|---|---|---|
| **Stdio** | Command runs inside the project container | Command runs in a separate MCP container via `docker exec` |
| **HTTP** | Connects to a URL you provide | Runs in a separate container, reached by hostname on a shared Docker network |
**Docker images are pulled automatically** if not already present when the project starts.
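The matrix above is effectively a two-by-two decision table. A minimal sketch of that table, using illustrative types rather than Triple-C's actual ones:

```rust
#[derive(Clone, Copy)]
enum Transport {
    Stdio,
    Http,
}

/// Where an MCP server runs, given its transport and whether a Docker image
/// is configured: a direct encoding of the mode matrix above.
fn where_it_runs(transport: Transport, has_docker_image: bool) -> &'static str {
    match (transport, has_docker_image) {
        (Transport::Stdio, false) => "inside the project container",
        (Transport::Stdio, true) => "separate MCP container, reached via docker exec",
        (Transport::Http, false) => "a URL you provide",
        (Transport::Http, true) => "separate MCP container, reached by hostname",
    }
}

fn main() {
    assert_eq!(
        where_it_runs(Transport::Stdio, false),
        "inside the project container"
    );
    println!("{}", where_it_runs(Transport::Http, true));
}
```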
### Accessing MCP Configuration
Click the **MCP** tab in the sidebar to open the MCP server library. This is where you define all available MCP servers.
@@ -232,43 +259,103 @@ Click the **MCP** tab in the sidebar to open the MCP server library. This is whe
### Adding an MCP Server
1. Type a name in the input field and click **Add**.
2. Configure the server in its card:
2. Expand the server card and configure it.
| Setting | Description |
|---------|-------------|
| **Docker Image** | Optional. If provided, the server runs as an isolated Docker container. |
| **Transport Type** | **Stdio** (command-line) or **HTTP** (network endpoint) |
The key decision is whether to set a **Docker Image**:
- **With Docker image** — The MCP server runs in its own isolated container. Best for servers that need specific dependencies or system-level packages.
- **Without Docker image** (manual) — The command runs directly inside your project container. Best for lightweight npx-based servers that just need Node.js.
#### Stdio Mode (Manual)
- **Command** — The executable to run (e.g., `npx`)
- **Arguments** — Space-separated arguments
- **Environment Variables** — Key-value pairs passed to the command
Then choose the **Transport Type**:
- **Stdio** — The MCP server communicates over stdin/stdout. This is the most common type.
- **HTTP** — The MCP server exposes an HTTP endpoint (streamable HTTP transport).
#### HTTP Mode (Manual)
- **URL** — The MCP endpoint (e.g., `http://localhost:3000/mcp`)
- **Headers** — Custom HTTP headers
### Configuration Examples
#### Docker Mode
When a Docker image is specified, the server runs as a container on a per-project network:
- **Container Port** — Port the MCP server listens on inside its container (default: 3000)
- **Environment Variables** — Injected into the Docker container
#### Example 1: Filesystem Server (Stdio, Manual)
A simple npx-based server that runs inside the project container. No Docker image needed since Node.js is already installed.
| Field | Value |
|-------|-------|
| **Docker Image** | *(empty)* |
| **Transport** | Stdio |
| **Command** | `npx` |
| **Arguments** | `-y @modelcontextprotocol/server-filesystem /workspace` |
This gives Claude Code access to browse and read files via MCP. The command runs directly inside the project container using the pre-installed Node.js.
#### Example 2: GitHub Server (Stdio, Manual)
Another npx-based server, with an environment variable for authentication.
| Field | Value |
|-------|-------|
| **Docker Image** | *(empty)* |
| **Transport** | Stdio |
| **Command** | `npx` |
| **Arguments** | `-y @modelcontextprotocol/server-github` |
| **Environment Variables** | `GITHUB_PERSONAL_ACCESS_TOKEN` = `ghp_your_token` |
#### Example 3: Custom MCP Server (HTTP, Docker)
An MCP server packaged as a Docker image that exposes an HTTP endpoint.
| Field | Value |
|-------|-------|
| **Docker Image** | `myregistry/my-mcp-server:latest` |
| **Transport** | HTTP |
| **Container Port** | `8080` |
| **Environment Variables** | `API_KEY` = `your_key` |
Triple-C will:
1. Pull the image automatically if not present
2. Start the container on the project's bridge network
3. Configure Claude Code to reach it at `http://triple-c-mcp-{id}:8080/mcp`
The hostname is the MCP container's name on the Docker network — **not** `localhost`.
#### Example 4: Database Server (Stdio, Docker)
An MCP server that needs its own runtime environment, communicating over stdio.
| Field | Value |
|-------|-------|
| **Docker Image** | `mcp/postgres-server:latest` |
| **Transport** | Stdio |
| **Command** | `node` |
| **Arguments** | `dist/index.js` |
| **Environment Variables** | `DATABASE_URL` = `postgresql://user:pass@host:5432/db` |
Triple-C will:
1. Pull the image and start it on the project network
2. Configure Claude Code to communicate via `docker exec -i triple-c-mcp-{id} node dist/index.js`
3. Automatically enable Docker socket access on the project container (required for `docker exec`)
### Enabling MCP Servers Per-Project
In a project's configuration panel, the **MCP Servers** section shows checkboxes for all globally defined servers. Toggle each server on or off for that project. Changes require a container restart.
In a project's configuration panel (click **Config**), the **MCP Servers** section shows checkboxes for all globally defined servers. Toggle each server on or off for that project. Changes take effect on the next container start.
### How Docker-Based MCP Works
When a project with Docker-based MCP servers starts:
1. A dedicated **bridge network** is created for the project (`triple-c-net-{projectId}`)
2. Each enabled Docker MCP server gets its own container on that network
3. The main project container is connected to the same network
4. MCP server configuration is injected into Claude Code's config file
1. Missing Docker images are **automatically pulled** (progress shown in the progress modal)
2. A dedicated **bridge network** is created for the project (`triple-c-net-{projectId}`)
3. Each enabled Docker MCP server gets its own container on that network
4. The main project container is connected to the same network
5. MCP server configuration is written to `~/.claude.json` inside the container
**Stdio + Docker** servers communicate via `docker exec`, which automatically enables Docker socket access on the main container. **HTTP + Docker** servers are reached by hostname on the shared network (e.g., `http://triple-c-mcp-{serverId}:3000/mcp`).
**Networking**: Docker-based MCP containers are reached by their container name as a hostname (e.g., `triple-c-mcp-{serverId}`), not by `localhost`. Docker DNS resolves these names automatically on the shared bridge network.
When MCP configuration changes (servers added/removed/modified), the container is automatically recreated on the next start to apply the new configuration.
**Stdio + Docker**: The project container uses `docker exec` to communicate with the MCP container over stdin/stdout. This automatically enables Docker socket access on the project container.
**HTTP + Docker**: The project container connects to the MCP container's HTTP endpoint using the container hostname and port (e.g., `http://triple-c-mcp-{serverId}:3000/mcp`).
**Manual (no Docker image)**: Stdio commands run directly inside the project container. HTTP URLs connect to wherever you point them (could be an external service or something running on the host).
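As an illustration of the hostname rule above, here is how the endpoint URL for an HTTP + Docker server would be assembled; the helper name is hypothetical, but the `triple-c-mcp-{serverId}` naming and `/mcp` path come from this change set.

```rust
/// Build the endpoint URL for an HTTP + Docker MCP server: the MCP
/// container's name doubles as its hostname on the shared bridge network.
fn mcp_http_endpoint(server_id: &str, container_port: u16) -> String {
    format!("http://triple-c-mcp-{}:{}/mcp", server_id, container_port)
}

fn main() {
    let url = mcp_http_endpoint("abc123", 3000);
    // Docker DNS resolves the container name; localhost would not work here.
    assert_eq!(url, "http://triple-c-mcp-abc123:3000/mcp");
    println!("{}", url);
}
```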
### Configuration Change Detection
MCP server configuration is tracked via SHA-256 fingerprints stored as Docker labels. If you add, remove, or modify MCP servers for a project, the container is automatically recreated on the next start to apply the new configuration. The container filesystem is snapshotted first, so installed packages are preserved.
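The fingerprint scheme can be sketched in a few lines: join the relevant config fields with a delimiter, hash the result, and compare it against the value stored as a Docker label. Triple-C itself uses SHA-256; std's `DefaultHasher` stands in here so the example runs without an external crate.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Dependency-free sketch of config-change detection: hash the joined
/// config fields. The real implementation stores a SHA-256 hex digest as a
/// Docker label and recreates the container when the values diverge.
fn fingerprint(parts: &[&str]) -> String {
    let mut hasher = DefaultHasher::new();
    parts.join("|").hash(&mut hasher);
    format!("{:016x}", hasher.finish())
}

fn main() {
    let before = fingerprint(&["http://host.docker.internal:11434", ""]);
    let after = fingerprint(&["http://host.docker.internal:11434", "qwen3.5:27b"]);
    // Any field change yields a different fingerprint, which the label
    // comparison turns into a container recreation on the next start.
    assert_ne!(before, after);
    assert_eq!(before, fingerprint(&["http://host.docker.internal:11434", ""]));
    println!("fingerprint check ok");
}
```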
---
@@ -301,6 +388,41 @@ Per-project settings always override these global defaults.
---
## Ollama Configuration
To use Claude Code with a local or remote Ollama server, switch the auth mode to **Ollama** on the project card.
### Settings
- **Base URL** — The URL of your Ollama server. Defaults to `http://host.docker.internal:11434`, which reaches a locally running Ollama instance from inside the container. For a remote server, use its IP or hostname (e.g., `http://192.168.1.100:11434`).
- **Model ID** — Optional. Override the model to use (e.g., `qwen3.5:27b`).
### How It Works
Triple-C sets `ANTHROPIC_BASE_URL` to point Claude Code at your Ollama server instead of Anthropic's API. The `ANTHROPIC_AUTH_TOKEN` is set to `ollama` (required by Claude Code but not used for actual authentication).
> **Note:** Ollama support is best-effort. Claude Code is designed for Anthropic models, so some features (tool use, extended thinking, prompt caching, etc.) may not work as expected with non-Anthropic models.
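The variable injection described above mirrors the `create_container` change in this compare view. A self-contained sketch of the environment assembled for Ollama mode:

```rust
/// Env vars injected into the container for Ollama auth mode, mirroring the
/// create_container logic in this change set.
fn ollama_env(base_url: &str, model_id: Option<&str>) -> Vec<String> {
    let mut env = vec![
        format!("ANTHROPIC_BASE_URL={}", base_url),
        // Claude Code requires a token to be present; Ollama ignores its value.
        "ANTHROPIC_AUTH_TOKEN=ollama".to_string(),
    ];
    if let Some(model) = model_id {
        env.push(format!("ANTHROPIC_MODEL={}", model));
    }
    env
}

fn main() {
    let env = ollama_env("http://host.docker.internal:11434", Some("qwen3.5:27b"));
    assert!(env.contains(&"ANTHROPIC_AUTH_TOKEN=ollama".to_string()));
    assert_eq!(env.len(), 3);
    println!("{:?}", env);
}
```

LiteLLM mode follows the same shape, except the auth token is only set when an API key is configured.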
---
## LiteLLM Configuration
To use Claude Code through a [LiteLLM](https://docs.litellm.ai/) proxy gateway, switch the auth mode to **LiteLLM** on the project card. LiteLLM supports 100+ model providers (OpenAI, Gemini, Anthropic, and more) through a single proxy.
### Settings
- **Base URL** — The URL of your LiteLLM proxy. Defaults to `http://host.docker.internal:4000` for a locally running proxy.
- **API Key** — Optional. The API key for your LiteLLM proxy, if authentication is required. Stored securely in your OS keychain.
- **Model ID** — Optional. Override the model to use.
### How It Works
Triple-C sets `ANTHROPIC_BASE_URL` to point Claude Code at your LiteLLM proxy. If an API key is provided, it is set as `ANTHROPIC_AUTH_TOKEN`.
> **Note:** LiteLLM support is best-effort. Claude Code is designed for Anthropic models, so some features (tool use, extended thinking, prompt caching, etc.) may not work as expected when routing to non-Anthropic models through the proxy.
---
## Settings
Access global settings via the **Settings** tab in the sidebar.


@@ -49,6 +49,10 @@ Each project can independently use one of:
- **Anthropic** (OAuth): User runs `claude login` inside the terminal on first use. Token persisted in the config volume across restarts and resets.
- **AWS Bedrock**: Per-project AWS credentials (static keys, profile, or bearer token). SSO sessions are validated before launching Claude for Profile auth.
- **Ollama**: Connect to a local or remote Ollama server via `ANTHROPIC_BASE_URL` (e.g., `http://host.docker.internal:11434`). Optional model override.
- **LiteLLM**: Connect through a LiteLLM proxy gateway via `ANTHROPIC_BASE_URL` + `ANTHROPIC_AUTH_TOKEN` to access 100+ model providers. API key stored securely in OS keychain.
> **Note:** Ollama and LiteLLM support is best-effort. Claude Code is designed for Anthropic models, so some features (tool use, extended thinking, prompt caching, etc.) may not work as expected with non-Anthropic models behind these backends.
### Container Spawning (Sibling Containers)
@@ -60,11 +64,22 @@ If the Docker access setting is toggled after a container already exists, the co
Triple-C supports [Model Context Protocol (MCP)](https://modelcontextprotocol.io/) servers as a Beta feature. MCP servers extend Claude Code with external tools and data sources.
**Modes**: Each MCP server operates in one of four modes based on transport type and whether a Docker image is specified:
| Mode | Where It Runs | How It Communicates |
|------|--------------|---------------------|
| Stdio + Manual | Inside the project container | Direct stdin/stdout (e.g., `npx -y @mcp/server`) |
| Stdio + Docker | Separate MCP container | `docker exec -i <mcp-container> <command>` from the project container |
| HTTP + Manual | External / user-provided | Connects to the URL you specify |
| HTTP + Docker | Separate MCP container | `http://<mcp-container>:<port>/mcp` via Docker DNS on a shared bridge network |
**Key behaviors**:
- **Global library**: MCP servers are defined globally in the MCP sidebar tab and stored in `mcp_servers.json`
- **Per-project toggles**: Each project enables/disables individual servers via checkboxes
- **Docker isolation**: MCP servers can run as isolated Docker containers on a per-project bridge network (`triple-c-net-{projectId}`)
- **Transport types**: Stdio (command-line) and HTTP (network endpoint), each with manual or Docker mode
- **Auto-pull**: Docker images for MCP servers are pulled automatically if not present when the project starts
- **Docker networking**: Docker-based MCP containers run on a per-project bridge network (`triple-c-net-{projectId}`), reachable by container name — not localhost
- **Auto-detection**: Config changes are detected via SHA-256 fingerprints and trigger automatic container recreation
- **Config injection**: MCP server configuration is written to `~/.claude.json` inside the container via the `MCP_SERVERS_JSON` environment variable, merged by the entrypoint using `jq`
### Mission Control Integration

app/package-lock.json (generated, 68 changes)

@@ -1,12 +1,12 @@
{
"name": "triple-c",
"version": "0.1.0",
"version": "0.2.0",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "triple-c",
"version": "0.1.0",
"version": "0.2.0",
"dependencies": {
"@tauri-apps/api": "^2",
"@tauri-apps/plugin-dialog": "^2",
@@ -1643,6 +1643,70 @@
"node": ">=14.0.0"
}
},
"node_modules/@tailwindcss/oxide-wasm32-wasi/node_modules/@emnapi/core": {
"version": "1.8.1",
"dev": true,
"inBundle": true,
"license": "MIT",
"optional": true,
"dependencies": {
"@emnapi/wasi-threads": "1.1.0",
"tslib": "^2.4.0"
}
},
"node_modules/@tailwindcss/oxide-wasm32-wasi/node_modules/@emnapi/runtime": {
"version": "1.8.1",
"dev": true,
"inBundle": true,
"license": "MIT",
"optional": true,
"dependencies": {
"tslib": "^2.4.0"
}
},
"node_modules/@tailwindcss/oxide-wasm32-wasi/node_modules/@emnapi/wasi-threads": {
"version": "1.1.0",
"dev": true,
"inBundle": true,
"license": "MIT",
"optional": true,
"dependencies": {
"tslib": "^2.4.0"
}
},
"node_modules/@tailwindcss/oxide-wasm32-wasi/node_modules/@napi-rs/wasm-runtime": {
"version": "1.1.1",
"dev": true,
"inBundle": true,
"license": "MIT",
"optional": true,
"dependencies": {
"@emnapi/core": "^1.7.1",
"@emnapi/runtime": "^1.7.1",
"@tybys/wasm-util": "^0.10.1"
},
"funding": {
"type": "github",
"url": "https://github.com/sponsors/Brooooooklyn"
}
},
"node_modules/@tailwindcss/oxide-wasm32-wasi/node_modules/@tybys/wasm-util": {
"version": "0.10.1",
"dev": true,
"inBundle": true,
"license": "MIT",
"optional": true,
"dependencies": {
"tslib": "^2.4.0"
}
},
"node_modules/@tailwindcss/oxide-wasm32-wasi/node_modules/tslib": {
"version": "2.8.1",
"dev": true,
"inBundle": true,
"license": "0BSD",
"optional": true
},
"node_modules/@tailwindcss/oxide-win32-arm64-msvc": {
"version": "4.2.1",
"resolved": "https://registry.npmjs.org/@tailwindcss/oxide-win32-arm64-msvc/-/oxide-win32-arm64-msvc-4.2.1.tgz",


@@ -1,7 +1,7 @@
{
"name": "triple-c",
"private": true,
"version": "0.1.0",
"version": "0.2.0",
"type": "module",
"scripts": {
"dev": "vite",


@@ -4668,7 +4668,7 @@ dependencies = [
[[package]]
name = "triple-c"
version = "0.1.0"
version = "0.2.0"
dependencies = [
"bollard",
"chrono",


@@ -1,6 +1,6 @@
[package]
name = "triple-c"
version = "0.1.0"
version = "0.2.0"
edition = "2021"
[lib]


@@ -0,0 +1,30 @@
use tauri::State;
use crate::AppState;
#[tauri::command]
pub async fn aws_sso_refresh(
project_id: String,
state: State<'_, AppState>,
) -> Result<(), String> {
let project = state.projects_store.get(&project_id)
.ok_or_else(|| format!("Project {} not found", project_id))?;
let profile = project.bedrock_config.as_ref()
.and_then(|b| b.aws_profile.clone())
.or_else(|| state.settings_store.get().global_aws.aws_profile.clone())
.unwrap_or_else(|| "default".to_string());
log::info!("Running host-side AWS SSO login for profile '{}'", profile);
let status = tokio::process::Command::new("aws")
.args(["sso", "login", "--profile", &profile])
.status()
.await
.map_err(|e| format!("Failed to run aws sso login: {}", e))?;
if !status.success() {
return Err("SSO login failed or was cancelled".to_string());
}
Ok(())
}


@@ -1,3 +1,4 @@
pub mod aws_commands;
pub mod docker_commands;
pub mod file_commands;
pub mod mcp_commands;


@@ -34,6 +34,11 @@ fn store_secrets_for_project(project: &Project) -> Result<(), String> {
secure::store_project_secret(&project.id, "aws-bearer-token", v)?;
}
}
if let Some(ref litellm) = project.litellm_config {
if let Some(ref v) = litellm.api_key {
secure::store_project_secret(&project.id, "litellm-api-key", v)?;
}
}
Ok(())
}
@@ -51,6 +56,10 @@ fn load_secrets_for_project(project: &mut Project) {
bedrock.aws_bearer_token = secure::get_project_secret(&project.id, "aws-bearer-token")
.unwrap_or(None);
}
if let Some(ref mut litellm) = project.litellm_config {
litellm.api_key = secure::get_project_secret(&project.id, "litellm-api-key")
.unwrap_or(None);
}
}
/// Resolve enabled MCP servers and filter to Docker-only ones.
@@ -180,6 +189,22 @@ pub async fn start_project_container(
}
}
if project.auth_mode == AuthMode::Ollama {
let ollama = project.ollama_config.as_ref()
.ok_or_else(|| "Ollama auth mode selected but no Ollama configuration found.".to_string())?;
if ollama.base_url.is_empty() {
return Err("Ollama base URL is required.".to_string());
}
}
if project.auth_mode == AuthMode::LiteLlm {
let litellm = project.litellm_config.as_ref()
.ok_or_else(|| "LiteLLM auth mode selected but no LiteLLM configuration found.".to_string())?;
if litellm.base_url.is_empty() {
return Err("LiteLLM base URL is required.".to_string());
}
}
// Update status to starting
state.projects_store.update_status(&project_id, ProjectStatus::Starting)?;


@@ -40,11 +40,12 @@ if aws sts get-caller-identity --profile '{profile}' >/dev/null 2>&1; then
echo "AWS session valid."
else
echo "AWS session expired or invalid."
# Check if this profile uses SSO (has sso_start_url configured)
if aws configure get sso_start_url --profile '{profile}' >/dev/null 2>&1; then
echo "Starting SSO login — click the URL below to authenticate:"
# Check if this profile uses SSO (has sso_start_url or sso_session configured)
if aws configure get sso_start_url --profile '{profile}' >/dev/null 2>&1 || \
aws configure get sso_session --profile '{profile}' >/dev/null 2>&1; then
echo "Starting SSO login..."
echo ""
aws sso login --profile '{profile}'
triple-c-sso-refresh
if [ $? -ne 0 ]; then
echo ""
echo "SSO login failed or was cancelled. Starting Claude anyway..."


@@ -231,6 +231,33 @@ fn compute_bedrock_fingerprint(project: &Project) -> String {
}
}
/// Compute a fingerprint for the Ollama configuration so we can detect changes.
fn compute_ollama_fingerprint(project: &Project) -> String {
if let Some(ref ollama) = project.ollama_config {
let parts = vec![
ollama.base_url.clone(),
ollama.model_id.as_deref().unwrap_or("").to_string(),
];
sha256_hex(&parts.join("|"))
} else {
String::new()
}
}
/// Compute a fingerprint for the LiteLLM configuration so we can detect changes.
fn compute_litellm_fingerprint(project: &Project) -> String {
if let Some(ref litellm) = project.litellm_config {
let parts = vec![
litellm.base_url.clone(),
litellm.api_key.as_deref().unwrap_or("").to_string(),
litellm.model_id.as_deref().unwrap_or("").to_string(),
];
sha256_hex(&parts.join("|"))
} else {
String::new()
}
}
/// Compute a fingerprint for the project paths so we can detect changes.
/// Sorted by mount_name so order changes don't cause spurious recreation.
fn compute_paths_fingerprint(paths: &[ProjectPath]) -> String {
@@ -459,6 +486,7 @@ pub async fn create_container(
if let Some(p) = profile {
env_vars.push(format!("AWS_PROFILE={}", p));
}
env_vars.push("AWS_SSO_AUTH_REFRESH_CMD=triple-c-sso-refresh".to_string());
}
BedrockAuthMethod::BearerToken => {
if let Some(ref token) = bedrock.aws_bearer_token {
@@ -477,6 +505,30 @@ pub async fn create_container(
}
}
// Ollama configuration
if project.auth_mode == AuthMode::Ollama {
if let Some(ref ollama) = project.ollama_config {
env_vars.push(format!("ANTHROPIC_BASE_URL={}", ollama.base_url));
env_vars.push("ANTHROPIC_AUTH_TOKEN=ollama".to_string());
if let Some(ref model) = ollama.model_id {
env_vars.push(format!("ANTHROPIC_MODEL={}", model));
}
}
}
// LiteLLM configuration
if project.auth_mode == AuthMode::LiteLlm {
if let Some(ref litellm) = project.litellm_config {
env_vars.push(format!("ANTHROPIC_BASE_URL={}", litellm.base_url));
if let Some(ref key) = litellm.api_key {
env_vars.push(format!("ANTHROPIC_AUTH_TOKEN={}", key));
}
if let Some(ref model) = litellm.model_id {
env_vars.push(format!("ANTHROPIC_MODEL={}", model));
}
}
}
// Custom environment variables (global + per-project, project overrides global for same key)
let merged_env = merge_custom_env_vars(global_custom_env_vars, &project.custom_env_vars);
let reserved_prefixes = ["ANTHROPIC_", "AWS_", "GIT_", "HOST_", "CLAUDE_", "TRIPLE_C_"];
@@ -645,6 +697,8 @@ pub async fn create_container(
labels.insert("triple-c.auth-mode".to_string(), format!("{:?}", project.auth_mode));
labels.insert("triple-c.paths-fingerprint".to_string(), compute_paths_fingerprint(&project.paths));
labels.insert("triple-c.bedrock-fingerprint".to_string(), compute_bedrock_fingerprint(project));
labels.insert("triple-c.ollama-fingerprint".to_string(), compute_ollama_fingerprint(project));
labels.insert("triple-c.litellm-fingerprint".to_string(), compute_litellm_fingerprint(project));
labels.insert("triple-c.ports-fingerprint".to_string(), compute_ports_fingerprint(&project.port_mappings));
labels.insert("triple-c.image".to_string(), image_name.to_string());
labels.insert("triple-c.timezone".to_string(), timezone.unwrap_or("").to_string());
@@ -884,6 +938,22 @@ pub async fn container_needs_recreation(
return Ok(true);
}
// ── Ollama config fingerprint ────────────────────────────────────────
let expected_ollama_fp = compute_ollama_fingerprint(project);
let container_ollama_fp = get_label("triple-c.ollama-fingerprint").unwrap_or_default();
if container_ollama_fp != expected_ollama_fp {
log::info!("Ollama config mismatch");
return Ok(true);
}
// ── LiteLLM config fingerprint ───────────────────────────────────────
let expected_litellm_fp = compute_litellm_fingerprint(project);
let container_litellm_fp = get_label("triple-c.litellm-fingerprint").unwrap_or_default();
if container_litellm_fp != expected_litellm_fp {
log::info!("LiteLLM config mismatch");
return Ok(true);
}
// ── Image ────────────────────────────────────────────────────────────
// The image label is set at creation time; if the user changed the
// configured image we need to recreate. We only compare when the
@@ -1073,11 +1143,6 @@ pub fn any_stdio_docker_mcp(servers: &[McpServer]) -> bool {
servers.iter().any(|s| s.is_docker() && s.transport_type == McpTransportType::Stdio)
}
/// Returns true if any MCP server uses Docker.
pub fn any_docker_mcp(servers: &[McpServer]) -> bool {
servers.iter().any(|s| s.is_docker())
}
/// Find an existing MCP container by its expected name.
pub async fn find_mcp_container(server: &McpServer) -> Result<Option<String>, String> {
let docker = get_docker()?;


@@ -22,6 +22,7 @@ impl ExecSession {
.map_err(|e| format!("Failed to send input: {}", e))
}
#[allow(dead_code)]
pub async fn resize(&self, cols: u16, rows: u16) -> Result<(), String> {
let docker = get_docker()?;
docker


@@ -4,8 +4,13 @@ pub mod image;
pub mod exec;
pub mod network;
#[allow(unused_imports)]
pub use client::*;
#[allow(unused_imports)]
pub use container::*;
#[allow(unused_imports)]
pub use image::*;
#[allow(unused_imports)]
pub use exec::*;
#[allow(unused_imports)]
pub use network::*;


@@ -48,6 +48,7 @@ pub async fn ensure_project_network(project_id: &str) -> Result<String, String>
}
/// Connect a container to the project network.
#[allow(dead_code)]
pub async fn connect_container_to_network(
container_id: &str,
network_name: &str,


@@ -114,6 +114,8 @@ pub fn run() {
commands::mcp_commands::add_mcp_server,
commands::mcp_commands::update_mcp_server,
commands::mcp_commands::remove_mcp_server,
// AWS
commands::aws_commands::aws_sso_refresh,
// Updates
commands::update_commands::get_app_version,
commands::update_commands::check_for_updates,


@@ -33,6 +33,8 @@ pub struct Project {
pub status: ProjectStatus,
pub auth_mode: AuthMode,
pub bedrock_config: Option<BedrockConfig>,
pub ollama_config: Option<OllamaConfig>,
pub litellm_config: Option<LiteLlmConfig>,
pub allow_docker_access: bool,
#[serde(default)]
pub mission_control_enabled: bool,
@@ -74,6 +76,9 @@ pub enum AuthMode {
#[serde(alias = "login", alias = "api_key")]
Anthropic,
Bedrock,
Ollama,
#[serde(alias = "litellm")]
LiteLlm,
}
impl Default for AuthMode {
@@ -115,6 +120,29 @@ pub struct BedrockConfig {
pub disable_prompt_caching: bool,
}
/// Ollama configuration for a project.
/// Ollama exposes an Anthropic-compatible API endpoint.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct OllamaConfig {
/// The base URL of the Ollama server (e.g., "http://host.docker.internal:11434" or "http://192.168.1.100:11434")
pub base_url: String,
/// Optional model override (e.g., "qwen3.5:27b")
pub model_id: Option<String>,
}
/// LiteLLM gateway configuration for a project.
/// LiteLLM translates Anthropic-format API calls into requests to 100+ model providers.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct LiteLlmConfig {
/// The base URL of the LiteLLM proxy (e.g., "http://host.docker.internal:4000" or "https://litellm.example.com")
pub base_url: String,
/// API key for the LiteLLM proxy
#[serde(skip_serializing, default)]
pub api_key: Option<String>,
/// Optional model override
pub model_id: Option<String>,
}
impl Project {
pub fn new(name: String, paths: Vec<ProjectPath>) -> Self {
let now = chrono::Utc::now().to_rfc3339();
@@ -126,6 +154,8 @@ impl Project {
status: ProjectStatus::Stopped,
auth_mode: AuthMode::default(),
bedrock_config: None,
ollama_config: None,
litellm_config: None,
allow_docker_access: false,
mission_control_enabled: false,
ssh_key_path: None,

View File

@@ -3,7 +3,11 @@ pub mod secure;
pub mod settings_store;
pub mod mcp_store;
#[allow(unused_imports)]
pub use projects_store::*;
#[allow(unused_imports)]
pub use secure::*;
#[allow(unused_imports)]
pub use settings_store::*;
#[allow(unused_imports)]
pub use mcp_store::*;

View File

@@ -1,7 +1,7 @@
{
"$schema": "https://raw.githubusercontent.com/tauri-apps/tauri/dev/crates/tauri-cli/schema.json",
"productName": "Triple-C",
"version": "0.1.0",
"version": "0.2.0",
"identifier": "com.triple-c.desktop",
"build": {
"beforeDevCommand": "npm run dev",

View File

@@ -1,7 +1,7 @@
import { useState, useEffect } from "react";
import { open } from "@tauri-apps/plugin-dialog";
import { listen } from "@tauri-apps/api/event";
import type { Project, ProjectPath, AuthMode, BedrockConfig, BedrockAuthMethod } from "../../lib/types";
import type { Project, ProjectPath, AuthMode, BedrockConfig, BedrockAuthMethod, OllamaConfig, LiteLlmConfig } from "../../lib/types";
import { useProjects } from "../../hooks/useProjects";
import { useMcpServers } from "../../hooks/useMcpServers";
import { useTerminal } from "../../hooks/useTerminal";
@@ -58,6 +58,15 @@ export default function ProjectCard({ project }: Props) {
const [bedrockBearerToken, setBedrockBearerToken] = useState(project.bedrock_config?.aws_bearer_token ?? "");
const [bedrockModelId, setBedrockModelId] = useState(project.bedrock_config?.model_id ?? "");
// Ollama local state
const [ollamaBaseUrl, setOllamaBaseUrl] = useState(project.ollama_config?.base_url ?? "http://host.docker.internal:11434");
const [ollamaModelId, setOllamaModelId] = useState(project.ollama_config?.model_id ?? "");
// LiteLLM local state
const [litellmBaseUrl, setLitellmBaseUrl] = useState(project.litellm_config?.base_url ?? "http://host.docker.internal:4000");
const [litellmApiKey, setLitellmApiKey] = useState(project.litellm_config?.api_key ?? "");
const [litellmModelId, setLitellmModelId] = useState(project.litellm_config?.model_id ?? "");
// Sync local state when project prop changes (e.g., after save or external update)
useEffect(() => {
setEditName(project.name);
@@ -76,6 +85,11 @@ export default function ProjectCard({ project }: Props) {
setBedrockProfile(project.bedrock_config?.aws_profile ?? "");
setBedrockBearerToken(project.bedrock_config?.aws_bearer_token ?? "");
setBedrockModelId(project.bedrock_config?.model_id ?? "");
setOllamaBaseUrl(project.ollama_config?.base_url ?? "http://host.docker.internal:11434");
setOllamaModelId(project.ollama_config?.model_id ?? "");
setLitellmBaseUrl(project.litellm_config?.base_url ?? "http://host.docker.internal:4000");
setLitellmApiKey(project.litellm_config?.api_key ?? "");
setLitellmModelId(project.litellm_config?.model_id ?? "");
}, [project]);
// Listen for container progress events
@@ -177,12 +191,29 @@ export default function ProjectCard({ project }: Props) {
disable_prompt_caching: false,
};
const defaultOllamaConfig: OllamaConfig = {
base_url: "http://host.docker.internal:11434",
model_id: null,
};
const defaultLiteLlmConfig: LiteLlmConfig = {
base_url: "http://host.docker.internal:4000",
api_key: null,
model_id: null,
};
const handleAuthModeChange = async (mode: AuthMode) => {
try {
const updates: Partial<Project> = { auth_mode: mode };
if (mode === "bedrock" && !project.bedrock_config) {
updates.bedrock_config = defaultBedrockConfig;
}
if (mode === "ollama" && !project.ollama_config) {
updates.ollama_config = defaultOllamaConfig;
}
if (mode === "lit_llm" && !project.litellm_config) {
updates.litellm_config = defaultLiteLlmConfig;
}
await update({ ...project, ...updates });
} catch (e) {
setError(String(e));
@@ -305,6 +336,51 @@ export default function ProjectCard({ project }: Props) {
}
};
const handleOllamaBaseUrlBlur = async () => {
try {
const current = project.ollama_config ?? defaultOllamaConfig;
await update({ ...project, ollama_config: { ...current, base_url: ollamaBaseUrl } });
} catch (err) {
console.error("Failed to update Ollama base URL:", err);
}
};
const handleOllamaModelIdBlur = async () => {
try {
const current = project.ollama_config ?? defaultOllamaConfig;
await update({ ...project, ollama_config: { ...current, model_id: ollamaModelId || null } });
} catch (err) {
console.error("Failed to update Ollama model ID:", err);
}
};
const handleLitellmBaseUrlBlur = async () => {
try {
const current = project.litellm_config ?? defaultLiteLlmConfig;
await update({ ...project, litellm_config: { ...current, base_url: litellmBaseUrl } });
} catch (err) {
console.error("Failed to update LiteLLM base URL:", err);
}
};
const handleLitellmApiKeyBlur = async () => {
try {
const current = project.litellm_config ?? defaultLiteLlmConfig;
await update({ ...project, litellm_config: { ...current, api_key: litellmApiKey || null } });
} catch (err) {
console.error("Failed to update LiteLLM API key:", err);
}
};
const handleLitellmModelIdBlur = async () => {
try {
const current = project.litellm_config ?? defaultLiteLlmConfig;
await update({ ...project, litellm_config: { ...current, model_id: litellmModelId || null } });
} catch (err) {
console.error("Failed to update LiteLLM model ID:", err);
}
};
const statusColor = {
stopped: "bg-[var(--text-secondary)]",
starting: "bg-[var(--warning)]",
@@ -373,28 +449,18 @@ export default function ProjectCard({ project }: Props) {
{/* Auth mode selector */}
<div className="flex items-center gap-1 text-xs">
<span className="text-[var(--text-secondary)] mr-1">Auth:</span>
<button
onClick={(e) => { e.stopPropagation(); handleAuthModeChange("anthropic"); }}
<select
value={project.auth_mode}
onChange={(e) => { e.stopPropagation(); handleAuthModeChange(e.target.value as AuthMode); }}
onClick={(e) => e.stopPropagation()}
disabled={!isStopped}
className={`px-2 py-0.5 rounded transition-colors ${
project.auth_mode === "anthropic"
? "bg-[var(--accent)] text-white"
: "text-[var(--text-secondary)] hover:text-[var(--text-primary)] hover:bg-[var(--bg-primary)]"
} disabled:opacity-50`}
className="px-2 py-0.5 rounded bg-[var(--bg-primary)] border border-[var(--border-color)] text-xs text-[var(--text-primary)] focus:outline-none focus:border-[var(--accent)] disabled:opacity-50"
>
Anthropic
</button>
<button
onClick={(e) => { e.stopPropagation(); handleAuthModeChange("bedrock"); }}
disabled={!isStopped}
className={`px-2 py-0.5 rounded transition-colors ${
project.auth_mode === "bedrock"
? "bg-[var(--accent)] text-white"
: "text-[var(--text-secondary)] hover:text-[var(--text-primary)] hover:bg-[var(--bg-primary)]"
} disabled:opacity-50`}
>
Bedrock
</button>
<option value="anthropic">Anthropic</option>
<option value="bedrock">Bedrock</option>
<option value="ollama">Ollama</option>
<option value="lit_llm">LiteLLM</option>
</select>
</div>
{/* Action buttons */}
@@ -738,20 +804,17 @@ export default function ProjectCard({ project }: Props) {
{/* Sub-method selector */}
<div className="flex items-center gap-1 text-xs">
<span className="text-[var(--text-secondary)] mr-1">Method:</span>
{(["static_credentials", "profile", "bearer_token"] as BedrockAuthMethod[]).map((m) => (
<button
key={m}
onClick={() => updateBedrockConfig({ auth_method: m })}
<select
value={bc.auth_method}
onChange={(e) => updateBedrockConfig({ auth_method: e.target.value as BedrockAuthMethod })}
onClick={(e) => e.stopPropagation()}
disabled={!isStopped}
className={`px-2 py-0.5 rounded transition-colors ${
bc.auth_method === m
? "bg-[var(--accent)] text-white"
: "text-[var(--text-secondary)] hover:text-[var(--text-primary)] hover:bg-[var(--bg-primary)]"
} disabled:opacity-50`}
className="px-2 py-0.5 rounded bg-[var(--bg-primary)] border border-[var(--border-color)] text-xs text-[var(--text-primary)] focus:outline-none focus:border-[var(--accent)] disabled:opacity-50"
>
{m === "static_credentials" ? "Keys" : m === "profile" ? "Profile" : "Token"}
</button>
))}
<option value="static_credentials">Keys</option>
<option value="profile">Profile</option>
<option value="bearer_token">Token</option>
</select>
</div>
{/* AWS Region (always shown) */}
@@ -851,6 +914,99 @@ export default function ProjectCard({ project }: Props) {
</div>
);
})()}
{/* Ollama config */}
{project.auth_mode === "ollama" && (() => {
const inputCls = "w-full px-2 py-1 bg-[var(--bg-primary)] border border-[var(--border-color)] rounded text-xs text-[var(--text-primary)] focus:outline-none focus:border-[var(--accent)] disabled:opacity-50";
return (
<div className="space-y-2 pt-1 border-t border-[var(--border-color)]">
<label className="block text-xs font-medium text-[var(--text-primary)]">Ollama</label>
<p className="text-xs text-[var(--text-secondary)]">
Connect to an Ollama server running locally or on a remote host.
</p>
<div>
<label className="block text-xs text-[var(--text-secondary)] mb-0.5">Base URL</label>
<input
value={ollamaBaseUrl}
onChange={(e) => setOllamaBaseUrl(e.target.value)}
onBlur={handleOllamaBaseUrlBlur}
placeholder="http://host.docker.internal:11434"
disabled={!isStopped}
className={inputCls}
/>
<p className="text-xs text-[var(--text-secondary)] mt-0.5 opacity-70">
Use host.docker.internal to reach the host machine, or an IP/hostname for a remote server.
</p>
</div>
<div>
<label className="block text-xs text-[var(--text-secondary)] mb-0.5">Model (optional)</label>
<input
value={ollamaModelId}
onChange={(e) => setOllamaModelId(e.target.value)}
onBlur={handleOllamaModelIdBlur}
placeholder="qwen3.5:27b"
disabled={!isStopped}
className={inputCls}
/>
</div>
</div>
);
})()}
{/* LiteLLM config */}
{project.auth_mode === "lit_llm" && (() => {
const inputCls = "w-full px-2 py-1 bg-[var(--bg-primary)] border border-[var(--border-color)] rounded text-xs text-[var(--text-primary)] focus:outline-none focus:border-[var(--accent)] disabled:opacity-50";
return (
<div className="space-y-2 pt-1 border-t border-[var(--border-color)]">
<label className="block text-xs font-medium text-[var(--text-primary)]">LiteLLM Gateway</label>
<p className="text-xs text-[var(--text-secondary)]">
Connect through a LiteLLM proxy to use 100+ model providers.
</p>
<div>
<label className="block text-xs text-[var(--text-secondary)] mb-0.5">Base URL</label>
<input
value={litellmBaseUrl}
onChange={(e) => setLitellmBaseUrl(e.target.value)}
onBlur={handleLitellmBaseUrlBlur}
placeholder="http://host.docker.internal:4000"
disabled={!isStopped}
className={inputCls}
/>
<p className="text-xs text-[var(--text-secondary)] mt-0.5 opacity-70">
Use host.docker.internal for a proxy on the host machine, or a full URL for a remote or containerized LiteLLM instance.
</p>
</div>
<div>
<label className="block text-xs text-[var(--text-secondary)] mb-0.5">API Key</label>
<input
type="password"
value={litellmApiKey}
onChange={(e) => setLitellmApiKey(e.target.value)}
onBlur={handleLitellmApiKeyBlur}
placeholder="sk-..."
disabled={!isStopped}
className={inputCls}
/>
</div>
<div>
<label className="block text-xs text-[var(--text-secondary)] mb-0.5">Model (optional)</label>
<input
value={litellmModelId}
onChange={(e) => setLitellmModelId(e.target.value)}
onBlur={handleLitellmModelIdBlur}
placeholder="gpt-4o / gemini-pro / etc."
disabled={!isStopped}
className={inputCls}
/>
</div>
</div>
);
})()}
</div>
)}
</div>

View File

@@ -6,6 +6,8 @@ import { WebLinksAddon } from "@xterm/addon-web-links";
import { openUrl } from "@tauri-apps/plugin-opener";
import "@xterm/xterm/css/xterm.css";
import { useTerminal } from "../../hooks/useTerminal";
import { useAppState } from "../../store/appState";
import { awsSsoRefresh } from "../../lib/tauri-commands";
import { UrlDetector } from "../../lib/urlDetector";
import UrlToast from "./UrlToast";
@@ -23,6 +25,12 @@ export default function TerminalView({ sessionId, active }: Props) {
const detectorRef = useRef<UrlDetector | null>(null);
const { sendInput, pasteImage, resize, onOutput, onExit } = useTerminal();
const ssoBufferRef = useRef("");
const ssoTriggeredRef = useRef(false);
const projectId = useAppState(
(s) => s.sessions.find((sess) => sess.id === sessionId)?.projectId
);
const [detectedUrl, setDetectedUrl] = useState<string | null>(null);
const [imagePasteMsg, setImagePasteMsg] = useState<string | null>(null);
const [isAtBottom, setIsAtBottom] = useState(true);
@@ -152,10 +160,30 @@ export default function TerminalView({ sessionId, active }: Props) {
const detector = new UrlDetector((url) => setDetectedUrl(url));
detectorRef.current = detector;
const SSO_MARKER = "###TRIPLE_C_SSO_REFRESH###";
const textDecoder = new TextDecoder();
const outputPromise = onOutput(sessionId, (data) => {
if (aborted) return;
term.write(data);
detector.feed(data);
// Scan for SSO refresh marker in terminal output
if (!ssoTriggeredRef.current && projectId) {
const text = textDecoder.decode(data, { stream: true });
// Combine with overlap from previous chunk to handle marker spanning chunks
const combined = ssoBufferRef.current + text;
if (combined.includes(SSO_MARKER)) {
ssoTriggeredRef.current = true;
ssoBufferRef.current = "";
awsSsoRefresh(projectId).catch((e) =>
console.error("AWS SSO refresh failed:", e)
);
} else {
// Keep the last marker-length chars as overlap for the next chunk
ssoBufferRef.current = combined.slice(-SSO_MARKER.length);
}
}
}).then((unlisten) => {
if (aborted) unlisten();
return unlisten;
@@ -189,6 +217,8 @@ export default function TerminalView({ sessionId, active }: Props) {
aborted = true;
detector.dispose();
detectorRef.current = null;
ssoTriggeredRef.current = false;
ssoBufferRef.current = "";
osc52Disposable.dispose();
inputDisposable.dispose();
scrollDisposable.dispose();

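The marker scan above has to handle the case where the SSO marker string is split across two output chunks, which it does by carrying a marker-length tail of the previous chunk forward as an overlap buffer. A minimal Python sketch of the same technique (the chunk contents below are made up; the marker string matches the one in the diff):

```python
MARKER = "###TRIPLE_C_SSO_REFRESH###"

def make_scanner(marker: str):
    """Return a feed(chunk) -> bool function that detects `marker`
    even when it straddles a chunk boundary."""
    tail = ""        # overlap carried over from the previous chunk
    triggered = False

    def feed(chunk: str) -> bool:
        nonlocal tail, triggered
        if triggered:
            return True
        combined = tail + chunk
        if marker in combined:
            triggered = True
            tail = ""
            return True
        # Keep a marker-length suffix so a marker split across the
        # boundary is still found on the next call.
        tail = combined[-len(marker):]
        return False

    return feed

feed = make_scanner(MARKER)
# The marker arrives split across two chunks:
assert feed("some output ###TRIPLE_C_SSO") is False
assert feed("_REFRESH### more output") is True
```

Once triggered the scanner latches (as `ssoTriggeredRef` does in the component), so repeated marker output cannot fire the host-side refresh twice.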
View File

@@ -40,6 +40,10 @@ export const listAwsProfiles = () =>
export const detectHostTimezone = () =>
invoke<string>("detect_host_timezone");
// AWS
export const awsSsoRefresh = (projectId: string) =>
invoke<void>("aws_sso_refresh", { projectId });
// Terminal
export const openTerminalSession = (projectId: string, sessionId: string, sessionType?: string) =>
invoke<void>("open_terminal_session", { projectId, sessionId, sessionType });

View File

@@ -22,6 +22,8 @@ export interface Project {
status: ProjectStatus;
auth_mode: AuthMode;
bedrock_config: BedrockConfig | null;
ollama_config: OllamaConfig | null;
litellm_config: LiteLlmConfig | null;
allow_docker_access: boolean;
mission_control_enabled: boolean;
ssh_key_path: string | null;
@@ -43,7 +45,7 @@ export type ProjectStatus =
| "stopping"
| "error";
export type AuthMode = "anthropic" | "bedrock";
export type AuthMode = "anthropic" | "bedrock" | "ollama" | "lit_llm";
export type BedrockAuthMethod = "static_credentials" | "profile" | "bearer_token";
@@ -59,6 +61,17 @@ export interface BedrockConfig {
disable_prompt_caching: boolean;
}
export interface OllamaConfig {
base_url: string;
model_id: string | null;
}
export interface LiteLlmConfig {
base_url: string;
api_key: string | null;
model_id: string | null;
}
export interface ContainerInfo {
container_id: string;
project_id: string;

View File

@@ -119,6 +119,9 @@ RUN chmod +x /usr/local/bin/audio-shim \
&& ln -sf /usr/local/bin/audio-shim /usr/local/bin/rec \
&& ln -sf /usr/local/bin/audio-shim /usr/local/bin/arecord
COPY triple-c-sso-refresh /usr/local/bin/triple-c-sso-refresh
RUN chmod +x /usr/local/bin/triple-c-sso-refresh
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh
COPY triple-c-scheduler /usr/local/bin/triple-c-scheduler

View File

@@ -84,6 +84,31 @@ if [ -d /tmp/.host-aws ]; then
# Ensure writable cache directories exist
mkdir -p /home/claude/.aws/sso/cache /home/claude/.aws/cli/cache
chown -R claude:claude /home/claude/.aws/sso /home/claude/.aws/cli
# Inline sso_session properties into profile sections so AWS SDKs that don't
# support the sso_session indirection format can resolve sso_region, etc.
if [ -f /home/claude/.aws/config ]; then
python3 -c '
import configparser, sys
c = configparser.ConfigParser()
c.read(sys.argv[1])
for sec in c.sections():
if not sec.startswith("profile ") and sec != "default":
continue
session = c.get(sec, "sso_session", fallback=None)
if not session or c.has_option(sec, "sso_start_url"):
continue
ss = f"sso-session {session}"
if not c.has_section(ss):
continue
for key in ("sso_start_url", "sso_region", "sso_registration_scopes"):
val = c.get(ss, key, fallback=None)
if val:
c.set(sec, key, val)
with open(sys.argv[1], "w") as f:
c.write(f)
' /home/claude/.aws/config 2>/dev/null || true
fi
fi
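The inlining step above can be exercised on its own: given a config that uses the `sso_session` indirection, the loop copies the session's properties into each profile section that references it. A standalone Python reproduction of the same transform (the profile and session names below are sample data, not from the repo):

```python
import configparser

config_text = """\
[profile dev]
sso_session = corp
sso_account_id = 123456789012
sso_role_name = Developer

[sso-session corp]
sso_start_url = https://example.awsapps.com/start
sso_region = us-east-1
"""

c = configparser.ConfigParser()
c.read_string(config_text)

# Same logic as the entrypoint: copy sso-session keys into each profile
# that references the session but lacks its own sso_start_url.
for sec in c.sections():
    if not sec.startswith("profile ") and sec != "default":
        continue
    session = c.get(sec, "sso_session", fallback=None)
    if not session or c.has_option(sec, "sso_start_url"):
        continue
    ss = f"sso-session {session}"
    if not c.has_section(ss):
        continue
    for key in ("sso_start_url", "sso_region", "sso_registration_scopes"):
        val = c.get(ss, key, fallback=None)
        if val:
            c.set(sec, key, val)

assert c.get("profile dev", "sso_region") == "us-east-1"
assert c.get("profile dev", "sso_start_url") == "https://example.awsapps.com/start"
```

Profiles that already define `sso_start_url` are left untouched, so the transform is idempotent and safe to run on every container start.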
# ── Git credential helper (for HTTPS token) ─────────────────────────────────
@@ -164,6 +189,24 @@ if [ -n "$MCP_SERVERS_JSON" ]; then
unset MCP_SERVERS_JSON
fi
# ── AWS SSO auth refresh command ──────────────────────────────────────────────
# When set, inject awsAuthRefresh into ~/.claude.json so Claude Code calls
# triple-c-sso-refresh when AWS credentials expire mid-session.
if [ -n "$AWS_SSO_AUTH_REFRESH_CMD" ]; then
CLAUDE_JSON="/home/claude/.claude.json"
if [ -f "$CLAUDE_JSON" ]; then
MERGED=$(jq --arg cmd "$AWS_SSO_AUTH_REFRESH_CMD" '.awsAuthRefresh = $cmd' "$CLAUDE_JSON" 2>/dev/null)
if [ -n "$MERGED" ]; then
printf '%s\n' "$MERGED" > "$CLAUDE_JSON"
fi
else
printf '{"awsAuthRefresh":"%s"}\n' "$AWS_SSO_AUTH_REFRESH_CMD" > "$CLAUDE_JSON"
fi
chown claude:claude "$CLAUDE_JSON"
chmod 600 "$CLAUDE_JSON"
unset AWS_SSO_AUTH_REFRESH_CMD
fi
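The jq step above merges `awsAuthRefresh` into an existing `~/.claude.json`, falling back to `printf` when the file does not exist yet. A Python sketch of the equivalent merge (function name is mine, not part of the repo) — note that serializing through a JSON library also sidesteps the quoting pitfall the `printf` fallback has if the command ever contained double quotes:

```python
import json
from typing import Optional

def set_aws_auth_refresh(existing_json: Optional[str], cmd: str) -> str:
    """Mirror the entrypoint's jq step: set .awsAuthRefresh on the
    existing file content, or create the document from scratch."""
    data = json.loads(existing_json) if existing_json else {}
    data["awsAuthRefresh"] = cmd
    return json.dumps(data)

# Existing settings are preserved; the refresh command is added:
merged = json.loads(set_aws_auth_refresh('{"theme":"dark"}', "triple-c-sso-refresh"))
assert merged == {"theme": "dark", "awsAuthRefresh": "triple-c-sso-refresh"}
```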
# ── Docker socket permissions ────────────────────────────────────────────────
if [ -S /var/run/docker.sock ]; then
DOCKER_GID=$(stat -c '%g' /var/run/docker.sock)

container/triple-c-sso-refresh Executable file
View File

@@ -0,0 +1,33 @@
#!/bin/bash
# Signal Triple-C to perform host-side AWS SSO login, then sync the result.
CACHE_DIR="$HOME/.aws/sso/cache"
HOST_CACHE="/tmp/.host-aws/sso/cache"
MARKER="/tmp/.sso-refresh-marker"
touch "$MARKER"
# Emit marker for Triple-C app to detect in terminal output
echo "###TRIPLE_C_SSO_REFRESH###"
echo "Waiting for SSO login to complete on host..."
TIMEOUT=120
ELAPSED=0
while [ $ELAPSED -lt $TIMEOUT ]; do
if [ -d "$HOST_CACHE" ]; then
NEW=$(find "$HOST_CACHE" -name "*.json" -newer "$MARKER" 2>/dev/null | head -1)
if [ -n "$NEW" ]; then
mkdir -p "$CACHE_DIR"
cp -f "$HOST_CACHE"/*.json "$CACHE_DIR/" 2>/dev/null
chown -R "$(whoami)" "$CACHE_DIR"
echo "AWS SSO credentials refreshed successfully."
rm -f "$MARKER"
exit 0
fi
fi
sleep 2
ELAPSED=$((ELAPSED + 2))
done
echo "SSO refresh timed out (${TIMEOUT}s). Please try again."
rm -f "$MARKER"
exit 1
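The script's polling loop treats any `*.json` in the host cache with an mtime newer than the marker file as evidence that the host-side SSO login completed. The freshness test that `find ... -newer "$MARKER"` performs can be sketched in Python like this (function name is illustrative, not from the repo):

```python
import os
import pathlib

def newer_json_files(directory: str, marker_path: str):
    """Return *.json files in `directory` modified after the marker file —
    the same mtime comparison that `find -newer` performs."""
    marker_mtime = os.path.getmtime(marker_path)
    return [p for p in pathlib.Path(directory).glob("*.json")
            if p.stat().st_mtime > marker_mtime]
```

In the refresh script this check runs every 2 seconds until a fresh credential file appears or the 120-second timeout elapses.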