Compare commits
4 Commits
v0.1.93-ma...v0.1.97-wi

| SHA1 |
|------|
| 93deab68a7 |
| 2dce2993cc |
| e482452ffd |
| 8c710fc7bf |
@@ -72,7 +72,7 @@ docker exec stdout → tokio task → emit("terminal-output-{sessionId}") → li
 - `container.rs` — Container lifecycle (create, start, stop, remove, inspect)
 - `exec.rs` — PTY exec sessions with bidirectional stdin/stdout streaming
 - `image.rs` — Image build/pull with progress streaming
-- **`models/`** — Serde structs (`Project`, `AuthMode`, `BedrockConfig`, `ContainerInfo`, `AppSettings`). These define the IPC contract with the frontend.
+- **`models/`** — Serde structs (`Project`, `AuthMode`, `BedrockConfig`, `OllamaConfig`, `LiteLlmConfig`, `ContainerInfo`, `AppSettings`). These define the IPC contract with the frontend.
 - **`storage/`** — Persistence: `projects_store.rs` (JSON file with atomic writes), `secure.rs` (OS keychain via `keyring` crate), `settings_store.rs`
 
 ### Container (`container/`)
@@ -90,6 +90,8 @@ Containers use a **stop/start** model (not create/destroy). Installed packages p
 Per-project, independently configured:
 - **Anthropic (OAuth)** — `claude login` in terminal, token persists in config volume
 - **AWS Bedrock** — Static keys, profile, or bearer token injected as env vars
+- **Ollama** — Connect to a local or remote Ollama server via `ANTHROPIC_BASE_URL` (e.g., `http://host.docker.internal:11434`)
+- **LiteLLM** — Connect through a LiteLLM proxy gateway via `ANTHROPIC_BASE_URL` + `ANTHROPIC_AUTH_TOKEN` to access 100+ model providers
 
 ## Styling
117 HOW-TO-USE.md
@@ -225,6 +225,17 @@ Click **Edit** to write per-project instructions for Claude Code. These are writ
 
 Triple-C supports [Model Context Protocol (MCP)](https://modelcontextprotocol.io/) servers, which extend Claude Code with access to external tools and data sources. MCP servers are configured in a **global library** and **enabled per-project**.
 
+### How It Works
+
+There are two dimensions to MCP server configuration:
+
+| | **Manual** (no Docker image) | **Docker** (Docker image specified) |
+|---|---|---|
+| **Stdio** | Command runs inside the project container | Command runs in a separate MCP container via `docker exec` |
+| **HTTP** | Connects to a URL you provide | Runs in a separate container, reached by hostname on a shared Docker network |
+
+**Docker images are pulled automatically** if not already present when the project starts.
+
 ### Accessing MCP Configuration
 
 Click the **MCP** tab in the sidebar to open the MCP server library. This is where you define all available MCP servers.
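The transport × deployment matrix above can be sketched as a small decision function. This is an illustrative model only; the enum and function names below are hypothetical, not Triple-C's actual types.

```rust
// Hypothetical sketch of the transport × deployment matrix described above.
#[derive(Debug, Clone, Copy, PartialEq)]
enum Transport {
    Stdio,
    Http,
}

/// Where and how an MCP server is reached, given its transport and
/// whether a Docker image was configured for it.
fn connection_for(transport: Transport, docker_image: Option<&str>) -> String {
    match (transport, docker_image) {
        (Transport::Stdio, None) => "run command inside the project container".to_string(),
        (Transport::Stdio, Some(_)) => "docker exec into a separate MCP container".to_string(),
        (Transport::Http, None) => "connect to the user-provided URL".to_string(),
        (Transport::Http, Some(_)) => {
            "HTTP to the MCP container's hostname on the shared network".to_string()
        }
    }
}

fn main() {
    // Setting a Docker image flips a stdio server from in-container to docker exec.
    assert!(connection_for(Transport::Stdio, None).contains("project container"));
    assert!(connection_for(Transport::Stdio, Some("mcp/postgres-server:latest")).contains("docker exec"));
}
```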
@@ -232,43 +243,103 @@ Click the **MCP** tab in the sidebar to open the MCP server library. This is whe
 ### Adding an MCP Server
 
 1. Type a name in the input field and click **Add**.
-2. Configure the server in its card:
+2. Expand the server card and configure it.
 
-| Setting | Description |
-|---------|-------------|
-| **Docker Image** | Optional. If provided, the server runs as an isolated Docker container. |
-| **Transport Type** | **Stdio** (command-line) or **HTTP** (network endpoint) |
-
-#### Stdio Mode (Manual)
-
-- **Command** — The executable to run (e.g., `npx`)
-- **Arguments** — Space-separated arguments
-- **Environment Variables** — Key-value pairs passed to the command
-
-#### HTTP Mode (Manual)
-
-- **URL** — The MCP endpoint (e.g., `http://localhost:3000/mcp`)
-- **Headers** — Custom HTTP headers
-
-#### Docker Mode
-
-When a Docker image is specified, the server runs as a container on a per-project network:
-
-- **Container Port** — Port the MCP server listens on inside its container (default: 3000)
-- **Environment Variables** — Injected into the Docker container
+The key decision is whether to set a **Docker Image**:
+
+- **With Docker image** — The MCP server runs in its own isolated container. Best for servers that need specific dependencies or system-level packages.
+- **Without Docker image** (manual) — The command runs directly inside your project container. Best for lightweight npx-based servers that just need Node.js.
+
+Then choose the **Transport Type**:
+
+- **Stdio** — The MCP server communicates over stdin/stdout. This is the most common type.
+- **HTTP** — The MCP server exposes an HTTP endpoint (streamable HTTP transport).
+
+### Configuration Examples
+
+#### Example 1: Filesystem Server (Stdio, Manual)
+
+A simple npx-based server that runs inside the project container. No Docker image needed since Node.js is already installed.
+
+| Field | Value |
+|-------|-------|
+| **Docker Image** | *(empty)* |
+| **Transport** | Stdio |
+| **Command** | `npx` |
+| **Arguments** | `-y @modelcontextprotocol/server-filesystem /workspace` |
+
+This gives Claude Code access to browse and read files via MCP. The command runs directly inside the project container using the pre-installed Node.js.
+
+#### Example 2: GitHub Server (Stdio, Manual)
+
+Another npx-based server, with an environment variable for authentication.
+
+| Field | Value |
+|-------|-------|
+| **Docker Image** | *(empty)* |
+| **Transport** | Stdio |
+| **Command** | `npx` |
+| **Arguments** | `-y @modelcontextprotocol/server-github` |
+| **Environment Variables** | `GITHUB_PERSONAL_ACCESS_TOKEN` = `ghp_your_token` |
+
+#### Example 3: Custom MCP Server (HTTP, Docker)
+
+An MCP server packaged as a Docker image that exposes an HTTP endpoint.
+
+| Field | Value |
+|-------|-------|
+| **Docker Image** | `myregistry/my-mcp-server:latest` |
+| **Transport** | HTTP |
+| **Container Port** | `8080` |
+| **Environment Variables** | `API_KEY` = `your_key` |
+
+Triple-C will:
+
+1. Pull the image automatically if not present
+2. Start the container on the project's bridge network
+3. Configure Claude Code to reach it at `http://triple-c-mcp-{id}:8080/mcp`
+
+The hostname is the MCP container's name on the Docker network — **not** `localhost`.
+
+#### Example 4: Database Server (Stdio, Docker)
+
+An MCP server that needs its own runtime environment, communicating over stdio.
+
+| Field | Value |
+|-------|-------|
+| **Docker Image** | `mcp/postgres-server:latest` |
+| **Transport** | Stdio |
+| **Command** | `node` |
+| **Arguments** | `dist/index.js` |
+| **Environment Variables** | `DATABASE_URL` = `postgresql://user:pass@host:5432/db` |
+
+Triple-C will:
+
+1. Pull the image and start it on the project network
+2. Configure Claude Code to communicate via `docker exec -i triple-c-mcp-{id} node dist/index.js`
+3. Automatically enable Docker socket access on the project container (required for `docker exec`)
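The container and endpoint names in Examples 3 and 4 follow a predictable scheme. A minimal sketch of that naming, assuming the `triple-c-mcp-{id}` and `triple-c-net-{id}` conventions this document describes (the helper function names are illustrative, not Triple-C's API):

```rust
// Illustrative helpers for the naming conventions described above.
fn mcp_container_name(server_id: &str) -> String {
    format!("triple-c-mcp-{}", server_id)
}

fn project_network_name(project_id: &str) -> String {
    format!("triple-c-net-{}", project_id)
}

/// Endpoint Claude Code is pointed at for an HTTP + Docker server.
fn mcp_http_endpoint(server_id: &str, port: u16) -> String {
    format!("http://{}:{}/mcp", mcp_container_name(server_id), port)
}

/// Command used to reach a Stdio + Docker server from the project container.
fn mcp_exec_command(server_id: &str, command: &str, args: &str) -> String {
    format!("docker exec -i {} {} {}", mcp_container_name(server_id), command, args)
}

fn main() {
    assert_eq!(
        mcp_http_endpoint("abc123", 8080),
        "http://triple-c-mcp-abc123:8080/mcp"
    );
    assert_eq!(
        mcp_exec_command("abc123", "node", "dist/index.js"),
        "docker exec -i triple-c-mcp-abc123 node dist/index.js"
    );
    println!("{}", project_network_name("proj-1"));
}
```

Note the hostname is the container name resolved by Docker DNS on the shared bridge network, never `localhost`.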
 ### Enabling MCP Servers Per-Project
 
-In a project's configuration panel, the **MCP Servers** section shows checkboxes for all globally defined servers. Toggle each server on or off for that project. Changes require a container restart.
+In a project's configuration panel (click **Config**), the **MCP Servers** section shows checkboxes for all globally defined servers. Toggle each server on or off for that project. Changes take effect on the next container start.
 
 ### How Docker-Based MCP Works
 
 When a project with Docker-based MCP servers starts:
 
-1. A dedicated **bridge network** is created for the project (`triple-c-net-{projectId}`)
-2. Each enabled Docker MCP server gets its own container on that network
-3. The main project container is connected to the same network
-4. MCP server configuration is injected into Claude Code's config file
-
-**Stdio + Docker** servers communicate via `docker exec`, which automatically enables Docker socket access on the main container. **HTTP + Docker** servers are reached by hostname on the shared network (e.g., `http://triple-c-mcp-{serverId}:3000/mcp`).
-
-When MCP configuration changes (servers added/removed/modified), the container is automatically recreated on the next start to apply the new configuration.
+1. Missing Docker images are **automatically pulled** (progress shown in the progress modal)
+2. A dedicated **bridge network** is created for the project (`triple-c-net-{projectId}`)
+3. Each enabled Docker MCP server gets its own container on that network
+4. The main project container is connected to the same network
+5. MCP server configuration is written to `~/.claude.json` inside the container
+
+**Networking**: Docker-based MCP containers are reached by their container name as a hostname (e.g., `triple-c-mcp-{serverId}`), not by `localhost`. Docker DNS resolves these names automatically on the shared bridge network.
+
+**Stdio + Docker**: The project container uses `docker exec` to communicate with the MCP container over stdin/stdout. This automatically enables Docker socket access on the project container.
+
+**HTTP + Docker**: The project container connects to the MCP container's HTTP endpoint using the container hostname and port (e.g., `http://triple-c-mcp-{serverId}:3000/mcp`).
+
+**Manual (no Docker image)**: Stdio commands run directly inside the project container. HTTP URLs connect to wherever you point them (could be an external service or something running on the host).
+
+### Configuration Change Detection
+
+MCP server configuration is tracked via SHA-256 fingerprints stored as Docker labels. If you add, remove, or modify MCP servers for a project, the container is automatically recreated on the next start to apply the new configuration. The container filesystem is snapshotted first, so installed packages are preserved.
+
 ---
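The change-detection scheme can be sketched as: join the relevant config fields with a separator, hash the result, and compare against the label stored on the container. The sketch below uses std's `DefaultHasher` as a stand-in for SHA-256 (which would need an external crate); the field list mirrors the LiteLLM fingerprint added in this diff.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Stand-in for sha256_hex: hash the joined config fields into a hex string.
fn fingerprint(parts: &[&str]) -> String {
    let mut h = DefaultHasher::new();
    parts.join("|").hash(&mut h);
    format!("{:016x}", h.finish())
}

fn main() {
    // Fingerprint stored as a Docker label when the container was created.
    let stored = fingerprint(&["http://host.docker.internal:4000", "", ""]);
    // User later sets a model override: the fingerprint changes,
    // so the container is recreated on the next start.
    let current = fingerprint(&["http://host.docker.internal:4000", "", "some-model"]);
    assert_ne!(stored, current);
    // Unchanged config hashes to the same value, so no recreation happens.
    assert_eq!(stored, fingerprint(&["http://host.docker.internal:4000", "", ""]));
}
```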
15 README.md
@@ -60,11 +60,22 @@ If the Docker access setting is toggled after a container already exists, the co
 
 Triple-C supports [Model Context Protocol (MCP)](https://modelcontextprotocol.io/) servers as a Beta feature. MCP servers extend Claude Code with external tools and data sources.
 
+**Modes**: Each MCP server operates in one of four modes based on transport type and whether a Docker image is specified:
+
+| Mode | Where It Runs | How It Communicates |
+|------|--------------|---------------------|
+| Stdio + Manual | Inside the project container | Direct stdin/stdout (e.g., `npx -y @mcp/server`) |
+| Stdio + Docker | Separate MCP container | `docker exec -i <mcp-container> <command>` from the project container |
+| HTTP + Manual | External / user-provided | Connects to the URL you specify |
+| HTTP + Docker | Separate MCP container | `http://<mcp-container>:<port>/mcp` via Docker DNS on a shared bridge network |
+
+**Key behaviors**:
+
 - **Global library**: MCP servers are defined globally in the MCP sidebar tab and stored in `mcp_servers.json`
 - **Per-project toggles**: Each project enables/disables individual servers via checkboxes
-- **Docker isolation**: MCP servers can run as isolated Docker containers on a per-project bridge network (`triple-c-net-{projectId}`)
-- **Transport types**: Stdio (command-line) and HTTP (network endpoint), each with manual or Docker mode
+- **Auto-pull**: Docker images for MCP servers are pulled automatically if not present when the project starts
+- **Docker networking**: Docker-based MCP containers run on a per-project bridge network (`triple-c-net-{projectId}`), reachable by container name — not localhost
 - **Auto-detection**: Config changes are detected via SHA-256 fingerprints and trigger automatic container recreation
+- **Config injection**: MCP server configuration is written to `~/.claude.json` inside the container via the `MCP_SERVERS_JSON` environment variable, merged by the entrypoint using `jq`
 
 ### Mission Control Integration
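The config-injection behavior (serialize the enabled servers into an env var, let the entrypoint merge it into `~/.claude.json` with `jq`) can be illustrated with a minimal sketch. The JSON shape below is hypothetical; the actual schema Triple-C writes is not shown in this diff.

```rust
// Hypothetical illustration of passing MCP config through an env var.
// The real JSON schema used by Triple-C is not shown in this diff.
fn mcp_servers_json(name: &str, command: &str, args: &[&str]) -> String {
    let args_json: Vec<String> = args.iter().map(|a| format!("\"{}\"", a)).collect();
    format!(
        "{{\"{}\":{{\"command\":\"{}\",\"args\":[{}]}}}}",
        name,
        command,
        args_json.join(",")
    )
}

fn main() {
    let v = mcp_servers_json("filesystem", "npx", &["-y", "@modelcontextprotocol/server-filesystem"]);
    // The entrypoint would merge a value like this into ~/.claude.json,
    // e.g. with something along the lines of:
    //   jq --argjson mcp "$MCP_SERVERS_JSON" '.mcpServers += $mcp' ~/.claude.json
    assert!(v.contains("\"command\":\"npx\""));
    println!("{}", v);
}
```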
@@ -1,7 +1,7 @@
 {
   "name": "triple-c",
   "private": true,
-  "version": "0.1.0",
+  "version": "0.2.0",
   "type": "module",
   "scripts": {
     "dev": "vite",
2 app/src-tauri/Cargo.lock (generated)
@@ -4668,7 +4668,7 @@ dependencies = [
 
 [[package]]
 name = "triple-c"
-version = "0.1.0"
+version = "0.2.0"
 dependencies = [
  "bollard",
  "chrono",
@@ -1,6 +1,6 @@
 [package]
 name = "triple-c"
-version = "0.1.0"
+version = "0.2.0"
 edition = "2021"
 
 [lib]
30 app/src-tauri/src/commands/aws_commands.rs (new file)
@@ -0,0 +1,30 @@
+use tauri::State;
+use crate::AppState;
+
+#[tauri::command]
+pub async fn aws_sso_refresh(
+    project_id: String,
+    state: State<'_, AppState>,
+) -> Result<(), String> {
+    let project = state.projects_store.get(&project_id)
+        .ok_or_else(|| format!("Project {} not found", project_id))?;
+
+    let profile = project.bedrock_config.as_ref()
+        .and_then(|b| b.aws_profile.clone())
+        .or_else(|| state.settings_store.get().global_aws.aws_profile.clone())
+        .unwrap_or_else(|| "default".to_string());
+
+    log::info!("Running host-side AWS SSO login for profile '{}'", profile);
+
+    let status = tokio::process::Command::new("aws")
+        .args(["sso", "login", "--profile", &profile])
+        .status()
+        .await
+        .map_err(|e| format!("Failed to run aws sso login: {}", e))?;
+
+    if !status.success() {
+        return Err("SSO login failed or was cancelled".to_string());
+    }
+
+    Ok(())
+}
@@ -1,3 +1,4 @@
+pub mod aws_commands;
 pub mod docker_commands;
 pub mod file_commands;
 pub mod mcp_commands;
@@ -34,6 +34,11 @@ fn store_secrets_for_project(project: &Project) -> Result<(), String> {
             secure::store_project_secret(&project.id, "aws-bearer-token", v)?;
         }
     }
+    if let Some(ref litellm) = project.litellm_config {
+        if let Some(ref v) = litellm.api_key {
+            secure::store_project_secret(&project.id, "litellm-api-key", v)?;
+        }
+    }
     Ok(())
 }
@@ -51,6 +56,10 @@ fn load_secrets_for_project(project: &mut Project) {
         bedrock.aws_bearer_token = secure::get_project_secret(&project.id, "aws-bearer-token")
             .unwrap_or(None);
     }
+    if let Some(ref mut litellm) = project.litellm_config {
+        litellm.api_key = secure::get_project_secret(&project.id, "litellm-api-key")
+            .unwrap_or(None);
+    }
 }
 
 /// Resolve enabled MCP servers and filter to Docker-only ones.
@@ -180,6 +189,22 @@ pub async fn start_project_container(
         }
     }
+
+    if project.auth_mode == AuthMode::Ollama {
+        let ollama = project.ollama_config.as_ref()
+            .ok_or_else(|| "Ollama auth mode selected but no Ollama configuration found.".to_string())?;
+        if ollama.base_url.is_empty() {
+            return Err("Ollama base URL is required.".to_string());
+        }
+    }
+
+    if project.auth_mode == AuthMode::LiteLlm {
+        let litellm = project.litellm_config.as_ref()
+            .ok_or_else(|| "LiteLLM auth mode selected but no LiteLLM configuration found.".to_string())?;
+        if litellm.base_url.is_empty() {
+            return Err("LiteLLM base URL is required.".to_string());
+        }
+    }
+
     // Update status to starting
     state.projects_store.update_status(&project_id, ProjectStatus::Starting)?;
@@ -202,6 +227,28 @@ pub async fn start_project_container(
 
     // Set up Docker network and MCP containers if needed
     let network_name = if !docker_mcp.is_empty() {
+        // Pull any missing MCP Docker images before starting containers
+        for server in &docker_mcp {
+            if let Some(ref image) = server.docker_image {
+                if !docker::image_exists(image).await.unwrap_or(false) {
+                    emit_progress(
+                        &app_handle,
+                        &project_id,
+                        &format!("Pulling MCP image for '{}'...", server.name),
+                    );
+                    let image_clone = image.clone();
+                    let app_clone = app_handle.clone();
+                    let pid_clone = project_id.clone();
+                    let sname = server.name.clone();
+                    docker::pull_image(&image_clone, move |msg| {
+                        emit_progress(&app_clone, &pid_clone, &format!("[{}] {}", sname, msg));
+                    }).await.map_err(|e| {
+                        format!("Failed to pull MCP image '{}' for '{}': {}", image, server.name, e)
+                    })?;
+                }
+            }
+        }
+
         emit_progress(&app_handle, &project_id, "Setting up MCP network...");
         let net = docker::ensure_project_network(&project.id).await?;
         emit_progress(&app_handle, &project_id, "Starting MCP containers...");
@@ -40,11 +40,12 @@ if aws sts get-caller-identity --profile '{profile}' >/dev/null 2>&1; then
     echo "AWS session valid."
 else
     echo "AWS session expired or invalid."
-    # Check if this profile uses SSO (has sso_start_url configured)
-    if aws configure get sso_start_url --profile '{profile}' >/dev/null 2>&1; then
-        echo "Starting SSO login — click the URL below to authenticate:"
+    # Check if this profile uses SSO (has sso_start_url or sso_session configured)
+    if aws configure get sso_start_url --profile '{profile}' >/dev/null 2>&1 || \
+       aws configure get sso_session --profile '{profile}' >/dev/null 2>&1; then
+        echo "Starting SSO login..."
         echo ""
-        aws sso login --profile '{profile}'
+        triple-c-sso-refresh
         if [ $? -ne 0 ]; then
             echo ""
             echo "SSO login failed or was cancelled. Starting Claude anyway..."
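The `triple-c-sso-refresh` helper the script now calls delegates to the host-side `aws_sso_refresh` command added earlier in this diff, which maps the process exit status to a `Result`. That status handling can be sketched synchronously with `std::process`, using `sh -c` here to stand in for the real `aws` CLI:

```rust
use std::process::Command;

/// Mirror of the exit-status handling in `aws_sso_refresh`, shown
/// synchronously; `sh -c` stands in for the `aws sso login` invocation.
fn run_login(cmd: &str) -> Result<(), String> {
    let status = Command::new("sh")
        .args(["-c", cmd])
        .status()
        .map_err(|e| format!("Failed to run login command: {}", e))?;
    if !status.success() {
        return Err("SSO login failed or was cancelled".to_string());
    }
    Ok(())
}

fn main() {
    // A zero exit code means the login succeeded; anything else is an error.
    assert!(run_login("exit 0").is_ok());
    assert!(run_login("exit 1").is_err());
}
```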
@@ -231,6 +231,33 @@ fn compute_bedrock_fingerprint(project: &Project) -> String {
     }
 }
+
+/// Compute a fingerprint for the Ollama configuration so we can detect changes.
+fn compute_ollama_fingerprint(project: &Project) -> String {
+    if let Some(ref ollama) = project.ollama_config {
+        let parts = vec![
+            ollama.base_url.clone(),
+            ollama.model_id.as_deref().unwrap_or("").to_string(),
+        ];
+        sha256_hex(&parts.join("|"))
+    } else {
+        String::new()
+    }
+}
+
+/// Compute a fingerprint for the LiteLLM configuration so we can detect changes.
+fn compute_litellm_fingerprint(project: &Project) -> String {
+    if let Some(ref litellm) = project.litellm_config {
+        let parts = vec![
+            litellm.base_url.clone(),
+            litellm.api_key.as_deref().unwrap_or("").to_string(),
+            litellm.model_id.as_deref().unwrap_or("").to_string(),
+        ];
+        sha256_hex(&parts.join("|"))
+    } else {
+        String::new()
+    }
+}
+
 /// Compute a fingerprint for the project paths so we can detect changes.
 /// Sorted by mount_name so order changes don't cause spurious recreation.
 fn compute_paths_fingerprint(paths: &[ProjectPath]) -> String {
@@ -459,6 +486,7 @@ pub async fn create_container(
                 if let Some(p) = profile {
                     env_vars.push(format!("AWS_PROFILE={}", p));
                 }
+                env_vars.push("AWS_SSO_AUTH_REFRESH_CMD=triple-c-sso-refresh".to_string());
             }
             BedrockAuthMethod::BearerToken => {
                 if let Some(ref token) = bedrock.aws_bearer_token {
@@ -477,6 +505,30 @@ pub async fn create_container(
         }
     }
+
+    // Ollama configuration
+    if project.auth_mode == AuthMode::Ollama {
+        if let Some(ref ollama) = project.ollama_config {
+            env_vars.push(format!("ANTHROPIC_BASE_URL={}", ollama.base_url));
+            env_vars.push("ANTHROPIC_AUTH_TOKEN=ollama".to_string());
+            if let Some(ref model) = ollama.model_id {
+                env_vars.push(format!("ANTHROPIC_MODEL={}", model));
+            }
+        }
+    }
+
+    // LiteLLM configuration
+    if project.auth_mode == AuthMode::LiteLlm {
+        if let Some(ref litellm) = project.litellm_config {
+            env_vars.push(format!("ANTHROPIC_BASE_URL={}", litellm.base_url));
+            if let Some(ref key) = litellm.api_key {
+                env_vars.push(format!("ANTHROPIC_AUTH_TOKEN={}", key));
+            }
+            if let Some(ref model) = litellm.model_id {
+                env_vars.push(format!("ANTHROPIC_MODEL={}", model));
+            }
+        }
+    }
+
     // Custom environment variables (global + per-project, project overrides global for same key)
     let merged_env = merge_custom_env_vars(global_custom_env_vars, &project.custom_env_vars);
     let reserved_prefixes = ["ANTHROPIC_", "AWS_", "GIT_", "HOST_", "CLAUDE_", "TRIPLE_C_"];
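The custom env var merge guarded by `reserved_prefixes` can be sketched like this. `merge_custom_env_vars` itself is not shown in the diff, so its shape here is an assumption: project vars override global vars for the same key, and any key with a reserved prefix is dropped so user config cannot clobber the auth env vars above.

```rust
use std::collections::BTreeMap;

// Same reserved prefixes as the real code above.
const RESERVED_PREFIXES: [&str; 6] = ["ANTHROPIC_", "AWS_", "GIT_", "HOST_", "CLAUDE_", "TRIPLE_C_"];

/// Assumed sketch: project vars override global vars for the same key,
/// then keys with a reserved prefix are filtered out.
fn merge_custom_env_vars(
    global: &BTreeMap<String, String>,
    project: &BTreeMap<String, String>,
) -> BTreeMap<String, String> {
    let mut merged = global.clone();
    for (k, v) in project {
        merged.insert(k.clone(), v.clone());
    }
    merged
        .into_iter()
        .filter(|(k, _)| !RESERVED_PREFIXES.iter().any(|p| k.starts_with(p)))
        .collect()
}

fn main() {
    let global = BTreeMap::from([("MY_VAR".to_string(), "global".to_string())]);
    let project = BTreeMap::from([
        ("MY_VAR".to_string(), "project".to_string()),
        ("AWS_REGION".to_string(), "eu-west-1".to_string()), // reserved prefix → dropped
    ]);
    let merged = merge_custom_env_vars(&global, &project);
    assert_eq!(merged.get("MY_VAR").map(String::as_str), Some("project"));
    assert!(!merged.contains_key("AWS_REGION"));
}
```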
@@ -645,6 +697,8 @@ pub async fn create_container(
     labels.insert("triple-c.auth-mode".to_string(), format!("{:?}", project.auth_mode));
     labels.insert("triple-c.paths-fingerprint".to_string(), compute_paths_fingerprint(&project.paths));
     labels.insert("triple-c.bedrock-fingerprint".to_string(), compute_bedrock_fingerprint(project));
+    labels.insert("triple-c.ollama-fingerprint".to_string(), compute_ollama_fingerprint(project));
+    labels.insert("triple-c.litellm-fingerprint".to_string(), compute_litellm_fingerprint(project));
     labels.insert("triple-c.ports-fingerprint".to_string(), compute_ports_fingerprint(&project.port_mappings));
     labels.insert("triple-c.image".to_string(), image_name.to_string());
     labels.insert("triple-c.timezone".to_string(), timezone.unwrap_or("").to_string());
@@ -884,6 +938,22 @@ pub async fn container_needs_recreation(
         return Ok(true);
     }
+
+    // ── Ollama config fingerprint ────────────────────────────────────────
+    let expected_ollama_fp = compute_ollama_fingerprint(project);
+    let container_ollama_fp = get_label("triple-c.ollama-fingerprint").unwrap_or_default();
+    if container_ollama_fp != expected_ollama_fp {
+        log::info!("Ollama config mismatch");
+        return Ok(true);
+    }
+
+    // ── LiteLLM config fingerprint ───────────────────────────────────────
+    let expected_litellm_fp = compute_litellm_fingerprint(project);
+    let container_litellm_fp = get_label("triple-c.litellm-fingerprint").unwrap_or_default();
+    if container_litellm_fp != expected_litellm_fp {
+        log::info!("LiteLLM config mismatch");
+        return Ok(true);
+    }
+
     // ── Image ────────────────────────────────────────────────────────────
     // The image label is set at creation time; if the user changed the
     // configured image we need to recreate. We only compare when the
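The recreation check above compares each expected fingerprint against the label stored on the running container. A reduced sketch of that decision, with a plain map standing in for Docker's container-label inspection:

```rust
use std::collections::HashMap;

/// Returns true if any expected fingerprint differs from the container's label
/// (a missing label reads as "", mirroring unwrap_or_default above).
fn needs_recreation(labels: &HashMap<&str, &str>, expected: &[(&str, &str)]) -> bool {
    expected
        .iter()
        .any(|(key, fp)| labels.get(key).copied().unwrap_or("") != *fp)
}

fn main() {
    let labels = HashMap::from([
        ("triple-c.ollama-fingerprint", "aaa"),
        ("triple-c.litellm-fingerprint", "bbb"),
    ]);
    // Matching fingerprints: no recreation needed.
    assert!(!needs_recreation(&labels, &[("triple-c.ollama-fingerprint", "aaa")]));
    // A changed LiteLLM config produces a new fingerprint: recreate.
    assert!(needs_recreation(&labels, &[("triple-c.litellm-fingerprint", "ccc")]));
}
```

Note one subtlety this mirrors: a container created before these labels existed reads the label as "", which only matches when the project has no Ollama/LiteLLM config (whose fingerprint is also the empty string), so old containers are not spuriously recreated.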
@@ -114,6 +114,8 @@ pub fn run() {
             commands::mcp_commands::add_mcp_server,
             commands::mcp_commands::update_mcp_server,
             commands::mcp_commands::remove_mcp_server,
+            // AWS
+            commands::aws_commands::aws_sso_refresh,
             // Updates
             commands::update_commands::get_app_version,
             commands::update_commands::check_for_updates,
@@ -33,6 +33,8 @@ pub struct Project {
     pub status: ProjectStatus,
     pub auth_mode: AuthMode,
     pub bedrock_config: Option<BedrockConfig>,
+    pub ollama_config: Option<OllamaConfig>,
+    pub litellm_config: Option<LiteLlmConfig>,
     pub allow_docker_access: bool,
     #[serde(default)]
     pub mission_control_enabled: bool,
@@ -74,6 +76,9 @@ pub enum AuthMode {
     #[serde(alias = "login", alias = "api_key")]
     Anthropic,
     Bedrock,
+    Ollama,
+    #[serde(alias = "litellm")]
+    LiteLlm,
 }
 
 impl Default for AuthMode {
```diff
@@ -115,6 +120,29 @@ pub struct BedrockConfig {
     pub disable_prompt_caching: bool,
 }
 
+/// Ollama configuration for a project.
+/// Ollama exposes an Anthropic-compatible API endpoint.
+#[derive(Debug, Clone, Serialize, Deserialize)]
+pub struct OllamaConfig {
+    /// The base URL of the Ollama server (e.g., "http://host.docker.internal:11434" or "http://192.168.1.100:11434")
+    pub base_url: String,
+    /// Optional model override (e.g., "qwen3.5:27b")
+    pub model_id: Option<String>,
+}
+
+/// LiteLLM gateway configuration for a project.
+/// LiteLLM translates Anthropic API calls to 100+ model providers.
+#[derive(Debug, Clone, Serialize, Deserialize)]
+pub struct LiteLlmConfig {
+    /// The base URL of the LiteLLM proxy (e.g., "http://host.docker.internal:4000" or "https://litellm.example.com")
+    pub base_url: String,
+    /// API key for the LiteLLM proxy
+    #[serde(skip_serializing, default)]
+    pub api_key: Option<String>,
+    /// Optional model override
+    pub model_id: Option<String>,
+}
+
 impl Project {
     pub fn new(name: String, paths: Vec<ProjectPath>) -> Self {
         let now = chrono::Utc::now().to_rfc3339();
```
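These config structs exist so the backend can inject provider settings into the project container at start. A hypothetical sketch of that flattening step, illustration only: the env var names `ANTHROPIC_BASE_URL` and `ANTHROPIC_MODEL` are assumptions, not confirmed by this diff:

```rust
// Hypothetical sketch (not the real injection code): flatten an
// OllamaConfig into env var pairs for the container. The variable names
// ANTHROPIC_BASE_URL / ANTHROPIC_MODEL are assumed for illustration.
#[derive(Debug, Clone)]
pub struct OllamaConfig {
    pub base_url: String,
    pub model_id: Option<String>,
}

fn ollama_env(cfg: &OllamaConfig) -> Vec<(String, String)> {
    let mut env = vec![("ANTHROPIC_BASE_URL".to_string(), cfg.base_url.clone())];
    if let Some(model) = &cfg.model_id {
        // Only set a model override when one was configured.
        env.push(("ANTHROPIC_MODEL".to_string(), model.clone()));
    }
    env
}

fn main() {
    let cfg = OllamaConfig {
        base_url: "http://host.docker.internal:11434".into(),
        model_id: Some("qwen3.5:27b".into()),
    };
    let env = ollama_env(&cfg);
    assert_eq!(env.len(), 2);
    assert_eq!(env[0].1, "http://host.docker.internal:11434");
}
```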
```diff
@@ -126,6 +154,8 @@ impl Project {
             status: ProjectStatus::Stopped,
             auth_mode: AuthMode::default(),
             bedrock_config: None,
+            ollama_config: None,
+            litellm_config: None,
             allow_docker_access: false,
             mission_control_enabled: false,
             ssh_key_path: None,
```
```diff
@@ -1,7 +1,7 @@
 {
   "$schema": "https://raw.githubusercontent.com/tauri-apps/tauri/dev/crates/tauri-cli/schema.json",
   "productName": "Triple-C",
-  "version": "0.1.0",
+  "version": "0.2.0",
   "identifier": "com.triple-c.desktop",
   "build": {
     "beforeDevCommand": "npm run dev",
```
```diff
@@ -147,7 +147,7 @@ export default function McpServerCard({ server, onUpdate, onRemove }: Props) {
             className={inputCls}
           />
           <p className="text-xs text-[var(--text-secondary)] mt-0.5 opacity-60">
-            Set a Docker image to run this MCP server as a container. Leave empty for manual mode.
+            Set a Docker image to run this MCP server in its own container. Leave empty to run commands inside the project container. Images are pulled automatically if not present.
           </p>
         </div>
 
@@ -171,6 +171,14 @@ export default function McpServerCard({ server, onUpdate, onRemove }: Props) {
           </div>
         </div>
 
+        {/* Mode description */}
+        <p className="text-xs text-[var(--text-secondary)] opacity-60">
+          {transportType === "stdio" && isDocker && "Runs via docker exec in a separate MCP container."}
+          {transportType === "stdio" && !isDocker && "Runs inside the project container (e.g. npx commands)."}
+          {transportType === "http" && isDocker && "Runs in a separate container, reached by hostname on the project network."}
+          {transportType === "http" && !isDocker && "Connects to an MCP server at the URL you specify."}
+        </p>
+
         {/* Container Port (HTTP+Docker only) */}
         {transportType === "http" && isDocker && (
           <div>
@@ -183,7 +191,7 @@ export default function McpServerCard({ server, onUpdate, onRemove }: Props) {
             className={inputCls}
           />
           <p className="text-xs text-[var(--text-secondary)] mt-0.5 opacity-60">
-            Port inside the MCP container (default: 3000)
+            Port the MCP server listens on inside its container. The URL is auto-generated as http://<container>:<port>/mcp on the project network.
           </p>
         </div>
       )}
```
```diff
@@ -1,7 +1,7 @@
 import { useState, useEffect } from "react";
 import { open } from "@tauri-apps/plugin-dialog";
 import { listen } from "@tauri-apps/api/event";
-import type { Project, ProjectPath, AuthMode, BedrockConfig, BedrockAuthMethod } from "../../lib/types";
+import type { Project, ProjectPath, AuthMode, BedrockConfig, BedrockAuthMethod, OllamaConfig, LiteLlmConfig } from "../../lib/types";
 import { useProjects } from "../../hooks/useProjects";
 import { useMcpServers } from "../../hooks/useMcpServers";
 import { useTerminal } from "../../hooks/useTerminal";
```
```diff
@@ -58,6 +58,15 @@ export default function ProjectCard({ project }: Props) {
   const [bedrockBearerToken, setBedrockBearerToken] = useState(project.bedrock_config?.aws_bearer_token ?? "");
   const [bedrockModelId, setBedrockModelId] = useState(project.bedrock_config?.model_id ?? "");
 
+  // Ollama local state
+  const [ollamaBaseUrl, setOllamaBaseUrl] = useState(project.ollama_config?.base_url ?? "http://host.docker.internal:11434");
+  const [ollamaModelId, setOllamaModelId] = useState(project.ollama_config?.model_id ?? "");
+
+  // LiteLLM local state
+  const [litellmBaseUrl, setLitellmBaseUrl] = useState(project.litellm_config?.base_url ?? "http://host.docker.internal:4000");
+  const [litellmApiKey, setLitellmApiKey] = useState(project.litellm_config?.api_key ?? "");
+  const [litellmModelId, setLitellmModelId] = useState(project.litellm_config?.model_id ?? "");
+
   // Sync local state when project prop changes (e.g., after save or external update)
   useEffect(() => {
     setEditName(project.name);
@@ -76,6 +85,11 @@ export default function ProjectCard({ project }: Props) {
     setBedrockProfile(project.bedrock_config?.aws_profile ?? "");
     setBedrockBearerToken(project.bedrock_config?.aws_bearer_token ?? "");
     setBedrockModelId(project.bedrock_config?.model_id ?? "");
+    setOllamaBaseUrl(project.ollama_config?.base_url ?? "http://host.docker.internal:11434");
+    setOllamaModelId(project.ollama_config?.model_id ?? "");
+    setLitellmBaseUrl(project.litellm_config?.base_url ?? "http://host.docker.internal:4000");
+    setLitellmApiKey(project.litellm_config?.api_key ?? "");
+    setLitellmModelId(project.litellm_config?.model_id ?? "");
   }, [project]);
 
   // Listen for container progress events
```
```diff
@@ -177,12 +191,29 @@ export default function ProjectCard({ project }: Props) {
     disable_prompt_caching: false,
   };
 
+  const defaultOllamaConfig: OllamaConfig = {
+    base_url: "http://host.docker.internal:11434",
+    model_id: null,
+  };
+
+  const defaultLiteLlmConfig: LiteLlmConfig = {
+    base_url: "http://host.docker.internal:4000",
+    api_key: null,
+    model_id: null,
+  };
+
   const handleAuthModeChange = async (mode: AuthMode) => {
     try {
       const updates: Partial<Project> = { auth_mode: mode };
       if (mode === "bedrock" && !project.bedrock_config) {
         updates.bedrock_config = defaultBedrockConfig;
       }
+      if (mode === "ollama" && !project.ollama_config) {
+        updates.ollama_config = defaultOllamaConfig;
+      }
+      if (mode === "lit_llm" && !project.litellm_config) {
+        updates.litellm_config = defaultLiteLlmConfig;
+      }
       await update({ ...project, ...updates });
     } catch (e) {
       setError(String(e));
```
```diff
@@ -305,6 +336,51 @@ export default function ProjectCard({ project }: Props) {
     }
   };
 
+  const handleOllamaBaseUrlBlur = async () => {
+    try {
+      const current = project.ollama_config ?? defaultOllamaConfig;
+      await update({ ...project, ollama_config: { ...current, base_url: ollamaBaseUrl } });
+    } catch (err) {
+      console.error("Failed to update Ollama base URL:", err);
+    }
+  };
+
+  const handleOllamaModelIdBlur = async () => {
+    try {
+      const current = project.ollama_config ?? defaultOllamaConfig;
+      await update({ ...project, ollama_config: { ...current, model_id: ollamaModelId || null } });
+    } catch (err) {
+      console.error("Failed to update Ollama model ID:", err);
+    }
+  };
+
+  const handleLitellmBaseUrlBlur = async () => {
+    try {
+      const current = project.litellm_config ?? defaultLiteLlmConfig;
+      await update({ ...project, litellm_config: { ...current, base_url: litellmBaseUrl } });
+    } catch (err) {
+      console.error("Failed to update LiteLLM base URL:", err);
+    }
+  };
+
+  const handleLitellmApiKeyBlur = async () => {
+    try {
+      const current = project.litellm_config ?? defaultLiteLlmConfig;
+      await update({ ...project, litellm_config: { ...current, api_key: litellmApiKey || null } });
+    } catch (err) {
+      console.error("Failed to update LiteLLM API key:", err);
+    }
+  };
+
+  const handleLitellmModelIdBlur = async () => {
+    try {
+      const current = project.litellm_config ?? defaultLiteLlmConfig;
+      await update({ ...project, litellm_config: { ...current, model_id: litellmModelId || null } });
+    } catch (err) {
+      console.error("Failed to update LiteLLM model ID:", err);
+    }
+  };
+
   const statusColor = {
     stopped: "bg-[var(--text-secondary)]",
     starting: "bg-[var(--warning)]",
```
```diff
@@ -395,6 +471,28 @@ export default function ProjectCard({ project }: Props) {
             >
               Bedrock
             </button>
+            <button
+              onClick={(e) => { e.stopPropagation(); handleAuthModeChange("ollama"); }}
+              disabled={!isStopped}
+              className={`px-2 py-0.5 rounded transition-colors ${
+                project.auth_mode === "ollama"
+                  ? "bg-[var(--accent)] text-white"
+                  : "text-[var(--text-secondary)] hover:text-[var(--text-primary)] hover:bg-[var(--bg-primary)]"
+              } disabled:opacity-50`}
+            >
+              Ollama
+            </button>
+            <button
+              onClick={(e) => { e.stopPropagation(); handleAuthModeChange("lit_llm"); }}
+              disabled={!isStopped}
+              className={`px-2 py-0.5 rounded transition-colors ${
+                project.auth_mode === "lit_llm"
+                  ? "bg-[var(--accent)] text-white"
+                  : "text-[var(--text-secondary)] hover:text-[var(--text-primary)] hover:bg-[var(--bg-primary)]"
+              } disabled:opacity-50`}
+            >
+              LiteLLM
+            </button>
           </div>
 
           {/* Action buttons */}
```
```diff
@@ -851,6 +949,99 @@ export default function ProjectCard({ project }: Props) {
               </div>
             );
           })()}
+
+          {/* Ollama config */}
+          {project.auth_mode === "ollama" && (() => {
+            const inputCls = "w-full px-2 py-1 bg-[var(--bg-primary)] border border-[var(--border-color)] rounded text-xs text-[var(--text-primary)] focus:outline-none focus:border-[var(--accent)] disabled:opacity-50";
+            return (
+              <div className="space-y-2 pt-1 border-t border-[var(--border-color)]">
+                <label className="block text-xs font-medium text-[var(--text-primary)]">Ollama</label>
+                <p className="text-xs text-[var(--text-secondary)]">
+                  Connect to an Ollama server running locally or on a remote host.
+                </p>
+
+                <div>
+                  <label className="block text-xs text-[var(--text-secondary)] mb-0.5">Base URL</label>
+                  <input
+                    value={ollamaBaseUrl}
+                    onChange={(e) => setOllamaBaseUrl(e.target.value)}
+                    onBlur={handleOllamaBaseUrlBlur}
+                    placeholder="http://host.docker.internal:11434"
+                    disabled={!isStopped}
+                    className={inputCls}
+                  />
+                  <p className="text-xs text-[var(--text-secondary)] mt-0.5 opacity-70">
+                    Use host.docker.internal for the host machine, or an IP/hostname for remote.
+                  </p>
+                </div>
+
+                <div>
+                  <label className="block text-xs text-[var(--text-secondary)] mb-0.5">Model (optional)</label>
+                  <input
+                    value={ollamaModelId}
+                    onChange={(e) => setOllamaModelId(e.target.value)}
+                    onBlur={handleOllamaModelIdBlur}
+                    placeholder="qwen3.5:27b"
+                    disabled={!isStopped}
+                    className={inputCls}
+                  />
+                </div>
+              </div>
+            );
+          })()}
+
+          {/* LiteLLM config */}
+          {project.auth_mode === "lit_llm" && (() => {
+            const inputCls = "w-full px-2 py-1 bg-[var(--bg-primary)] border border-[var(--border-color)] rounded text-xs text-[var(--text-primary)] focus:outline-none focus:border-[var(--accent)] disabled:opacity-50";
+            return (
+              <div className="space-y-2 pt-1 border-t border-[var(--border-color)]">
+                <label className="block text-xs font-medium text-[var(--text-primary)]">LiteLLM Gateway</label>
+                <p className="text-xs text-[var(--text-secondary)]">
+                  Connect through a LiteLLM proxy to use 100+ model providers.
+                </p>
+
+                <div>
+                  <label className="block text-xs text-[var(--text-secondary)] mb-0.5">Base URL</label>
+                  <input
+                    value={litellmBaseUrl}
+                    onChange={(e) => setLitellmBaseUrl(e.target.value)}
+                    onBlur={handleLitellmBaseUrlBlur}
+                    placeholder="http://host.docker.internal:4000"
+                    disabled={!isStopped}
+                    className={inputCls}
+                  />
+                  <p className="text-xs text-[var(--text-secondary)] mt-0.5 opacity-70">
+                    Use host.docker.internal for local, or a URL for remote/containerized LiteLLM.
+                  </p>
+                </div>
+
+                <div>
+                  <label className="block text-xs text-[var(--text-secondary)] mb-0.5">API Key</label>
+                  <input
+                    type="password"
+                    value={litellmApiKey}
+                    onChange={(e) => setLitellmApiKey(e.target.value)}
+                    onBlur={handleLitellmApiKeyBlur}
+                    placeholder="sk-..."
+                    disabled={!isStopped}
+                    className={inputCls}
+                  />
+                </div>
+
+                <div>
+                  <label className="block text-xs text-[var(--text-secondary)] mb-0.5">Model (optional)</label>
+                  <input
+                    value={litellmModelId}
+                    onChange={(e) => setLitellmModelId(e.target.value)}
+                    onBlur={handleLitellmModelIdBlur}
+                    placeholder="gpt-4o / gemini-pro / etc."
+                    disabled={!isStopped}
+                    className={inputCls}
+                  />
+                </div>
+              </div>
+            );
+          })()}
           </div>
         )}
       </div>
```
```diff
@@ -6,6 +6,8 @@ import { WebLinksAddon } from "@xterm/addon-web-links";
 import { openUrl } from "@tauri-apps/plugin-opener";
 import "@xterm/xterm/css/xterm.css";
 import { useTerminal } from "../../hooks/useTerminal";
+import { useAppState } from "../../store/appState";
+import { awsSsoRefresh } from "../../lib/tauri-commands";
 import { UrlDetector } from "../../lib/urlDetector";
 import UrlToast from "./UrlToast";
 
@@ -23,6 +25,12 @@ export default function TerminalView({ sessionId, active }: Props) {
   const detectorRef = useRef<UrlDetector | null>(null);
   const { sendInput, pasteImage, resize, onOutput, onExit } = useTerminal();
 
+  const ssoBufferRef = useRef("");
+  const ssoTriggeredRef = useRef(false);
+  const projectId = useAppState(
+    (s) => s.sessions.find((sess) => sess.id === sessionId)?.projectId
+  );
+
   const [detectedUrl, setDetectedUrl] = useState<string | null>(null);
   const [imagePasteMsg, setImagePasteMsg] = useState<string | null>(null);
   const [isAtBottom, setIsAtBottom] = useState(true);
 
@@ -152,10 +160,30 @@ export default function TerminalView({ sessionId, active }: Props) {
     const detector = new UrlDetector((url) => setDetectedUrl(url));
     detectorRef.current = detector;
 
+    const SSO_MARKER = "###TRIPLE_C_SSO_REFRESH###";
+    const textDecoder = new TextDecoder();
+
     const outputPromise = onOutput(sessionId, (data) => {
       if (aborted) return;
       term.write(data);
       detector.feed(data);
+
+      // Scan for SSO refresh marker in terminal output
+      if (!ssoTriggeredRef.current && projectId) {
+        const text = textDecoder.decode(data, { stream: true });
+        // Combine with overlap from previous chunk to handle marker spanning chunks
+        const combined = ssoBufferRef.current + text;
+        if (combined.includes(SSO_MARKER)) {
+          ssoTriggeredRef.current = true;
+          ssoBufferRef.current = "";
+          awsSsoRefresh(projectId).catch((e) =>
+            console.error("AWS SSO refresh failed:", e)
+          );
+        } else {
+          // Keep last N chars as overlap for next chunk
+          ssoBufferRef.current = combined.slice(-SSO_MARKER.length);
+        }
+      }
     }).then((unlisten) => {
       if (aborted) unlisten();
       return unlisten;
```
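The marker scan above relies on an overlap buffer: output arrives in arbitrary chunks, so the last `SSO_MARKER.length` characters of each chunk are carried over, guaranteeing a marker split across two output events is still found. The same technique, sketched as a standalone scanner:

```rust
// Standalone sketch of the chunk-boundary marker scan: carry a tail of
// marker-length characters between chunks so a marker split across two
// chunks is still detected. Fires at most once, like ssoTriggeredRef.
struct MarkerScanner {
    marker: &'static str,
    tail: String,
    triggered: bool,
}

impl MarkerScanner {
    fn new(marker: &'static str) -> Self {
        Self { marker, tail: String::new(), triggered: false }
    }

    /// Feed one output chunk; returns true the first time the marker is seen.
    fn feed(&mut self, chunk: &str) -> bool {
        if self.triggered {
            return false;
        }
        let combined = format!("{}{}", self.tail, chunk);
        if combined.contains(self.marker) {
            self.triggered = true;
            self.tail.clear();
            return true;
        }
        // Keep the last marker-length characters as overlap for the next
        // chunk (char-boundary safe, unlike a raw byte slice).
        let keep_from = combined
            .char_indices()
            .rev()
            .nth(self.marker.len() - 1)
            .map(|(i, _)| i)
            .unwrap_or(0);
        self.tail = combined[keep_from..].to_string();
        false
    }
}

fn main() {
    let mut s = MarkerScanner::new("###TRIPLE_C_SSO_REFRESH###");
    // Marker split across two chunks is still caught.
    assert!(!s.feed("some output ###TRIPLE_C_SSO"));
    assert!(s.feed("_REFRESH### more output"));
    // Already triggered: further markers are ignored.
    assert!(!s.feed("###TRIPLE_C_SSO_REFRESH###"));
}
```

Keeping `marker.len()` characters is one more than strictly needed (`len - 1` suffices), which is harmless and matches the TypeScript above.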
```diff
@@ -189,6 +217,8 @@ export default function TerminalView({ sessionId, active }: Props) {
       aborted = true;
       detector.dispose();
       detectorRef.current = null;
+      ssoTriggeredRef.current = false;
+      ssoBufferRef.current = "";
       osc52Disposable.dispose();
       inputDisposable.dispose();
       scrollDisposable.dispose();
```
```diff
@@ -40,6 +40,10 @@ export const listAwsProfiles = () =>
 export const detectHostTimezone = () =>
   invoke<string>("detect_host_timezone");
 
+// AWS
+export const awsSsoRefresh = (projectId: string) =>
+  invoke<void>("aws_sso_refresh", { projectId });
+
 // Terminal
 export const openTerminalSession = (projectId: string, sessionId: string, sessionType?: string) =>
   invoke<void>("open_terminal_session", { projectId, sessionId, sessionType });
```
```diff
@@ -22,6 +22,8 @@ export interface Project {
   status: ProjectStatus;
   auth_mode: AuthMode;
   bedrock_config: BedrockConfig | null;
+  ollama_config: OllamaConfig | null;
+  litellm_config: LiteLlmConfig | null;
   allow_docker_access: boolean;
   mission_control_enabled: boolean;
   ssh_key_path: string | null;
 
@@ -43,7 +45,7 @@ export type ProjectStatus =
   | "stopping"
   | "error";
 
-export type AuthMode = "anthropic" | "bedrock";
+export type AuthMode = "anthropic" | "bedrock" | "ollama" | "lit_llm";
 
 export type BedrockAuthMethod = "static_credentials" | "profile" | "bearer_token";
 
@@ -59,6 +61,17 @@ export interface BedrockConfig {
   disable_prompt_caching: boolean;
 }
 
+export interface OllamaConfig {
+  base_url: string;
+  model_id: string | null;
+}
+
+export interface LiteLlmConfig {
+  base_url: string;
+  api_key: string | null;
+  model_id: string | null;
+}
+
 export interface ContainerInfo {
   container_id: string;
   project_id: string;
```
```diff
@@ -119,6 +119,9 @@ RUN chmod +x /usr/local/bin/audio-shim \
     && ln -sf /usr/local/bin/audio-shim /usr/local/bin/rec \
     && ln -sf /usr/local/bin/audio-shim /usr/local/bin/arecord
 
+COPY triple-c-sso-refresh /usr/local/bin/triple-c-sso-refresh
+RUN chmod +x /usr/local/bin/triple-c-sso-refresh
+
 COPY entrypoint.sh /usr/local/bin/entrypoint.sh
 RUN chmod +x /usr/local/bin/entrypoint.sh
 COPY triple-c-scheduler /usr/local/bin/triple-c-scheduler
```
```diff
@@ -84,6 +84,31 @@ if [ -d /tmp/.host-aws ]; then
   # Ensure writable cache directories exist
   mkdir -p /home/claude/.aws/sso/cache /home/claude/.aws/cli/cache
   chown -R claude:claude /home/claude/.aws/sso /home/claude/.aws/cli
+
+  # Inline sso_session properties into profile sections so AWS SDKs that don't
+  # support the sso_session indirection format can resolve sso_region, etc.
+  if [ -f /home/claude/.aws/config ]; then
+    python3 -c '
+import configparser, sys
+c = configparser.ConfigParser()
+c.read(sys.argv[1])
+for sec in c.sections():
+    if not sec.startswith("profile ") and sec != "default":
+        continue
+    session = c.get(sec, "sso_session", fallback=None)
+    if not session or c.has_option(sec, "sso_start_url"):
+        continue
+    ss = f"sso-session {session}"
+    if not c.has_section(ss):
+        continue
+    for key in ("sso_start_url", "sso_region", "sso_registration_scopes"):
+        val = c.get(ss, key, fallback=None)
+        if val:
+            c.set(sec, key, val)
+with open(sys.argv[1], "w") as f:
+    c.write(f)
+' /home/claude/.aws/config 2>/dev/null || true
+  fi
 fi
 
 # ── Git credential helper (for HTTPS token) ─────────────────────────────────
@@ -164,6 +189,24 @@ if [ -n "$MCP_SERVERS_JSON" ]; then
   unset MCP_SERVERS_JSON
 fi
 
+# ── AWS SSO auth refresh command ──────────────────────────────────────────────
+# When set, inject awsAuthRefresh into ~/.claude.json so Claude Code calls
+# triple-c-sso-refresh when AWS credentials expire mid-session.
+if [ -n "$AWS_SSO_AUTH_REFRESH_CMD" ]; then
+  CLAUDE_JSON="/home/claude/.claude.json"
+  if [ -f "$CLAUDE_JSON" ]; then
+    MERGED=$(jq --arg cmd "$AWS_SSO_AUTH_REFRESH_CMD" '.awsAuthRefresh = $cmd' "$CLAUDE_JSON" 2>/dev/null)
+    if [ -n "$MERGED" ]; then
+      printf '%s\n' "$MERGED" > "$CLAUDE_JSON"
+    fi
+  else
+    printf '{"awsAuthRefresh":"%s"}\n' "$AWS_SSO_AUTH_REFRESH_CMD" > "$CLAUDE_JSON"
+  fi
+  chown claude:claude "$CLAUDE_JSON"
+  chmod 600 "$CLAUDE_JSON"
+  unset AWS_SSO_AUTH_REFRESH_CMD
+fi
+
 # ── Docker socket permissions ────────────────────────────────────────────────
 if [ -S /var/run/docker.sock ]; then
   DOCKER_GID=$(stat -c '%g' /var/run/docker.sock)
```
container/triple-c-sso-refresh (new executable file, 33 lines)

```diff
@@ -0,0 +1,33 @@
+#!/bin/bash
+# Signal Triple-C to perform host-side AWS SSO login, then sync the result.
+CACHE_DIR="$HOME/.aws/sso/cache"
+HOST_CACHE="/tmp/.host-aws/sso/cache"
+MARKER="/tmp/.sso-refresh-marker"
+
+touch "$MARKER"
+
+# Emit marker for Triple-C app to detect in terminal output
+echo "###TRIPLE_C_SSO_REFRESH###"
+echo "Waiting for SSO login to complete on host..."
+
+TIMEOUT=120
+ELAPSED=0
+while [ $ELAPSED -lt $TIMEOUT ]; do
+  if [ -d "$HOST_CACHE" ]; then
+    NEW=$(find "$HOST_CACHE" -name "*.json" -newer "$MARKER" 2>/dev/null | head -1)
+    if [ -n "$NEW" ]; then
+      mkdir -p "$CACHE_DIR"
+      cp -f "$HOST_CACHE"/*.json "$CACHE_DIR/" 2>/dev/null
+      chown -R "$(whoami)" "$CACHE_DIR"
+      echo "AWS SSO credentials refreshed successfully."
+      rm -f "$MARKER"
+      exit 0
+    fi
+  fi
+  sleep 2
+  ELAPSED=$((ELAPSED + 2))
+done
+
+echo "SSO refresh timed out (${TIMEOUT}s). Please try again."
+rm -f "$MARKER"
+exit 1
```
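The script's wait loop is a generic poll-until-deadline pattern: check for newly synced credential files every 2 seconds, give up after 120 seconds. A minimal Rust sketch of the same pattern, generic over the condition being polled:

```rust
use std::time::{Duration, Instant};

// Generic sketch of the script's wait loop: poll `check` every `interval`
// until it succeeds or `timeout` elapses. Returns whether it succeeded.
fn poll_until(timeout: Duration, interval: Duration, mut check: impl FnMut() -> bool) -> bool {
    let deadline = Instant::now() + timeout;
    loop {
        if check() {
            return true;
        }
        if Instant::now() >= deadline {
            return false;
        }
        std::thread::sleep(interval);
    }
}

fn main() {
    // Condition becomes true on the third poll: succeeds well within timeout.
    let mut calls = 0;
    let ok = poll_until(Duration::from_millis(200), Duration::from_millis(5), || {
        calls += 1;
        calls >= 3
    });
    assert!(ok);

    // Condition never true: the call times out and reports failure.
    let timed_out = !poll_until(Duration::from_millis(20), Duration::from_millis(5), || false);
    assert!(timed_out);
}
```

In the real script the condition is `find "$HOST_CACHE" -name "*.json" -newer "$MARKER"` returning a match, i.e. the host wrote a fresh SSO cache file after the marker timestamp.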