Compare commits

...

2 Commits

Author SHA1 Message Date
4732feb33e Add Ctrl+Shift+C keyboard shortcut for copying terminal text
All checks were successful
Build App / compute-version (push) Successful in 5s
Build App / build-macos (push) Successful in 2m22s
Build App / build-windows (push) Successful in 3m25s
Build App / build-linux (push) Successful in 5m33s
Build App / create-tag (push) Successful in 3s
Build App / sync-to-github (push) Successful in 10s
Ctrl+C in the terminal sends SIGINT, which cancels running Claude work.
This adds a custom key handler so Ctrl+Shift+C copies selected text to
the clipboard without interrupting the container.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 13:05:10 -07:00
5977024953 Update Ollama docs and UI to mark model as required
All checks were successful
Build App / compute-version (push) Successful in 4s
Build App / build-macos (push) Successful in 2m22s
Build App / build-windows (push) Successful in 3m25s
Build App / build-linux (push) Successful in 4m48s
Build App / create-tag (push) Successful in 9s
Build App / sync-to-github (push) Successful in 14s
The model field must be set and the model must be pre-pulled in Ollama
before the container will work. Updated README, HOW-TO-USE, and the
ProjectCard UI label/tooltip to reflect this.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 12:53:24 -07:00
4 changed files with 24 additions and 5 deletions

View File

@@ -113,8 +113,9 @@ Claude Code launches automatically with `--dangerously-skip-permissions` inside
 1. Stop the container first (settings can only be changed while stopped).
 2. In the project card, switch the backend to **Ollama**.
-3. Expand the **Config** panel and set the base URL of your Ollama server (defaults to `http://host.docker.internal:11434` for a local instance). Optionally set a model ID.
-4. Start the container again.
+3. Expand the **Config** panel and set the base URL of your Ollama server (defaults to `http://host.docker.internal:11434` for a local instance). Set the **Model ID** to the model you want to use (required).
+4. Make sure the model has been pulled in Ollama (e.g., `ollama pull qwen3.5:27b`) or used via Ollama cloud before starting.
+5. Start the container again.

 **LiteLLM:**
@@ -414,7 +415,7 @@ To use Claude Code with a local or remote Ollama server, switch the backend to *
 ### Settings

 - **Base URL** — The URL of your Ollama server. Defaults to `http://host.docker.internal:11434`, which reaches a locally running Ollama instance from inside the container. For a remote server, use its IP or hostname (e.g., `http://192.168.1.100:11434`).
-- **Model ID** — Optional. Override the model to use (e.g., `qwen3.5:27b`).
+- **Model ID** — **Required.** The model to use (e.g., `qwen3.5:27b`). The model must be pulled in Ollama before use — run `ollama pull <model>` or use it via Ollama cloud so it is available when the container starts.

 ### How It Works
@@ -422,6 +423,8 @@ Triple-C sets `ANTHROPIC_BASE_URL` to point Claude Code at your Ollama server in
 > **Note:** Ollama support is best-effort. Claude Code is designed for Anthropic models, so some features (tool use, extended thinking, prompt caching, etc.) may not work as expected with non-Anthropic models.
+
+> **Important:** The model must already be available in Ollama before starting the container. If using a local Ollama instance, pull the model first with `ollama pull <model-name>`. If using Ollama's cloud service, ensure the model has been used at least once so it is cached.

 ---

 ## LiteLLM Configuration

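The docs above make a pre-pulled model a hard prerequisite. As an illustrative sketch only, not part of this change: Ollama's public `GET /api/tags` endpoint lists locally pulled models, so a pre-start check could look like the following TypeScript, where `isModelPulled` and its call site are hypothetical names.

```ts
// Sketch: verify a model is already pulled before starting the container.
// GET /api/tags is Ollama's real "list local models" endpoint; everything
// else here (names, error messages) is illustrative, not Triple-C code.
async function isModelPulled(baseUrl: string, modelId: string): Promise<boolean> {
  const res = await fetch(`${baseUrl}/api/tags`);
  if (!res.ok) {
    throw new Error(`Ollama server unreachable: HTTP ${res.status}`);
  }
  const data: { models: { name: string }[] } = await res.json();
  // Ollama reports full names like "qwen3.5:27b"; treat a bare name as ":latest".
  return data.models.some(
    (m) => m.name === modelId || m.name === `${modelId}:latest`,
  );
}

// Usage: fail fast with the same guidance the docs give.
// if (!(await isModelPulled("http://host.docker.internal:11434", "qwen3.5:27b"))) {
//   throw new Error("Model not found. Run `ollama pull qwen3.5:27b` first.");
// }
```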
View File

@@ -49,7 +49,7 @@ Each project can independently use one of:
 - **Anthropic** (OAuth): User runs `claude login` inside the terminal on first use. Token persisted in the config volume across restarts and resets.
 - **AWS Bedrock**: Per-project AWS credentials (static keys, profile, or bearer token). SSO sessions are validated before launching Claude for Profile auth.
-- **Ollama**: Connect to a local or remote Ollama server via `ANTHROPIC_BASE_URL` (e.g., `http://host.docker.internal:11434`). Optional model override.
+- **Ollama**: Connect to a local or remote Ollama server via `ANTHROPIC_BASE_URL` (e.g., `http://host.docker.internal:11434`). Requires a model ID, and the model must be pulled (or used via Ollama cloud) before starting the container.
 - **LiteLLM**: Connect through a LiteLLM proxy gateway via `ANTHROPIC_BASE_URL` + `ANTHROPIC_AUTH_TOKEN` to access 100+ model providers. API key stored securely in OS keychain.

 > **Note:** Ollama and LiteLLM support is best-effort. Claude Code is designed for Anthropic models, so some features (tool use, extended thinking, prompt caching, etc.) may not work as expected with non-Anthropic models behind these backends.

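The backend summary above hinges on `ANTHROPIC_BASE_URL`. As a minimal sketch of how the per-project choice could translate into Claude Code's environment: `ANTHROPIC_BASE_URL` and `ANTHROPIC_AUTH_TOKEN` are real Claude Code variables, while `ANTHROPIC_MODEL` as the carrier for the required Ollama model and the `BackendConfig` shape are assumptions, not taken from this diff.

```ts
// Sketch: map a project's backend config to container environment variables.
// BackendConfig is hypothetical; Triple-C's actual types are not shown here.
type BackendConfig =
  | { kind: "ollama"; baseUrl: string; modelId: string } // modelId now required
  | { kind: "litellm"; baseUrl: string; apiKey: string };

function backendEnv(cfg: BackendConfig): Record<string, string> {
  switch (cfg.kind) {
    case "ollama":
      return {
        ANTHROPIC_BASE_URL: cfg.baseUrl, // e.g. http://host.docker.internal:11434
        ANTHROPIC_MODEL: cfg.modelId, // assumption: must already be pulled in Ollama
      };
    case "litellm":
      return {
        ANTHROPIC_BASE_URL: cfg.baseUrl,
        ANTHROPIC_AUTH_TOKEN: cfg.apiKey, // retrieved from the OS keychain
      };
  }
}
```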
View File

@@ -942,7 +942,7 @@ export default function ProjectCard({ project }: Props) {
       </div>
       <div>
-        <label className="block text-xs text-[var(--text-secondary)] mb-0.5">Model (optional)<Tooltip text="Ollama model name to use (e.g. qwen3.5:27b). Leave blank for the server default." /></label>
+        <label className="block text-xs text-[var(--text-secondary)] mb-0.5">Model (required)<Tooltip text="Ollama model name to use (e.g. qwen3.5:27b). The model must be pulled in Ollama before starting the container." /></label>
         <input
           value={ollamaModelId}
           onChange={(e) => setOllamaModelId(e.target.value)}

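Marking the field required in the label is informational on its own. A natural complement, sketched below as a hypothetical standalone component rather than Triple-C's real `ProjectCard`, is to disable the start action while the Ollama model ID is blank.

```tsx
// Sketch: gate container start on a non-empty Ollama model ID.
// Standalone and hypothetical; only `ollamaModelId` mirrors the diff above.
import { useState } from "react";

type Backend = "anthropic" | "bedrock" | "ollama" | "litellm";

export function StartControls({ backend }: { backend: Backend }) {
  const [ollamaModelId, setOllamaModelId] = useState("");
  // Ollama is the only backend where the model ID is required.
  const canStart = backend !== "ollama" || ollamaModelId.trim().length > 0;
  return (
    <>
      {backend === "ollama" && (
        <input
          value={ollamaModelId}
          onChange={(e) => setOllamaModelId(e.target.value)}
          placeholder="e.g. qwen3.5:27b"
        />
      )}
      <button
        disabled={!canStart}
        title={canStart ? "Start container" : "Set a model ID first (required for Ollama)"}
      >
        Start
      </button>
    </>
  );
}
```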
View File

@@ -80,6 +80,22 @@ export default function TerminalView({ sessionId, active }: Props) {
     term.open(containerRef.current);
+
+    // Ctrl+Shift+C copies selected terminal text to clipboard.
+    // This prevents the keystroke from reaching the container (where
+    // Ctrl+C would send SIGINT and cancel running work).
+    term.attachCustomKeyEventHandler((event) => {
+      if (event.type === "keydown" && event.ctrlKey && event.shiftKey && event.key === "C") {
+        const sel = term.getSelection();
+        if (sel) {
+          navigator.clipboard.writeText(sel).catch((e) =>
+            console.error("Ctrl+Shift+C clipboard write failed:", e),
+          );
+        }
+        return false; // prevent xterm from processing this key
+      }
+      return true;
+    });

     // WebGL addon is loaded/disposed dynamically in the active effect
     // to avoid exhausting the browser's limited WebGL context pool.
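For context on why the handler returns `false` only for Ctrl+Shift+C: plain Ctrl+C has to keep flowing to the container so the PTY can raise SIGINT. The sketch below shows that path with xterm.js and node-pty wired in one process for brevity; the package names and wiring are assumptions, not Triple-C's actual plumbing, which presumably bridges the two over IPC.

```ts
// Sketch: the normal Ctrl+C path the custom key handler deliberately leaves alone.
import { Terminal } from "@xterm/xterm";
import { spawn } from "node-pty";

const term = new Terminal();
const pty = spawn("bash", [], { name: "xterm-256color", cols: 80, rows: 24 });

// Plain Ctrl+C arrives here as the single byte 0x03 (ETX); forwarding it
// lets the PTY's line discipline deliver SIGINT to the foreground process.
// Ctrl+Shift+C never reaches onData, because the handler above returns false.
term.onData((data) => pty.write(data));
pty.onData((data) => term.write(data));
```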