Compare commits
2 Commits
v0.2.6-mac ... v0.2.8
| Author | SHA1 | Date |
|---|---|---|
| | 4732feb33e | |
| | 5977024953 | |
@@ -113,8 +113,9 @@ Claude Code launches automatically with `--dangerously-skip-permissions` inside
 1. Stop the container first (settings can only be changed while stopped).
 2. In the project card, switch the backend to **Ollama**.
-3. Expand the **Config** panel and set the base URL of your Ollama server (defaults to `http://host.docker.internal:11434` for a local instance). Optionally set a model ID.
-4. Start the container again.
+3. Expand the **Config** panel and set the base URL of your Ollama server (defaults to `http://host.docker.internal:11434` for a local instance). Set the **Model ID** to the model you want to use (required).
+4. Make sure the model has been pulled in Ollama (e.g., `ollama pull qwen3.5:27b`) or used via Ollama cloud before starting.
+5. Start the container again.
 
 **LiteLLM:**
 
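Step 4 above says the model must already be pulled before the container starts. One way to verify that is to inspect the JSON returned by Ollama's `GET /api/tags` endpoint, which lists the models available on the server. A minimal sketch (the `hasModel` helper is hypothetical, not part of Triple-C; only the endpoint and its response shape come from Ollama):

```typescript
// Parsed body of `GET <baseUrl>/api/tags` — Ollama returns
// {"models": [{"name": "qwen3.5:27b"}, ...]} for pulled models.
interface TagsResponse {
  models: { name: string }[];
}

// True if `modelId` is already pulled on the server.
function hasModel(tags: TagsResponse, modelId: string): boolean {
  return tags.models.some((m) => m.name === modelId);
}

// Example: fail fast before starting the container.
// if (!hasModel(tags, "qwen3.5:27b")) throw new Error("run `ollama pull qwen3.5:27b` first");
```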
@@ -414,7 +415,7 @@ To use Claude Code with a local or remote Ollama server, switch the backend to *
 ### Settings
 
 - **Base URL** — The URL of your Ollama server. Defaults to `http://host.docker.internal:11434`, which reaches a locally running Ollama instance from inside the container. For a remote server, use its IP or hostname (e.g., `http://192.168.1.100:11434`).
-- **Model ID** — Optional. Override the model to use (e.g., `qwen3.5:27b`).
+- **Model ID** — **Required.** The model to use (e.g., `qwen3.5:27b`). The model must be pulled in Ollama before use — run `ollama pull <model>` or use it via Ollama cloud so it is available when the container starts.
 
 ### How It Works
 
@@ -422,6 +423,8 @@ Triple-C sets `ANTHROPIC_BASE_URL` to point Claude Code at your Ollama server in
 
 > **Note:** Ollama support is best-effort. Claude Code is designed for Anthropic models, so some features (tool use, extended thinking, prompt caching, etc.) may not work as expected with non-Anthropic models.
 
+> **Important:** The model must already be available in Ollama before starting the container. If using a local Ollama instance, pull the model first with `ollama pull <model-name>`. If using Ollama's cloud service, ensure the model has been used at least once so it is cached.
+
 ---
 
 ## LiteLLM Configuration
 
@@ -49,7 +49,7 @@ Each project can independently use one of:
 
 - **Anthropic** (OAuth): User runs `claude login` inside the terminal on first use. Token persisted in the config volume across restarts and resets.
 - **AWS Bedrock**: Per-project AWS credentials (static keys, profile, or bearer token). SSO sessions are validated before launching Claude for Profile auth.
-- **Ollama**: Connect to a local or remote Ollama server via `ANTHROPIC_BASE_URL` (e.g., `http://host.docker.internal:11434`). Optional model override.
+- **Ollama**: Connect to a local or remote Ollama server via `ANTHROPIC_BASE_URL` (e.g., `http://host.docker.internal:11434`). Requires a model ID, and the model must be pulled (or used via Ollama cloud) before starting the container.
 - **LiteLLM**: Connect through a LiteLLM proxy gateway via `ANTHROPIC_BASE_URL` + `ANTHROPIC_AUTH_TOKEN` to access 100+ model providers. API key stored securely in OS keychain.
 
 > **Note:** Ollama and LiteLLM support is best-effort. Claude Code is designed for Anthropic models, so some features (tool use, extended thinking, prompt caching, etc.) may not work as expected with non-Anthropic models behind these backends.
 
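The backend list above names the environment variables each remote backend relies on: `ANTHROPIC_BASE_URL` alone for Ollama, plus `ANTHROPIC_AUTH_TOKEN` for LiteLLM. A sketch of that mapping (the `backendEnv` helper and its shape are assumed for illustration, not Triple-C's actual wiring):

```typescript
// Build the env vars for a remote backend, per the docs above.
// Variable names come from the docs; this helper itself is hypothetical.
function backendEnv(opts: {
  backend: "ollama" | "litellm";
  baseUrl: string;
  authToken?: string; // LiteLLM proxy key, read from the OS keychain
}): Record<string, string> {
  const env: Record<string, string> = { ANTHROPIC_BASE_URL: opts.baseUrl };
  if (opts.backend === "litellm" && opts.authToken) {
    env.ANTHROPIC_AUTH_TOKEN = opts.authToken;
  }
  return env;
}
```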
@@ -942,7 +942,7 @@ export default function ProjectCard({ project }: Props) {
         </div>
 
         <div>
-          <label className="block text-xs text-[var(--text-secondary)] mb-0.5">Model (optional)<Tooltip text="Ollama model name to use (e.g. qwen3.5:27b). Leave blank for the server default." /></label>
+          <label className="block text-xs text-[var(--text-secondary)] mb-0.5">Model (required)<Tooltip text="Ollama model name to use (e.g. qwen3.5:27b). The model must be pulled in Ollama before starting the container." /></label>
           <input
             value={ollamaModelId}
             onChange={(e) => setOllamaModelId(e.target.value)}
@@ -80,6 +80,22 @@ export default function TerminalView({ sessionId, active }: Props) {
 
     term.open(containerRef.current);
 
+    // Ctrl+Shift+C copies selected terminal text to clipboard.
+    // This prevents the keystroke from reaching the container (where
+    // Ctrl+C would send SIGINT and cancel running work).
+    term.attachCustomKeyEventHandler((event) => {
+      if (event.type === "keydown" && event.ctrlKey && event.shiftKey && event.key === "C") {
+        const sel = term.getSelection();
+        if (sel) {
+          navigator.clipboard.writeText(sel).catch((e) =>
+            console.error("Ctrl+Shift+C clipboard write failed:", e),
+          );
+        }
+        return false; // prevent xterm from processing this key
+      }
+      return true;
+    });
+
     // WebGL addon is loaded/disposed dynamically in the active effect
     // to avoid exhausting the browser's limited WebGL context pool.
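The key handler added in the hunk above hinges on one condition: with Shift held, the browser reports `key === "C"` (uppercase), so the chord can be distinguished from a bare Ctrl+C. That condition can be isolated as a pure predicate (a sketch; `isCopyChord` is a hypothetical helper, not in the diff):

```typescript
// Mirrors the condition inside attachCustomKeyEventHandler: true only
// for a Ctrl+Shift+C keydown, which produces the uppercase key "C".
function isCopyChord(e: {
  type: string;
  ctrlKey: boolean;
  shiftKey: boolean;
  key: string;
}): boolean {
  return e.type === "keydown" && e.ctrlKey && e.shiftKey && e.key === "C";
}
```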