<!--
Phase 5: AI provider system with local and cloud support
- Implement AIProvider base interface with chat() and is_available()
- Add LocalProvider connecting to bundled llama-server via OpenAI SDK
- Add OpenAIProvider for direct OpenAI API access
- Add AnthropicProvider for Anthropic Claude API
- Add LiteLLMProvider for multi-provider gateway
- Build AIProviderService with provider routing, auto-selection,
  and transcript context injection
- Add ai.chat IPC handler supporting chat, list_providers, set_provider,
  and configure actions
- Add ai_chat, ai_list_providers, ai_configure Tauri commands
- Build interactive AIChatPanel with message history, quick actions
  (Summarize, Action Items), and transcript context awareness
- Tests: 30 Python, 6 Rust, 0 Svelte errors
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 16:25:10 -08:00
-->
<script lang="ts">
  import { invoke } from '@tauri-apps/api/core';
  import { segments, speakers } from '$lib/stores/transcript';
  import { settings } from '$lib/stores/settings';
  interface ChatMessage {
    role: 'user' | 'assistant';
    content: string;
  }

  let messages = $state<ChatMessage[]>([]);
  let inputText = $state('');
  let isLoading = $state(false);
  let chatContainer: HTMLDivElement;
  function getTranscriptContext(): string {
    const segs = $segments;
    const spks = $speakers;
    if (segs.length === 0) return '';

    return segs.map(seg => {
      const speaker = spks.find(s => s.id === seg.speaker_id);
      const name = speaker?.display_name || speaker?.label || 'Unknown';
      return `[${name}]: ${seg.text}`;
    }).join('\n');
  }
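  // Example context produced by getTranscriptContext() (illustrative speaker
  // names and text, one line per segment):
  //   [Alice]: Let's start with the roadmap.
  //   [Unknown]: Sounds good to me.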
  async function sendMessage() {
    const text = inputText.trim();
    if (!text || isLoading) return;

    messages = [...messages, { role: 'user', content: text }];
    inputText = '';
    isLoading = true;

    // Auto-scroll to bottom
    requestAnimationFrame(() => {
      if (chatContainer) chatContainer.scrollTop = chatContainer.scrollHeight;
    });

    try {
      const chatMessages = messages.map(m => ({
        role: m.role,
        content: m.content,
      }));

      // Ensure the provider is configured with current credentials before chatting
      const s = $settings;
      const configMap: Record<string, Record<string, string>> = {
        openai: { api_key: s.openai_api_key, model: s.openai_model },
        anthropic: { api_key: s.anthropic_api_key, model: s.anthropic_model },
        litellm: { api_key: s.litellm_api_key, api_base: s.litellm_api_base, model: s.litellm_model },
        local: { model: s.local_model_path, base_url: 'http://localhost:8080' },
      };
      const config = configMap[s.ai_provider];
      if (config) {
        await invoke('ai_configure', { provider: s.ai_provider, config });
      }

      const result = await invoke<{ response: string }>('ai_chat', {
        messages: chatMessages,
        transcriptContext: getTranscriptContext(),
        provider: s.ai_provider,
      });

      messages = [...messages, { role: 'assistant', content: result.response }];
    } catch (err) {
      messages = [...messages, {
        role: 'assistant',
        content: `Error: ${err}`,
      }];
    } finally {
      isLoading = false;
      requestAnimationFrame(() => {
        if (chatContainer) chatContainer.scrollTop = chatContainer.scrollHeight;
      });
    }
  }
  function handleKeydown(e: KeyboardEvent) {
    if (e.key === 'Enter' && !e.shiftKey) {
      e.preventDefault();
      sendMessage();
    }
  }
  function clearChat() {
    messages = [];
  }
  function formatMarkdown(text: string): string {
    // Split into lines for block-level processing
    const lines = text.split('\n');
    const result: string[] = [];
    let inList = false;

    // Close the most recently opened list with the matching tag.
    // The last <ul>/<ol> pushed is always the currently open one.
    const closeList = () => {
      const lastOpen = result.findLast(r => r === '<ul>' || r === '<ol>');
      result.push(lastOpen === '<ol>' ? '</ol>' : '</ul>');
      inList = false;
    };

    for (let i = 0; i < lines.length; i++) {
      const line = lines[i];

      // Headers
      if (line.startsWith('### ')) {
        if (inList) closeList();
        const content = applyInlineFormatting(line.slice(4));
        result.push(`<h4>${content}</h4>`);
        continue;
      }
      if (line.startsWith('## ')) {
        if (inList) closeList();
        const content = applyInlineFormatting(line.slice(3));
        result.push(`<h3>${content}</h3>`);
        continue;
      }
      if (line.startsWith('# ')) {
        if (inList) closeList();
        const content = applyInlineFormatting(line.slice(2));
        result.push(`<h2>${content}</h2>`);
        continue;
      }

      // List items (- or *)
      if (/^[-*] /.test(line)) {
        if (!inList) { result.push('<ul>'); inList = true; }
        const content = applyInlineFormatting(line.slice(2));
        result.push(`<li>${content}</li>`);
        continue;
      }

      // Numbered list items
      if (/^\d+\.\s/.test(line)) {
        if (!inList) { result.push('<ol>'); inList = true; }
        const content = applyInlineFormatting(line.replace(/^\d+\.\s/, ''));
        result.push(`<li>${content}</li>`);
        continue;
      }

      // Non-list line: close any open list
      if (inList) closeList();

      // Empty line = paragraph break
      if (line.trim() === '') {
        result.push('<br>');
        continue;
      }

      // Regular text line
      result.push(applyInlineFormatting(line));
    }

    // Close any trailing open list
    if (inList) closeList();

    return result.join('\n');
  }
  function applyInlineFormatting(text: string): string {
    // Escape raw HTML first: the result is rendered via {@html}, so model
    // output must not be able to inject markup.
    text = text.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;');
    // Code spans (backtick) — process first to avoid conflicts
    text = text.replace(/`([^`]+)`/g, '<code>$1</code>');
    // Bold (**text**)
    text = text.replace(/\*\*([^*]+)\*\*/g, '<strong>$1</strong>');
    // Italic (*text*) — only single asterisks not already consumed by bold
    text = text.replace(/\*([^*]+)\*/g, '<em>$1</em>');
    return text;
  }
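  // Example (illustrative): a short assistant reply renders as
  //   formatMarkdown('## Topics\n- **bold** item')
  //   → '<h3>Topics</h3>\n<ul>\n<li><strong>bold</strong> item</li>\n</ul>'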
  // Quick action buttons
  async function summarize() {
    inputText = 'Please summarize this transcript in bullet points.';
    await sendMessage();
  }

  async function extractActions() {
    inputText = 'What action items or follow-ups were discussed?';
    await sendMessage();
  }
</script>
<!--
Phase 1 foundation: Tauri shell, Python sidecar, SQLite database
Tauri v2 + Svelte + TypeScript frontend:
- App shell with workspace layout (waveform, transcript, speakers, AI chat)
- Placeholder components for all major UI areas
- Typed stores (project, transcript, playback, AI)
- TypeScript interfaces matching the database schema
- Tauri bridge service with typed invoke wrappers
- svelte-check passes with 0 errors
Rust backend:
- Tauri v2 app entry point with command registration
- SQLite database layer (rusqlite with bundled SQLite)
- Full schema: projects, media_files, speakers, segments, words,
  ai_outputs, annotations (with indexes)
- Model structs with serde serialization
- CRUD queries for projects, speakers, segments, words
- Segment text editing preserves original text
- Schema versioning for future migrations
- 6 tests passing
- Command stubs for project, transcribe, export, AI, settings, system
- App state management
Python sidecar:
- JSON-line IPC protocol (stdin/stdout)
- Message types: IPCMessage, progress, error, ready
- Handler registry with routing and error handling
- Ping/pong handler for connectivity testing
- Service stubs: transcribe, diarize, pipeline, AI, export
- Provider stubs: local (llama-server), OpenAI, Anthropic, LiteLLM
- Hardware detection stubs
- 14 tests passing, ruff clean
Also adds:
- Testing strategy document (docs/TESTING.md)
- Validation script (scripts/validate.sh)
- Updated .gitignore for Svelte, Rust, Python artifacts
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 15:16:06 -08:00
-->
<div class="ai-chat-panel">
  <div class="panel-header">
    <h3>AI Chat</h3>
    {#if messages.length > 0}
      <button class="clear-btn" onclick={clearChat} title="Clear chat">Clear</button>
    {/if}
  </div>

  <div class="chat-messages" bind:this={chatContainer}>
    {#if messages.length === 0}
      <div class="empty-state">
        <p>Ask questions about the transcript</p>
        {#if $segments.length > 0}
          <div class="quick-actions">
            <button class="quick-btn" onclick={summarize}>Summarize</button>
            <button class="quick-btn" onclick={extractActions}>Action Items</button>
          </div>
        {/if}
      </div>
    {:else}
      {#each messages as msg}
        <div class="message {msg.role}">
          {#if msg.role === 'assistant'}
            <div class="message-content">{@html formatMarkdown(msg.content)}</div>
          {:else}
            <div class="message-content">{msg.content}</div>
          {/if}
        </div>
      {/each}
      {#if isLoading}
        <div class="message assistant loading">
          <div class="message-content">Thinking...</div>
        </div>
      {/if}
    {/if}
  </div>

  <div class="chat-input">
    <textarea
      class="input-textarea"
      placeholder="Ask about the transcript..."
      bind:value={inputText}
      onkeydown={handleKeydown}
      disabled={isLoading}
    ></textarea>
    <button
      class="send-btn"
      onclick={sendMessage}
      disabled={isLoading || !inputText.trim()}
    >
      Send
    </button>
  </div>
</div>
<style>
  .ai-chat-panel {
    flex: 1;
    display: flex;
    flex-direction: column;
    background: #16213e;
    border-radius: 8px;
    color: #e0e0e0;
    min-height: 0;
  }

  .panel-header {
    display: flex;
    align-items: center;
    justify-content: space-between;
    padding: 0.75rem 1rem 0.5rem;
  }

  .panel-header h3 {
    margin: 0;
    font-size: 0.95rem;
  }

  .clear-btn {
    background: none;
    border: 1px solid #4a5568;
    color: #999;
    padding: 0.15rem 0.5rem;
    border-radius: 3px;
    cursor: pointer;
    font-size: 0.7rem;
  }

  .clear-btn:hover {
    color: #e0e0e0;
    border-color: #e94560;
  }

  .chat-messages {
    flex: 1;
    overflow-y: auto;
    padding: 0 0.75rem;
    min-height: 0;
  }

  .empty-state {
    text-align: center;
    color: #888;
    font-size: 0.85rem;
    padding: 2rem 1rem;
  }

  .empty-state p {
    margin-bottom: 1rem;
  }

  .quick-actions {
    display: flex;
    gap: 0.75rem;
    justify-content: center;
    margin-top: 1rem;
  }

  .quick-btn {
    background: rgba(233, 69, 96, 0.15);
    border: 1px solid rgba(233, 69, 96, 0.3);
    color: #e94560;
    padding: 0.45rem 0.85rem;
    border-radius: 6px;
    cursor: pointer;
    font-size: 0.8rem;
    transition: background 0.15s;
  }

  .quick-btn:hover {
    background: rgba(233, 69, 96, 0.25);
  }

  .message {
    margin-bottom: 0.75rem;
    padding: 0.75rem 1rem;
    border-radius: 8px;
    font-size: 0.8rem;
    line-height: 1.55;
  }

  .message.user {
    background: rgba(233, 69, 96, 0.15);
    border-left: 3px solid rgba(233, 69, 96, 0.4);
  }

  .message.assistant {
    background: rgba(255, 255, 255, 0.05);
    border-left: 3px solid rgba(255, 255, 255, 0.1);
  }

  .message.loading {
    opacity: 0.6;
    font-style: italic;
  }

  /* Markdown styles inside assistant messages */
  .message.assistant :global(h2) {
    font-size: 1rem;
    font-weight: 600;
    margin: 0.6rem 0 0.3rem;
    color: #f0f0f0;
  }

  .message.assistant :global(h3) {
    font-size: 0.9rem;
    font-weight: 600;
    margin: 0.5rem 0 0.25rem;
    color: #e8e8e8;
|
|
|
|
|
}
|
|
|
|
|
.message.assistant :global(h4) {
|
|
|
|
|
font-size: 0.85rem;
|
|
|
|
|
font-weight: 600;
|
|
|
|
|
margin: 0.4rem 0 0.2rem;
|
|
|
|
|
color: #e0e0e0;
|
|
|
|
|
}
|
|
|
|
|
.message.assistant :global(strong) {
|
|
|
|
|
color: #f0f0f0;
|
|
|
|
|
font-weight: 600;
|
|
|
|
|
}
|
|
|
|
|
.message.assistant :global(em) {
|
|
|
|
|
color: #ccc;
|
|
|
|
|
font-style: italic;
|
|
|
|
|
}
|
|
|
|
|
.message.assistant :global(code) {
|
|
|
|
|
background: rgba(0, 0, 0, 0.3);
|
|
|
|
|
color: #e94560;
|
|
|
|
|
padding: 0.1rem 0.35rem;
|
|
|
|
|
border-radius: 3px;
|
|
|
|
|
font-size: 0.75rem;
|
|
|
|
|
font-family: 'Fira Code', 'Cascadia Code', 'Consolas', monospace;
|
|
|
|
|
}
|
|
|
|
|
.message.assistant :global(ul),
|
|
|
|
|
.message.assistant :global(ol) {
|
|
|
|
|
margin: 0.35rem 0;
|
|
|
|
|
padding-left: 1.3rem;
|
|
|
|
|
}
|
|
|
|
|
.message.assistant :global(li) {
|
|
|
|
|
margin-bottom: 0.25rem;
|
|
|
|
|
line-height: 1.5;
|
|
|
|
|
}
|
|
|
|
|
.message.assistant :global(br) {
|
|
|
|
|
display: block;
|
|
|
|
|
content: '';
|
|
|
|
|
margin-top: 0.35rem;
|
|
|
|
|
}
|
Phase 5: AI provider system with local and cloud support
- Implement AIProvider base interface with chat() and is_available()
- Add LocalProvider connecting to bundled llama-server via OpenAI SDK
- Add OpenAIProvider for direct OpenAI API access
- Add AnthropicProvider for Anthropic Claude API
- Add LiteLLMProvider for multi-provider gateway
- Build AIProviderService with provider routing, auto-selection,
and transcript context injection
- Add ai.chat IPC handler supporting chat, list_providers, set_provider,
and configure actions
- Add ai_chat, ai_list_providers, ai_configure Tauri commands
- Build interactive AIChatPanel with message history, quick actions
(Summarize, Action Items), and transcript context awareness
- Tests: 30 Python, 6 Rust, 0 Svelte errors
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 16:25:10 -08:00
|
|
|
.chat-input {
|
|
|
|
|
display: flex;
|
|
|
|
|
gap: 0.5rem;
|
|
|
|
|
padding: 0.5rem 0.75rem 0.75rem;
|
|
|
|
|
}
|
|
|
|
|
.input-textarea {
|
|
|
|
|
flex: 1;
|
|
|
|
|
background: #1a1a2e;
|
|
|
|
|
color: #e0e0e0;
|
|
|
|
|
border: 1px solid #4a5568;
|
|
|
|
|
border-radius: 4px;
|
|
|
|
|
padding: 0.4rem 0.5rem;
|
|
|
|
|
font-family: inherit;
|
|
|
|
|
font-size: 0.8rem;
|
|
|
|
|
resize: none;
|
|
|
|
|
min-height: 2rem;
|
|
|
|
|
max-height: 4rem;
|
|
|
|
|
}
|
|
|
|
|
.input-textarea:focus {
|
|
|
|
|
outline: none;
|
|
|
|
|
border-color: #e94560;
|
|
|
|
|
}
|
|
|
|
|
.send-btn {
|
|
|
|
|
background: #e94560;
|
|
|
|
|
border: none;
|
|
|
|
|
color: white;
|
|
|
|
|
padding: 0.4rem 0.75rem;
|
|
|
|
|
border-radius: 4px;
|
|
|
|
|
cursor: pointer;
|
|
|
|
|
font-size: 0.8rem;
|
|
|
|
|
font-weight: 500;
|
|
|
|
|
align-self: flex-end;
|
|
|
|
|
}
|
|
|
|
|
.send-btn:hover:not(:disabled) {
|
|
|
|
|
background: #d63851;
|
|
|
|
|
}
|
|
|
|
|
.send-btn:disabled {
|
|
|
|
|
opacity: 0.5;
|
|
|
|
|
cursor: not-allowed;
|
Phase 1 foundation: Tauri shell, Python sidecar, SQLite database
Tauri v2 + Svelte + TypeScript frontend:
- App shell with workspace layout (waveform, transcript, speakers, AI chat)
- Placeholder components for all major UI areas
- Typed stores (project, transcript, playback, AI)
- TypeScript interfaces matching the database schema
- Tauri bridge service with typed invoke wrappers
- svelte-check passes with 0 errors
Rust backend:
- Tauri v2 app entry point with command registration
- SQLite database layer (rusqlite with bundled SQLite)
- Full schema: projects, media_files, speakers, segments, words,
ai_outputs, annotations (with indexes)
- Model structs with serde serialization
- CRUD queries for projects, speakers, segments, words
- Segment text editing preserves original text
- Schema versioning for future migrations
- 6 tests passing
- Command stubs for project, transcribe, export, AI, settings, system
- App state management
Python sidecar:
- JSON-line IPC protocol (stdin/stdout)
- Message types: IPCMessage, progress, error, ready
- Handler registry with routing and error handling
- Ping/pong handler for connectivity testing
- Service stubs: transcribe, diarize, pipeline, AI, export
- Provider stubs: local (llama-server), OpenAI, Anthropic, LiteLLM
- Hardware detection stubs
- 14 tests passing, ruff clean
Also adds:
- Testing strategy document (docs/TESTING.md)
- Validation script (scripts/validate.sh)
- Updated .gitignore for Svelte, Rust, Python artifacts
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 15:16:06 -08:00
|
|
|
}
|
|
|
|
|
</style>
|