New Scripts:
- demo-http-api.js: Simple demo showing the HTTP server is accessible today
- test-http-mcp.js: Full MCP protocol test over HTTP/SSE
- npm run test:http: Run HTTP/SSE MCP protocol tests
Purpose:
Demonstrates that, while major AI tools don't yet support MCP over HTTP/SSE,
the deployed server is already accessible and usable for:
- Custom integrations (web apps, bots, extensions)
- Testing the MCP protocol over HTTP
- Future-proofing for when tools add support
Usage:
node demo-http-api.js # Quick demo (works now)
npm run test:http # Full MCP protocol test
Dev Dependencies Added:
- eventsource: For SSE client connections
- node-fetch: For HTTP requests
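Example (a sketch of a custom integration built on those two dependencies,
not the contents of demo-http-api.js; the /sse endpoint path, the
endpoint-event handshake, and the CommonJS default exports of eventsource v2 /
node-fetch v2 are assumptions):

    const EventSource = require('eventsource');
    const fetch = require('node-fetch');

    const BASE = 'https://hpr-knowledge-base.onrender.com';
    const es = new EventSource(`${BASE}/sse`);

    // The HTTP/SSE transport first announces a session-specific POST URL.
    es.addEventListener('endpoint', async (event) => {
      const postUrl = new URL(event.data, BASE).href;
      // A real client sends an initialize request first; this just lists tools.
      await fetch(postUrl, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ jsonrpc: '2.0', id: 1, method: 'tools/list' }),
      });
    });

    // Responses arrive on the SSE stream, not in the POST response body.
    es.onmessage = (event) => console.log('server message:', event.data);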
Shows Real Value:
- Server is deployed and working at hpr-knowledge-base.onrender.com
- Can be integrated into custom apps today
- Ready for future MCP client adoption
- Not just waiting for tool support
This addresses the question: "Did I build something nothing supports?"
Answer: No! It's accessible now for custom code, and ready for the future.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Critical Fix:
- Added app.set('trust proxy', true) to server-http.js
- Fixes the express-rate-limit ValidationError about X-Forwarded-For headers
- Allows rate limiting to work correctly on Render/Heroku/etc
Problem:
- Without trust proxy, Express doesn't recognize real client IPs
- All users appear to have the same IP (the proxy's IP)
- Rate limiting is applied to all users as a single entity
- One user hitting the limit blocks everyone
Solution:
- Trust X-Forwarded-For headers from reverse proxies
- Each user now has their own rate limit bucket
- Rate limiting works as designed (50 req/min per IP)
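Sketch of the relevant configuration (option names follow express-rate-limit's
windowMs/max settings; the actual wiring in server-http.js may differ):

    const express = require('express');
    const rateLimit = require('express-rate-limit');

    const app = express();

    // Trust the X-Forwarded-For header added by the reverse proxy so that
    // req.ip is the real client address rather than the proxy's address.
    app.set('trust proxy', true);

    // Each client IP now gets its own bucket: 50 requests per minute.
    app.use(rateLimit({ windowMs: 60 * 1000, max: 50 }));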
Documentation:
- Added troubleshooting section in DEPLOYMENT.md
- Explains the error and impact
- Shows how to verify the fix
This is required for any deployment behind a reverse proxy
(Render, Heroku, AWS ELB, nginx, etc.).
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
New Documentation:
- CONFIGURATION.md: Complete setup guide for all major AI platforms
- Covers Claude Desktop, ChatGPT, Copilot, Gemini, and custom integrations
- Both stdio (local) and HTTP/SSE (remote) configuration options
- Troubleshooting section with common issues and solutions
- MCP protocol reference for developers
- Code examples in Python and Node.js
AI Tool Support Status:
- ✅ Claude Desktop: stdio only (fully documented)
- ❌ ChatGPT: Not supported (workarounds provided)
- ❌ GitHub Copilot: Not supported (alternatives included)
- ❌ Google Gemini: Not supported (integration examples)
- ✅ Custom MCP clients: Full support (examples provided)
Key Sections:
- Connection method comparison (stdio vs HTTP/SSE)
- Quick start commands for testing
- Platform-specific configuration paths
- JSON-RPC 2.0 protocol examples (see the sketch below)
- Future compatibility roadmap
- Summary table of AI tool support
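For reference, a request uses standard JSON-RPC 2.0 framing (tools/list is a
core MCP method; the exact examples in CONFIGURATION.md may differ):

    {
      "jsonrpc": "2.0",
      "id": 1,
      "method": "tools/list",
      "params": {}
    }

The server replies with a result object, matched to the request by id, listing
the available tools.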
Updated README.md:
- Links to new CONFIGURATION.md guide
- Clear support status indicators
- Note about Claude Desktop stdio-only limitation
This addresses the current limitation where Claude Desktop doesn't
support HTTP/SSE connections, while preparing documentation for
future MCP adoption by other AI platforms.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Documentation improvements:
- Added detailed timing breakdown (2-5 min first deployment)
- Explained data loading phase (30-90s for 4,511 episodes)
- Added "What to expect in logs" section
- Included free tier vs paid tier timing differences
- Added health check grace period recommendation (180s)
- New troubleshooting section for deployment delays
- Clarified when service is actually ready to use
Helps users understand:
- Why deployment takes several minutes
- What log messages indicate progress
- When to test the health endpoint (see the sketch below)
- Free tier spin-down behavior
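A readiness check can simply poll /health until the data load finishes
(sketch using Node 18+'s global fetch; it assumes only that /health returns a
2xx status once the service is ready):

    const HEALTH_URL = 'https://hpr-knowledge-base.onrender.com/health';

    async function waitForReady() {
      for (;;) {
        try {
          const res = await fetch(HEALTH_URL);
          if (res.ok) return console.log('service ready');
        } catch {
          // Free-tier instances may still be spinning up or loading data.
        }
        await new Promise((resolve) => setTimeout(resolve, 10000));
      }
    }

    waitForReady();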
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
New Features:
- HTTP/SSE server (server-http.js) for network access
- Express-based web server with MCP SSE transport
- Rate limiting (50 req/min per IP)
- Request timeouts (30s)
- Concurrent request limiting (max 10)
- Circuit breaker pattern for failure handling
- Memory monitoring (450MB threshold)
- Gzip compression for responses
- CORS support for cross-origin requests
- Health check endpoint (/health)
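Rough sketch of how these middleware pieces fit together (not the actual
server-http.js; the circuit breaker and concurrency limit are sketched
separately below):

    const express = require('express');
    const cors = require('cors');
    const compression = require('compression');
    const rateLimit = require('express-rate-limit');

    const app = express();
    app.use(compression());                               // gzip responses
    app.use(cors());                                      // allow cross-origin clients
    app.use(rateLimit({ windowMs: 60 * 1000, max: 50 })); // 50 req/min per IP
    app.use(express.json());

    // Lightweight health check for the hosting platform and for clients.
    app.get('/health', (req, res) => {
      res.json({ status: 'ok', rssMB: Math.round(process.memoryUsage().rss / 1048576) });
    });

    const server = app.listen(process.env.PORT || 3000);
    server.setTimeout(30 * 1000); // 30-second socket timeout for slow requests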
Infrastructure:
- Updated package.json with new dependencies (express, cors, compression, express-rate-limit)
- New npm script: start:http for HTTP server
- Comprehensive deployment guide (DEPLOYMENT.md)
- Updated README with deployment instructions
Graceful Degradation:
- Automatically rejects requests when at capacity
- Circuit breaker opens after 5 failures
- Memory-aware request handling
- Per-IP rate limiting to prevent abuse
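Sketch of the degradation checks, using the thresholds listed above (variable
names and the circuit breaker's reset behavior are assumptions):

    const express = require('express');
    const app = express();

    let activeRequests = 0;
    let consecutiveFailures = 0;
    let circuitOpen = false;

    // Reject up front when the server is already degraded.
    app.use((req, res, next) => {
      const rssMB = process.memoryUsage().rss / 1048576;
      if (circuitOpen || activeRequests >= 10 || rssMB > 450) {
        return res.status(503).json({ error: 'Server at capacity, try again shortly' });
      }
      activeRequests += 1;
      res.on('finish', () => {
        activeRequests -= 1;
        if (res.statusCode >= 500) {
          // Circuit breaker: opens after 5 consecutive failures.
          if (++consecutiveFailures >= 5) circuitOpen = true;
        } else {
          consecutiveFailures = 0;
        }
      });
      next();
    });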
The original stdio server (index.js) remains unchanged for local use.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- MCP server with stdio transport for local use
- Search episodes, transcripts, hosts, and series
- 4,511 episodes with metadata and transcripts
- Data loader with in-memory JSON storage
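Sketch of the in-memory approach (the file name and field names are
illustrative, not the actual data layout):

    const fs = require('fs');

    // Load episode metadata once at startup and keep it in memory.
    const episodes = JSON.parse(fs.readFileSync('./data/episodes.json', 'utf8'));

    // Simple substring search over title and summary.
    function searchEpisodes(query) {
      const q = query.toLowerCase();
      return episodes.filter((ep) =>
        ep.title.toLowerCase().includes(q) ||
        (ep.summary || '').toLowerCase().includes(q));
    }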
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>