# Configure zymtrace AI Assistant
The AI Assistant feature enables AI-powered analysis capabilities directly in the zymtrace UI. You can configure one or multiple AI providers to get intelligent insights about your performance data.
## Supported Providers
zymtrace supports the following AI providers:
| Provider | Get API Key |
|---|---|
| Anthropic Claude | [console.anthropic.com](https://console.anthropic.com/) |
| Google Gemini | [aistudio.google.com/apikey](https://aistudio.google.com/apikey) |
| OpenAI | [platform.openai.com/api-keys](https://platform.openai.com/api-keys) |
| Custom LLM | Any OpenAI-compatible chat completions endpoint (e.g., Groq, Together AI, self-hosted models) |
## Key Advantages

**Flexible AI Integration:** zymtrace supports custom AI inference, allowing you to integrate any OpenAI-compatible LLM endpoint, including hosted providers such as Groq, Crusoe, and Together AI, as well as self-hosted models. This means you can:
- Use self-hosted models for enhanced privacy and control
- Leverage enterprise-grade AI infrastructure
- Maintain data sovereignty with on-premises deployments
- Customize models for domain-specific performance analysis
- Reduce costs by using optimized, specialized models
Whether you're using cloud-hosted LLMs, on-premises AI infrastructure, or specialized models fine-tuned for your specific use case, zymtrace seamlessly integrates with your preferred AI setup.
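If you are unsure whether a given endpoint is OpenAI-compatible, a quick smoke test is to send it a standard chat completions request. The URL, API key, and model name below are placeholders matching the examples later on this page:

```bash
# Smoke test for an OpenAI-compatible endpoint (placeholder URL, key, and
# model). A compatible server returns a JSON body with a "choices" array.
curl -sS https://your-llm-endpoint.com/v1/chat/completions \
  -H "Authorization: Bearer $CUSTOM_LLM_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "model-1", "messages": [{"role": "user", "content": "Say hello."}]}'
```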
## Prerequisites

- zymtrace backend version `25.12.3` or later
- An API key from at least one supported AI provider
## Configuration
### Enable AI Assistant with Helm

The Helm chart source code is available on GitHub: [zystem-io/zymtrace-charts](https://github.com/zystem-io/zymtrace-charts)
To enable the AI Assistant, set `aiAssistant.enabled=true` and provide API key(s) for your chosen provider(s). You can configure one or multiple providers:
```bash
helm upgrade --install backend zymtrace/backend \
  --namespace zymtrace \
  --set aiAssistant.enabled=true \
  --set aiAssistant.anthropic.apiKey="$ANTHROPIC_API_KEY" \
  --set aiAssistant.gemini.apiKey="$GEMINI_API_KEY" \
  --set aiAssistant.openai.apiKey="$OPENAI_API_KEY"
```
You only need to configure the providers you want to use. For example, to use only Anthropic Claude, omit the other `--set` flags, as shown below.
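A minimal Anthropic-only install, using just the flags from the example above, would be:

```bash
helm upgrade --install backend zymtrace/backend \
  --namespace zymtrace \
  --set aiAssistant.enabled=true \
  --set aiAssistant.anthropic.apiKey="$ANTHROPIC_API_KEY"
```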
#### Using a Values File

For a more maintainable configuration, use a custom values file (e.g., `custom-values.yaml`):
```yaml
aiAssistant:
  enabled: true

  # Anthropic Claude - https://console.anthropic.com/
  anthropic:
    apiKey: "sk-ant-api03-..."

  # Google Gemini - https://aistudio.google.com/apikey
  gemini:
    apiKey: "AIzaSy..."

  # OpenAI - https://platform.openai.com/api-keys
  openai:
    apiKey: "sk-proj-..."

  # Custom LLM - Any OpenAI-compatible endpoint (e.g., Groq, Together AI, self-hosted models)
  customLLM:
    url: "https://your-llm-endpoint.com/v1/chat/completions"
    apiKey: "your-custom-api-key"
    models: "model-1,model-2,model-3"
```
Then install or upgrade with:
```bash
helm upgrade --install backend zymtrace/backend \
  --namespace zymtrace \
  -f custom-values.yaml
```
For production deployments, consider using external secrets management (like External Secrets Operator or Sealed Secrets) rather than storing API keys directly in values files.
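One lightweight pattern, sketched here with a hypothetical Secret named `zymtrace-ai-keys`, is to keep the key in a pre-created Kubernetes Secret (created manually or synced by your secrets operator) and read it at upgrade time rather than writing it into the values file:

```bash
# Create the Secret once, out of band (or let your secrets operator sync it).
kubectl create secret generic zymtrace-ai-keys -n zymtrace \
  --from-literal=anthropic="$ANTHROPIC_API_KEY"

# Read the key at upgrade time so it never lands in custom-values.yaml.
helm upgrade --install backend zymtrace/backend \
  --namespace zymtrace \
  -f custom-values.yaml \
  --set aiAssistant.anthropic.apiKey="$(kubectl get secret zymtrace-ai-keys \
      -n zymtrace -o jsonpath='{.data.anthropic}' | base64 -d)"
```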
### Enable AI Assistant with Docker Compose

Add the following environment variables to your `docker-compose.yml` file for the `web` service:
```yaml
services:
  web:
    # ... other configuration ...
    environment:
      # Enable one or more AI providers
      WEB__ASSISTANT__API_KEYS__ANTHROPIC: "${ANTHROPIC_API_KEY}"
      WEB__ASSISTANT__API_KEYS__GEMINI: "${GEMINI_API_KEY}"
      WEB__ASSISTANT__API_KEYS__OPENAI: "${OPENAI_API_KEY}"
      # Custom LLM configuration (e.g., Groq, Together AI, self-hosted models)
      WEB__ASSISTANT__CUSTOM_LLM__URL: "${CUSTOM_LLM_URL}"
      WEB__ASSISTANT__CUSTOM_LLM__API_KEY: "${CUSTOM_LLM_API_KEY}"
      WEB__ASSISTANT__CUSTOM_LLM__MODELS: "${CUSTOM_LLM_MODELS}"
```
Then set the environment variables before starting the containers:
```bash
export ANTHROPIC_API_KEY="sk-ant-api03-..."
export GEMINI_API_KEY="AIzaSy..."
export OPENAI_API_KEY="sk-proj-..."

docker compose up -d
```
Alternatively, create a `.env` file in the same directory as your `docker-compose.yml`:
```
ANTHROPIC_API_KEY=sk-ant-api03-...
GEMINI_API_KEY=AIzaSy...
OPENAI_API_KEY=sk-proj-...
CUSTOM_LLM_URL=https://your-llm-endpoint.com/v1/chat/completions
CUSTOM_LLM_API_KEY=your-custom-api-key
CUSTOM_LLM_MODELS=model-1,model-2,model-3
```
Never commit `.env` files containing API keys to version control. Add `.env` to your `.gitignore` file.
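To confirm the variables actually reached the container (assuming the service is named `web`, as in the snippet above), you can list them with their values masked:

```bash
# Show which assistant variables are set, masking the values.
docker compose exec web env | grep '^WEB__ASSISTANT__' | sed 's/=.*/=***/'
```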
## Advanced Configuration

### Default Provider and Model

When multiple AI providers are configured, specify which one to use by default:
Helm (`values.yaml`):

```yaml
aiAssistant:
  enabled: true
  defaultProvider: "anthropic" # Options: anthropic, gemini, openai, custom
  defaultModel: "claude-sonnet-4-5" # Provider-specific model name
```
Available models by provider (this list changes as newer models become available):

- **Anthropic:** `claude-sonnet-4-5`, `claude-opus-4-5`, `claude-haiku-4-5`
- **Gemini:** `gemini-3-pro-preview`, `gemini-2.5-pro`, `gemini-2.5-flash`
- **OpenAI:** `gpt-5-1`, `gpt-5-2`
- **Custom:** depends on your custom LLM endpoint
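With Helm, the same defaults can also be changed from the command line; for example, switching an existing release to Gemini (parameter names as listed in the Configuration Reference below):

```bash
# Keep existing values and change only the default provider and model.
helm upgrade backend zymtrace/backend \
  --namespace zymtrace \
  --reuse-values \
  --set aiAssistant.defaultProvider="gemini" \
  --set aiAssistant.defaultModel="gemini-2.5-pro"
```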
Docker Compose:

```yaml
services:
  web:
    environment:
      WEB__ASSISTANT__DEFAULT_PROVIDER: "anthropic"
      WEB__ASSISTANT__DEFAULT_MODEL: "claude-sonnet-4-5"
```
### MCP Server Configuration

Configure MCP (Model Context Protocol) servers that the AI Assistant can use for enhanced profiling analysis or other tasks.
Helm (`values.yaml`):

```yaml
aiAssistant:
  enabled: true
  mcpServers:
    # Optional: Add additional MCP servers
    - name: custom-mcp
      endpoint: "http://my-mcp-server:8080/mcp"
      authToken: "my-secret-token"
```

If no authentication is required, omit or comment out `authToken`.
With Docker Compose, the MCP server configuration is provided as a JSON array via an environment variable:

```yaml
services:
  web:
    environment:
      WEB__ASSISTANT__MCP_SERVERS: '[{"name":"custom-mcp","endpoint":"http://my-mcp-server:8080/mcp","auth-token":"secret"}]'
```
JSON format:

```json
[
  {
    "name": "custom-mcp",
    "endpoint": "http://my-mcp-server:8080/mcp",
    "auth-token": "my-secret-token"
  }
]
```
Fields:

- `name`: Identifier for the MCP server
- `endpoint`: Full URL to the MCP server endpoint
- `auth-token`: Authentication token
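Before wiring a server in, it can help to confirm the endpoint is reachable from the zymtrace namespace at all. A throwaway curl pod works for this; how the server validates `auth-token` is implementation-specific, so even a 401 or 405 response still proves reachability:

```bash
# One-off pod that prints the HTTP status code returned by the MCP endpoint.
kubectl run mcp-check -n zymtrace --rm -it --restart=Never \
  --image=curlimages/curl -- \
  -sS -o /dev/null -w '%{http_code}\n' \
  -H "Authorization: Bearer my-secret-token" \
  http://my-mcp-server:8080/mcp
```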
## Configuration Reference

### API Keys and Providers

| Parameter | Description | Default |
|---|---|---|
| `aiAssistant.enabled` | Enable AI Assistant features (Helm only) | `false` |
| `aiAssistant.anthropic.apiKey` | Anthropic Claude API key | - |
| `aiAssistant.gemini.apiKey` | Google Gemini API key | - |
| `aiAssistant.openai.apiKey` | OpenAI API key | - |
| `aiAssistant.customLLM.url` | Custom LLM endpoint URL (OpenAI-compatible) | - |
| `aiAssistant.customLLM.apiKey` | Custom LLM API key | - |
| `aiAssistant.customLLM.models` | Comma-separated list of available custom models | - |
### Advanced Configuration

| Parameter | Description | Default |
|---|---|---|
| `aiAssistant.defaultProvider` | Default AI provider to use | `anthropic` |
| `aiAssistant.defaultModel` | Default model for the provider | `claude-sonnet-4-5` |
| `aiAssistant.mcpServers` | Array of MCP server configurations (Helm only) | None |
## Verify Configuration

After deploying, you can verify that the AI Assistant is enabled:

1. Open the zymtrace UI in your browser
2. Navigate to any flamegraph or performance view
3. Look for the AI Assistant icon or panel
4. Try asking a question about your performance data
## Troubleshooting

### AI Assistant not appearing in UI

- Verify `aiAssistant.enabled` is set to `true` (Helm deployments)
- Check that at least one API key is configured
- Ensure you're running a backend version that meets the prerequisite (`25.12.3` or later)
- Check the web service logs for any API key validation errors:

```bash
kubectl logs -l app=zymtrace-web -n zymtrace
```
### API errors or rate limiting
- Verify your API key is valid and has not expired
- Check your API provider's usage dashboard for rate limits
- Ensure your API key has the necessary permissions/scopes
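You can also exercise a key directly against the provider, outside zymtrace. For Anthropic, for example, listing models with the key should return HTTP 200:

```bash
# A 200 response confirms the key is accepted; 401 means it is invalid or expired.
curl -sS -o /dev/null -w '%{http_code}\n' https://api.anthropic.com/v1/models \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01"
```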
### Connection issues
- If using a proxy, ensure the web service can reach the AI provider's API endpoints
- Check network policies if running in a restricted Kubernetes environment
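A quick egress check from the web pod itself can confirm the network path. This sketch assumes `curl` is available in the image and that the Deployment is named `zymtrace-web`, matching the label used above; adjust both to your environment. Any HTTP status back, even an auth error, shows the provider endpoint is reachable:

```bash
kubectl exec -n zymtrace deploy/zymtrace-web -- \
  curl -sS -o /dev/null -w '%{http_code}\n' \
  https://api.anthropic.com/v1/messages
```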
## Security Considerations
- API keys are stored as Kubernetes Secrets when using Helm, providing encryption at rest (if enabled in your cluster)
- Rotate API keys periodically according to your organization's security policies
- Use separate API keys for development and production environments
- Monitor API usage through your provider's dashboard to detect any anomalies