
Configure zymtrace AI Assistant

The AI Assistant feature enables AI-powered analysis capabilities directly in the zymtrace UI. You can configure one or multiple AI providers to get intelligent insights about your performance data.

Supported Providers

zymtrace supports the following AI providers:

| Provider | Get API Key |
|----------|-------------|
| Anthropic Claude | console.anthropic.com |
| Google Gemini | aistudio.google.com/apikey |
| OpenAI | platform.openai.com/api-keys |
| Custom LLM | Any OpenAI-compatible chat completions endpoint (e.g., Groq, Together AI, self-hosted models) |

Key Advantages

Flexible AI Integration: zymtrace can integrate any OpenAI-compatible LLM endpoint for custom AI inference, including popular providers like Groq, Crusoe, and Together AI, as well as self-hosted models. This means you can:

  • Use self-hosted models for enhanced privacy and control
  • Leverage enterprise-grade AI infrastructure
  • Maintain data sovereignty with on-premises deployments
  • Customize models for domain-specific performance analysis
  • Reduce costs by using optimized, specialized models

Whether you're using cloud-hosted LLMs, on-premises AI infrastructure, or specialized models fine-tuned for your specific use case, zymtrace seamlessly integrates with your preferred AI setup.
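Before wiring a custom endpoint into zymtrace, you can sanity-check that it actually speaks the OpenAI chat-completions dialect by sending it a minimal request by hand. A sketch with placeholder values (the URL, API key, and model name below are illustrative, not taken from this guide):

```shell
# Placeholder endpoint, key, and model -- substitute your own.
LLM_URL="https://your-llm-endpoint.com/v1/chat/completions"
LLM_API_KEY="your-custom-api-key"

# Minimal OpenAI-style chat completions request body.
read -r -d '' PAYLOAD <<'EOF' || true
{
  "model": "model-1",
  "messages": [
    {"role": "user", "content": "Reply with the single word: pong"}
  ],
  "max_tokens": 8
}
EOF

# An OpenAI-compatible endpoint responds with JSON containing a "choices" array.
curl -sS "$LLM_URL" \
  -H "Authorization: Bearer $LLM_API_KEY" \
  -H "Content-Type: application/json" \
  -d "$PAYLOAD" || echo "request failed (check URL, key, and network)"
```

If the response includes a `choices` array, the endpoint should work as a zymtrace custom LLM provider.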

Prerequisites

  • zymtrace backend version 25.12.3 or later
  • API key from at least one supported AI provider

Configuration

Enable AI Assistant with Helm

Artifact Hub: zymtrace backend

Helm Chart Source

The Helm chart source code is available on GitHub: zystem-io/zymtrace-charts

To enable the AI Assistant, set aiAssistant.enabled=true and provide API key(s) for your chosen provider(s). You can configure one or multiple providers:

helm upgrade --install backend zymtrace/backend \
  --namespace zymtrace \
  --set aiAssistant.enabled=true \
  --set aiAssistant.anthropic.apiKey="$ANTHROPIC_API_KEY" \
  --set aiAssistant.gemini.apiKey="$GEMINI_API_KEY" \
  --set aiAssistant.openai.apiKey="$OPENAI_API_KEY"
Tip: You only need to configure the providers you want to use. For example, to use only Anthropic Claude, simply omit the other --set flags.

Using a Values File

For more maintainable configuration, use a custom values file:

custom-values.yaml
aiAssistant:
  enabled: true

  # Anthropic Claude - https://console.anthropic.com/
  anthropic:
    apiKey: "sk-ant-api03-..."

  # Google Gemini - https://aistudio.google.com/apikey
  gemini:
    apiKey: "AIzaSy..."

  # OpenAI - https://platform.openai.com/api-keys
  openai:
    apiKey: "sk-proj-..."

  # Custom LLM - Any OpenAI-compatible endpoint (e.g., Groq, Together AI, self-hosted models)
  customLLM:
    url: "https://your-llm-endpoint.com/v1/chat/completions"
    apiKey: "your-custom-api-key"
    models: "model-1,model-2,model-3"

Then install or upgrade with:

helm upgrade --install backend zymtrace/backend \
  --namespace zymtrace \
  -f custom-values.yaml
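After the upgrade completes, you can confirm which values the release is actually running with using Helm's built-in commands. A quick sanity check (note that `helm get values` prints the supplied values, including API keys, so treat its output as sensitive):

```shell
# Show the user-supplied values for the release (treat output as sensitive).
helm get values backend --namespace zymtrace

# Confirm the backend pods rolled out and are healthy.
kubectl get pods --namespace zymtrace
```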
Using Kubernetes Secrets

For production deployments, consider using external secrets management (like External Secrets Operator or Sealed Secrets) rather than storing API keys directly in values files.
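One way to keep keys out of the values file itself is to template them from environment variables at deploy time. A sketch using `envsubst` (from GNU gettext) and Helm's ability to read values from stdin; the template file name and variable wiring are illustrative:

```shell
# custom-values.tmpl.yaml contains e.g.:
#   aiAssistant:
#     enabled: true
#     anthropic:
#       apiKey: "${ANTHROPIC_API_KEY}"

# Normally sourced from your secrets manager, not typed inline.
export ANTHROPIC_API_KEY="sk-ant-api03-..."

# Substitute env vars and pipe the rendered values straight into Helm,
# so the resolved key never lands on disk.
envsubst < custom-values.tmpl.yaml | helm upgrade --install backend zymtrace/backend \
  --namespace zymtrace \
  -f -
```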

Advanced Configuration

Default Provider and Model

When multiple AI providers are configured, specify which one to use by default:

custom-values.yaml
aiAssistant:
  enabled: true
  defaultProvider: "anthropic"      # Options: anthropic, gemini, openai, custom
  defaultModel: "claude-sonnet-4-5" # Provider-specific model name

Available Models by Provider (this list changes as newer models become available):

  • Anthropic: claude-sonnet-4-5, claude-opus-4-5, claude-haiku-4-5
  • Gemini: gemini-3-pro-preview, gemini-2.5-pro, gemini-2.5-flash
  • OpenAI: gpt-5-1, gpt-5-2
  • Custom: Depends on your custom LLM endpoint

MCP Server Configuration

Configure MCP (Model Context Protocol) servers that the AI Assistant can use for enhanced profiling analysis or other tasks.

custom-values.yaml
aiAssistant:
  enabled: true
  mcpServers:
    # Optional: Add additional MCP servers
    - name: custom-mcp
      endpoint: "http://my-mcp-server:8080/mcp"
      authToken: "my-secret-token"

If no authentication is required, comment out or omit authToken.
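To check that a custom MCP server is reachable before wiring it in, you can send it an `initialize` request by hand. A rough sketch, assuming the server uses MCP's streamable HTTP transport (JSON-RPC over POST); the endpoint and token are the placeholder values from the example above, and the protocol version is one published revision of the spec:

```shell
curl -sS "http://my-mcp-server:8080/mcp" \
  -H "Authorization: Bearer my-secret-token" \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{
        "jsonrpc": "2.0",
        "id": 1,
        "method": "initialize",
        "params": {
          "protocolVersion": "2025-03-26",
          "capabilities": {},
          "clientInfo": {"name": "curl-check", "version": "0.0.1"}
        }
      }' || echo "MCP server unreachable"
```

A JSON-RPC `result` in the response indicates the server is up and speaking MCP; an HTML error page or connection refusal points to a networking or endpoint problem.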

Configuration Reference

API Keys and Providers

| Parameter | Description | Default |
|-----------|-------------|---------|
| `aiAssistant.enabled` | Enable AI Assistant features (Helm only) | `false` |
| `aiAssistant.anthropic.apiKey` | Anthropic Claude API key | - |
| `aiAssistant.gemini.apiKey` | Google Gemini API key | - |
| `aiAssistant.openai.apiKey` | OpenAI API key | - |
| `aiAssistant.customLLM.url` | Custom LLM endpoint URL (OpenAI-compatible) | - |
| `aiAssistant.customLLM.apiKey` | Custom LLM API key | - |
| `aiAssistant.customLLM.models` | Comma-separated list of available custom models | - |

Advanced Configuration

| Parameter | Description | Default |
|-----------|-------------|---------|
| `aiAssistant.defaultProvider` | Default AI provider to use | `anthropic` |
| `aiAssistant.defaultModel` | Default model for the provider | `claude-sonnet-4-5` |
| `aiAssistant.mcpServers` | Array of MCP server configurations (Helm only) | None |

Verify Configuration

After deploying, you can verify the AI Assistant is enabled:

  1. Open the zymtrace UI in your browser
  2. Navigate to any flamegraph or performance view
  3. Look for the AI Assistant icon or panel
  4. Try asking a question about your performance data

Troubleshooting

AI Assistant not appearing in UI

  • Verify `aiAssistant.enabled` is set to `true` (Helm deployments)
  • Check that at least one API key is configured
  • Ensure you're using zymtrace backend version 25.12.3 or later
  • Check the web service logs for any API key validation errors:
    kubectl logs -l app=zymtrace-web -n zymtrace

API errors or rate limiting

  • Verify your API key is valid and has not expired
  • Check your API provider's usage dashboard for rate limits
  • Ensure your API key has the necessary permissions/scopes

Connection issues

  • If using a proxy, ensure the web service can reach the AI provider's API endpoints
  • Check network policies if running in a restricted Kubernetes environment
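A quick way to test egress from inside the cluster is to run a throwaway curl pod in the zymtrace namespace. A sketch (`curlimages/curl` is a common public image; swap the URL for your provider's endpoint):

```shell
# Launch a one-off pod, print the HTTP status code, then clean up.
kubectl run egress-check --rm -it --restart=Never \
  --namespace zymtrace \
  --image=curlimages/curl -- \
  curl -sS -o /dev/null -w '%{http_code}\n' https://api.anthropic.com/
```

Any HTTP status code printed means the provider's endpoint is reachable; a timeout or DNS error suggests a network policy or proxy issue.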

Security Considerations

  • API keys are stored as Kubernetes Secrets when using Helm, providing encryption at rest (if enabled in your cluster)
  • Rotate API keys periodically according to your organization's security policies
  • Use separate API keys for development and production environments
  • Monitor API usage through your provider's dashboard to detect any anomalies
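Rotating a key does not require restating your whole configuration: Helm can reuse the release's existing values and override just one. A sketch of rotating the Anthropic key (`NEW_ANTHROPIC_API_KEY` is a hypothetical environment variable holding the replacement key):

```shell
# Keep all existing release values, overriding only the rotated key.
helm upgrade backend zymtrace/backend \
  --namespace zymtrace \
  --reuse-values \
  --set aiAssistant.anthropic.apiKey="$NEW_ANTHROPIC_API_KEY"
```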