Object Storage Configuration
Object storage is used for storing large binary files such as debugging symbols and other artifacts. zymtrace supports S3-compatible storage solutions including MinIO, AWS S3, and Google Cloud Storage.
Configuration Modes

zymtrace supports two storage configuration modes:

- Use existing: connect to storage you already run (MinIO, AWS S3, or Google Cloud Storage)
- Create: deploy and manage a MinIO instance inside your cluster
Connect to existing S3-compatible storage

This mode connects to your existing MinIO, AWS S3, or Google Cloud Storage.

MinIO Configuration

MinIO is a high-performance, S3-compatible object storage solution that can be deployed on-premises or in the cloud. Configure your existing MinIO instance with the following settings:
```yaml
storage:
  mode: "use_existing"
  use_existing:
    type: "minio"
    minio:
      endpoint: "" # must be a URL, HTTP/HTTPS
      user: ""
      password: ""
  buckets:
    symbols: "zymtrace-symdb"
```
Required fields:

- `endpoint`: complete URL to your MinIO server (e.g., `https://minio.example.com` or `http://192.168.1.100:9000`)
- `user`: MinIO access key
- `password`: MinIO secret key
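The endpoint must be a complete URL including the scheme, not a bare host and port. As a minimal sketch, a pre-deployment check like the following (a hypothetical helper, not part of zymtrace) catches the most common mistake:

```python
from urllib.parse import urlparse

def validate_minio_endpoint(endpoint: str) -> bool:
    """Return True only for a complete HTTP/HTTPS URL with a host part."""
    parsed = urlparse(endpoint)
    return parsed.scheme in ("http", "https") and bool(parsed.netloc)

# Full URLs are accepted; a bare host:port is rejected.
assert validate_minio_endpoint("https://minio.example.com")
assert validate_minio_endpoint("http://192.168.1.100:9000")
assert not validate_minio_endpoint("minio.example.com:9000")  # missing scheme
```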
Bucket Creation

Configure an `mc` alias for your MinIO server, then create the bucket:

```shell
mc config host add minio http://minio.company.com:9000 ACCESS_KEY SECRET_KEY
mc mb minio/zymtrace-symdb
```

Testing Connectivity

```shell
mc ls minio/zymtrace-symdb
```
Configuration Example

Development Environment
```yaml
storage:
  mode: "use_existing"
  use_existing:
    type: "minio"
    minio:
      endpoint: "http://minio.dev.company.com:9000"
      user: "dev-access-key"
      password: "dev-secret-key"
  buckets:
    symbols: "zymtrace-symdb-dev"
```
AWS S3 Configuration
AWS S3 supports two authentication methods. Static keys are the most straightforward option. IAM authentication avoids long-lived credentials and is better suited for production EKS deployments.
IAM Authentication

Instead of static access keys, zymtrace can authenticate to S3 using AWS IAM: either via IRSA (pod-level, recommended) or the EC2 node instance profile (node-level, simpler for dedicated clusters).
Required IAM Policy
Both options require the following permissions on your S3 bucket:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": [
        "arn:aws:s3:::YOUR_BUCKET_NAME",
        "arn:aws:s3:::YOUR_BUCKET_NAME/*"
      ]
    }
  ]
}
```
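When scripting policy creation for several buckets, the document above can be generated rather than hand-edited. A small sketch (`s3_policy` is a hypothetical helper, not part of zymtrace) that also documents why two Resource ARNs are needed:

```python
import json

S3_ACTIONS = [
    "s3:GetObject", "s3:PutObject", "s3:DeleteObject",
    "s3:ListBucket", "s3:GetBucketLocation",
]

def s3_policy(bucket: str) -> str:
    """Render the IAM policy above for a given bucket name.

    The bare bucket ARN covers bucket-level actions (ListBucket,
    GetBucketLocation); the /* ARN covers object-level actions
    (GetObject, PutObject, DeleteObject). Both are required.
    """
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": S3_ACTIONS,
            "Resource": [
                f"arn:aws:s3:::{bucket}",
                f"arn:aws:s3:::{bucket}/*",
            ],
        }],
    })

doc = json.loads(s3_policy("zymtrace-symdb"))
assert doc["Statement"][0]["Resource"] == [
    "arn:aws:s3:::zymtrace-symdb",
    "arn:aws:s3:::zymtrace-symdb/*",
]
```

The resulting string can be passed directly as the `--policy-document` argument of `aws iam create-policy`.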
IRSA

IRSA binds an IAM role directly to a Kubernetes service account. Credentials are injected into the pod as a projected token, with no IMDS hop required. Use this when your cluster is shared with other workloads.
1. Create the IAM policy and role:
```shell
ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
CLUSTER_NAME="your-cluster"
NAMESPACE="your-namespace"
BUCKET_NAME="your-bucket-name" # must match storage.buckets.symbols in helm values
SA_NAME="zymtrace-s3-sa"
REGION="us-west-2"

# Get OIDC provider
OIDC=$(aws eks describe-cluster --name $CLUSTER_NAME --region $REGION \
  --query "cluster.identity.oidc.issuer" --output text | sed 's|https://||')

# Create S3 policy
S3_POLICY_ARN=$(aws iam create-policy \
  --policy-name zymtrace-s3-policy \
  --policy-document "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Action\":[\"s3:GetObject\",\"s3:PutObject\",\"s3:DeleteObject\",\"s3:ListBucket\",\"s3:GetBucketLocation\"],\"Resource\":[\"arn:aws:s3:::${BUCKET_NAME}\",\"arn:aws:s3:::${BUCKET_NAME}/*\"]}]}" \
  --query "Policy.Arn" --output text)

# Create IAM role with IRSA trust policy
ROLE_ARN=$(aws iam create-role \
  --role-name zymtrace-s3-role \
  --assume-role-policy-document "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"Federated\":\"arn:aws:iam::${ACCOUNT_ID}:oidc-provider/${OIDC}\"},\"Action\":\"sts:AssumeRoleWithWebIdentity\",\"Condition\":{\"StringEquals\":{\"${OIDC}:sub\":\"system:serviceaccount:${NAMESPACE}:${SA_NAME}\",\"${OIDC}:aud\":\"sts.amazonaws.com\"}}}]}" \
  --query "Role.Arn" --output text)

aws iam attach-role-policy --role-name zymtrace-s3-role --policy-arn $S3_POLICY_ARN
```
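The trust policy embedded in the `aws iam create-role` call above is dense; as a sketch, the same document can be built programmatically (`irsa_trust_policy` is a hypothetical helper for illustration), which makes the two `StringEquals` conditions easier to see:

```python
import json

def irsa_trust_policy(account_id: str, oidc: str, namespace: str, sa_name: str) -> dict:
    """Build the IRSA trust policy from step 1.

    `oidc` is the cluster's OIDC issuer with the https:// prefix stripped,
    e.g. "oidc.eks.us-west-2.amazonaws.com/id/ABC123".
    """
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {
                "Federated": f"arn:aws:iam::{account_id}:oidc-provider/{oidc}"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {"StringEquals": {
                # Both conditions must match exactly: the subject pins the
                # role to one namespace + service account; the audience pins
                # it to STS web-identity tokens.
                f"{oidc}:sub": f"system:serviceaccount:{namespace}:{sa_name}",
                f"{oidc}:aud": "sts.amazonaws.com",
            }},
        }],
    }

tp = irsa_trust_policy(
    "123456789012", "oidc.eks.us-west-2.amazonaws.com/id/ABC123",
    "zymtrace", "zymtrace-s3-sa",
)
cond = tp["Statement"][0]["Condition"]["StringEquals"]
assert cond["oidc.eks.us-west-2.amazonaws.com/id/ABC123:sub"] == \
    "system:serviceaccount:zymtrace:zymtrace-s3-sa"
print(json.dumps(tp))  # usable as --assume-role-policy-document
```

A mismatch in namespace or service account name here is the most common cause of `AccessDenied` on `AssumeRoleWithWebIdentity`.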
2. Create and annotate the service account:
```shell
kubectl create serviceaccount $SA_NAME -n $NAMESPACE
kubectl annotate serviceaccount $SA_NAME \
  eks.amazonaws.com/role-arn=$ROLE_ARN \
  -n $NAMESPACE
```
3. Set helm values:
```yaml
storage:
  mode: "use_existing"
  use_existing:
    type: "s3"
    s3:
      region: "us-west-2"
      useIAM: "aws"
      serviceAccount: "zymtrace-s3-sa" # must match SA_NAME above
  buckets:
    symbols: "your-bucket-name"
```
When using both `postgres.mode: aws_aurora` and `storage.use_existing.s3.useIAM: "aws"`, application pods run under the Aurora service account (`postgres.aws_aurora.serviceAccount`). Because EKS pods can only be bound to a single service account, the S3 IAM policy must also be attached to the Aurora IAM role so those pods can access S3:
```shell
aws iam attach-role-policy \
  --role-name <aurora-iam-role> \
  --policy-arn <s3-policy-arn>
```
Set `storage.use_existing.s3.serviceAccount` to a dedicated SA (annotated with an S3-only role) for the bucket-check job.
This is an EKS constraint, not a zymtrace requirement. If attaching both policies to the Aurora role conflicts with your least-privilege requirements, use static keys or the node instance profile for S3 access instead.
Node Instance Profile
Attach the S3 policy to your EKS worker node IAM role. All pods on those nodes will inherit S3 access automatically via IMDS. This is the simpler option for dedicated zymtrace clusters where all node workloads are trusted.
1. Attach the S3 policy to your node IAM role:
```shell
NODE_ROLE_NAME="your-node-role" # e.g. eksctl-my-cluster-nodegroup-NodeInstanceRole-...
BUCKET_NAME="your-bucket-name"
ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
REGION="us-west-2"

S3_POLICY_ARN=$(aws iam create-policy \
  --policy-name zymtrace-s3-policy \
  --policy-document "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Action\":[\"s3:GetObject\",\"s3:PutObject\",\"s3:DeleteObject\",\"s3:ListBucket\",\"s3:GetBucketLocation\"],\"Resource\":[\"arn:aws:s3:::${BUCKET_NAME}\",\"arn:aws:s3:::${BUCKET_NAME}/*\"]}]}" \
  --query "Policy.Arn" --output text)

aws iam attach-role-policy --role-name $NODE_ROLE_NAME --policy-arn $S3_POLICY_ARN
```
2. Increase the IMDS hop limit to 2 so pods can reach the metadata service.
EKS nodes default to IMDSv2 with a hop limit of 1, which blocks pods from reaching IMDS. If you skip this step, pods will log:

```
WARN 🚧 [warn]: failed to load region from IMDS | err: failed to load IMDS session token: dispatch failure: timeout: HTTP read timeout
```
CLI
```shell
CLUSTER_NAME="your-cluster" # your EKS cluster name (not set by step 1 above)

for id in $(aws ec2 describe-instances \
  --region $REGION \
  --filters "Name=tag:eks:cluster-name,Values=${CLUSTER_NAME}" \
  "Name=instance-state-name,Values=running" \
  --query "Reservations[*].Instances[*].InstanceId" \
  --output text); do
  aws ec2 modify-instance-metadata-options \
    --instance-id "$id" \
    --http-put-response-hop-limit 2 \
    --http-tokens required \
    --region $REGION
done
```
Applies immediately to running nodes, but will revert on node replacement. Also update the launch template to persist the setting.
Console (running nodes)

- Open EC2 → Instances
- Select all nodes in your node group
- Actions → Instance settings → Modify instance metadata options
- Set Metadata response hop limit to 2
- Click Save
Console (launch template)

Persists the setting for all future nodes created by the node group.

- Open EC2 → Launch Templates
- Find the launch template used by your EKS node group
- Actions → Modify template (Create new version)
- Under Advanced details, set Metadata response hop limit to 2
- Click Create template version
- Open EKS → your cluster → Compute → select the node group → Edit
- Update the launch template version to the new version
- Roll the node group to apply (cordon + drain + replace nodes)
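After applying the change by any of these routes, the hop limits can be audited from `aws ec2 describe-instances --output json`. A small sketch (hypothetical helper; assumes the standard EC2 API response shape with `Reservations`, `Instances`, and `MetadataOptions.HttpPutResponseHopLimit`):

```python
import json

def nodes_below_hop_limit(describe_instances_json: str, minimum: int = 2) -> list:
    """Return instance IDs whose IMDSv2 hop limit is below `minimum`.

    Instances with no MetadataOptions reported are treated as the
    EKS default hop limit of 1 and therefore flagged.
    """
    data = json.loads(describe_instances_json)
    flagged = []
    for reservation in data.get("Reservations", []):
        for instance in reservation.get("Instances", []):
            hop = instance.get("MetadataOptions", {}).get("HttpPutResponseHopLimit", 1)
            if hop < minimum:
                flagged.append(instance["InstanceId"])
    return flagged

# Example payload in the shape returned by `aws ec2 describe-instances`.
sample = json.dumps({"Reservations": [{"Instances": [
    {"InstanceId": "i-ok", "MetadataOptions": {"HttpPutResponseHopLimit": 2}},
    {"InstanceId": "i-blocked", "MetadataOptions": {"HttpPutResponseHopLimit": 1}},
]}]})
assert nodes_below_hop_limit(sample) == ["i-blocked"]
```

Any flagged instance will still produce the IMDS timeout warning shown above until it is modified or replaced from an updated launch template.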
3. Set helm values:
```yaml
storage:
  mode: "use_existing"
  use_existing:
    type: "s3"
    s3:
      region: "us-west-2"
      useIAM: "aws"
      # no serviceAccount needed; credentials come from the node role
  buckets:
    symbols: "your-bucket-name"
```
Node instance profile grants S3 access to all pods on the node, not just zymtrace. Use IRSA if your cluster is shared with other workloads.
Static Key Configuration

Helm values
```yaml
storage:
  mode: "use_existing"
  use_existing:
    type: "s3"
    s3:
      region: ""
      accessKey: ""
      secretKey: ""
  buckets:
    symbols: "zymtrace-symdb"
```
Required fields:

- `region`: AWS region where your S3 bucket is located (e.g., `us-west-2`)
- `accessKey`: AWS access key ID
- `secretKey`: AWS secret access key
IAM Permissions
Your AWS user or role needs the following permissions for the specified bucket:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::zymtrace-symdb",
        "arn:aws:s3:::zymtrace-symdb/*"
      ]
    }
  ]
}
```
Configuration Example
```yaml
storage:
  mode: "use_existing"
  use_existing:
    type: "s3"
    s3:
      region: "us-west-2"
      accessKey: "AKIA..."
      secretKey: "..."
  buckets:
    symbols: "zymtrace-symdb-prod"
```
Bucket Creation

```shell
aws s3 mb s3://zymtrace-symdb --region us-west-2
```
Testing Connectivity

```shell
aws s3 ls s3://zymtrace-symdb --region us-west-2
```
Google Cloud Storage Configuration
```yaml
storage:
  mode: "use_existing"
  use_existing:
    type: "gcs"
    gcs:
      endpoint: "https://storage.googleapis.com" # GCS endpoint, defaults to https://storage.googleapis.com
      accessKey: ""
      secretKey: ""
  buckets:
    symbols: "zymtrace-symdb"
```
Required fields:

- `accessKey`: GCP service account access key
- `secretKey`: GCP service account secret key
GCS Service Account Permissions

Your GCP service account needs the following permissions for the specified bucket:

- storage.objects.create
- storage.objects.delete
- storage.objects.get
- storage.objects.list
- storage.buckets.get

You can assign the Storage Object Admin role for the specific bucket, or create a custom role with the minimal required permissions.
Bucket Creation

```shell
gsutil mb -c STANDARD -l us-west1 gs://zymtrace-symdb
```
Testing Connectivity

```shell
gsutil ls gs://zymtrace-symdb
```
Configuration Example

Production Environment
```yaml
storage:
  mode: "use_existing"
  use_existing:
    type: "gcs"
    gcs:
      endpoint: "https://storage.googleapis.com"
      accessKey: "service-account@project.iam.gserviceaccount.com"
      secretKey: "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n"
  buckets:
    symbols: "zymtrace-symdb-prod"
```
Bucket Configuration

Bucket Names

zymtrace uses the following bucket for storing symbols:

- symbols: stores debugging symbols and related metadata (default: `zymtrace-symdb`)
Bucket Creation
Automatic Creation: If you have appropriate permissions, zymtrace can automatically create the required bucket during initialization.
Manual Creation: For production environments, create the bucket manually using your cloud provider's console or CLI tools. See the specific instructions in each storage provider tab above.
Security Considerations

Access Control
- Principle of Least Privilege: Grant only the minimum required permissions
- Network Security: Use HTTPS/TLS for connections to object storage
- Access Keys: Use dedicated service accounts with limited scope
Data Encryption
- Encryption in Transit: Always use HTTPS for object storage connections
- Encryption at Rest: Enable server-side encryption on your buckets
- Key Management: Use managed encryption keys when possible
Troubleshooting

AWS S3 Authentication Modes
zymtrace supports two authentication modes for AWS S3:
| Mode | How it works | When to use |
|---|---|---|
| Static Keys | Long-lived accessKey / secretKey set in helm values | Simple setups, non-EKS environments |
| IAM / IRSA | useIAM: "aws" — credentials from EC2 node role (IMDS) or IRSA token | EKS deployments — IRSA is recommended |
Common Issues
- Connection Timeout: Check network connectivity and firewall rules
- Access Denied: Verify credentials and permissions
- Bucket Not Found: Ensure the bucket exists and is accessible
- SSL/TLS Errors: Verify certificate configuration for HTTPS endpoints
AWS: IMDS Timeout (failed to load region from IMDS)
Symptom: Logs contain warnings like:

```
WARN 🚧 [warn]: failed to load region from IMDS | err: failed to load IMDS session token: dispatch failure: timeout: HTTP read timeout
```
Cause: This occurs when useIAM: "aws" is set and the pod tries to fetch credentials from the EC2 metadata service. EKS nodes default to IMDSv2 with a hop limit of 1, which blocks pods inside containers from reaching IMDS.
Fix A — IRSA (recommended): Use IRSA so credentials are injected directly into the pod via a projected service account token — no IMDS required. See IAM / IRSA setup above.
Fix B — Increase hop limit: Increase the IMDSv2 hop limit to 2 on all nodes so pods can reach IMDS:
```shell
for id in $(aws ec2 describe-instances \
  --region <region> \
  --filters "Name=tag:eks:cluster-name,Values=<cluster-name>" \
  "Name=instance-state-name,Values=running" \
  --query "Reservations[*].Instances[*].InstanceId" \
  --output text); do
  aws ec2 modify-instance-metadata-options \
    --instance-id "$id" \
    --http-put-response-hop-limit 2 \
    --http-tokens required \
    --region <region>
done
```
Also update your node group launch template to persist this setting across node replacements.
AWS: Access Denied with IAM Authentication
Symptom: S3 operations fail with AccessDenied despite useIAM: "aws" being set.
Checklist:

- The IAM role has the S3 policy attached (`s3:GetObject`, `s3:PutObject`, `s3:DeleteObject`, `s3:ListBucket`)
- For IRSA: the role trust policy matches the exact namespace and service account name
- For IRSA: the service account has the `eks.amazonaws.com/role-arn` annotation
- For IRSA: `storage.use_existing.s3.serviceAccount` is set to the annotated service account name
- The bucket name in `storage.buckets.symbols` matches the bucket in the IAM policy resource ARN
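Parts of this checklist can be cross-checked mechanically against your helm values and IAM policy. A sketch for IRSA deployments only (`check_irsa_values` is a hypothetical helper; node-instance-profile setups legitimately omit the service account and should not use it):

```python
def check_irsa_values(s3_values: dict, buckets: dict, policy_resources: list) -> list:
    """Cross-check IRSA-related helm values against the IAM policy.

    `s3_values` is the storage.use_existing.s3 subtree, `buckets` is
    storage.buckets, `policy_resources` are the Resource ARNs from the
    IAM policy. Returns a list of problems; empty means the checks pass.
    """
    problems = []
    if s3_values.get("useIAM") == "aws" and not s3_values.get("serviceAccount"):
        problems.append("useIAM is 'aws' but serviceAccount is unset "
                        "(required for IRSA; omit only for node instance profile)")
    bucket = buckets.get("symbols", "")
    if f"arn:aws:s3:::{bucket}" not in policy_resources:
        problems.append(f"bucket '{bucket}' is not covered by the IAM policy resources")
    return problems

arns = ["arn:aws:s3:::zymtrace-symdb", "arn:aws:s3:::zymtrace-symdb/*"]
ok = {"useIAM": "aws", "serviceAccount": "zymtrace-s3-sa"}
assert check_irsa_values(ok, {"symbols": "zymtrace-symdb"}, arns) == []
# A mismatched bucket name is reported.
assert check_irsa_values(ok, {"symbols": "other-bucket"}, arns) != []
```

The trust-policy conditions (namespace and service account name) still need to be verified against the role in IAM, since they are not visible from helm values alone.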
Next Steps

After configuring object storage, proceed to configure the other storage components.
Deploy new MinIO instance
This mode deploys and manages MinIO within your cluster.
MinIO Create Mode Configuration
```yaml
storage:
  mode: "create"
  create:
    image:
      repository: minio/minio
      tag: "RELEASE.2024-12-18T13-15-44Z"
    config:
      user: "minio"
      password: "minio123"
    service:
      api:
        port: 9000
      console:
        port: 9001
    replicas: 1
    resources:
      requests:
        cpu: "200m"
        memory: "512Mi"
      limits:
        cpu: "1000m"
        memory: "1Gi"
    storage:
      type: "persistent"
      size: 20Gi
      className: ""
  buckets:
    symbols: "zymtrace-symdb"
```
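The `user`/`password` values in this example are placeholders and should be changed before deploying. A trivial pre-deployment check (hypothetical helper, shown only to make the point concrete) that flags the well-known example credentials:

```python
# Placeholder credentials that appear in example configs and should
# never reach a real deployment.
PLACEHOLDER_CREDS = {("minio", "minio123")}

def uses_placeholder_credentials(user: str, password: str) -> bool:
    """Return True if the create-mode config still carries example creds."""
    return (user, password) in PLACEHOLDER_CREDS

assert uses_placeholder_credentials("minio", "minio123")
assert not uses_placeholder_credentials("zymtrace-admin", "a-long-generated-secret")
```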