Bucket Migration
This guide walks you through migrating your Logfire self-hosted deployment from one object storage bucket to another. This applies whether you’re moving between providers (e.g., Amazon S3 to Google Cloud Storage), between regions, or simply to a different bucket within the same provider.
The migration consists of three steps:
- Replicate & sync data from the source bucket to the destination bucket.
- Scale down writer workloads to stop writes to the source bucket.
- Deploy the Helm chart with updated object storage configuration and scale writer workloads back up.
To follow this guide you will need:

- Admin access to your Kubernetes cluster.
- The Helm CLI installed.
- Read access to the source bucket and write access to the destination bucket.
- Familiarity with your current values.yaml object storage configuration.
Before switching Logfire to the new bucket, you need to copy all existing data from the source bucket to the destination. The exact tool depends on your cloud provider or storage solution.
Google Cloud Storage: use Storage Transfer Service for managed transfers, or gsutil/gcloud storage for a manual copy:
gcloud storage rsync gs://source-bucket gs://destination-bucket --recursive
Amazon S3: use S3 Cross-Region Replication for ongoing replication, or the AWS CLI for a one-time sync:
aws s3 sync s3://source-bucket s3://destination-bucket
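Before running the real sync, you can preview what would be copied with the AWS CLI's dry-run mode (the bucket names are placeholders):

```shell
# Preview the copy without transferring anything.
aws s3 sync s3://source-bucket s3://destination-bucket --dryrun
```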
For MinIO, Ceph, or other S3-compatible storage, you can use rclone or rsync-style tools:
rclone sync source:source-bucket destination:destination-bucket
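Whichever tool you use, it is worth sanity-checking the copy before moving on. One provider-agnostic sketch: dump each bucket's object keys to a file, one key per line (for example via `aws s3 ls --recursive` or `gcloud storage ls`), then diff the listings. The file names and keys below are made up for illustration:

```shell
# Stand-in listings; in practice, generate these from your provider's
# listing command, one object key per line.
printf 'a/1.parquet\na/2.parquet\nb/3.parquet\n' > src-keys.txt
printf 'a/1.parquet\nb/3.parquet\n' > dst-keys.txt

# comm requires sorted input.
sort src-keys.txt > src-sorted.txt
sort dst-keys.txt > dst-sorted.txt

# Print keys present in the source but missing from the destination.
comm -23 src-sorted.txt dst-sorted.txt
```

An empty result means every source key exists in the destination; this does not compare object contents, so also use your tool's checksum verification (e.g. `rclone check`) if you need stronger guarantees.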
Once the bucket data is fully synced, scale the writer workloads to zero. This prevents any further writes to the source bucket while you update the configuration.
kubectl scale deployment logfire-ff-maintenance-worker --replicas=0
kubectl scale deployment logfire-ff-compaction-worker --replicas=0
kubectl scale deployment logfire-ff-ingest-processor --replicas=0
Verify that all writer pods have terminated:
kubectl get pods -l 'app.kubernetes.io/component in (logfire-ff-maintenance-worker,logfire-ff-compaction-worker,logfire-ff-ingest-processor)'
You should see no running pods for these workloads.
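Rather than polling manually, you can block until the pods are actually gone. A sketch using `kubectl wait`; the label selector mirrors the one used above and is an assumption about your deployment's labels:

```shell
# Wait up to 2 minutes for each writer's pods to terminate.
for component in logfire-ff-maintenance-worker logfire-ff-compaction-worker logfire-ff-ingest-processor; do
  kubectl wait --for=delete pod \
    -l "app.kubernetes.io/component=${component}" \
    --timeout=120s
done
```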
After the writers are stopped, run a final incremental sync to capture any data written between the initial sync and the scale-down:
# Example using AWS CLI
aws s3 sync s3://source-bucket s3://destination-bucket
Use the equivalent command for your provider as shown in Step 1.
Update your values.yaml with the new object storage configuration. Refer to the Object Storage section of the installation guide for full details on configuring credentials for each provider.
For example, if migrating to a new S3 bucket:
objectStore:
  uri: s3://new-destination-bucket
  env:
    AWS_DEFAULT_REGION: "<new-region>"
    AWS_ACCESS_KEY_ID:
      valueFrom:
        secretKeyRef:
          name: my-aws-secret
          key: access-key
    AWS_SECRET_ACCESS_KEY:
      valueFrom:
        secretKeyRef:
          name: my-aws-secret
          key: secret-key
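The `my-aws-secret` referenced above must exist in the cluster before the upgrade. If you manage secrets by hand rather than through a secrets operator, it can be created like this (the secret name matches the example configuration; the values are placeholders):

```shell
kubectl create secret generic my-aws-secret \
  --from-literal=access-key='<NEW_AWS_ACCESS_KEY_ID>' \
  --from-literal=secret-key='<NEW_AWS_SECRET_ACCESS_KEY>'
```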
Or, if migrating to Google Cloud Storage:
objectStore:
  uri: gs://new-destination-bucket
Deploy the updated Helm chart:
helm upgrade logfire pydantic/logfire -f values.yaml
Once the deployment is complete, scale the writer workloads back up:
kubectl scale deployment logfire-ff-maintenance-worker --replicas=1
kubectl scale deployment logfire-ff-compaction-worker --replicas=1
kubectl scale deployment logfire-ff-ingest-processor --replicas=1
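Before moving on to verification, you can confirm that each rollout completed:

```shell
# Block until each writer deployment reports a successful rollout.
for deployment in logfire-ff-maintenance-worker logfire-ff-compaction-worker logfire-ff-ingest-processor; do
  kubectl rollout status deployment "$deployment" --timeout=120s
done
```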
After scaling the workloads back up, verify that the system is healthy:
- Check pod status (all pods should be running without restarts):

  kubectl get pods

- Check logs for writer workloads to ensure they are writing to the new bucket:

  kubectl logs -l app.kubernetes.io/component=logfire-ff-ingest-processor --tail=50

- Send test data to confirm end-to-end ingestion is working:

  import logfire

  logfire.configure(
      advanced=logfire.AdvancedOptions(base_url='https://<your_logfire_hostname>'),
      token='<YOUR_LOGFIRE_WRITE_TOKEN>',
  )
  logfire.info('Bucket migration verification')

- Query recent data in the Logfire UI to confirm both historical and new data are accessible.