Bucket Migration

This guide walks you through migrating your Logfire self-hosted deployment from one object storage bucket to another. This applies whether you’re moving between providers (e.g., Amazon S3 to Google Cloud Storage), between regions, or simply to a different bucket within the same provider.


The migration consists of three steps:

  1. Replicate & sync data from the source bucket to the destination bucket.
  2. Scale down writer workloads to stop writes to the source bucket.
  3. Deploy the Helm chart with updated object storage configuration and scale writer workloads back up.

Prerequisites

  • Admin access to your Kubernetes cluster.
  • The Helm CLI installed.
  • Read access to the source bucket and write access to the destination bucket.
  • Familiarity with your current values.yaml object storage configuration.

Step 1: Replicate & Sync Bucket Data

Before switching Logfire to the new bucket, you need to copy all existing data from the source bucket to the destination. The exact tool depends on your cloud provider or storage solution.

Google Cloud Storage

Use Storage Transfer Service for managed transfers, or gsutil/gcloud storage for a manual copy:

Terminal
gcloud storage rsync gs://source-bucket gs://destination-bucket --recursive

Amazon S3

Use S3 Cross-Region Replication for ongoing replication, or the AWS CLI for a one-time sync:

Terminal
aws s3 sync s3://source-bucket s3://destination-bucket

S3-Compatible / Other Providers

For MinIO, Ceph, or other S3-compatible storage, you can use rclone or rsync-style tools:

Terminal
rclone sync source:source-bucket destination:destination-bucket

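Whichever tool you use, it's worth sanity-checking the copy before moving on. A minimal sketch that compares object counts, assuming the AWS CLI and the placeholder bucket names above (use the equivalent listing command for your provider):

```shell
# Compare object counts between source and destination after the sync.
# Bucket names are placeholders; assumes the AWS CLI is configured.
src_count=$(aws s3 ls s3://source-bucket --recursive | wc -l)
dst_count=$(aws s3 ls s3://destination-bucket --recursive | wc -l)
echo "source: ${src_count} objects, destination: ${dst_count} objects"
if [ "$src_count" -ne "$dst_count" ]; then
  echo "object counts differ -- re-run the sync before proceeding" >&2
fi
```

Matching counts don't prove the contents are identical; for a stronger check, tools like `rclone check` compare sizes and checksums between remotes.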
Step 2: Scale Down Writer Workloads

Once the bucket data is fully synced, scale the writer workloads to zero. This prevents any further writes to the source bucket while you update the configuration.

Terminal
kubectl scale deployment logfire-ff-maintenance-worker --replicas=0
kubectl scale deployment logfire-ff-compaction-worker --replicas=0
kubectl scale deployment logfire-ff-ingest-processor --replicas=0

Verify that all writer pods have terminated:

Terminal
kubectl get pods -l 'app.kubernetes.io/component in (logfire-ff-maintenance-worker,logfire-ff-compaction-worker,logfire-ff-ingest-processor)'

You should see no running pods for these workloads.
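Instead of polling `kubectl get pods` by hand, you can block until the pods are gone. A sketch using `kubectl wait` (the 120-second timeout is an assumption; adjust it to your pods' shutdown times):

```shell
# Wait for all writer pods to terminate; --for=delete succeeds
# immediately if no matching pods exist.
kubectl wait --for=delete pod \
  -l 'app.kubernetes.io/component in (logfire-ff-maintenance-worker,logfire-ff-compaction-worker,logfire-ff-ingest-processor)' \
  --timeout=120s \
  && writers_stopped=yes || writers_stopped=no
echo "writers stopped: ${writers_stopped}"
```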

Final Sync

After the writers are stopped, run a final incremental sync to capture any data written between the initial sync and the scale-down:

Terminal
# Example using AWS CLI
aws s3 sync s3://source-bucket s3://destination-bucket

Use the equivalent command for your provider as shown in Step 1.
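If any objects were deleted from the source bucket before the writers stopped (for example by compaction), a plain `sync` won't remove them from the destination. A sketch using the AWS CLI's `--delete` flag, previewed first with `--dryrun`:

```shell
# Preview the remaining delta without copying or deleting anything.
aws s3 sync s3://source-bucket s3://destination-bucket --delete --dryrun \
  && preview=ok || preview=failed
echo "dry-run preview: ${preview}"
# If the preview looks right, apply it:
# aws s3 sync s3://source-bucket s3://destination-bucket --delete
```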


Step 3: Deploy Updated Configuration & Scale Up

Update your values.yaml with the new object storage configuration. Refer to the Object Storage section of the installation guide for full details on configuring credentials for each provider.

For example, if migrating to a new S3 bucket:

objectStore:
  uri: s3://new-destination-bucket
  env:
    AWS_DEFAULT_REGION: "<new-region>"
    AWS_ACCESS_KEY_ID:
      valueFrom:
        secretKeyRef:
          name: my-aws-secret
          key: access-key
    AWS_SECRET_ACCESS_KEY:
      valueFrom:
        secretKeyRef:
          name: my-aws-secret
          key: secret-key

Or, if migrating to Google Cloud Storage:

objectStore:
  uri: gs://new-destination-bucket

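Before applying the upgrade, you can render the chart with the new values to confirm the `objectStore` settings come through as expected (a sketch; `--dry-run` prints the rendered manifests without installing anything):

```shell
# Render the release with the updated values but do not apply it.
helm upgrade logfire pydantic/logfire -f values.yaml --dry-run \
  && rendered=ok || rendered=failed
echo "helm dry-run: ${rendered}"
```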
Deploy the updated Helm chart:

Terminal
helm upgrade logfire pydantic/logfire -f values.yaml

Once the deployment is complete, scale the writer workloads back up to their previous replica counts (shown here as 1; adjust if you were running more replicas):

Terminal
kubectl scale deployment logfire-ff-maintenance-worker --replicas=1
kubectl scale deployment logfire-ff-compaction-worker --replicas=1
kubectl scale deployment logfire-ff-ingest-processor --replicas=1

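To confirm the writers come back cleanly, you can wait on each rollout before moving on to verification (deployment names as in the scale commands above; the timeout is an assumption):

```shell
# Wait for each writer deployment to report ready.
for deploy in logfire-ff-maintenance-worker logfire-ff-compaction-worker logfire-ff-ingest-processor; do
  kubectl rollout status "deployment/${deploy}" --timeout=120s \
    || echo "${deploy} not ready yet" >&2
done
```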
Verification

After scaling the workloads back up, verify that the system is healthy:

  1. Check pod status — all pods should be running without restarts:

    Terminal
    kubectl get pods
    
  2. Check logs for writer workloads to ensure they are writing to the new bucket:

    Terminal
    kubectl logs -l app.kubernetes.io/component=logfire-ff-ingest-processor --tail=50
    
  3. Send test data to confirm end-to-end ingestion is working:

    import logfire
    
    logfire.configure(
        advanced=logfire.AdvancedOptions(base_url='https://<your_logfire_hostname>'),
        token='<YOUR_LOGFIRE_WRITE_TOKEN>',
    )
    logfire.info('Bucket migration verification')
    
  4. Query recent data in the Logfire UI to confirm both historical and new data are accessible.