Docker with S3 Storage

By default, club stores package tarballs on the local filesystem inside the /data/packages directory. You can switch to S3-compatible object storage (AWS S3, Cloudflare R2, MinIO, DigitalOcean Spaces, Backblaze B2, or GCS via the S3 interop endpoint), or use the native GCS backend by setting a few environment variables.

When to Use S3

Stick with filesystem storage when:

  • You run a single club instance
  • Your packages fit comfortably on the server’s disk
  • You want the simplest possible setup

Switch to S3 when:

  • You want package storage decoupled from the server
  • You need more storage than the server’s disk provides
  • You run multiple club instances that need shared access to packages
  • You want to use your cloud provider’s durability and redundancy guarantees
  • You want to offload backups to the storage provider

Configuration

Set these environment variables in your .env file:

BLOB_BACKEND=s3
S3_BUCKET=club-packages
S3_REGION=us-east-1
S3_ACCESS_KEY=your-access-key
S3_SECRET_KEY=your-secret-key

For non-AWS S3-compatible services, also set the endpoint:

S3_ENDPOINT=https://your-s3-endpoint.example.com
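Before pointing club at a custom endpoint, it can save a restart cycle to sanity-check the credentials first. A sketch using the AWS CLI (any S3 client works; assumes the CLI is configured with the same access and secret keys as your .env):

```shell
# List the bucket through the custom endpoint. A listing (even an
# empty one) confirms the endpoint, bucket, and credentials are valid.
aws s3 ls s3://club-packages --endpoint-url https://your-s3-endpoint.example.com
```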

AWS S3

BLOB_BACKEND=s3
S3_BUCKET=club-packages
S3_REGION=us-east-1
S3_ACCESS_KEY=AKIAIOSFODNN7EXAMPLE
S3_SECRET_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
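If the bucket does not exist yet, one way to create it with the AWS CLI (a sketch; adjust the bucket name and region to match your .env). Package tarballs should not be publicly readable, so blocking public access is a sensible default:

```shell
# Create a private bucket for package tarballs
aws s3 mb s3://club-packages --region us-east-1

# Block all public access to the bucket
aws s3api put-public-access-block \
  --bucket club-packages \
  --public-access-block-configuration \
    BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true
```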

MinIO (Self-Hosted S3)

MinIO is an open-source S3-compatible object storage server. Use it when you want S3-style storage without a cloud provider.

docker-compose with MinIO

Add MinIO to your docker-compose.yml:

version: "3.9"
services:
  club:
    image: ghcr.io/birjuvachhani/club:latest
    container_name: club
    restart: unless-stopped
    ports:
      - "127.0.0.1:8080:8080"
    env_file: .env
    volumes:
      - club_data:/data
    depends_on:
      minio:
        condition: service_healthy
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/api/v1/health"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 10s

  minio:
    image: minio/minio:latest
    container_name: club_minio
    restart: unless-stopped
    command: server /data --console-address ":9001"
    environment:
      MINIO_ROOT_USER: ${MINIO_ACCESS_KEY}
      MINIO_ROOT_PASSWORD: ${MINIO_SECRET_KEY}
    volumes:
      - minio_data:/data
    ports:
      - "127.0.0.1:9001:9001" # MinIO console (optional)
    healthcheck:
      test: ["CMD", "mc", "ready", "local"]
      interval: 10s
      timeout: 3s
      retries: 5

  # Create the bucket on first startup
  minio-init:
    image: minio/mc:latest
    depends_on:
      minio:
        condition: service_healthy
    entrypoint: >
      /bin/sh -c "
      mc alias set myminio http://minio:9000 $${MINIO_ACCESS_KEY} $${MINIO_SECRET_KEY};
      mc mb --ignore-existing myminio/club-packages;
      exit 0;
      "
    environment:
      MINIO_ACCESS_KEY: ${MINIO_ACCESS_KEY}
      MINIO_SECRET_KEY: ${MINIO_SECRET_KEY}

volumes:
  club_data:
  minio_data:

.env for MinIO

# club
SERVER_URL=https://packages.example.com
JWT_SECRET=<your-jwt-secret>
TRUST_PROXY=true
# S3 (MinIO)
BLOB_BACKEND=s3
S3_ENDPOINT=http://minio:9000
S3_BUCKET=club-packages
S3_REGION=us-east-1
S3_ACCESS_KEY=minioadmin
S3_SECRET_KEY=minioadmin
# MinIO credentials (must match S3 keys above)
MINIO_ACCESS_KEY=minioadmin
MINIO_SECRET_KEY=minioadmin

The minio-init service creates the club-packages bucket and then exits. It runs on every startup, but the --ignore-existing flag makes it a no-op once the bucket exists. Replace the default minioadmin credentials with strong values before exposing the deployment anywhere beyond localhost.

The MinIO console is available at http://localhost:9001 for browsing stored objects.
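Besides the console, you can inspect stored objects from the command line with mc. Note that the compose file above only publishes port 9001 (the console) to the host, so run mc inside the MinIO container, where the root credentials are already set as environment variables:

```shell
# Shell into the MinIO container and list everything in the bucket
docker compose exec minio sh -c '
  mc alias set local http://localhost:9000 "$MINIO_ROOT_USER" "$MINIO_ROOT_PASSWORD" &&
  mc ls --recursive local/club-packages/
'
```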

Cloudflare R2

Cloudflare R2 is S3-compatible object storage with zero egress fees, which makes it a strong fit for a package registry.

BLOB_BACKEND=s3
S3_ENDPOINT=https://<account-id>.r2.cloudflarestorage.com
S3_BUCKET=club-packages
S3_REGION=auto
S3_ACCESS_KEY=<your-r2-access-key-id>
S3_SECRET_KEY=<your-r2-secret-access-key>

Setup steps:

  1. In the Cloudflare dashboard, go to R2 and create a bucket named club-packages.
  2. Find your account ID on the R2 overview page — it forms the endpoint hostname.
  3. Under Manage R2 API Tokens, create a token with Object Read & Write permission scoped to the bucket. Save the access key ID and secret.
  4. Set S3_REGION=auto. R2 ignores the region value, but the SigV4 signer requires one.
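To confirm the token works before wiring it into club, a quick check with the AWS CLI against the R2 endpoint (illustrative; substitute your own account ID and keys):

```shell
# An error-free listing (even empty) confirms the endpoint and token
AWS_ACCESS_KEY_ID=<your-r2-access-key-id> \
AWS_SECRET_ACCESS_KEY=<your-r2-secret-access-key> \
aws s3 ls s3://club-packages \
  --endpoint-url https://<account-id>.r2.cloudflarestorage.com \
  --region auto
```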

Firebase Storage / GCS via S3 Interop

Firebase Storage buckets are plain Google Cloud Storage buckets, and GCS exposes an S3-compatible XML API. This is the fastest way to point club at a Firebase bucket — no extra code path, just HMAC keys against the S3 backend.

BLOB_BACKEND=s3
S3_ENDPOINT=https://storage.googleapis.com
S3_BUCKET=<project-id>.appspot.com
S3_REGION=auto
S3_ACCESS_KEY=<HMAC access key ID>
S3_SECRET_KEY=<HMAC secret>

Setup steps:

  1. In the GCP Console, open Cloud Storage → Settings → Interoperability.
  2. Under Service account HMAC keys, click Create a key for a service account and pick a service account (or create one) with the Storage Object Admin role on the target bucket.
  3. Copy the access key ID and secret — they’re shown once.
  4. Set S3_BUCKET to your Firebase default bucket (<project-id>.appspot.com) or any other GCS bucket the service account can access.
  5. S3_REGION=auto works; the S3 interop endpoint ignores region for routing but SigV4 requires some value.
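Step 2 can also be done from the command line with gsutil. A sketch; the service-account name here is hypothetical, so substitute your own:

```shell
# Create an HMAC key pair for the service account. The output contains
# the access key ID and secret -- the secret is shown only once.
gsutil hmac create club-storage@<project-id>.iam.gserviceaccount.com
```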

Firebase Storage / GCS (native)

Use this when you want idiomatic GCP auth (service-account JSON or Application Default Credentials) and don’t want to manage HMAC keys. It’s the recommended path for deployments running on GCE, GKE, Cloud Run, or any GCP-hosted environment.

BLOB_BACKEND=gcs
GCS_BUCKET=<project-id>.appspot.com
# Pick one of the three auth modes below:
GCS_CREDENTIALS_FILE=/secrets/sa.json # path to service-account JSON
# GCS_CREDENTIALS_JSON='{"type":"service_account",...}' # inline JSON
# (unset both to use Application Default Credentials)

Auth resolution priority: file → inline → ADC.

Setup steps:

  1. Create a service account in the GCP Console with the Storage Object Admin role on the bucket.
  2. Generate a JSON key for that service account and download it.
  3. Mount it into the container (e.g. as a Docker secret) and point GCS_CREDENTIALS_FILE at its path, or paste its contents into GCS_CREDENTIALS_JSON.
  4. For GCE/GKE/Cloud Run deployments with an attached service account, leave both env vars unset — the SDK auto-discovers credentials from the metadata server.
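For step 3, a docker-compose fragment that mounts the key file read-only into the container (illustrative; the host path is an assumption, and GCS_CREDENTIALS_FILE in .env must match the container path):

```yaml
services:
  club:
    env_file: .env
    volumes:
      - club_data:/data
      - ./secrets/sa.json:/secrets/sa.json:ro  # service-account key, read-only
```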

DigitalOcean Spaces

DigitalOcean Spaces is S3-compatible object storage.

BLOB_BACKEND=s3
S3_ENDPOINT=https://nyc3.digitaloceanspaces.com
S3_BUCKET=club-packages
S3_REGION=nyc3
S3_ACCESS_KEY=<your-spaces-access-key>
S3_SECRET_KEY=<your-spaces-secret-key>

Create the Space and its access keys in the DigitalOcean control panel before starting club: create a Space named club-packages in your chosen region, then generate a Spaces access key under API → Spaces Keys. The region in the endpoint (nyc3 above) must match the Space's region.

Backblaze B2

Backblaze B2 offers S3-compatible storage at lower cost.

BLOB_BACKEND=s3
S3_ENDPOINT=https://s3.us-west-004.backblazeb2.com
S3_BUCKET=club-packages
S3_REGION=us-west-004
S3_ACCESS_KEY=<your-b2-application-key-id>
S3_SECRET_KEY=<your-b2-application-key>

Create the bucket and application key in the Backblaze B2 console:

  1. Create a bucket named club-packages (private)
  2. Create an application key with read/write access to that bucket
  3. The endpoint URL is https://s3.<region>.backblazeb2.com (shown in your bucket details)
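The console steps above can also be scripted with the b2 CLI (a hedged sketch; command names vary between b2 CLI versions, and the key capabilities shown are a minimal read/write set):

```shell
# Authorize with the master key, create a private bucket, then create
# an application key scoped to that bucket
b2 authorize-account <master-key-id> <master-application-key>
b2 create-bucket club-packages allPrivate
b2 create-key --bucket club-packages club-key \
  listBuckets,listFiles,readFiles,writeFiles
```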

Migrating from Filesystem to S3

If you have existing packages stored on the filesystem and want to move to S3:

  1. Create the S3 bucket using the instructions for your provider above.

  2. Upload existing packages to S3 (pause publishes first so no tarballs land on disk mid-copy):

    # For AWS S3
    docker cp club:/data/packages /tmp/club-packages
    aws s3 sync /tmp/club-packages s3://club-packages/
    # For MinIO (using mc)
    docker cp club:/data/packages /tmp/club-packages
    mc alias set myminio http://localhost:9000 minioadmin minioadmin
    mc mirror /tmp/club-packages myminio/club-packages/
  3. Update the .env file with the S3 configuration variables shown above.

  4. Recreate club so it picks up the new environment (a plain restart does not reload env_file changes):

    docker compose up -d club
  5. Verify by fetching a known package:

    curl -sf -H "Authorization: Bearer $TOKEN" \
    https://packages.example.com/api/packages/my_package | jq .latest.version