# Docker with S3 Storage
By default, club stores package tarballs on the local filesystem inside the `/data/packages` directory. You can switch to S3-compatible object storage (AWS S3, Cloudflare R2, MinIO, DigitalOcean Spaces, Backblaze B2, or GCS via the S3 interop endpoint), or to the native GCS backend, by setting a few environment variables.
## When to Use S3
Stick with filesystem storage when:
- You run a single club instance
- Your packages fit comfortably on the server’s disk
- You want the simplest possible setup
Switch to S3 when:
- You want package storage decoupled from the server
- You need more storage than the server’s disk provides
- You run multiple club instances that need shared access to packages
- You want to use your cloud provider’s durability and redundancy guarantees
- You want to offload backups to the storage provider
## Configuration

Set these environment variables in your `.env` file:
```
BLOB_BACKEND=s3
S3_BUCKET=club-packages
S3_REGION=us-east-1
S3_ACCESS_KEY=your-access-key
S3_SECRET_KEY=your-secret-key
```

For non-AWS S3-compatible services, also set the endpoint:
```
S3_ENDPOINT=https://your-s3-endpoint.example.com
```

## AWS S3
```
BLOB_BACKEND=s3
S3_BUCKET=club-packages
S3_REGION=us-east-1
S3_ACCESS_KEY=AKIAIOSFODNN7EXAMPLE
S3_SECRET_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```

Create the bucket with the AWS CLI:
```sh
aws s3 mb s3://club-packages --region us-east-1
```

The bucket does not need to be public; club accesses it server-side using the provided credentials. Block all public access:
```sh
aws s3api put-public-access-block \
  --bucket club-packages \
  --public-access-block-configuration \
  BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true
```

Create an IAM user or role with this minimal policy:
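If you save the policy below as `policy.json`, a dedicated IAM user can be provisioned entirely from the CLI. This is a sketch; the user name `club-s3` and policy name are assumptions, not names club requires:

```sh
# Create a dedicated IAM user for club (the name is an assumption)
aws iam create-user --user-name club-s3

# Attach the minimal policy, saved locally as policy.json
aws iam put-user-policy \
  --user-name club-s3 \
  --policy-name club-packages-rw \
  --policy-document file://policy.json

# Generate the key pair to place in S3_ACCESS_KEY / S3_SECRET_KEY
aws iam create-access-key --user-name club-s3
```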
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::club-packages",
        "arn:aws:s3:::club-packages/*"
      ]
    }
  ]
}
```

## MinIO (Self-Hosted S3)
MinIO is an open-source S3-compatible object storage server. Use it when you want S3-style storage without a cloud provider.
### docker-compose with MinIO

Add MinIO to your `docker-compose.yml`:
```yaml
version: "3.9"

services:
  club:
    image: ghcr.io/birjuvachhani/club:latest
    container_name: club
    restart: unless-stopped
    ports:
      - "127.0.0.1:8080:8080"
    env_file: .env
    volumes:
      - club_data:/data
    depends_on:
      minio:
        condition: service_healthy
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/api/v1/health"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 10s

  minio:
    image: minio/minio:latest
    container_name: club_minio
    restart: unless-stopped
    command: server /data --console-address ":9001"
    environment:
      MINIO_ROOT_USER: ${MINIO_ACCESS_KEY}
      MINIO_ROOT_PASSWORD: ${MINIO_SECRET_KEY}
    volumes:
      - minio_data:/data
    ports:
      - "127.0.0.1:9001:9001" # MinIO console (optional)
    healthcheck:
      test: ["CMD", "mc", "ready", "local"]
      interval: 10s
      timeout: 3s
      retries: 5

  # Create the bucket on first startup
  minio-init:
    image: minio/mc:latest
    depends_on:
      minio:
        condition: service_healthy
    entrypoint: >
      /bin/sh -c "
      mc alias set myminio http://minio:9000 $${MINIO_ACCESS_KEY} $${MINIO_SECRET_KEY};
      mc mb --ignore-existing myminio/club-packages;
      exit 0;
      "
    environment:
      MINIO_ACCESS_KEY: ${MINIO_ACCESS_KEY}
      MINIO_SECRET_KEY: ${MINIO_SECRET_KEY}

volumes:
  club_data:
  minio_data:
```

### .env for MinIO
```
# club
SERVER_URL=https://packages.example.com
JWT_SECRET=<your-jwt-secret>
TRUST_PROXY=true

# S3 (MinIO)
BLOB_BACKEND=s3
S3_ENDPOINT=http://minio:9000
S3_BUCKET=club-packages
S3_REGION=us-east-1
S3_ACCESS_KEY=minioadmin
S3_SECRET_KEY=minioadmin

# MinIO credentials (must match S3 keys above)
MINIO_ACCESS_KEY=minioadmin
MINIO_SECRET_KEY=minioadmin
```

The minio-init service runs on every startup, creates the club-packages bucket, and exits; the `--ignore-existing` flag makes it a no-op when the bucket already exists.
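To bring the stack up and confirm the bucket was created, check the init container's logs (a sketch; the exact log text comes from mc and may vary by version):

```sh
# Start club, MinIO, and the one-shot init container
docker compose up -d

# The init container's output shows the bucket being created
# (or skipped when it already exists)
docker compose logs minio-init
```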
The MinIO console is available at `http://localhost:9001` for browsing stored objects.
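Objects can also be listed from the shell. This sketch assumes the `mc` client bundled in the `minio/minio` image and its preconfigured `local` alias (the same one the healthcheck uses); if your image version lacks it, use a separate `minio/mc` container instead:

```sh
# List stored package tarballs directly inside the MinIO container
docker compose exec minio mc ls local/club-packages/
```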
## Cloudflare R2
Cloudflare R2 is S3-compatible object storage with zero egress fees, which makes it a strong fit for a package registry.
```
BLOB_BACKEND=s3
S3_ENDPOINT=https://<account-id>.r2.cloudflarestorage.com
S3_BUCKET=club-packages
S3_REGION=auto
S3_ACCESS_KEY=<your-r2-access-key-id>
S3_SECRET_KEY=<your-r2-secret-access-key>
```

Setup steps:

- In the Cloudflare dashboard, go to R2 and create a bucket named `club-packages`.
- Find your account ID on the R2 overview page; it forms the endpoint hostname.
- Under Manage R2 API Tokens, create a token with Object Read & Write permission scoped to the bucket. Save the access key ID and secret.
- Set `S3_REGION=auto`. R2 ignores the region, but the SigV4 signer requires a value.
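The bucket can also be created from the command line with Cloudflare's Wrangler CLI (a sketch, assuming Wrangler is installed and authenticated against your account):

```sh
# Create the R2 bucket for club packages
wrangler r2 bucket create club-packages

# Confirm it exists
wrangler r2 bucket list
```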
## Firebase Storage / GCS via S3 Interop
Firebase Storage buckets are plain Google Cloud Storage buckets, and GCS exposes an S3-compatible XML API. This is the fastest way to point club at a Firebase bucket — no extra code path, just HMAC keys against the S3 backend.
```
BLOB_BACKEND=s3
S3_ENDPOINT=https://storage.googleapis.com
S3_BUCKET=<project-id>.appspot.com
S3_REGION=auto
S3_ACCESS_KEY=<HMAC access key ID>
S3_SECRET_KEY=<HMAC secret>
```

Setup steps:

- In the GCP Console, open Cloud Storage → Settings → Interoperability.
- Under Service account HMAC keys, click Create a key for a service account and pick a service account (or create one) with the Storage Object Admin role on the target bucket.
- Copy the access key ID and secret; they are shown only once.
- Set `S3_BUCKET` to your Firebase default bucket (`<project-id>.appspot.com`) or any other GCS bucket the service account can access. `S3_REGION=auto` works; the S3 interop endpoint ignores the region for routing, but SigV4 requires some value.
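The role grant and HMAC key can also be created with `gsutil`. This is a sketch; `my-project` and the service-account email are placeholders for your own values:

```sh
# Grant the service account object admin on the bucket
gsutil iam ch \
  serviceAccount:club-storage@my-project.iam.gserviceaccount.com:roles/storage.objectAdmin \
  gs://my-project.appspot.com

# Create the HMAC key pair; the secret is printed only once
gsutil hmac create club-storage@my-project.iam.gserviceaccount.com
```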
## Firebase Storage / GCS (native)
Use this when you want idiomatic GCP auth (service-account JSON or Application Default Credentials) and don’t want to manage HMAC keys. It’s the recommended path for deployments running on GCE, GKE, Cloud Run, or any GCP-hosted environment.
```
BLOB_BACKEND=gcs
GCS_BUCKET=<project-id>.appspot.com

# Pick one of the three auth modes below:
GCS_CREDENTIALS_FILE=/secrets/sa.json                     # path to service-account JSON
# GCS_CREDENTIALS_JSON='{"type":"service_account",...}'   # inline JSON
# (unset both to use Application Default Credentials)
```

Auth resolution priority: file → inline → ADC.
Setup steps:

- Create a service account in the GCP Console with the Storage Object Admin role on the bucket.
- Generate a JSON key for that service account and download it.
- Mount it into the container (e.g. as a Docker secret) and point `GCS_CREDENTIALS_FILE` at its path, or paste its contents into `GCS_CREDENTIALS_JSON`.
- For GCE/GKE/Cloud Run deployments with an attached service account, leave both env vars unset; the SDK auto-discovers credentials from the metadata server.
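The steps above can also be scripted with `gcloud`. This is a sketch; `my-project` and `club-storage` are placeholder names, not values club requires:

```sh
# Create a dedicated service account for club's storage access
gcloud iam service-accounts create club-storage \
  --project my-project --display-name "club package storage"

# Grant it object admin on the target bucket only
gcloud storage buckets add-iam-policy-binding gs://my-project.appspot.com \
  --member "serviceAccount:club-storage@my-project.iam.gserviceaccount.com" \
  --role roles/storage.objectAdmin

# Generate the JSON key referenced by GCS_CREDENTIALS_FILE
gcloud iam service-accounts keys create sa.json \
  --iam-account club-storage@my-project.iam.gserviceaccount.com
```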
## DigitalOcean Spaces
DigitalOcean Spaces is S3-compatible object storage.
```
BLOB_BACKEND=s3
S3_ENDPOINT=https://nyc3.digitaloceanspaces.com
S3_BUCKET=club-packages
S3_REGION=nyc3
S3_ACCESS_KEY=<your-spaces-access-key>
S3_SECRET_KEY=<your-spaces-secret-key>
```

Create the Space via the DigitalOcean control panel, or with any S3 client pointed at the Spaces endpoint, e.g. the AWS CLI configured with your Spaces keys:

```sh
aws s3 mb s3://club-packages --endpoint-url https://nyc3.digitaloceanspaces.com
```

## Backblaze B2
Backblaze B2 offers S3-compatible storage at lower cost.
```
BLOB_BACKEND=s3
S3_ENDPOINT=https://s3.us-west-004.backblazeb2.com
S3_BUCKET=club-packages
S3_REGION=us-west-004
S3_ACCESS_KEY=<your-b2-application-key-id>
S3_SECRET_KEY=<your-b2-application-key>
```

Create the bucket and application key in the Backblaze B2 console:

- Create a bucket named `club-packages` (private)
- Create an application key with read/write access to that bucket
- The endpoint URL is `https://s3.<region>.backblazeb2.com` (shown in your bucket details)
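The same setup can be scripted with the `b2` CLI. This is a sketch using the classic subcommand names; newer CLI versions (v4+) rename them to `b2 bucket create` and `b2 key create`, so check your installed version:

```sh
# Authenticate with your master or a suitably scoped application key
b2 authorize-account <applicationKeyId> <applicationKey>

# Create a private bucket for club packages
b2 create-bucket club-packages allPrivate

# Create a key restricted to that bucket for club to use
b2 create-key --bucket club-packages club-packages-key \
  listBuckets,listFiles,readFiles,writeFiles,deleteFiles
```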
## Migrating from Filesystem to S3
If you have existing packages stored on the filesystem and want to move to S3:
1. Create the S3 bucket using the instructions for your provider above.

2. Upload existing packages to S3:

   ```sh
   # For AWS S3
   docker cp club:/data/packages /tmp/club-packages
   aws s3 sync /tmp/club-packages s3://club-packages/

   # For MinIO (using mc)
   docker cp club:/data/packages /tmp/club-packages
   mc alias set myminio http://localhost:9000 minioadmin minioadmin
   mc mirror /tmp/club-packages myminio/club-packages/
   ```

3. Update the `.env` file with the S3 configuration variables shown above.

4. Restart club:

   ```sh
   docker compose restart club
   ```

5. Verify by fetching a known package:

   ```sh
   curl -sf -H "Authorization: Bearer $TOKEN" \
     https://packages.example.com/api/packages/my_package | jq .latest.version
   ```
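You can also confirm the migrated objects landed in the bucket directly. A sketch for AWS S3; for other providers, add the matching `--endpoint-url` or use the provider's own client:

```sh
# List the first few migrated objects in the bucket
aws s3 ls s3://club-packages/ --recursive | head
```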