Docker Deployment
Docker is the recommended way to deploy club. The image bundles the AOT-compiled Dart server and the pre-built SvelteKit frontend into a single container with no external dependencies.
Image Details
| Property | Value |
|---|---|
| Build stages | 3 (Dart AOT, Node static, Debian runtime) |
| Runtime base | debian:bookworm-slim |
| Final image size | ~80 MB |
| Runtime user | club (UID 1000) |
| Exposed port | 8080 |
| Data volume | /data |
| Entrypoint | /app/bin/server (AOT-compiled Dart binary) |
| Health check | HTTP GET /api/v1/health on port 8080 |
No Dart SDK or Node.js runtime is included in the final image. The Dart server is an AOT-compiled native binary. The SvelteKit frontend is pre-built static HTML/JS/CSS.
The runtime image also ships git, curl, and unzip — required by the built-in Flutter SDK installer used for pana scoring. SDK caches live under /data/sdks and /data/caches/pub-cache, so they persist on the /data volume.
docker-compose.yml
Create /opt/club/docker-compose.yml:
```yaml
version: "3.9"

services:
  club:
    image: ghcr.io/birjuvachhani/club:latest
    # Or build from source:
    # build:
    #   context: .
    #   dockerfile: docker/Dockerfile
    container_name: club
    restart: unless-stopped
    ports:
      - "127.0.0.1:8080:8080"
    env_file: .env
    volumes:
      - club_data:/data
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/api/v1/health"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 10s

volumes:
  club_data:
    # Explicit volume name — stable regardless of the working directory
    # you run compose from. This is what keeps your data safe across
    # `docker compose up -d --build` cycles.
    name: club_data
    driver: local

# Pin the compose project name. Without this, compose prefixes the volume
# with the directory name, and moving the repo would orphan your data.
name: club
```

Field-by-field explanation
| Field | Purpose |
|---|---|
| `image` | Pre-built image from the container registry. Comment out if using `build`. |
| `container_name` | Fixed name for easy reference in commands (`docker logs club`). |
| `restart: unless-stopped` | Restarts on crash or reboot, but not if you explicitly stop it. |
| `ports: "127.0.0.1:8080:8080"` | Binds port 8080 to localhost only. Traffic should come through a reverse proxy. |
| `env_file: .env` | Loads configuration from the `.env` file in the same directory. |
| `volumes: club_data:/data` | Named volume for persistent storage (SQLite database + package tarballs). |
| `healthcheck` | Docker monitors the health endpoint and marks the container as unhealthy if it fails 3 times. |
| `start_period: 10s` | Gives the server 10 seconds to start before health checks begin. |
.env Configuration
Create /opt/club/.env:
```bash
# Required
SERVER_URL=https://packages.example.com
JWT_SECRET=<output of: openssl rand -hex 32>

# Recommended when behind a reverse proxy (Caddy / nginx / Traefik)
TRUST_PROXY=true

# Optional overrides
# PORT=10234            # Host-side port for docker-compose (NOT the internal listen port)
# LISTEN_PORT=8080      # Internal listen port inside the container
# LOG_LEVEL=info
# MAX_UPLOAD_BYTES=104857600
```

Generate the JWT secret:

```bash
echo "JWT_SECRET=$(openssl rand -hex 32)" >> .env
```

First-boot admin setup
club does not read ADMIN_EMAIL / ADMIN_PASSWORD environment variables. Instead, on first boot — before any user exists — the server prints a one-time setup code to the logs and exposes a web wizard at /setup.
1. Start the container:

   ```bash
   docker compose up -d
   ```

2. Watch the logs for the setup code:

   ```bash
   docker compose logs club | grep -i 'setup code'
   ```

   You'll see something like:

   ```
   [INFO] Setup code: XXXX-XXXX-XXXX-XXXX
   [INFO] Finish setup at: https://packages.example.com/setup
   ```

3. Open the URL in your browser, paste the code, and create the admin account (email + password).

4. Once the admin exists, the wizard closes and the code is no longer valid.
Data persistence
The /data volume inside the container holds all persistent state:
```
/data/
├── club.db              # SQLite database (metadata, users, tokens, etc.)
├── packages/            # Package tarballs
│   ├── my_package/
│   │   ├── 1.0.0.tar.gz
│   │   └── 2.0.0.tar.gz
│   └── other_package/
│       └── 0.1.0.tar.gz
├── docs/                # Generated dartdoc output (DARTDOC_PATH)
├── sdks/                # Flutter/Dart SDKs installed for pana scoring
└── caches/
    └── pub-cache/       # pub cache used by pana
```

Why the volume has an explicit name:
The docker-compose.yml file declares:
```yaml
volumes:
  club_data:
    name: club_data   # ← explicit, stable identity
    driver: local

name: club            # ← pin the compose project name
```

Both lines are critical for your data surviving rebuilds:

- Without `name: club_data` under the volume, Docker Compose prefixes the name with the project — e.g. `docker_club_data` if you run compose from `/docker/`, or `club_club_data` from `/club/`. Moving the repo or renaming the directory would silently orphan your data (still on disk, but not mounted).
- Without the top-level `name: club`, the compose project name defaults to the current directory name, amplifying the same problem.

With both set, the volume is always `club_data` no matter where you run compose from.
What does and does not delete data
| Command | Volume deleted? |
|---|---|
| `docker compose up -d` | No |
| `docker compose up -d --build` | No |
| `docker compose up -d --force-recreate` | No |
| `docker compose restart` | No |
| `docker compose stop` | No |
| `docker compose down` | No |
| `docker compose down --rmi all` | No (image removed, volume kept) |
| `docker compose down -v` | Yes — all data lost |
| `docker volume rm club_data` | Yes — all data lost |
| `make clean` | Yes — prompts first, then deletes |
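Since `docker compose down -v` is the only routine compose command that destroys data, some operators wrap it in a small guard. The `compose_down` function below is purely illustrative (not part of club) and refuses to pass the volume-deleting flag unless you opt in explicitly:

```sh
# Hypothetical guard around `docker compose down`: refuses -v / --volumes
# (which would delete the club_data volume) unless FORCE=1 is set.
compose_down() {
  for arg in "$@"; do
    if [ "$arg" = "-v" ] || [ "$arg" = "--volumes" ]; then
      if [ "${FORCE:-0}" != "1" ]; then
        echo "refusing 'down $arg': this deletes club_data; set FORCE=1 to override" >&2
        return 1
      fi
    fi
  done
  docker compose down "$@"
}
```

Usage: `compose_down -v` fails fast; `FORCE=1 compose_down -v` proceeds as normal.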
Verifying your data is persisted
```bash
# Confirm the volume exists and its mountpoint on disk
docker volume inspect club_data

# See what is inside
docker exec club ls -la /data/
docker exec club du -sh /data/
```

Before a major change, you can always take a quick snapshot — see the Backups section below.
Using a bind mount
If you prefer to store data in a specific host directory instead of a Docker named volume:
```yaml
volumes:
  - /var/lib/club:/data
```

Make sure the directory is owned by UID 1000 (the club user inside the container):

```bash
sudo mkdir -p /var/lib/club
sudo chown 1000:1000 /var/lib/club
```

Health Checks
The image's built-in Docker health check issues an HTTP GET to /api/v1/health on localhost:8080 inside the container and marks the container unhealthy if the endpoint fails 3 consecutive times. The docker-compose.yml above defines an equivalent curl-based check, which takes precedence over the image's HEALTHCHECK.
Check container health status:
```bash
docker inspect --format='{{.State.Health.Status}}' club
```

Check the health endpoint directly:

```bash
curl -sf http://localhost:8080/api/v1/health | jq .
```

Example response:

```json
{
  "status": "ok",
  "version": "0.1.0",
  "timestamp": "2026-01-01T00:00:00Z",
  "checks": {
    "metadata_store": {"status": "ok", "latencyMs": 2},
    "blob_store": {"status": "ok", "latencyMs": 1},
    "search_index": {"status": "ok", "latencyMs": 1}
  }
}
```

Possible status values: ok (all checks pass) or degraded (at least one dependency failing — endpoint returns HTTP 503).

For external monitoring (Datadog, Prometheus, Uptime Kuma, etc.), monitor:

```
https://packages.example.com/api/v1/health
```

Alert if the response status is not ok or the endpoint is unreachable.
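If you have nothing fancier than cron available, a script along these lines is enough for basic alerting. It is a sketch: `HEALTH_URL`, the mail command, and the `check_health` helper are all placeholders, not part of club, and the string match is deliberately crude (prefer `jq -e '.status == "ok"'` when jq is installed):

```sh
#!/bin/sh
# Illustrative external monitor: alert unless the health endpoint reports "ok".
HEALTH_URL="${HEALTH_URL:-https://packages.example.com/api/v1/health}"

check_health() {
  # Crude string check on the health JSON read from stdin: a healthy
  # response contains "status": "ok" and never contains "degraded".
  # Use `jq -e '.status == "ok"'` instead when jq is available.
  input=$(cat)
  case "$input" in
    *'"degraded"'*) return 1 ;;
    *'"status": "ok"'* | *'"status":"ok"'*) return 0 ;;
    *) return 1 ;;
  esac
}

# In a real cron job (assumes curl and a mail command exist on the host):
# curl -sf "$HEALTH_URL" | check_health \
#   || printf 'club health check failed\n' | mail -s 'club DOWN' ops@example.com
```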
Day-to-Day Operations
Starting the server
```bash
cd /opt/club
docker compose up -d
```

Stopping the server

```bash
docker compose stop
```

Restarting

```bash
docker compose restart
```

Viewing logs

```bash
# Follow logs in real time
docker compose logs -f club

# Last 100 lines
docker compose logs --tail 100 club

# Logs since a specific time
docker compose logs --since "2025-01-15T10:00:00" club
```

Checking disk usage

```bash
docker exec club du -sh /data/
docker exec club du -sh /data/packages/
docker exec club ls -la /data/club.db
```

Backing up
The runtime image is minimal and does not include the sqlite3 CLI. Run backups either via a short-lived sidecar container or by snapshotting the whole /data volume.
```bash
# SQLite online backup via a sidecar alpine container that shares the volume
docker run --rm --volumes-from club -v "$PWD:/backup" alpine:latest sh -c '
  apk add --no-cache sqlite >/dev/null &&
  sqlite3 /data/club.db ".backup /backup/club-$(date +%Y%m%d).db"'

# Package tarballs (direct copy)
docker cp club:/data/packages ./packages-backup/

# Or snapshot the entire volume at once (includes db + tarballs + SDKs)
docker run --rm -v club_data:/data -v "$PWD:/backup" \
  alpine:latest tar czf /backup/club-data-$(date +%Y%m%d).tar.gz -C /data .
```

Restoring from backup
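To automate the volume snapshot above on the daily cadence recommended in the production checklist, a cron entry roughly like the following would work. The backup directory, schedule, and 14-day retention are illustrative; note that cron requires `%` to be escaped as `\%`:

```
# /etc/cron.d/club-backup - daily 03:00 snapshot of the club_data volume
0 3 * * * root docker run --rm -v club_data:/data -v /var/backups/club:/backup alpine:latest tar czf /backup/club-data-$(date +\%Y\%m\%d).tar.gz -C /data . && find /var/backups/club -name 'club-data-*.tar.gz' -mtime +14 -delete
```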
```bash
# Stop the server
docker compose stop club

# Restore database
docker cp club-backup.db club:/data/club.db

# Restore packages
docker cp packages-backup/. club:/data/packages/

# Start the server
docker compose start club
```

Upgrading
Safe rebuild sequence — your data is preserved:
```bash
cd /opt/club

# 1. (Optional but recommended) snapshot the data volume
docker run --rm -v club_data:/data -v "$PWD:/backup" alpine:latest \
  tar czf /backup/club-pre-upgrade.tar.gz -C /data .

# 2. Pull the new image (or rebuild from source)
docker compose pull
# Or if building from source: docker compose build

# 3. Recreate the container — the club_data volume is re-mounted, not recreated
docker compose up -d

# 4. Verify
docker compose logs --tail 20 club
curl -sf https://packages.example.com/api/v1/health
```

Database migrations run automatically on startup. No manual migration steps are needed.
Building the Image Locally
If you want to build the Docker image from the repository source:
```bash
git clone https://github.com/BirjuVachhani/club.git
cd club
docker build -f docker/Dockerfile -t club:latest .
```

The three-stage Dockerfile:

1. `dart:stable` — installs Dart dependencies, runs `dart build cli` to produce an AOT-compiled server binary at `/app/build/server/bundle/bin/server`
2. `node:22-alpine` — installs npm dependencies, builds the SvelteKit frontend to static HTML/JS/CSS via `adapter-static`
3. `debian:bookworm-slim` — copies the binary + dynamic libraries and the web build, installs runtime dependencies (`ca-certificates`, `curl`, `git`, `unzip`, `webp`) needed by pana for scoring, creates the non-root `club` user (UID 1000)
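Condensed, that three-stage structure looks roughly like the sketch below. The stage names, exact COPY paths, and apt invocation are illustrative, not copied from the real docker/Dockerfile; only the base images, installed packages, user, port, and entrypoint come from the description above:

```dockerfile
# Stage 1: AOT-compile the Dart server (sketch; build details are illustrative)
FROM dart:stable AS server
WORKDIR /app
COPY . .
RUN dart pub get && dart build cli   # emits /app/build/server/bundle/bin/server

# Stage 2: build the static SvelteKit frontend via adapter-static
FROM node:22-alpine AS web
WORKDIR /web
COPY web/ .
RUN npm ci && npm run build

# Stage 3: minimal Debian runtime with a non-root user
FROM debian:bookworm-slim
RUN apt-get update \
 && apt-get install -y --no-install-recommends ca-certificates curl git unzip webp \
 && rm -rf /var/lib/apt/lists/* \
 && useradd --uid 1000 club
COPY --from=server /app/build/server/bundle/ /app/
COPY --from=web /web/build/ /app/web/
USER club
EXPOSE 8080
VOLUME /data
ENTRYPOINT ["/app/bin/server"]
```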
Production Checklist
- Generate a strong `JWT_SECRET` (`openssl rand -hex 32`) and back it up
- Set `SERVER_URL` to your public HTTPS URL
- Set up a reverse proxy with TLS (Caddy or nginx)
- Set `TRUST_PROXY=true` so club honours the proxy's forwarded headers
- Complete admin setup at `/setup` using the code from the logs
- Bind port 8080 to localhost only (`127.0.0.1:8080:8080`) or use `expose:` when Caddy is a sidecar
- Set up automated backups (daily cron recommended)
- Monitor the health endpoint
- Configure log aggregation
- Consider PostgreSQL for concurrent access
- Consider S3 storage for large or numerous packages
- Set Docker container resource limits (memory, CPU)
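For the last item, a non-swarm compose file can cap memory and CPU directly on the service. The numbers below are illustrative, not a sizing recommendation:

```yaml
services:
  club:
    # Illustrative caps; size these to your actual workload.
    mem_limit: 512m
    cpus: "1.0"
```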