
Local Development Environment

The Accessible PDF project runs across three clouds — Cloudflare (Workers, R2, KV), AWS (Lambda, SQS, DynamoDB, SSM, EC2), and Supabase (Postgres, Auth). The local development environment emulates all of these services in Docker so you can run the entire conversion pipeline end-to-end without internet access or cloud credentials.

Quick Start

Prerequisites

  • Docker Desktop 24+ with at least 8 GB RAM allocated (Settings → Resources)
  • Node.js 22+ (via nvm or direct install)
  • aws CLI v2 (optional — used by seeding scripts and for inspecting LocalStack)

One-Command Setup

```sh
./scripts/setup-local.sh
```

This script:

  1. Verifies Docker and Node.js are installed and running
  2. Copies .env.local.example → .env.node-server (pre-filled for local stack)
  3. Creates .env.grafana with a default admin password
  4. Creates a vertex-ai.json placeholder (required by API node Docker mount)
  5. Runs npm install
  6. Starts all services via docker compose --profile local up -d --build
  7. Waits for each service to pass its health check
  8. Prints a table of service URLs and test credentials
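The health wait in step 7 boils down to a probe-and-retry loop. A minimal TypeScript sketch of the idea (the real script is shell; `waitForHealthy`, its retry count, and its delay are illustrative, not the script's actual values):

```typescript
// Sketch of the "wait until healthy" loop the setup script performs.
// The probe and delay are injected so the loop is easy to test.
async function waitForHealthy(
  probe: () => Promise<boolean>, // e.g. () => fetch("http://localhost:8000/auth/v1/health").then(r => r.ok)
  retries = 30,
  delayMs = 1000,
): Promise<boolean> {
  for (let attempt = 1; attempt <= retries; attempt++) {
    if (await probe()) return true; // service passed its health check
    await new Promise((resolve) => setTimeout(resolve, delayMs)); // back off, retry
  }
  return false; // give up and let the caller report the failure
}
```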

After setup, start the web frontend in a separate terminal:

```sh
# First time only: configure the web app for local endpoints
# In apps/web/.env.local, uncomment the local dev lines:
#   NEXT_PUBLIC_SUPABASE_URL=http://localhost:8000
#   NEXT_PUBLIC_SUPABASE_ANON_KEY=eyJhbG... (the anon key from .env.local.example)
#   NEXT_PUBLIC_API_URL=http://localhost:8800
#   NEXT_PUBLIC_DEV_AUTO_LOGIN_PASSWORD=localdev123!
npm run dev --filter=web
```

Test Accounts

These are created automatically by supabase/seed.sql:

| Email | Password | Role | Credits |
|---|---|---|---|
| [email protected] | localdev123! | Standard user | 100 |
| [email protected] | localadmin123! | Admin | 1000 |

How It Works

Architecture Overview

```
Your Machine
────────────

npm run dev --filter=web ──→ Next.js on localhost:3000
      │
      │ auth (JWT)        ┌──────────────────────────┐
      ├──────────────────→│ Supabase (Docker)        │
      │                   │  ├─ Postgres    :54322   │
      │                   │  ├─ GoTrue Auth :8000    │
      │                   │  ├─ PostgREST   :8000    │
      │                   │  ├─ Kong        :8000    │
      │                   │  ├─ Studio      :54323   │
      │                   │  └─ Inbucket    :54324   │
      │                   └──────────────────────────┘
      │
      │ API calls         ┌──────────────────────────┐
      └──────────────────→│ Traefik LB :8800         │
                          │  ├→ API Node 1 :8790     │
                          │  └→ API Node 2 :8790     │
                          └────────────┬─────────────┘
                                       │
         ┌─────────────────────────────┼───────────────────┐
         ▼                             ▼                   ▼
┌──────────────────┐      ┌──────────────────┐      ┌─────────────┐
│ MinIO      :9000 │      │ LocalStack :4566 │      │ WeasyPrint  │
│ (S3-compat)      │      │  ├─ SQS          │      │       :5001 │
│ Console    :9001 │      │  ├─ DynamoDB     │      ├─────────────┤
│                  │      │  ├─ SSM          │      │ Audiveris   │
│ 6 buckets        │      │  └─ SES          │      │       :5002 │
└──────────────────┘      └────────┬─────────┘      └─────────────┘
                                   │
                                   ▼
                        ┌──────────────────────┐
                        │ Batch Worker         │
                        │ (SQS consumer)       │
                        └──────────────────────┘

┌────────────────────────────────────────────────────────────┐
│ Monitoring (Loki + Grafana + Promtail)  :3000              │
└────────────────────────────────────────────────────────────┘
```

Request Flow (PDF Conversion)

  1. User uploads a PDF via the Next.js web app at localhost:3000
  2. Frontend authenticates with Supabase Auth (GoTrue) at localhost:8000 — receives a JWT
  3. Frontend calls the API at localhost:8800 with the JWT in the Authorization header
  4. Traefik load-balances the request to one of two API Node containers
  5. API Node validates the JWT against the local Supabase JWT secret
  6. File is stored in MinIO (S3-compatible) at minio:9000 in the accessible-pdf-files bucket
  7. File metadata is written to Supabase Postgres
  8. Credits are deducted via the deduct_credits Supabase RPC function
  9. A pipeline message is enqueued to LocalStack SQS
  10. Batch Worker picks up the SQS message, downloads the PDF from MinIO, runs the conversion pipeline (Puppeteer for rendering, AI APIs for OCR/description if keys are present)
  11. Results are written back to MinIO and progress is tracked in LocalStack DynamoDB
  12. Frontend polls for completion and displays the converted document
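The enqueue step (9) amounts to serializing a job descriptor into the SQS message body. A hedged sketch of what such a message might look like (the field names below are illustrative, not the project's actual pipeline schema):

```typescript
// Hypothetical message shape for the pipeline queue; field names are
// illustrative assumptions, not the project's actual schema.
interface PipelineMessage {
  jobId: string;
  userId: string;
  bucket: string; // where the uploaded PDF lives in MinIO/R2
  key: string;    // object key of the uploaded PDF
  requestedAt: string;
}

function buildPipelineMessage(jobId: string, userId: string, key: string): PipelineMessage {
  return {
    jobId,
    userId,
    bucket: "accessible-pdf-files", // the upload bucket from step 6
    key,
    requestedAt: new Date().toISOString(),
  };
}

// The API node would send JSON.stringify(msg) as the SQS MessageBody,
// and the batch worker would JSON.parse it on receipt (step 10).
```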

What Cannot Be Emulated

AI/LLM APIs (Anthropic Claude, Google Gemini, Mistral, OpenAI, Mathpix) still require real API keys. Add your keys to .env.node-server for full pipeline functionality. If keys are missing, the pipeline gracefully skips those backends — you’ll get partial results but no crashes.
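The "skip missing backends" behavior can be pictured as filtering on which API keys are present. A sketch of the pattern (the environment variable names here are assumptions, not necessarily the project's):

```typescript
// Hypothetical mapping from backend name to its API-key env var.
// The names are illustrative; check .env.node-server for the real ones.
const BACKEND_KEYS: Record<string, string> = {
  anthropic: "ANTHROPIC_API_KEY",
  gemini: "GEMINI_API_KEY",
  mistral: "MISTRAL_API_KEY",
  openai: "OPENAI_API_KEY",
  mathpix: "MATHPIX_API_KEY",
};

// Return only the backends whose key is actually configured,
// so the pipeline can skip the rest without crashing.
function availableBackends(env: Record<string, string | undefined>): string[] {
  return Object.entries(BACKEND_KEYS)
    .filter(([, envVar]) => Boolean(env[envVar]))
    .map(([name]) => name);
}
```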

Cloudflare Browser Rendering is a production-only Cloudflare binding. The Node server mode uses native Puppeteer (Chrome) instead, which is functionally equivalent and already included in the API Node Docker image.


Service Reference

Ports

| Port | Service | Purpose |
|---|---|---|
| 3000 | Grafana | Monitoring dashboards (bound to 127.0.0.1) |
| 4566 | LocalStack | AWS service emulation (SQS, DynamoDB, SSM) |
| 5001 | WeasyPrint | HTML-to-PDF engine (internal, Docker network) |
| 5002 | Audiveris | Music notation OCR (internal, Docker network) |
| 8000 | Supabase Kong | API gateway (auth, REST, meta) |
| 8800 | Traefik | Load balancer → API nodes |
| 8801 | Traefik | Admin dashboard |
| 9000 | MinIO | S3-compatible API |
| 9001 | MinIO | Web console |
| 54322 | Postgres | Direct database access |
| 54323 | Supabase Studio | Database GUI |
| 54324 | Inbucket | Email capture (magic links) |

Docker Containers

| Container | Image | Profile |
|---|---|---|
| accessible-pdf-converter-api-node-1 | workers/api/Dockerfile | (default) |
| accessible-pdf-converter-api-node-2 | workers/api/Dockerfile | (default) |
| accessible-pdf-weasyprint | services/weasyprint/Dockerfile | (default) |
| accessible-pdf-audiveris | services/audiveris/Dockerfile | (default) |
| accessible-pdf-traefik | traefik:v3 | (default) |
| accessible-pdf-supabase-db | supabase/postgres:15.6.1.145 | local |
| accessible-pdf-supabase-auth | supabase/gotrue:v2.158.1 | local |
| accessible-pdf-supabase-rest | postgrest/postgrest:v12.2.3 | local |
| accessible-pdf-supabase-kong | kong:3.5 | local |
| accessible-pdf-supabase-meta | supabase/postgres-meta:v0.83.2 | local |
| accessible-pdf-supabase-studio | supabase/studio:20241202 | local |
| accessible-pdf-supabase-inbucket | inbucket/inbucket:3.0 | local |
| accessible-pdf-supabase-migrate | postgres:15 (init) | local |
| accessible-pdf-minio | minio/minio:latest | local |
| accessible-pdf-minio-init | minio/mc (init) | local |
| accessible-pdf-localstack | localstack/localstack:latest | local |
| accessible-pdf-batch-worker | workers/api/Dockerfile | local |
| accessible-pdf-loki | grafana/loki:3.4.2 | monitoring, local |
| accessible-pdf-grafana | grafana/grafana:11.5.2 | monitoring, local |
| accessible-pdf-promtail | grafana/promtail:3.4.2 | monitoring, local |

Using the Environment

Starting

```sh
# Full local stack (recommended)
docker compose --profile local up -d

# Or use the setup script (first time)
./scripts/setup-local.sh
```

Stopping

```sh
# Stop containers (preserves data volumes)
docker compose --profile local down

# Stop and delete all data (full reset)
docker compose --profile local down -v
```

Checking Health

```sh
./scripts/local-healthcheck.sh
```

This checks every HTTP endpoint and Docker container, printing a pass/fail report.

Viewing Logs

```sh
# All services
docker compose --profile local logs -f

# Specific service
docker compose --profile local logs -f api-node-1

# Batch worker
docker compose --profile local logs -f batch-worker

# Supabase migrations (init container — shows once)
docker compose --profile local logs supabase-migrate
```

Rebuilding After Code Changes

```sh
# Rebuild only the API node containers
docker compose --profile local up -d --build api-node-1 api-node-2

# Rebuild the batch worker
docker compose --profile local up -d --build batch-worker
```

How Each Cloud Service Is Emulated

Supabase → Local Postgres + GoTrue

Production: Hosted Supabase at vuvwmfxssjosfphzpzim.supabase.co — managed Postgres, managed auth, managed REST API.

Local: Six Docker containers replicate the Supabase stack:

  • supabase-db — Postgres 15 with the official Supabase image (includes auth schema, extensions, roles). All 57 migrations from supabase/migrations/ are applied on first boot by the supabase-migrate init container.
  • supabase-auth (GoTrue) — handles signups, logins, JWT issuance, and magic links. Configured with GOTRUE_MAILER_AUTOCONFIRM=true so no email confirmation is needed. SMTP goes to Inbucket.
  • supabase-rest (PostgREST) — auto-generates a REST API from the Postgres schema, respecting RLS policies.
  • supabase-kong — API gateway that routes /auth/v1/* to GoTrue and /rest/v1/* to PostgREST. Handles API key validation (anon vs service_role).
  • supabase-studio — web-based database GUI at localhost:54323 for inspecting tables, running queries, and managing auth users.
  • supabase-inbucket — catches all outgoing emails (magic links, password resets) so they never leave your machine. View captured emails at localhost:54324.

JWT keys: The local stack uses the standard supabase-demo JWT secret and pre-generated anon/service_role keys. These are the same keys used by the official Supabase CLI local development setup, making them safe for local-only use.

How API nodes connect: The API node’s .env.node-server has SUPABASE_URL=http://supabase-kong:8000 — all Supabase access goes through Kong on the Docker network. The SUPABASE_JWT_SECRET matches GoTrue’s GOTRUE_JWT_SECRET, so JWT validation works identically to production.
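Because the two secrets match, an API node can check GoTrue's HS256 tokens with nothing more than an HMAC. A self-contained sketch (not the project's actual validation code) using Node's crypto module; the secret value below is the well-known Supabase demo JWT secret, an assumption you should verify against your .env.node-server:

```typescript
// Minimal HS256 JWT sign/verify sketch using node:crypto.
// DEMO_SECRET is assumed to be the standard supabase-demo secret.
import { createHmac, timingSafeEqual } from "node:crypto";

const DEMO_SECRET = "super-secret-jwt-token-with-at-least-32-characters-long";

const b64url = (buf: Buffer): string =>
  buf.toString("base64").replace(/=+$/, "").replace(/\+/g, "-").replace(/\//g, "_");

// Issue a token the way GoTrue would: header.payload.signature, HS256.
function signJwt(payload: object, secret: string): string {
  const head = b64url(Buffer.from(JSON.stringify({ alg: "HS256", typ: "JWT" })));
  const body = b64url(Buffer.from(JSON.stringify(payload)));
  const sig = b64url(createHmac("sha256", secret).update(`${head}.${body}`).digest());
  return `${head}.${body}.${sig}`;
}

// Verify the signature with the shared secret, as an API node would.
function verifyJwt(token: string, secret: string): boolean {
  const [head, body, sig] = token.split(".");
  const expected = b64url(createHmac("sha256", secret).update(`${head}.${body}`).digest());
  return sig.length === expected.length &&
    timingSafeEqual(Buffer.from(sig), Buffer.from(expected));
}
```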

Cloudflare R2 → MinIO

Production: Five R2 buckets accessed via S3-compatible API (from Node server) or native R2 bindings (from Cloudflare Worker).

Local: MinIO provides an S3-compatible API on port 9000. The minio-init container creates six buckets on startup:

| Bucket | Replaces (Production) |
|---|---|
| accessible-pdf-files | R2: accessible-pdf-files |
| accessible-photos | R2: accessible-photos |
| accessible-forms | R2: accessible-forms |
| org-chart-files | R2: org-chart-files |
| convert-email-files | R2: convert-email-files |
| accessible-pdf-email | AWS S3: email intake bucket |

No code changes needed: The API node already uses S3ObjectStorage (AWS SDK v3 S3Client) with a configurable S3_ENDPOINT. Setting S3_ENDPOINT=http://minio:9000 redirects all storage operations to MinIO transparently.
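As a sketch, such an endpoint override typically looks like the following; `storageClientConfig` is a hypothetical helper (not the project's code), and its result would be spread into `new S3Client(...)` from @aws-sdk/client-s3:

```typescript
// Hypothetical helper: derive S3 client options from env vars.
// In real code the returned object would be passed to `new S3Client(...)`.
function storageClientConfig(env: Record<string, string | undefined>) {
  const isLocal = Boolean(env.S3_ENDPOINT); // set to http://minio:9000 locally
  return {
    region: env.AWS_REGION ?? "us-east-1",
    endpoint: env.S3_ENDPOINT,      // undefined in production (real R2/S3 endpoint resolution)
    forcePathStyle: isLocal,        // MinIO is usually addressed path-style, not virtual-host style
    credentials: isLocal
      ? { accessKeyId: "minioadmin", secretAccessKey: "minioadmin" } // MinIO defaults
      : undefined,                  // fall back to the normal credential chain
  };
}
```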

MinIO Console: Browse buckets and objects at http://localhost:9001 (login: minioadmin / minioadmin).

Cloudflare KV → In-Memory Map

Production: The Node server calls the Cloudflare KV REST API (api.cloudflare.com/client/v4/accounts/.../storage/kv/...) for session storage and rate limiting. This requires CF_ACCOUNT_ID and CF_API_TOKEN.

Local: When ENVIRONMENT=local or KV_MODE=memory is set, the server uses InMemoryKvStorage — a Map<string, { value, expiresAt }> with lazy TTL expiry. This is defined in workers/api/src/providers/local.ts.

How it’s wired: In server.ts, the isLocalMode flag causes buildSyntheticEnv() to use createLocalKvNamespaceShim() instead of createKvNamespaceShim(). Similarly, createNodeServerStorageContext() in node-server.ts detects missing CF credentials and falls back to InMemoryKvStorage.

Trade-off: KV data is ephemeral — it resets when the container restarts. This is fine for local dev since sessions and rate limits are transient. In production, KV persists across requests via Cloudflare’s global network.
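A minimal sketch of such a store with lazy TTL expiry (the real InMemoryKvStorage in workers/api/src/providers/local.ts may differ in its method names and details):

```typescript
// Sketch of an in-memory KV with lazy TTL expiry: entries are only
// evicted when a read discovers they have expired.
class InMemoryKv {
  private store = new Map<string, { value: string; expiresAt?: number }>();

  // `now` is injectable so the expiry logic is testable without real waiting.
  put(key: string, value: string, ttlSeconds?: number, now = Date.now()): void {
    const expiresAt = ttlSeconds !== undefined ? now + ttlSeconds * 1000 : undefined;
    this.store.set(key, { value, expiresAt });
  }

  get(key: string, now = Date.now()): string | null {
    const entry = this.store.get(key);
    if (!entry) return null;
    if (entry.expiresAt !== undefined && now >= entry.expiresAt) {
      this.store.delete(key); // lazy expiry: evict on read
      return null;
    }
    return entry.value;
  }
}
```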

AWS SQS → LocalStack SQS

Production: Two SQS queues provisioned by CDK (queue-stack.ts):

  • accessible-pdf-pipeline — main work queue (900s visibility, 20s long polling, 3 retries)
  • accessible-pdf-pipeline-dlq — dead letter queue (14-day retention)

Local: The infra/localstack/init-aws.sh script creates both queues with identical settings on LocalStack startup. The batch worker’s SQS_QUEUE_URL environment variable points to the LocalStack endpoint.
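Expressed as SQS queue attributes, the settings above look roughly like this (the attribute names are standard SQS attributes, and 000000000000 is LocalStack's default account ID; the exact shell in init-aws.sh may differ):

```typescript
// Queue settings mirrored by infra/localstack/init-aws.sh, written out as
// the SQS attribute maps that create-queue would receive.
const dlqArn = "arn:aws:sqs:us-east-1:000000000000:accessible-pdf-pipeline-dlq";

const pipelineQueueAttributes = {
  VisibilityTimeout: "900",            // message stays hidden for 900s while a worker processes it
  ReceiveMessageWaitTimeSeconds: "20", // 20s long polling
  RedrivePolicy: JSON.stringify({
    deadLetterTargetArn: dlqArn,
    maxReceiveCount: 3,                // after 3 failed receives, move to the DLQ
  }),
};

const dlqAttributes = {
  MessageRetentionPeriod: String(14 * 24 * 3600), // 14-day retention, in seconds
};
```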

AWS DynamoDB → LocalStack DynamoDB

Production: Single-table design (accessible-pdf-{env}-data) with pk/sk keys, TTL, and PAY_PER_REQUEST billing.

Local: Created by init-aws.sh with the same schema. The batch worker uses the AWS SDK which respects AWS_ENDPOINT_URL=http://localstack:4566.
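Illustrative key builders for that single-table layout; `sessionKey` is hypothetical, but the pk/sk values follow the `session#123` / `meta` item shown in the DynamoDB CLI examples later on this page:

```typescript
// Hypothetical key builder for the single-table design: all items share
// generic pk/sk attributes, and the entity type is encoded in the values.
function sessionKey(sessionId: string) {
  return {
    pk: { S: `session#${sessionId}` }, // partition key: groups all items for one session
    sk: { S: "meta" },                 // sort key: distinguishes item types within the partition
  };
}

// e.g. GetItemCommand({ TableName: "accessible-pdf-data", Key: sessionKey("123") })
```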

AWS SSM Parameter Store → LocalStack SSM

Production: API keys and secrets stored at /accessible-pdf/{env}/* in AWS SSM. Lambda and batch workers load these at cold start.

Local: init-aws.sh pre-populates SSM parameters at /accessible-pdf/local/* with local service URLs and placeholder values. The batch worker’s SSM_PREFIX=/accessible-pdf/local causes it to read from LocalStack instead of AWS.

EC2 Batch Workers → Docker Container

Production: EC2 Auto Scaling Group with spot instances, scaling based on SQS queue depth. Workers pull Docker images from ECR.

Local: A single batch-worker Docker container runs the same code (workers/batch/src/index.ts) with environment variables pointed at LocalStack and MinIO. The SpotMonitor (which polls EC2 instance metadata at 169.254.169.254) is disabled when ENVIRONMENT=local to avoid useless HTTP errors.

AWS Lambda → Direct Execution

Production: Two Lambda functions — API (Hono app) and email-intake (SES processor).

Local: Lambda is not deployed to LocalStack (Docker-in-Docker cold starts are too slow for iterative development). Instead:

  • API Lambda: The same Hono app runs directly as a Node.js server in the API node containers — functionally identical.
  • Email-intake Lambda: Test directly with npx tsx and a synthetic SES event (see Testing Workflows).

Docker Compose Profiles

The docker-compose.yml uses profiles to let you start only what you need:

| Profile | Command | What Starts |
|---|---|---|
| (none) | docker compose up | Core: Traefik, API Node x2, WeasyPrint, Audiveris |
| local | docker compose --profile local up | Core + Supabase + MinIO + LocalStack + Batch Worker + Monitoring |
| monitoring | docker compose --profile monitoring up | Loki, Grafana, Promtail |

The default profile (no flag) starts only the core services — useful when you’re developing against production Supabase and Cloudflare with real credentials in .env.node-server.


Files and Directory Layout

Configuration Files

| File | Purpose |
|---|---|
| docker-compose.yml | All service definitions with profile annotations |
| .env.local.example | Pre-filled env for local stack — copy to .env.node-server |
| .env.node-server.example | Production-oriented env template |
| infra/supabase/kong.yml | Kong declarative config (routes, consumers, API keys) |
| infra/supabase/init-db.sh | Migration runner (applies supabase/migrations/*.sql in order) |
| infra/localstack/init-aws.sh | Creates SQS queues, DynamoDB table, SSM parameters |
| supabase/seed.sql | Test users and initial data |

Source Code (Local Dev Support)

| File | What Changed |
|---|---|
| workers/api/src/providers/local.ts | InMemoryKvStorage class and createLocalKvNamespaceShim() |
| workers/api/src/providers/node-server.ts | Falls back to InMemoryKvStorage when CF credentials are absent |
| workers/api/src/server.ts | isLocalMode flag: CF credentials optional, in-memory KV |
| workers/batch/src/index.ts | Skips SpotMonitor.start() when ENVIRONMENT=local |

Scripts

| Script | Purpose |
|---|---|
| scripts/setup-local.sh | One-command bootstrap (env files, npm install, docker compose) |
| scripts/local-healthcheck.sh | Checks all HTTP endpoints and Docker containers |
| scripts/seed-local.sh | Uploads sample data to MinIO, verifies LocalStack and Supabase |

Working with the Database

Supabase Studio

Open http://localhost:54323 to browse tables, run SQL, and manage auth users. No login required.

Direct Postgres Access

```sh
psql -h localhost -p 54322 -U supabase_admin -d postgres
```

Password: postgres

Applying New Migrations

  1. Create a migration file: supabase/migrations/YYYYMMDD_NNN_description.sql
  2. Restart the migrate container:

     ```sh
     docker compose --profile local restart supabase-migrate
     ```

  3. Or restart the whole stack:

     ```sh
     docker compose --profile local down && docker compose --profile local up -d
     ```

Using Supabase CLI for Migration Diffs

If you prefer supabase db diff for generating migrations:

```sh
supabase start    # Starts its own Docker containers (separate from compose)
supabase db diff -f my_migration --use-migra
supabase stop
```

Note: supabase start runs independently from the compose stack. Use one or the other, not both simultaneously.


Working with Object Storage

MinIO Console

Open http://localhost:9001 and log in with minioadmin / minioadmin. You can browse buckets, upload files, and inspect object metadata.

AWS CLI with MinIO

```sh
# List buckets
AWS_ACCESS_KEY_ID=minioadmin AWS_SECRET_ACCESS_KEY=minioadmin \
  aws --endpoint-url http://localhost:9000 s3 ls

# Upload a file
AWS_ACCESS_KEY_ID=minioadmin AWS_SECRET_ACCESS_KEY=minioadmin \
  aws --endpoint-url http://localhost:9000 s3 cp test.pdf s3://accessible-pdf-files/test.pdf

# List objects in a bucket
AWS_ACCESS_KEY_ID=minioadmin AWS_SECRET_ACCESS_KEY=minioadmin \
  aws --endpoint-url http://localhost:9000 s3 ls s3://accessible-pdf-files/
```

Working with AWS Services

All AWS CLI commands work against LocalStack by adding --endpoint-url http://localhost:4566.

SQS

```sh
export AWS_ACCESS_KEY_ID=test AWS_SECRET_ACCESS_KEY=test

# List queues
aws --endpoint-url http://localhost:4566 --region us-east-1 sqs list-queues

# Check queue depth
aws --endpoint-url http://localhost:4566 --region us-east-1 sqs get-queue-attributes \
  --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/accessible-pdf-pipeline \
  --attribute-names ApproximateNumberOfMessages

# Send a test message
aws --endpoint-url http://localhost:4566 --region us-east-1 sqs send-message \
  --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/accessible-pdf-pipeline \
  --message-body '{"test": true}'
```

DynamoDB

```sh
# Scan the table
aws --endpoint-url http://localhost:4566 --region us-east-1 dynamodb scan \
  --table-name accessible-pdf-data

# Get a specific item
aws --endpoint-url http://localhost:4566 --region us-east-1 dynamodb get-item \
  --table-name accessible-pdf-data \
  --key '{"pk": {"S": "session#123"}, "sk": {"S": "meta"}}'
```

SSM

```sh
# List all local parameters
aws --endpoint-url http://localhost:4566 --region us-east-1 ssm get-parameters-by-path \
  --path /accessible-pdf/local --with-decryption
```

Testing Workflows

Full PDF Conversion Pipeline

  1. Start the stack and web app:

     ```sh
     docker compose --profile local up -d
     npm run dev --filter=web
     ```

  2. Open http://localhost:3000 and sign in as [email protected] / localdev123!
  3. Upload a PDF
  4. Watch processing in real time:

     ```sh
     # API node logs (file upload, metadata)
     docker compose --profile local logs -f api-node-1

     # Batch worker logs (SQS consumption, conversion)
     docker compose --profile local logs -f batch-worker
     ```

  5. Verify results:
     • MinIO Console (localhost:9001) — check for output files
     • Supabase Studio (localhost:54323) — check file metadata and credit deductions
     • Grafana (localhost:3000) — view structured logs

Email Intake Lambda

The email-intake Lambda isn’t deployed to LocalStack. Test it directly:

```sh
cd workers/email-intake
npx tsx -e "
import { handler } from './src/index.js';
const event = {
  Records: [{
    ses: {
      mail: {
        messageId: 'test-123',
        source: '[email protected]',
        commonHeaders: { subject: 'Convert this PDF' }
      },
      receipt: {
        action: { bucketName: 'accessible-pdf-email', objectKey: 'test-email' }
      }
    }
  }]
};
handler(event).then(console.log).catch(console.error);
"
```

Wrangler Dev (Cloudflare Worker Mode)

To test the Cloudflare Worker path alongside the Docker stack:

```sh
cd workers/api
npx wrangler dev    # Port 8787, local R2/KV via miniflare
```

Wrangler dev is self-contained — it uses miniflare for R2 and KV emulation. It doesn’t conflict with the Docker stack. Use localhost:8787 instead of localhost:8800 for API calls when testing Worker-specific behavior.


Day-to-Day Commands

```sh
# Start everything
docker compose --profile local up -d

# Stop everything (keep data)
docker compose --profile local down

# Full reset (delete all data)
docker compose --profile local down -v

# Check health
./scripts/local-healthcheck.sh

# Rebuild after code changes
docker compose --profile local up -d --build api-node-1 api-node-2

# Tail logs
docker compose --profile local logs -f api-node-1 batch-worker

# Run the web app
npm run dev --filter=web

# Re-run database migrations
docker compose --profile local restart supabase-migrate

# Re-create MinIO buckets
docker compose --profile local restart minio-init

# Re-create LocalStack resources
docker exec accessible-pdf-localstack /etc/localstack/init/ready.d/init-aws.sh
```

Troubleshooting

Services won’t start

```sh
# Check container status
docker compose --profile local ps

# Check logs for a failing service
docker compose --profile local logs supabase-db
docker compose --profile local logs localstack
```

Common causes:

  • Docker not running — start Docker Desktop
  • Port conflict — another process is using a required port. Check with lsof -i :8800
  • Insufficient memory — increase Docker Desktop RAM allocation to 8+ GB

Supabase migrations fail

```sh
docker compose --profile local logs supabase-migrate
```

If migrations have errors, they may be idempotent (re-creating existing objects). The init script logs warnings but continues. If you need a clean slate, reset the volume:

```sh
docker compose --profile local down -v
docker compose --profile local up -d
```

API nodes can’t connect to Supabase

Check that Kong is running and healthy:

```sh
curl http://localhost:8000/auth/v1/health
```

If not, check the dependency chain: supabase-db → supabase-auth → supabase-kong. The DB must be healthy before auth starts, and auth must be healthy before Kong starts.

KV data lost after restart

Expected behavior. In-memory KV resets when API node containers restart. Sessions will be invalidated — users just need to sign in again. In production, Cloudflare KV persists across requests.

Port 3000 conflict (Grafana vs Next.js)

Grafana is bound to 127.0.0.1:3000. The Next.js dev server will auto-detect the conflict and start on port 3001 instead. Access Grafana at http://localhost:3000 and the web app at http://localhost:3001.

LocalStack resources missing after restart

LocalStack init scripts run from /etc/localstack/init/ready.d/ on container startup. If the container restarted but resources are missing:

```sh
docker exec accessible-pdf-localstack /etc/localstack/init/ready.d/init-aws.sh
```

Full reset

```sh
docker compose --profile local down -v    # Remove all containers and volumes
./scripts/setup-local.sh                  # Start fresh
```

Memory Requirements

The full stack uses approximately 8-10 GB of RAM:

| Service | Approximate RAM |
|---|---|
| Postgres | 500 MB |
| GoTrue + Kong + PostgREST + Meta | 400 MB |
| MinIO | 200 MB |
| LocalStack | 500 MB |
| API Node x2 (Puppeteer/Chrome) | 1 GB each |
| WeasyPrint | 300 MB |
| Batch Worker | 500 MB |
| Monitoring (Loki/Grafana/Promtail) | 500 MB |

If your machine is constrained, use profiles to run only what you need. The local profile starts everything; the default profile (no flag) starts only the 5 core services (~3 GB).