E2E And Staging
This page describes the production-grade validation path for Wargrid.
Per-App E2E Layout
Each deployable app owns its own end-to-end entrypoint. This keeps local feedback fast and lets CI fan out work in parallel.
| App | Command | Notes |
|---|---|---|
| API | `bun run --cwd apps/api test:e2e` | Covers typed REST contracts and fixture reset routes. |
| Multiplayer | `bun run --cwd apps/multiplayer test:e2e` | Covers room join, messaging, and presence events. |
| Site | `bun run --cwd apps/site test:e2e` | Covers the marketing shell and legal/navigation entry points. |
| Play | `bun run --cwd apps/play test:e2e` | Covers auth, deck, social, lobby, and grid preview flows. |
| Admin | `bun run --cwd apps/admin test:e2e` | Covers admin auth, moderation, telemetry, and queue views. |
| Studio | `bun run --cwd apps/studio test:e2e` | Covers balancing and board inspection screens. |
| Docs | `bun run --cwd apps/docs test:e2e` | Covers the published documentation surface. |
| Blog | `bun run --cwd apps/blog test:e2e` | Covers the Astro blog shell. |
Browser app suites launch Playwright through `tools/e2e/playwright/run-playwright-cli.mjs`, a wrapper that runs the Playwright CLI under Node. The self-hosted Linux runner needs this indirection because direct `playwright test -c ...` calls inside Bun scripts hit `.esm.preflight` module-resolution failures.
Run the whole matrix with `bun run test:e2e`.
Fixture Isolation
`apps/play` and `apps/admin` share fixture-backed API state during E2E runs. The tests isolate their writes with the `x-wargrid-test-namespace` header and reset routes under `/api/testing/reset-fixtures`.
When you add a new mutable player or admin flow:
- move the mutable state into a testable runtime module
- keep the HTTP layer thin
- preserve namespace-aware reset behavior
CI Gate Order
Pull requests and main both use the same quality gates before any deployment:
- `bun install --frozen-lockfile`
- `bun run vendor-docs:sync`
- `bun run check`
- per-app `test:e2e`
`bun run check` currently expands to:

- `bun run typecheck`
- `bun run lint`
- `bun run test`
- `bun run storybook:build`
- `bun run docs:build`
- `bun run blog:build`
PR To Staging Flow
Wargrid uses a default cooperative staging rollout:
- open a pull request from a branch in the main repository
- let `validate` and the per-app `e2e` matrix finish
- let `deploy-staging` deploy the full stack to staging
- verify the change on the staging hosts
- merge the pull request
- let the production workflow deploy from `main`
The staging deployment runs from `.github/workflows/ci.yml`. Production runs from `.github/workflows/deploy-wargrid.yml`.
Staging Topology
Staging lives on the main server and mirrors production as closely as possible.
| Surface | Production | Staging |
|---|---|---|
| Site | wargrid.app | staging.wargrid.app |
| Play | play.wargrid.app | staging-play.wargrid.app |
| Admin | admin.wargrid.app | staging-admin.wargrid.app |
| API | api.wargrid.app | staging-api.wargrid.app |
| Docs | docs.wargrid.app | staging-docs.wargrid.app |
| Blog | wargrid.blog | staging-blog.wargrid.app |
| Multiplayer | dedicated node | staging-multiplayer.wargrid.app |
Before a staging rollout, the deployment script clones the production Postgres database into `wargrid_staging`. Then it deploys API, site, play, admin, docs, blog, and multiplayer in sequence. After the API container is up, it runs `bun run --cwd packages/db db:push` inside that container.
Password Protection
Staging is protected with nginx basic auth inside the Kamal-managed containers.
- Secret name: `STAGING_BASIC_AUTH_PASSWORD`
- Runtime switch: `BASIC_AUTH_ENABLED=true`
- Password file generation happens in `infra/docker/entrypoint.sh`
- Default username: `wargrid`
Keep the password in GitHub Actions secrets and local environment only. Do not commit the value.
The staging workflow writes `.kamal/secrets-common` and `.kamal/secrets.staging` on the GitHub runner immediately before invoking Kamal. This is required because Kamal destinations load secrets from those dotenv files during deploy.
The current CI and deploy flow runs on the self-hosted runner `wargrid-deploy-94` with labels `self-hosted`, `Linux`, `X64`, and `wargrid-deploy`. The app-specific E2E matrix remains split by app so additional runners can execute it in parallel later without a workflow rewrite.
Browser E2E and the production deploy workflow install Node 22 with `actions/setup-node@v4` before running Playwright.
Keep that step in place unless the shared launcher and the runner behavior are reworked together.
Staging API Routing
The staging API is currently served directly at `https://staging-api.wargrid.app`.
The staging play and admin apps point their `PUBLIC_API_URL` at that host. If same-origin `/api` proxying is added later, update the Kamal config, this document, and the AI notes in the same change.
Post-Deploy Smoke
After the Kamal rollout, CI calls `infra/scripts/verify-http-surface.mjs staging`. The smoke check verifies:

- `staging.wargrid.app`
- `staging-api.wargrid.app/up`
- `staging-api.wargrid.app/api/matchmaking/queues`
- `staging-play.wargrid.app`
- `staging-admin.wargrid.app`
- `staging-docs.wargrid.app`
- `staging-blog.wargrid.app`
This catches deploy-time problems that pure local Playwright runs can miss, such as bad production start commands or preview host allow-lists.
Local Docs Host Binding
`apps/docs` uses different host bindings for local and container use:

- local `start`, `preview`, and `serve:prod` bind to `127.0.0.1`
- Kamal uses `serve:prod:container`, which binds to `0.0.0.0`
Keep this split intact.
It prevents local browsers from opening invalid `http://0.0.0.0:4201/` URLs while preserving container reachability on the server.
Required Secrets
The deployment workflows expect these GitHub Actions secrets:
- `KAMAL_SSH_PRIVATE_KEY`
- `WARGRID_DATABASE_PASSWORD`
- `BETTER_AUTH_SECRET`
- `SMTP_PASS`
- `STAGING_BASIC_AUTH_PASSWORD`
Manual Verification Checklist
After staging is live:
- open the staging host and pass basic auth
- sign in with a player account
- sign in with an admin account
- verify the changed surface and any affected public hosts
- check that API-backed state persists and migrations applied cleanly
- merge only after staging behaves like production