Project Tracker — Deployment & Migration Plan

Strategy: Option B — New Vercel project, existing Supabase database

Branch: feat/pt-migration-step1 → merge to main before deployment

Estimated downtime: 10–15 minutes (freeze-window cutover)

Prerequisites

What I need from you

  1. Supabase connection strings (from the existing PT Vercel project):

    • DATABASE_URL (pooled connection, port 6543)
    • DIRECT_URL (direct connection, port 5432)
    • NEXT_PUBLIC_SUPABASE_URL
    • NEXT_PUBLIC_SUPABASE_ANON_KEY
    • SUPABASE_SERVICE_ROLE_KEY
  2. Vercel access:

    • Team/org where the new Vercel project will be created
    • Confirm: create new project constellation-project-tracker (or preferred name)
  3. Domain decision:

    • Keep existing domain and switch DNS after validation?
    • Or use a new subdomain (e.g., pt.constellation.app)?
  4. Timing:

    • Schedule a maintenance window (15 min is sufficient)
    • Notify users that the app will be briefly unavailable
  5. Supabase database permissions:

    • Confirm the database user can CREATE SCHEMA (needed for the identity, projects, and audit schemas)
    • If not, we will need to run the migrations manually via the Supabase dashboard SQL Editor
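
One way to confirm the CREATE SCHEMA question up front (a sketch, run in the Supabase SQL Editor): `has_database_privilege` reports whether the current role holds CREATE on the database, which is the privilege CREATE SCHEMA requires.

```sql
-- Quick check: "t" means CREATE SCHEMA will succeed for this role;
-- "f" means plan on the SQL Editor fallback for the migrations.
SELECT current_user,
       has_database_privilege(current_user, current_database(), 'CREATE') AS can_create_schema;
```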

Phase 1: Pre-Deployment Preparation (no downtime)

These steps can be done days before the actual cutover.

1.1 Merge PR to main

# Replace <PR_NUMBER> with the deployment-branch PR ID
gh pr merge <PR_NUMBER> --squash
git checkout main && git pull

1.2 Create new Vercel project

# From monorepo root
vercel link # or create via Vercel dashboard

Configure in Vercel dashboard:

  • Framework: Next.js
  • Root Directory: apps/project-tracker
  • Build Command: cd ../.. && npx turbo run build --filter=@constellation/project-tracker
  • Output Directory: apps/project-tracker/.next
  • Install Command: npm install --legacy-peer-deps
  • Node.js Version: 20.x

1.3 Set environment variables in new Vercel project

# Database (same as existing PT)
DATABASE_URL=<from existing project>
DIRECT_URL=<from existing project>

# Auth
AUTH_PROVIDER=supabase
NEXT_PUBLIC_AUTH_PROVIDER=supabase
NEXT_PUBLIC_SUPABASE_URL=<from existing project>
NEXT_PUBLIC_SUPABASE_ANON_KEY=<from existing project>
SUPABASE_SERVICE_ROLE_KEY=<from existing project>

# App
PORT=3000
NODE_ENV=production

# Optional
RESEND_API_KEY=<if using email notifications>
CRON_SECRET=<generate new: openssl rand -hex 32>
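
Before the first deploy, a quick local sanity check can catch a missing variable. This is a sketch; `check_env` is a hypothetical helper, and the variable names are the ones listed above.

```shell
# Sketch: report any unset variables from the list above.
check_env() {
  missing=0
  for v in "$@"; do
    eval "val=\${$v:-}"
    if [ -z "$val" ]; then
      echo "MISSING: $v"
      missing=1
    fi
  done
  [ "$missing" -eq 0 ] && echo "all set" || echo "incomplete"
}

# Example with a dummy value; in practice, export the real values first:
DATABASE_URL=placeholder check_env DATABASE_URL
```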

1.4 Test deployment (preview)

Push to a branch to get a Vercel preview URL. The app will only partially work at this stage (the old public schema is still active; the identity and projects schemas do not exist yet). Verify that the build succeeds and the app loads.

If you have a staging Supabase instance, run the full migration there first:

# Point to staging database
export DATABASE_URL=<staging-connection-string>
export DIRECT_URL=<staging-direct-connection-string>

# Run migrations
psql "$DIRECT_URL" -f apps/directory/prisma/migrations/001_directory_schema.sql
psql "$DIRECT_URL" -f apps/directory/prisma/migrations/003_add_users_metadata.sql
psql "$DIRECT_URL" -f apps/project-tracker/prisma/migrations/001_projects_schema.sql
psql "$DIRECT_URL" -f apps/project-tracker/prisma/migrations/002_seed_permissions.sql

# Dry-run data migration
cd apps/project-tracker
npx tsx scripts/migrate-data.ts --dry-run
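
The four psql invocations above can be wrapped so an error stops the sequence instead of silently continuing. A sketch, assuming DIRECT_URL is exported and you run from the monorepo root; `-v ON_ERROR_STOP=1` makes psql exit non-zero on the first SQL error.

```shell
# Sketch: apply the migration files in order, stopping at the first failure.
apply_migrations() {
  for f in "$@"; do
    echo "Applying $f"
    psql "$DIRECT_URL" -v ON_ERROR_STOP=1 -f "$f" || { echo "FAILED: $f" >&2; return 1; }
  done
  echo "All migrations applied"
}

apply_migrations \
  apps/directory/prisma/migrations/001_directory_schema.sql \
  apps/directory/prisma/migrations/003_add_users_metadata.sql \
  apps/project-tracker/prisma/migrations/001_projects_schema.sql \
  apps/project-tracker/prisma/migrations/002_seed_permissions.sql \
  || echo "Migration run stopped; fix the error above before re-running" >&2
```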

Phase 2: Cutover (10–15 min maintenance window)

2.0 Announce maintenance

Notify users the app will be briefly unavailable.

2.1 Freeze the old deployment

  • Set the old Vercel project to "paused" or put up a maintenance page
  • This prevents data writes during migration

2.2 Apply schema migrations

Connect to the production Supabase database via psql or Supabase SQL Editor. Run in this exact order:

-- Step 1: Create audit schema (referenced by projects schema)
CREATE SCHEMA IF NOT EXISTS audit;

CREATE TABLE IF NOT EXISTS audit.audit_entries (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  tenant_id UUID,
  actor_id UUID,
  actor_type TEXT NOT NULL DEFAULT 'user',
  action TEXT NOT NULL,
  resource_type TEXT NOT NULL,
  resource_id TEXT,
  module TEXT NOT NULL DEFAULT 'system',
  changes JSONB,
  correlation_id UUID DEFAULT gen_random_uuid(),
  created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);

-- Immutability triggers
CREATE OR REPLACE FUNCTION audit.reject_audit_mutation()
RETURNS TRIGGER AS $$
BEGIN
  RAISE EXCEPTION 'Audit entries are immutable — UPDATE and DELETE are not allowed';
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_audit_no_update
BEFORE UPDATE ON audit.audit_entries
FOR EACH ROW EXECUTE FUNCTION audit.reject_audit_mutation();

CREATE TRIGGER trg_audit_no_delete
BEFORE DELETE ON audit.audit_entries
FOR EACH ROW EXECUTE FUNCTION audit.reject_audit_mutation();

-- Step 2: Create identity schema (Directory module)
-- Run: apps/directory/prisma/migrations/001_directory_schema.sql
-- Then: apps/directory/prisma/migrations/003_add_users_metadata.sql

-- Step 3: Create projects schema
-- Run: apps/project-tracker/prisma/migrations/001_projects_schema.sql

-- Step 4: Seed permissions
-- Run: apps/project-tracker/prisma/migrations/002_seed_permissions.sql
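
After creating the audit schema, a quick smoke test (a sketch) confirms the immutability triggers are active; the UPDATE below is expected to fail with the exception raised by `reject_audit_mutation`:

```sql
-- Insert a throwaway entry, then confirm UPDATE is rejected.
INSERT INTO audit.audit_entries (action, resource_type)
VALUES ('migration.smoke_test', 'system');

-- Expected to FAIL with "Audit entries are immutable ...":
UPDATE audit.audit_entries
SET action = 'tampered'
WHERE action = 'migration.smoke_test';
```

Note that the throwaway row cannot be deleted afterwards; that is the immutability guarantee working as intended.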

2.3 Run data migration

cd apps/project-tracker

# Dry-run first (reads data, doesn't write)
DATABASE_URL=<production-pooled> \
DIRECT_URL=<production-direct> \
npx tsx scripts/migrate-data.ts --dry-run

# Review the report — check record counts match expectations

# Live migration
DATABASE_URL=<production-pooled> \
DIRECT_URL=<production-direct> \
npx tsx scripts/migrate-data.ts

# Verify: run a second time (should be idempotent — same counts)
DATABASE_URL=<production-pooled> \
DIRECT_URL=<production-direct> \
npx tsx scripts/migrate-data.ts

2.4 Verify data migration

-- Quick verification queries (run in Supabase SQL Editor)
SELECT 'tenants' AS entity, COUNT(*) FROM identity.tenants
UNION ALL SELECT 'users', COUNT(*) FROM identity.users
UNION ALL SELECT 'projects', COUNT(*) FROM projects.projects
UNION ALL SELECT 'tasks', COUNT(*) FROM projects.tasks
UNION ALL SELECT 'stages', COUNT(*) FROM projects.stages;

-- Compare with old tables
SELECT 'old_projects' AS entity, COUNT(*) FROM public."Project"
UNION ALL SELECT 'new_projects', COUNT(*) FROM projects.projects;
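
The same side-by-side pattern extends to the other entities. A sketch, assuming the old quoted PascalCase table names listed in the cleanup section:

```sql
-- Old vs new counts for the remaining entities (extend as needed)
SELECT 'users' AS entity,
       (SELECT COUNT(*) FROM public."User")  AS old_count,
       (SELECT COUNT(*) FROM identity.users) AS new_count
UNION ALL
SELECT 'tasks',
       (SELECT COUNT(*) FROM public."Task"),
       (SELECT COUNT(*) FROM projects.tasks)
UNION ALL
SELECT 'stages',
       (SELECT COUNT(*) FROM public."Stage"),
       (SELECT COUNT(*) FROM projects.stages);
```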

2.5 Deploy new Vercel project

# From your local checkout of the constellation repo, on the main branch:
git push # Vercel auto-deploys from main

Or deploy manually:

vercel --prod

2.6 Validate new deployment

Test these URLs on the new Vercel deployment:

GET / → Login page loads
GET /api/auth/me → Returns current user (after login)
GET /api/projects → Returns projects list
GET /api/programmes → Returns programmes list
GET /api/projects/<id>/tasks → Returns tasks for a project
GET /api/auth/permissions → Returns user permissions
GET /api/people → Returns org members
GET /api/time-entries → Returns time entries
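
The route checks can be scripted. A sketch: `BASE_URL` is a placeholder for the new deployment URL, and authenticated routes will report 401 or a redirect until a session cookie is supplied; the goal here is only to confirm the server responds.

```shell
# Sketch: print the HTTP status for each critical route.
BASE_URL="${BASE_URL:-https://example.vercel.app}"  # placeholder: set to the real URL

check_route() {
  path="$1"
  # curl prints only the status code; 000 means the request never completed
  code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 "$BASE_URL$path")
  echo "$path -> $code"
}

for p in / /api/auth/me /api/projects /api/programmes /api/auth/permissions /api/people /api/time-entries; do
  check_route "$p"
done
```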

Log in as each mock user and verify:

  • Dashboard loads with correct data
  • Projects show correct tasks, stages, gates
  • Kanban board works (drag and drop)
  • Task detail page loads with comments
  • Org settings page loads member list
  • Search works (Cmd+K)

2.7 DNS cutover (if using custom domain)

  • Point the custom domain to the new Vercel project
  • Or update the DNS CNAME from old Vercel deployment to new

2.8 Announce migration complete

Notify users the app is back online at the new URL (or same URL with new backend).

Phase 3: Post-Cutover Monitoring (48 hours)

3.1 Monitor for issues

  • Watch Vercel runtime logs for errors
  • Check Supabase logs for query failures
  • Monitor for RLS-related empty results (tenant context not set)
  • Watch for PrismaClientKnownRequestError in logs

3.2 Rollback procedure (if needed within 48h)

The old public.* tables are untouched. To roll back:

  1. Revert to the old Vercel deployment (one click in Vercel dashboard)
  2. Or re-enable the old Vercel project
  3. No database rollback needed — old tables are still there

3.3 Cleanup (after 48h rollback window)

Once confident the migration is stable:

-- WARNING: Irreversible! Only run after the 48h rollback window.
-- Drop old public schema tables (they've been copied to projects.*)

-- Run: apps/project-tracker/prisma/migrations/003_rollback.sql
-- (This drops ONLY the projects schema — for emergency rollback)

-- To drop OLD tables (cleanup, NOT rollback):
DROP TABLE IF EXISTS public."TimeEntry" CASCADE;
DROP TABLE IF EXISTS public."TaskFieldValue" CASCADE;
DROP TABLE IF EXISTS public."TaskDependency" CASCADE;
DROP TABLE IF EXISTS public."TaskApproval" CASCADE;
DROP TABLE IF EXISTS public."GateReview" CASCADE;
DROP TABLE IF EXISTS public."GateCriterion" CASCADE;
DROP TABLE IF EXISTS public."Gate" CASCADE;
DROP TABLE IF EXISTS public."Stage" CASCADE;
DROP TABLE IF EXISTS public."Comment" CASCADE;
DROP TABLE IF EXISTS public."FileAttachment" CASCADE;
DROP TABLE IF EXISTS public."NotificationQueue" CASCADE;
DROP TABLE IF EXISTS public."Invitation" CASCADE;
DROP TABLE IF EXISTS public."Feedback" CASCADE;
DROP TABLE IF EXISTS public."ProjectCollaborator" CASCADE;
DROP TABLE IF EXISTS public."ProjectField" CASCADE;
DROP TABLE IF EXISTS public."ProjectStatus" CASCADE;
DROP TABLE IF EXISTS public."ProgrammeDelegate" CASCADE;
DROP TABLE IF EXISTS public."Task" CASCADE;
DROP TABLE IF EXISTS public."Project" CASCADE;
DROP TABLE IF EXISTS public."Programme" CASCADE;
DROP TABLE IF EXISTS public."UserRole" CASCADE;
DROP TABLE IF EXISTS public."RolePermission" CASCADE;
DROP TABLE IF EXISTS public."Role" CASCADE;
DROP TABLE IF EXISTS public."OrganisationMember" CASCADE;
DROP TABLE IF EXISTS public."Organisation" CASCADE;
DROP TABLE IF EXISTS public."User" CASCADE;
DROP TABLE IF EXISTS public."AuditEntry" CASCADE;
DROP TABLE IF EXISTS public."AuthAuditLog" CASCADE;
DROP TABLE IF EXISTS public."_prisma_migrations" CASCADE;
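
Since DROP TABLE is transactional in Postgres, the cleanup can be wrapped in a transaction so an unexpected dependency error leaves everything intact (a sketch):

```sql
BEGIN;
-- paste the DROP TABLE statements above here
-- review the output; if anything looks wrong, run ROLLBACK; instead
COMMIT;
```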

Known Limitations at Launch

These items are documented in .kiro/specs/project-tracker-migration/tasks.md and will be addressed in follow-up PRs:

  1. Permission evaluator: PT still uses the local matchesPermission() with a 5-minute TTL cache instead of the platform checkPermissionDetailed(). Functionally equivalent, but permission revocations can be stale for up to 5 minutes.

  2. Event system: Event schemas are defined but publishEvent() is a stub (logs to console). No downstream consumers are affected since no other Constellation modules are deployed yet.

  3. Audit writes: Stubbed to console.debug(). Audit data from the old system is migrated to audit.audit_entries, but new writes go to the console until the platform audit integration is completed.

  4. 15 KanbanBoard tests: Excluded from CI due to a monorepo React resolution issue with @hello-pangea/dnd. The drag-and-drop functionality works correctly in the browser; this is a test-environment issue only.

Checklist

  • Get Supabase connection strings from existing Vercel project
  • Confirm database user can CREATE SCHEMA
  • Create new Vercel project with correct root directory config
  • Set all environment variables in new Vercel project
  • Merge PR #13 to main
  • Schedule maintenance window with users
  • Run dry-run migration against production DB
  • Apply schema migrations (audit → identity → projects → permissions)
  • Run live data migration
  • Verify data counts match
  • Deploy to Vercel production
  • Validate all critical routes
  • Test login + navigation as each user type
  • DNS cutover (if applicable)
  • Announce migration complete
  • Monitor for 48 hours
  • Drop old public tables after rollback window