Common Technology Platform — Constellation
Document Version: 1.0 — Draft for Review
Date: 26 February 2026
Status: Draft — Pending Review Before Implementation
Scope: Technology decisions for the Constellation platform
1. Purpose
Constellation is a modular enterprise commerce collaboration platform (8 modules) built by a small team (1-2 humans directing AI coding agents). Development is 100% AI-assisted, using TypeScript, PostgreSQL, and multi-tenant RLS. Auth, tenancy, events, and audit are required across all modules.
The Catalog module incorporates AI-first product information management capabilities (formerly developed as a separate "Stella Catalog" product). These features -- including AI-powered embeddings, hybrid semantic search, CPQ, and supplier offer management -- are now built natively within Constellation's Catalog module.
This document establishes the common technology platform -- the set of decisions that apply to all Constellation modules. Individual modules extend these decisions with domain-specific choices but never contradict them.
2. Technology Stack Summary
| Decision | Standard | Notes |
|---|---|---|
| Language | TypeScript (strict) | All modules |
| Database | PostgreSQL 16+ (pgvector, LTREE) | Schema-per-module isolation |
| Tenant isolation | RLS with tenant_id on all tables | Enforced via SET LOCAL app.tenant_id |
| ID generation | UUID v4 | All tables |
| API style | REST-first | Shared response envelope |
| Auth model | JWT-based, external provider | Mock, Supabase, Keycloak providers |
| Validation | Zod 3.23+ (import { z } from 'zod') | All modules and platform packages |
| Application framework | Next.js 16 (App Router) | All modules |
| ORM | Prisma 6 | Apps use Prisma ORM; platform packages use raw SQL |
| Test runner | Vitest + fast-check + Playwright | Property-based testing adopted from Catalog heritage |
| Background jobs | PostgreSQL-native (pg_notify, polling) | No Redis required for base operation |
| Search | pgvector + hybrid semantic search | Catalog module; available to other modules |
| UI | Next.js SSR + Tailwind + shadcn | Modules with UI; Catalog APIs can be consumed headlessly |
| Deployment | Docker + cloud | Docker-first, cloud optional |
| Open-source friendly | Provider abstractions for self-hosting | On-premises deployments supported |
| AI development | 100% AI-assisted, CLAUDE.md per module | One agent per module |
| Monorepo | Turborepo | Single monorepo with workspace protocol |
3. Framework Decision: Next.js 16
With the merger of Stella Catalog into Constellation, the platform standardises on Next.js 16 (App Router) as the single application framework. All modules use Next.js for both UI and API routes.
The Catalog module's AI-heavy API endpoints (embeddings, hybrid search, CPQ) are implemented as Next.js API routes. For modules that are primarily API-driven (e.g., Trade Compliance, Supply Chain Visibility), the app/api/ convention provides sufficient structure without needing a separate backend framework.
Key insight: With a single framework, AI agents never context-switch. All modules share the same middleware patterns, auth wrappers, error handling, and deployment model. Shared packages can use Next.js-specific adapters without maintaining framework-agnostic alternatives.
4. Unified Technology Decisions
These decisions apply to all Constellation modules, effective immediately.
4.1 Database: PostgreSQL 16+ with Shared Extensions
Standard extensions (enabled in every database):
| Extension | Purpose | Used By |
|---|---|---|
| uuid-ossp | UUID generation | All |
| pgcrypto | Cryptographic functions | All |
| pgvector | Vector embeddings + similarity search | Catalog module, available to all modules |
| ltree | Hierarchical tree paths (taxonomy) | Catalog module, available to all modules |
| pg_trgm | Trigram text search | All (fuzzy text matching) |
Why pgvector and LTREE: The Catalog module needs product taxonomy (configurable industry taxonomy is hierarchical -- LTREE is the right tool). Semantic search over supplier capabilities and product attributes is a core Catalog feature. Other modules (Procurement, Supply Chain Visibility) benefit from these extensions for supplier search and document retrieval.
Standard conventions:
- All tables have `id UUID PRIMARY KEY DEFAULT gen_random_uuid()`
- All tenant-scoped tables have `tenant_id UUID NOT NULL REFERENCES identity.tenants(id)` (Note: earlier drafts referenced `tenancy.tenants` -- that name is superseded. The tenant registry lives in the `identity` schema.)
- All tables have `created_at TIMESTAMPTZ DEFAULT now()` and `updated_at TIMESTAMPTZ DEFAULT now()`
- RLS is enabled on every tenant-scoped table, no exceptions
- RLS policy pattern: `USING (tenant_id::text = current_setting('app.tenant_id', true))`
- Note: the session variable is `app.tenant_id` (set via `SET LOCAL app.tenant_id = $1` by `@constellation-platform/db`). Earlier drafts of this document used `app.current_tenant` -- that name is superseded. All implementations use `app.tenant_id`.
- Schema-per-module separation within a single PostgreSQL instance
- JSONB for flexible attributes (product specs, settings, metadata)
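The `SET LOCAL` tenant-scoping pattern above can be sketched as a transaction wrapper. This is a hypothetical illustration; the real helper lives in `@constellation-platform/db` and its exact API may differ. `exec` stands in for any parameterised SQL executor.

```typescript
// Hypothetical sketch of the tenant-scoping pattern. The actual helper in
// @constellation-platform/db may differ; `withTenant` is an illustrative name.
type Exec = (sql: string, params?: unknown[]) => Promise<void>;

async function withTenant<T>(
  exec: Exec,
  tenantId: string,
  fn: () => Promise<T>
): Promise<T> {
  await exec('BEGIN');
  try {
    // set_config(..., ..., true) is the parameterisable equivalent of
    // SET LOCAL app.tenant_id = $1: the setting resets when the transaction ends.
    await exec("SELECT set_config('app.tenant_id', $1, true)", [tenantId]);
    const result = await fn();
    await exec('COMMIT');
    return result;
  } catch (err) {
    await exec('ROLLBACK');
    throw err;
  }
}
```

Because the variable is set with the transaction-local flag, RLS policies reading `current_setting('app.tenant_id', true)` see the tenant only for the duration of that transaction.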
4.2 ORM: Prisma 6 — Data Access Policy (Standardised)
Decision (updated per ADR-004): Prisma 6 is used for connection management, TypeScript type generation, AND domain queries in application modules (apps/*). Raw SQL via $queryRawUnsafe / $executeRawUnsafe is required only for platform packages (packages/platform/*), SQL migration files, and cross-schema reads (e.g., reading from identity.* tables which are outside a module's own Prisma schema).
Database schemas are managed by hand-written SQL migration files, not by prisma db push or Prisma-managed migrations.
Data access rules by layer:
| Layer | Prisma ORM | Raw SQL | Notes |
|---|---|---|---|
| apps/* (module code) | ✅ Recommended | ✅ Acceptable | Use Prisma for domain queries; raw SQL for cross-schema reads |
| packages/platform/* | ❌ Not used | ✅ Required | Platform packages must not depend on any module's Prisma schema |
| SQL migrations | ❌ | ✅ Required | Hand-written .sql files, not prisma db push |
| Identity schema reads | ❌ | ✅ Required | Prisma cannot query tables outside its own schema |
Rationale:
| Reason | Detail |
|---|---|
| Enterprise-grade RLS control | RLS policies enforced via SET LOCAL app.tenant_id in the Prisma tenant-scoping extension |
| Schema-per-module isolation | Hand-written migrations allow precise control over schema boundaries and cross-schema FKs |
| Type safety | Prisma generates TypeScript types from schema.prisma for compile-time safety |
| Developer velocity | Prisma ORM eliminates boilerplate row mappers for standard domain CRUD |
| Cross-schema reads | Raw SQL required when querying identity.* tables (outside the module's Prisma schema) |
Trade-off acknowledged: Using Prisma ORM in apps means queries depend on the generated client type. This is accepted for the developer velocity gain. Platform packages use raw SQL to remain schema-agnostic.
What Prisma does in Constellation:
- Connection pooling (PgBouncer for serverless, direct connections for self-hosted)
- `$transaction()` for atomic operations with `SET LOCAL app.tenant_id`
- `prisma generate` for TypeScript types from `schema.prisma`
- Domain queries in application modules (`apps/*`), per ADR-004

What Prisma does NOT do in Constellation:
- Schema migrations (`prisma db push` / `prisma migrate`) -- schemas are managed by hand-written SQL migration files
- Queries in platform packages (`packages/platform/*`) or cross-schema reads of `identity.*` tables -- these use raw SQL
4.3 Validation: Zod 3.23+ (Standardised)
Decision: Zod 3.23+ for all input validation, API request/response schemas, and event contracts. The import convention is:
// Import convention (all modules and platform packages)
import { z } from 'zod';
This is the version and import path used by all existing platform packages and the Directory module. The project-tracker currently uses Zod v4 (`import { z } from 'zod/v4'`) and must be downgraded to match during its monorepo integration step.
Note: the earlier draft of this document specified Zod v4 as the standard. That decision is superseded -- Constellation standardised on Zod 3.23+ during Phase 0 implementation, and all built code uses `import { z } from 'zod'`.
4.4 Testing: Vitest + fast-check (Standardised)
Decision: Vitest as the test runner for all modules. fast-check for property-based testing (adopted from Catalog heritage).
| Tool | Purpose |
|---|---|
| Vitest | Unit + integration test runner |
| fast-check | Property-based testing |
| Playwright | E2E browser testing |
| Supertest | API integration testing (headless/API-only modules) |
Why Vitest: Vitest is fast, natively supports TypeScript and ESM, and is compatible with Jest's API. One test runner means one set of mocking patterns, one set of CI configs, one set of AI agent conventions.
4.5 Authentication: Shared JWT + Provider Abstraction
Decision: All modules use the same auth pattern -- external identity provider issues JWTs, applications validate them.
Shared auth contract (JWT claims):
interface ConstellationJWT {
sub: string; // User ID (UUID)
tenant_id: string; // Tenant ID (UUID)
org_id?: string; // Organisation ID (UUID)
roles: string[]; // Role names (e.g., ["procurement_officer", "catalog_manager"])
scope: 'tenant' | 'platform'; // Access scope
clearance?: string; // Enterprise security tier: security clearance level
caveats?: string[]; // Enterprise security tier: access caveats
iat: number;
exp: number;
}
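To illustrate how a module might consume these claims, here is a minimal, hedged sketch of a role check. `requireRole` is a hypothetical helper, not part of the `@constellation-platform/auth-core` public API, and the platform-scope bypass shown is an assumption about policy, not a confirmed rule.

```typescript
// Hedged sketch: gating access with the shared JWT claims.
// `requireRole` and the platform-scope bypass are illustrative assumptions.
interface ConstellationJWT {
  sub: string;
  tenant_id: string;
  org_id?: string;
  roles: string[];
  scope: 'tenant' | 'platform';
  clearance?: string;
  caveats?: string[];
  iat: number;
  exp: number;
}

function requireRole(jwt: ConstellationJWT, role: string): boolean {
  // Assumption for this sketch: platform-scoped tokens bypass tenant role
  // checks. The real policy in auth-core may be stricter.
  if (jwt.scope === 'platform') return true;
  return jwt.roles.includes(role);
}
```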
Provider implementations:
| Provider | Use Case | Status |
|---|---|---|
| MockAuthProvider | Local development | Exists (project-tracker) |
| SupabaseAuthProvider | SaaS deployment | Exists (project-tracker) |
| KeycloakAuthProvider | Self-hosted, federation, regulated deployments | Planned |
Key principle: The auth provider interface lives in a shared package (@constellation-platform/auth-core). The Next.js adapter (@constellation-platform/auth-next) provides middleware and context helpers.
4.6 Event Bus: PostgreSQL-Native (Standardised)
Decision: All modules use PostgreSQL LISTEN/NOTIFY + outbox table for event-driven communication. No Redis requirement for events.
Why PostgreSQL-native:
- No additional runtime dependency (simplifies deployment, especially self-hosted and air-gapped)
- PostgreSQL handles background job needs via a polling worker or `pg_cron`
- All modules share the same event infrastructure
- One fewer technology to operate, monitor, and secure
Background jobs (replacing BullMQ):
| Approach | Use Case |
|---|---|
| PostgreSQL polling worker | Embedding generation, bulk import, media processing |
| pg_notify + listener | Real-time event propagation between modules |
| platform.job_queue table | Persistent job queue with retry, dead-letter, priority |
Trade-off acknowledged: BullMQ has more sophisticated scheduling (cron, rate limiting, concurrency control). If the Catalog module's embedding/media workload outgrows PostgreSQL-based queuing, Redis can be added as an optional enhancement -- but it should not be required for basic operation.
4.7 Background Job Processing: @platform/jobs
Decision: Create a shared job processing package that works with PostgreSQL as the default backend, with an optional Redis/BullMQ adapter for high-throughput scenarios.
interface JobQueue {
add(jobType: string, payload: unknown, options?: JobOptions): Promise<string>;
process(jobType: string, handler: JobHandler): void;
getStatus(jobId: string): Promise<JobStatus>;
}
interface JobOptions {
priority?: number;
delay?: number; // milliseconds
maxRetries?: number;
backoff?: 'exponential' | 'linear';
}
// Implementations
class PostgresJobQueue implements JobQueue { ... } // Default
class BullMQJobQueue implements JobQueue { ... } // Optional, requires Redis
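The retry options above imply a backoff schedule. A worker for `PostgresJobQueue` might compute the next retry delay as in this sketch; the base delay and cap are illustrative choices, not values from the actual package.

```typescript
// Hypothetical retry-delay calculation for a job queue worker, based on the
// JobOptions interface above. The 1s base and 5-minute cap are assumptions.
interface JobOptions {
  priority?: number;
  delay?: number; // milliseconds
  maxRetries?: number;
  backoff?: 'exponential' | 'linear';
}

function nextRetryDelayMs(attempt: number, opts: JobOptions, baseMs = 1000): number {
  const capMs = 5 * 60 * 1000; // never wait more than 5 minutes between retries
  const delay =
    opts.backoff === 'exponential'
      ? baseMs * 2 ** (attempt - 1) // 1s, 2s, 4s, 8s, ...
      : baseMs * attempt;           // 1s, 2s, 3s, 4s, ... (linear)
  return Math.min(delay, capMs);
}
```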
4.8 API Response Format (Standardised)
Decision: All APIs across all modules use the same response envelope:
// Success
{ "data": T }
{ "data": T[], "meta": { "total": number, "page": number, "pageSize": number } }
// Error
{ "error": { "code": string, "message": string, "details"?: unknown } }
HTTP status codes: standard REST conventions. 200 success, 201 created, 400 validation error, 401 unauthenticated, 403 forbidden, 404 not found, 409 conflict, 422 unprocessable entity, 500 internal error.
Pagination: Cursor-based by default (?cursor=xxx&limit=50). Offset-based available where needed (?page=1&pageSize=50).
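A minimal sketch of envelope constructors that modules could share. The helper names `ok`, `paginated`, and `fail` are illustrative, not an existing platform API.

```typescript
// Illustrative constructors for the shared response envelope above.
// The function names are hypothetical, not a published package API.
interface Meta { total: number; page: number; pageSize: number; }

function ok<T>(data: T): { data: T } {
  return { data };
}

function paginated<T>(data: T[], meta: Meta): { data: T[]; meta: Meta } {
  return { data, meta };
}

function fail(code: string, message: string, details?: unknown) {
  // Omit `details` from the envelope entirely when it is not provided.
  return { error: { code, message, ...(details !== undefined ? { details } : {}) } };
}
```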
4.9 Error Handling (Standardised)
// Shared error classes
class AppError extends Error {
constructor(
public code: string,
public message: string,
public statusCode: number,
public details?: unknown
) { super(message); }
}
class NotFoundError extends AppError { ... }
class ValidationError extends AppError { ... }
class UnauthorizedError extends AppError { ... }
class ForbiddenError extends AppError { ... }
class ConflictError extends AppError { ... }
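One way these classes could map onto the shared error envelope from Section 4.8 is sketched below; `toErrorResponse` is a hypothetical helper name, and the constructor shapes are illustrative.

```typescript
// Sketch: converting shared error classes into the standard API error
// envelope. `toErrorResponse` and the subclass constructors are assumptions.
class AppError extends Error {
  constructor(
    public code: string,
    message: string,
    public statusCode: number,
    public details?: unknown
  ) {
    super(message);
  }
}

class NotFoundError extends AppError {
  constructor(resource: string) {
    super('NOT_FOUND', `${resource} not found`, 404);
  }
}

function toErrorResponse(err: unknown): { status: number; body: object } {
  if (err instanceof AppError) {
    return {
      status: err.statusCode,
      body: { error: { code: err.code, message: err.message, details: err.details } },
    };
  }
  // Never leak internals for unexpected errors.
  return { status: 500, body: { error: { code: 'INTERNAL', message: 'Internal server error' } } };
}
```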
4.10 Logging (Standardised)
Decision: Structured JSON logging with correlation IDs, consistent across all modules.
interface LogEntry {
level: 'debug' | 'info' | 'warn' | 'error';
message: string;
timestamp: string;
correlationId: string;
tenantId?: string;
userId?: string;
module: string; // e.g., 'constellation.catalog', 'constellation.directory'
duration?: number; // milliseconds (for request logs)
error?: { name: string; message: string; stack?: string };
}
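A minimal structured logger conforming to this shape might look like the following sketch; the actual logging package may differ, and the injectable `sink` (defaulting to `console.log`) is an illustrative choice.

```typescript
// Sketch of a structured JSON logger emitting the LogEntry shape above.
// `createLogger` and the `sink` parameter are hypothetical.
interface LogEntry {
  level: 'debug' | 'info' | 'warn' | 'error';
  message: string;
  timestamp: string;
  correlationId: string;
  tenantId?: string;
  userId?: string;
  module: string;
  duration?: number;
  error?: { name: string; message: string; stack?: string };
}

function createLogger(
  module: string,
  correlationId: string,
  sink: (line: string) => void = console.log
) {
  const emit = (level: LogEntry['level']) =>
    (message: string, extra: Partial<LogEntry> = {}) => {
      const entry: LogEntry = {
        level,
        message,
        timestamp: new Date().toISOString(),
        correlationId,
        module,
        ...extra, // optional fields: tenantId, userId, duration, error
      };
      sink(JSON.stringify(entry));
    };
  return { debug: emit('debug'), info: emit('info'), warn: emit('warn'), error: emit('error') };
}
```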
4.11 Deployment: Docker-First (Standardised)
Decision: Every module must run in a Docker container. Cloud-specific deployments (Vercel, etc.) are optional optimisations, never requirements.
# Standard Docker Compose for local development
services:
postgres:
image: pgvector/pgvector:pg16 # PostgreSQL 16 + pgvector
keycloak:
image: quay.io/keycloak/keycloak:23.0 # Auth provider (optional for local dev)
minio:
image: minio/minio # S3-compatible object storage
No Redis in base stack. PostgreSQL handles jobs and events. Redis is an optional add-on for high-throughput workloads.
4.12 AI Development Conventions (Standardised)
These rules apply to AI agents working on any Constellation module:
| Rule | Detail |
|---|---|
| One agent per module/service | Never modify two apps in one session |
| CLAUDE.md per app | Domain context, commands, conventions, scope boundaries |
| Module file limit | Warn at 120 files, hard limit at 200 |
| Shared packages are read-only | Dedicated platform agent for shared code |
| Event contracts append-only | Existing events never modified without version bump |
| Tests mandatory | No PR merges without passing tests |
| Human review required | AI agents create PRs, humans approve |
5. Shared Package Architecture
All modules consume from a common set of platform packages.
packages/
├── platform/
│ ├── auth-core/ # Framework-agnostic JWT validation, permissions, providers
│ │ └── src/
│ │ ├── index.ts # Public API barrel
│ │ ├── jwt.ts # JWT validation logic
│ │ ├── permissions.ts # Permission checking (action + resource)
│ │ ├── providers.ts # Mock, Supabase, Keycloak implementations
│ │ ├── types.ts # ConstellationJWT, AuthConfig, etc.
│ │ └── test-utils.ts # Auth test helpers
│ ├── auth-next/ # Next.js-specific auth adapter
│ │ └── src/
│ │ ├── index.ts # Public API barrel
│ │ ├── middleware.ts # Next.js auth middleware
│ │ ├── context.ts # Server-side auth context (getCurrentUser)
│ │ └── with-auth.ts # Higher-order auth wrapper
│ ├── db/ # Prisma client factory, RLS helpers, base types
│ │ ├── client.ts # Tenant-aware Prisma client factory
│ │ ├── rls.ts # RLS policy helpers
│ │ └── types.ts # Base model types (id, tenantId, timestamps)
│ ├── events/ # Typed event bus (pg_notify + outbox)
│ │ ├── bus.ts # Event publisher/subscriber
│ │ ├── outbox.ts # Transactional outbox pattern
│ │ └── contracts/ # Event schemas per domain
│ ├── jobs/ # Background job queue
│ │ ├── interface.ts # JobQueue interface
│ │ ├── postgres.ts # PostgreSQL implementation
│ │ └── bullmq.ts # Optional BullMQ implementation
│ ├── audit/ # Immutable audit logging
│ ├── notifications/ # Email, in-app, webhook notifications
│ ├── security/ # Encryption, classification, enterprise security extensions
│ └── errors/ # Shared error classes and handling
├── ai/ # AI/ML utilities (when stabilised)
│ ├── embeddings.ts # Embedding generation with dirty-checking
│ ├── search.ts # Hybrid search (vector + faceted)
│ └── mcp/ # MCP tool framework
├── ui/ # Design system (shadcn/ui + Tailwind)
│ ├── components/ # shadcn/ui + custom components
│ └── tailwind/ # Shared Tailwind config
├── tsconfig/ # Shared TypeScript configurations
├── eslint-config/ # Shared ESLint rules
└── testing/ # Shared test utilities
├── factories.ts # Test data factories
├── db.ts # Test database setup/teardown
└── auth.ts # Mock auth helpers for tests
Note: Earlier drafts of this document showed a single packages/platform/auth/ directory with core.ts, nextjs.ts, and nestjs.ts files. The actual implementation uses two separate packages: @constellation-platform/auth-core (framework-agnostic) and @constellation-platform/auth-next (Next.js adapter). All implementations should use the package names above.
Package Architecture Pattern
Shared packages separate framework-agnostic core logic from Next.js-specific adapters:
// @constellation-platform/auth-core — NO framework imports
// packages/platform/auth-core/src/jwt.ts
export function validateJWT(token: string, config: AuthConfig): ConstellationJWT { ... }
// packages/platform/auth-core/src/permissions.ts
export function checkPermission(user: ConstellationJWT, action: string, resource: string): boolean { ... }
export function extractTenantId(jwt: ConstellationJWT): string { ... }
// @constellation-platform/auth-next — Next.js adapter
// packages/platform/auth-next/src/middleware.ts
import { validateJWT, extractTenantId } from '@constellation-platform/auth-core';
export function authMiddleware(request: NextRequest): NextResponse { ... }
All module agents import from @constellation-platform/auth-next. The core logic in @constellation-platform/auth-core is tested independently and can be reused outside of Next.js if needed (e.g., CLI tools, scripts).
6. Repository Strategy
Constellation uses a single Turborepo monorepo with workspace protocol:
constellation/
├── apps/
│ ├── directory/ # Identity, orgs, users, roles
│ ├── catalog/ # Product cataloguing (includes AI/PIM features)
│ ├── project-tracker/ # Project coordination
│ ├── procurement/ # RFQ, CPQ, scoring, orders
│ └── ... # Future modules
├── packages/
│ ├── platform/ # Shared infrastructure packages
│ │ ├── auth-core/
│ │ ├── auth-next/
│ │ ├── db/
│ │ ├── events/
│ │ ├── jobs/
│ │ ├── audit/
│ │ ├── errors/
│ │ └── security/
│ ├── ai/ # AI/ML utilities (embeddings, search, MCP)
│ ├── ui/ # Design system (shadcn/ui + Tailwind)
│ ├── tsconfig/ # Shared TypeScript configurations
│ ├── eslint-config/ # Shared ESLint rules
│ └── testing/ # Shared test utilities
├── turbo.json # Build orchestration
└── package.json # Workspace root
Why a single monorepo:
- Shared packages always in sync (workspace protocol, no version drift)
- Each module in its own `apps/` directory (clear AI agent boundaries)
- Single `git clone` for the whole platform
- Turborepo handles build ordering across all modules
- For a team of 1-2 humans + AI agents, a single repo minimises operational overhead
7. The @constellation-platform/ai Package: When to Extract
The Catalog module includes AI capabilities (embeddings, hybrid search, MCP tools). Other modules (Procurement, Supply Chain Visibility) will need similar capabilities. The question is when to extract shared AI utilities.
Phase 1: Build in Catalog (Now)
AI features are built natively in the Catalog module. The Catalog module owns embeddings, hybrid search, and AI-powered product enrichment.
Phase 2: Extract When Proven (After 2 Modules Use It)
Once both Catalog and a second module (e.g., Procurement for supplier search) have working vector search + embedding pipelines, extract the common parts into @constellation-platform/ai:
// @constellation-platform/ai/embeddings.ts
interface EmbeddingProvider {
generate(text: string): Promise<number[]>;
dimensions: number;
model: string;
}
class OpenAIEmbeddings implements EmbeddingProvider { ... }
class CohereEmbeddings implements EmbeddingProvider { ... }
class LocalEmbeddings implements EmbeddingProvider { ... }
// Dirty-checking utility
function needsReembedding(content: string, storedHash: string): boolean { ... }
// @constellation-platform/ai/search.ts
interface HybridSearchConfig {
vectorWeight: number; // 0.0 - 1.0
textWeight: number; // 0.0 - 1.0
reranker?: RerankerConfig;
}
function hybridSearch(query: string, config: HybridSearchConfig): Promise<SearchResult[]> { ... }
// @constellation-platform/ai/mcp/
interface MCPTool {
name: string;
description: string;
inputSchema: z.ZodSchema;
execute(input: unknown, context: TenantContext): Promise<unknown>;
}
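The dirty-checking utility and hybrid score blending above can be sketched as follows. Using SHA-256 of the text as the stored hash, and a plain weighted sum for score blending, are assumptions for illustration, not the confirmed formats.

```typescript
import { createHash } from 'node:crypto';

// Sketch of needsReembedding: re-embed only when the content hash changed.
// SHA-256 hex of the raw text is an assumed stored-hash format.
function contentHash(content: string): string {
  return createHash('sha256').update(content).digest('hex');
}

function needsReembedding(content: string, storedHash: string): boolean {
  return contentHash(content) !== storedHash;
}

// Sketch of blending per HybridSearchConfig: a weighted sum of the vector
// similarity score and the full-text score. Rerankers are out of scope here.
function hybridScore(
  vectorScore: number,
  textScore: number,
  vectorWeight: number,
  textWeight: number
): number {
  return vectorWeight * vectorScore + textWeight * textScore;
}
```

Dirty-checking keeps embedding costs proportional to actual content changes: unchanged products skip the embedding provider entirely.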
Phase 3: MCP Tool Framework
Once multiple modules expose MCP tools, extract a shared MCP server framework:
// @constellation-platform/ai/mcp/server.ts
class MCPServer {
registerTool(tool: MCPTool): void;
handleRequest(request: MCPRequest, context: TenantContext): Promise<MCPResponse>;
}
8. Catalog Module Feature Roadmap
The Catalog module absorbs AI-first product information management capabilities. These features are built natively in Constellation using the platform's standard patterns (Next.js, Prisma, Vitest, PostgreSQL-native jobs).
Phase 1: Core Catalog (Foundation)
| Feature | Description |
|---|---|
| Product CRUD | Create, read, update, delete products with tenant scope |
| Configurable industry taxonomy | LTREE-based hierarchical classification system |
| Attribute management | Flexible product attributes via JSONB |
| Media management | Product images and documents via S3-compatible storage |
| Basic search | Full-text search with pg_trgm |
Phase 2: AI-Powered Features
| Feature | Description |
|---|---|
| AI embeddings | Vector embeddings for products with dirty-checking |
| Hybrid semantic search | pgvector + faceted filters for intelligent product search |
| AI-powered enrichment | Automated product description and attribute generation |
| Supplier offer management | Multi-supplier pricing and availability |
Phase 3: Advanced Capabilities
| Feature | Description |
|---|---|
| CPQ (Configure-Price-Quote) | Quote generation with configurable pricing rules |
| Bulk import/export | CSV/Excel import with validation and background processing |
| MCP tool integration | AI agent access to catalog data via MCP protocol |
| Cross-module integration | Procurement and Supply Chain modules consume Catalog APIs |
9. Decision Summary
| Decision | Choice | Rationale |
|---|---|---|
| Framework | Next.js 16 (App Router) for all modules | Single framework, no context-switching |
| Database | PostgreSQL 16 + pgvector + LTREE + RLS | Full-featured, enterprise-grade tenant isolation |
| ORM | Prisma 6 (all modules) | Type safety, AI-friendly, developer velocity |
| Validation | Zod 3.23+ (import { z } from 'zod') | Unified validation across all modules |
| Testing | Vitest + fast-check + Playwright | Unified test runner, property-based testing |
| Auth | Shared JWT contract + provider abstraction | Core package + Next.js adapter |
| Events | PostgreSQL LISTEN/NOTIFY + outbox | No Redis dependency for events |
| Background jobs | @platform/jobs with PostgreSQL default | Redis optional for high-throughput |
| API format | Shared response envelope, error classes | Consistency for AI agents and consumers |
| Logging | Structured JSON with correlation IDs | Unified observability |
| Deployment | Docker-first, cloud optional | All modules must run in Docker |
| Repository | Single Turborepo monorepo | Shared packages always in sync |
| AI/ML | Extract @constellation-platform/ai after 2 modules use it | Avoid premature abstraction |
| Catalog | Merged product (Stella features absorbed into Catalog) | Single platform, unified codebase |
10. Open Questions
Q1: Prisma for Complex AI Queries
The Catalog module's AI features (hybrid search, embedding pipelines) may require complex PostgreSQL queries (CTEs, window functions, pgvector operators). Should we:
- (a) Use Prisma ORM for standard CRUD and `$queryRaw` for complex vector/search queries?
- (b) Keep all Catalog search queries as raw SQL and only use Prisma for domain CRUD?
Q2: Redis for High-Throughput Workloads
The Catalog module's embedding generation, media processing, and bulk import may benefit from Redis + BullMQ for sophisticated scheduling. Should we:
- (a) Start with PostgreSQL-based jobs and add Redis only if performance requires it?
- (b) Include Redis as optional from day one for the Catalog module?
Q3: Package Registry
Shared packages use the workspace protocol (no publishing). But if a customer wants to use @constellation-platform/auth-core independently (e.g., a third-party integration), we would need to publish. Should we set up a private npm registry from day one?
This document establishes the technology foundation for the Constellation platform. It should be reviewed alongside the Constellation Architecture Spec v1.