OSS-first docs

These docs teach the open system first: contracts, generated surfaces, runtimes, governance, and incremental adoption. Studio shows up as the operating layer on top, not as the source of truth.

@contractspec/lib.evolution

Analyze production telemetry, surface anomalies, and turn them into AI-reviewed spec proposals that can be approved, rolled out, or reverted.
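The lifecycle implied above — proposed, approved, rolled out, reverted — can be sketched as a small state machine. The status names below mirror this description; they are illustrative, not the library's actual types:

```typescript
// Hypothetical sketch of the suggestion lifecycle described above.
type SuggestionStatus = 'proposed' | 'approved' | 'rejected' | 'rolled_out' | 'reverted';

// Allowed transitions between lifecycle states.
const transitions: Record<SuggestionStatus, SuggestionStatus[]> = {
  proposed: ['approved', 'rejected'],
  approved: ['rolled_out'],
  rolled_out: ['reverted'],
  reverted: [],
  rejected: [],
};

function canTransition(from: SuggestionStatus, to: SuggestionStatus): boolean {
  return transitions[from].includes(to);
}
```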

Installation

bun add @contractspec/lib.evolution

From telemetry to intent

import { SpecAnalyzer } from '@contractspec/lib.evolution/analyzer';
import { EvolutionPipeline } from '@contractspec/lib.observability';

const analyzer = new SpecAnalyzer();
const pipeline = new EvolutionPipeline({
  onIntent: (intent) => console.log('[intent]', intent),
});

// feed telemetry samples from tracing middleware
pipeline.ingest({
  operation: { name: 'billing.createInvoice', version: 4 },
  durationMs: 612,
  success: false,
  timestamp: new Date(),
  errorCode: 'VALIDATION_FAILED',
});
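An analyzer of this kind typically buckets samples per operation and flags elevated failure rates. A minimal sketch — the sample shape follows the snippet above, but the detection logic is illustrative, not the library's:

```typescript
// Sample shape mirroring the pipeline.ingest() call above.
interface TelemetrySample {
  operation: { name: string; version: number };
  durationMs: number;
  success: boolean;
  errorCode?: string;
}

// Illustrative anomaly check: flag an operation once its failure
// rate across the collected samples crosses a threshold.
function failingOperations(samples: TelemetrySample[], threshold = 0.2): string[] {
  const buckets = new Map<string, { total: number; failures: number }>();
  for (const s of samples) {
    const key = `${s.operation.name}@v${s.operation.version}`;
    const b = buckets.get(key) ?? { total: 0, failures: 0 };
    b.total += 1;
    if (!s.success) b.failures += 1;
    buckets.set(key, b);
  }
  return [...buckets.entries()]
    .filter(([, b]) => b.failures / b.total >= threshold)
    .map(([key]) => key);
}
```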

Generate & approve suggestions

import {
  SpecGenerator,
  SpecSuggestionOrchestrator,
  InMemorySpecSuggestionRepository,
} from '@contractspec/lib.evolution';

const generator = new SpecGenerator();
const repository = new InMemorySpecSuggestionRepository();
const orchestrator = new SpecSuggestionOrchestrator({ repository });

// intentPattern and sessionState come from earlier steps: the
// pipeline's onIntent callback and your active review session.
const suggestion = generator.generateFromIntent(intentPattern, {
  summary: 'Add PO number requirement for acme.corp',
});

await orchestrator.submit(suggestion, sessionState);
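For tests, the in-memory repository behaves like a keyed store. A sketch of those semantics — the interface and upsert behavior here are assumptions for illustration, not the package's real API:

```typescript
// Assumed minimal suggestion shape for illustration.
interface SpecSuggestion {
  id: string;
  summary: string;
  status: 'pending' | 'approved' | 'rejected';
}

// Hypothetical in-memory store keyed by suggestion id.
class InMemoryRepo {
  private store = new Map<string, SpecSuggestion>();

  save(s: SpecSuggestion): void {
    this.store.set(s.id, s); // upsert: resubmission overwrites
  }

  byStatus(status: SpecSuggestion['status']): SpecSuggestion[] {
    return [...this.store.values()].filter((s) => s.status === status);
  }
}
```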

Write approved specs back to git

import { FileSystemSuggestionWriter } from '@contractspec/lib.evolution/approval';

const writer = new FileSystemSuggestionWriter({
  outputDir: 'packages/libs/contracts-spec/src/generated',
});

await writer.write({
  ...suggestion,
  status: 'approved',
  approvals: { reviewer: 'ops@contractspec', decidedAt: new Date() },
});
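A writer like this usually derives a stable file path from the suggestion so re-runs are idempotent. One possible scheme — hypothetical, the real FileSystemSuggestionWriter may name files differently:

```typescript
// Hypothetical filename scheme for written suggestions: a stable
// id plus a slug derived from the summary.
function suggestionFilePath(outputDir: string, id: string, summary: string): string {
  const slug = summary
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')
    .replace(/^-|-$/g, '');
  return `${outputDir}/${id}-${slug}.spec.ts`;
}
```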

Approvals by default

Every suggestion flows through @contractspec/lib.ai-agent's ApprovalWorkflow. Tune auto-approval thresholds per environment.
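A per-environment threshold gate could look like the following. The environment names, threshold values, and confidence score are assumptions for illustration, not lib.ai-agent's actual configuration surface:

```typescript
// Illustrative per-environment auto-approval gate.
type Env = 'development' | 'staging' | 'production';

const autoApproveThreshold: Record<Env, number> = {
  development: 0.5, // permissive while iterating
  staging: 0.8,
  production: 1.01, // never auto-approve in production
};

function shouldAutoApprove(env: Env, confidence: number): boolean {
  return confidence >= autoApproveThreshold[env];
}
```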

Pluggable storage

Use the Prisma repository in production, the in-memory repository in tests, or stream serialized suggestions into your own queue.
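Streaming into your own queue amounts to serializing each suggestion and publishing it. A sketch under assumptions — the queue interface and JSON framing are examples, not requirements of the library:

```typescript
// Minimal queue abstraction assumed for illustration.
interface QueueLike {
  publish(message: string): void;
}

// Serialize each suggestion and publish it; returns the count sent.
function streamSuggestions(suggestions: object[], queue: QueueLike): number {
  for (const s of suggestions) {
    queue.publish(JSON.stringify(s));
  }
  return suggestions.length;
}
```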