Platform
CORE in the center of a secure cloud footprint
Pipeline-E's CORE ships as one application your operations team can version, deploy, and monitor—while each engine (a separable module) stays logically distinct so it can scale and release on its own cadence. Microsoft Azure supplies identity, secrets, data, and compute so your teams spend their time on statute, policy, and service quality—not rebuilding foundational infrastructure.
Representative Azure services
The list below reflects how CORE is delivered today. Exact service tiers, regions, and network layout follow your agency standards and are managed as infrastructure as code.
- Microsoft Entra ID: Enterprise sign-in for staff (and portal users where applicable), multifactor authentication, and directory groups mapped to application permissions.
- Azure Database for PostgreSQL: Primary transactional database with encryption, backup, and strict data-integrity rules. Each program area owns its own tables so changes stay scoped and reviewable.
- App Service and monitoring: Managed hosting with health checks, scaling options, and structured logs so operators can spot issues before users do.
- Key Vault and managed identity: Secrets and connection information are injected at runtime from your vault—not embedded in source code. Developers use local credentials only on sandbox machines.
- Blob storage for files: Documents and evidence attach to the correct license or case through the application’s file layer, with Azure Blob Storage used in production for durable, access-controlled storage.
Engines
On Azure, CORE runs as one web application. Inside it, eleven engines—from identity through reporting—each behave as a module with its own screens, permissions, and roadmap. They coordinate through documented handoffs instead of ad hoc data sharing, which keeps policy changes easier to roll out and explain to oversight. Each engine can scale its own workload as volumes change, and can store data in the shared CORE database or, when isolation requirements demand it, in a dedicated database.
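The documented-handoff idea can be sketched as one engine depending on another's typed interface rather than its tables. The engine and method names here are illustrative assumptions, not CORE's actual API:

```python
from dataclasses import dataclass
from typing import Protocol

# Illustrative handoff types only; real interfaces are defined by CORE.
@dataclass(frozen=True)
class Applicant:
    user_id: str
    display_name: str

class IdentityHandoff(Protocol):
    """Documented handoff: the only way other engines read identity data."""
    def applicant(self, user_id: str) -> Applicant: ...

class LicensingEngine:
    def __init__(self, identity: IdentityHandoff) -> None:
        # Depends on the interface, never on the identity engine's tables.
        self._identity = identity

    def case_title(self, user_id: str) -> str:
        person = self._identity.applicant(user_id)
        return f"License application for {person.display_name}"
```

Because the licensing engine only sees `IdentityHandoff`, the identity engine can change its schema, or move to a dedicated database, without touching its consumers.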
Scale and operational discipline
The web tier can scale out for busy filing periods. The central database scales vertically and through careful read patterns, while individual engines can still be sized or separated onto their own data stores when needed. Background work runs on a database-backed job queue so short spikes do not require a separate caching or message cluster unless your enterprise standards call for one.
Design commitments
- Clear engine boundaries — engines coordinate through supported interfaces; they do not reach across each other’s data stores.
- Single deployable application — operations track one release artifact and one runtime footprint.
- U.S. data residency — workloads stay in agency-approved regions.