Day 0 Onboarding Runbook
This runbook is designed for infrastructure and DevOps teams preparing to deploy Conncentric for the first time. It consolidates every requirement into a single pre-flight checklist, provides network and sizing baselines, and addresses common InfoSec questions.
Complete every section before running `helm install`.
Pre-Flight Checklist
Work through each item in order. Every box must be checked before proceeding to the Quick Start installation guide.
1. Tooling
- `kubectl` 1.32+ installed and configured for your target cluster
- `helm` 4.1+ installed
- Cluster admin credentials available (needed for namespace creation and secret management)
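A quick sanity check for the tooling items, assuming the binaries should already be on your PATH, is a loop like the following sketch:

```shell
# Report any required CLI that is missing from PATH; empty "missing"
# means both tools are installed and the version checks can follow.
missing=$(for t in kubectl helm; do command -v "$t" >/dev/null 2>&1 || echo "$t"; done)
if [ -z "$missing" ]; then
  echo "tooling OK"
  kubectl version --client   # does not require cluster access
  helm version --short
else
  echo "missing:$missing"
fi
```

Run it before touching the cluster; `kubectl version --client` confirms the client version without needing credentials.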
2. Kubernetes Cluster
- Cluster is running Kubernetes 1.30 or newer
- At least 2 nodes provisioned (see Sizing Recommendations below)
- Cluster can pull images from the Conncentric container registry (pull secret configured)
- An Ingress controller is installed (e.g., NGINX Ingress Controller, Traefik, AWS ALB Controller)
- DNS record for the Portal hostname points to the Ingress endpoint
3. PostgreSQL Database
- PostgreSQL 15+ instance is running and reachable from the cluster
- A dedicated database has been created (e.g., `conncentric`)
- A dedicated database user exists with permission to create tables in that database
- At least 20 GB of storage allocated
- A Kubernetes Secret containing the database credentials has been created in the target namespace
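The credentials Secret from the last item can be rendered as a manifest and reviewed before applying. This is a sketch only: the Secret name, key names, and namespace below are assumptions, so align them with your Helm values.

```shell
# Illustrative manifest; secret name, keys, and namespace are assumptions.
# Review the file, then apply it: kubectl apply -f conncentric-db-secret.yaml
cat > conncentric-db-secret.yaml <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: conncentric-db-credentials
  namespace: conncentric
type: Opaque
stringData:
  username: conncentric_app   # the dedicated database user
  password: change-me         # replace before applying
EOF
```

Using `stringData` lets you supply plain-text values; the API server base64-encodes them on write.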
4. Identity Provider (OIDC)
- An OIDC application is registered with your identity provider (Okta, Azure Entra ID, Auth0, or equivalent)
- The application is configured as a Single-Page Application (SPA)
- The Portal URL is registered as an allowed redirect URI, logout URI, and web origin
- You have the issuer URL (`authority`), client ID, and required scopes ready
- The OIDC issuer URL is reachable from the end user's browser (not just from within the cluster)
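A quick reachability check for the last item is to fetch the issuer's OIDC discovery document from a workstation on the same network as your end users; the issuer URL below is a placeholder.

```shell
# Replace with your real issuer URL. Every compliant OIDC provider serves
# its discovery document at this standard well-known path; -f makes curl
# exit non-zero on an HTTP error.
curl -sSf "https://idp.example.com/.well-known/openid-configuration"
```

A successful response returns a JSON document listing the provider's endpoints and supported scopes.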
5. Message Broker (If Required)
- If your deployment uses plugins that require Kafka, a Kafka cluster is running and reachable from the Kubernetes cluster
- Bootstrap server addresses, authentication credentials, and TLS configuration are documented
- Required Kafka topics have been pre-created or auto-creation is enabled on the broker
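If you pre-create topics, the stock Kafka CLI can be used. The bootstrap address and topic name below are placeholders; the actual topic names are defined by the plugins you deploy.

```shell
# Placeholder values: substitute your bootstrap servers and the topic
# names required by your Kafka-based plugins.
kafka-topics.sh --bootstrap-server broker1.internal:9092 \
  --create --topic conncentric-events \
  --partitions 3 --replication-factor 3
```

Add `--command-config` pointing at a client properties file if your brokers require SASL or TLS authentication.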
6. Network Connectivity
- All ports in the Network Port Matrix below have been approved and opened by your network team
- Firewall rules allow the required egress from Adapter pods to external counterparty systems
- Ingress controller is configured with HTTPS (TLS termination at the load balancer or Ingress, managed by your infrastructure team)
7. Custom Bundle Hosting (If Applicable)
- If deploying custom configurations or plugins, a `.zip` bundle has been packaged per the Custom Bundles & Extensibility guide
- The bundle is hosted at a URL reachable from within the Kubernetes cluster
- The URL has been tested from inside the cluster (e.g., using a temporary curl pod)
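The in-cluster URL test from the last item can be done with a short-lived curl pod; the bundle URL below is a placeholder.

```shell
# Runs curl once inside the cluster and deletes the pod afterwards.
# -I fetches headers only; -f makes curl exit non-zero on HTTP errors.
kubectl run bundle-check --rm -i --restart=Never \
  --image=curlimages/curl --command -- \
  curl -sSfI "https://artifacts.example.com/conncentric-bundle.zip"
```

A `200 OK` header response confirms the bundle is reachable from inside the cluster's network.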
Network Port Matrix
Submit this table to your network and security teams to request firewall rule approvals before deployment.
Ingress (Into the Cluster)
| Source | Destination | Port | Protocol | Purpose |
|---|---|---|---|---|
| End users (browsers) | Ingress Controller / ALB | 443 | HTTPS | Portal web UI access |
| External systems (counterparties) | Adapter pods (via LoadBalancer or NodePort) | Varies by plugin | TCP | Inbound protocol sessions (e.g., acceptor connections). Port is defined per adapter configuration. |
Egress (From the Cluster)
| Source | Destination | Port | Protocol | Purpose |
|---|---|---|---|---|
| Orchestrator pods | PostgreSQL database | 5432 | TCP | Configuration and state persistence |
| Adapter pods | PostgreSQL database | 5432 | TCP | Event log writes |
| Adapter pods | Kafka brokers (if applicable) | 9092 or 9094 | TCP | Message streaming for Kafka-based plugins |
| Adapter pods | External counterparty systems | Varies by adapter | TCP | Outbound protocol sessions (e.g., initiator connections). Port is defined per adapter configuration. |
| Portal pods (via user browser) | OIDC identity provider | 443 | HTTPS | Authentication token exchange. Note: this traffic flows from the user's browser, not from the pod itself. |
| Installer job (if custom bundle configured) | Bundle hosting URL (S3, HTTP server) | 443 or 80 | HTTPS/HTTP | Download custom bundle during Two-Pass Install |
| All pods | Container image registry | 443 | HTTPS | Pull platform container images |
Internal (Within the Cluster)
| Source | Destination | Port | Protocol | Purpose |
|---|---|---|---|---|
| Adapter pods | Orchestrator service | 8080 | HTTP | Lease heartbeats, session claims, plugin/artifact downloads |
| Portal pods | Orchestrator service | 8080 | HTTP | API calls for configuration and monitoring |
| Installer job | Orchestrator service | 8080 | HTTP | Manifest and plugin uploads during installation |
| Adapter pods | Adapter service | Configured per deployment | HTTP | Internal message routing between adapter pods |
Sizing Recommendations
These baselines eliminate the "blank page problem" for your DevOps team. Start here and tune based on observed usage after deployment.
EKS / Kubernetes Node Groups
| Profile | Node Count | Instance Type (AWS) | vCPU per Node | Memory per Node | Best For |
|---|---|---|---|---|---|
| Starter | 2 | m6i.large | 2 | 8 GB | Proof of concept, fewer than 10 adapters |
| Standard Production | 3 | m6i.xlarge | 4 | 16 GB | 10 to 50 adapters, moderate throughput |
| High Throughput | 4+ | m6i.2xlarge | 8 | 32 GB | 50+ adapters, high message volume |
For non-AWS deployments, select equivalent instance families from your cloud provider (e.g., GCP e2-standard, Azure Standard_D series).
PostgreSQL (RDS)
| Profile | Instance Class (AWS) | vCPU | Memory | Storage | Multi-AZ | Best For |
|---|---|---|---|---|---|---|
| Starter | db.t4g.medium | 2 | 4 GB | 20 GB (gp3) | No | Proof of concept |
| Standard Production | db.r6g.large | 2 | 16 GB | 100 GB (gp3) | Yes | Production workloads with moderate adapter counts |
| High Throughput | db.r6g.xlarge | 4 | 32 GB | 250 GB (gp3) | Yes | Large-scale deployments with high event log volume |
Enable automated backups and set a retention period that aligns with your organization's data retention policy.
Kafka / MSK (If Required)
| Profile | Broker Count | Instance Type (AWS) | Storage per Broker | Best For |
|---|---|---|---|---|
| Starter | 2 | kafka.t3.small | 50 GB | Low-volume testing |
| Standard Production | 3 | kafka.m5.large | 200 GB | Production Kafka-based plugin workloads |
Distribute brokers across availability zones for resilience.
Conncentric Pod Resources (Helm Values)
These are the default Helm resource requests and limits. They are sized for standard production workloads with headroom for peak throughput and plugin execution.
| Component | CPU Request | CPU Limit | Memory Request | Memory Limit |
|---|---|---|---|---|
| Orchestrator | 1000m | 2000m | 1Gi | 2Gi |
| Adapter | 1000m | 2000m | 1Gi | 2Gi |
| Portal | 100m | 500m | 128Mi | 256Mi |
For high-throughput deployments (50+ adapters), consider increasing Adapter replica count and resource limits further. See Scaling for guidance on right-sizing pod counts based on your adapter workload.
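As a starting point for a high-throughput override file, a sketch like the one below can be adapted; the value keys are illustrative assumptions, so confirm the real schema against the Helm Reference.

```shell
# Illustrative values file; key names depend on the Conncentric chart,
# so verify them against the Helm Reference before use.
cat > values-high-throughput.yaml <<'EOF'
adapter:
  replicaCount: 4
  resources:
    requests:
      cpu: "2000m"
      memory: 2Gi
    limits:
      cpu: "4000m"
      memory: 4Gi
EOF
```

Pass the file at install time with `helm install -f values-high-throughput.yaml ...` so the overrides stay in version control alongside your other configuration.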
InfoSec FAQ
This section addresses common questions from client security and compliance teams. We aim to be transparent about current capabilities and provide practical mitigations where native features are still in development.
Q: Does Conncentric support Role-Based Access Control (RBAC)?
Current state: Native RBAC within the Conncentric Portal is on the product roadmap but is not yet shipped. Today, any authenticated user who can log in through your OIDC provider has full access to the Portal.
Recommended mitigation: Control who can access Conncentric at the identity provider level. Create a dedicated group or application assignment in your OIDC provider (e.g., a "conncentric-users" group in Okta, an App Assignment in Azure Entra ID) and restrict the OIDC application so that only members of that group can authenticate. This gives you coarse-grained access control immediately using infrastructure you already manage.
For environments that require stricter separation (e.g., read-only access for operations staff versus full access for configuration engineers), consider deploying separate Conncentric instances per environment (UAT vs. Production) with different OIDC group assignments for each.
Q: Does Conncentric provide audit logging?
Current state: Native audit logging (tracking which user changed which configuration and when) is on the product roadmap but is not yet implemented within the platform.
Recommended mitigation: You can achieve infrastructure-level audit coverage using the tools your cluster already provides:
- Kubernetes audit logs: Enable the Kubernetes API server audit policy to capture all API calls to the Conncentric namespace. This records pod activity, secret access, and configuration changes at the cluster level.
- Database audit logging: Enable `pgaudit` or equivalent auditing on your PostgreSQL instance. This captures all SQL statements executed against the Conncentric database, providing a record of data changes.
- OIDC provider logs: Your identity provider records all authentication events, including who logged in, when, and from where.
- GitOps trail: If you adopt the recommended GitOps workflow (export configurations from the Portal, commit to Git, deploy via Helm), your Git history serves as a complete, immutable change log for all production configuration changes.
Combined, these layers provide a defensible audit trail while native platform audit logging is under development.
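A minimal sketch of enabling `pgaudit` for the Conncentric database follows; it assumes the extension is available on your instance (on RDS, add `pgaudit` to `shared_preload_libraries` in a parameter group first, which requires a restart).

```shell
# Save the statements for review, then run them as an admin user:
#   psql -h <host> -U <admin> -d conncentric -f enable-pgaudit.sql
cat > enable-pgaudit.sql <<'EOF'
-- Requires pgaudit in shared_preload_libraries (server restart needed).
CREATE EXTENSION IF NOT EXISTS pgaudit;
-- Log all write statements and DDL against the Conncentric database.
ALTER DATABASE conncentric SET pgaudit.log = 'write, ddl';
EOF
```

The `write, ddl` classes capture data changes and schema changes without the volume of logging every read.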
Q: How is data encrypted?
Conncentric does not manage its own encryption layer. Encryption is handled entirely by the infrastructure you provide:
- Portal traffic: TLS is terminated at the Ingress controller or load balancer, managed by your infrastructure team.
- Protocol sessions: Encrypted connections to external venues and counterparties are handled via TLS termination at the load balancer or network layer, configured by your infrastructure team. The platform does not manage certificates directly.
- Kafka and PostgreSQL: Configure encryption on your broker and database connections using your provider's standard mechanisms.
- At rest: Enable encryption at rest on your PostgreSQL instance (RDS encrypts by default) and Kafka brokers (MSK encrypts by default). Kubernetes Secrets should be encrypted using your cloud provider's KMS integration.
Q: What is the platform's attack surface?
The platform exposes two entry points:
- The Portal (HTTPS via Ingress): A web application protected by your OIDC provider. No unauthenticated access is possible.
- Adapter protocol ports: Adapters may expose TCP ports for inbound protocol sessions (e.g., acceptor connections). These ports are defined per adapter configuration and should be restricted to known counterparty IP ranges using Kubernetes NetworkPolicies or cloud security groups.
The Orchestrator API is not exposed externally. It is only accessible from within the cluster by Adapter pods, Portal pods, and the Installer job.
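The NetworkPolicy restriction mentioned above can be sketched as follows; the pod label, port, and CIDR are assumptions to replace with your adapter's actual values.

```shell
# Illustrative policy: adjust the pod selector label to match your chart,
# the port to your adapter's configured listener, and the CIDR to your
# counterparty's published IP range.
# Apply with: kubectl apply -f adapter-ingress-policy.yaml
cat > adapter-ingress-policy.yaml <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-adapter-ingress
  namespace: conncentric
spec:
  podSelector:
    matchLabels:
      app: adapter
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: 203.0.113.0/24
      ports:
        - protocol: TCP
          port: 9876
EOF
```

Note that once a policy selects a pod, all ingress not explicitly allowed is denied, so you may need additional rules for in-cluster traffic such as adapter-to-adapter routing.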
Next Steps
Once every item in the pre-flight checklist is complete:
- Follow the Quick Start guide to run `helm install`
- Review Authentication Configuration for provider-specific OIDC setup
- Review Helm Reference for the complete list of configuration options
- If deploying custom plugins, follow Custom Bundles & Extensibility