# CI/CD Integration
This page shows how to automate the promotion of adapter configurations from staging to production using a standard CI/CD pipeline. The workflow packages adapter manifests and plugins into a bundle, uploads it to a hosting location, and deploys via Helm.
## The Promotion Pipeline

### Step 1: Export from Staging
After configuring and testing adapters in your staging environment, export them using the Portal's Export Bundle button (under Settings). Commit the resulting JSON manifest files to your Git repository.
If you maintain manifests directly in version control (recommended for production), skip this step and use your committed files.
### Step 2: Package the Bundle in CI
When code is pushed to the main branch, the CI pipeline assembles a deployment bundle:
```bash
# Create the bundle layout
mkdir -p bundle/manifests bundle/plugins bundle/artifacts

# Copy manifests from version control
cp iac/manifests/*.json bundle/manifests/

# Copy custom plugin JARs (if applicable)
cp build/libs/*.jar bundle/plugins/ 2>/dev/null || true

# Package
cd bundle && zip -r ../custom-bundle-$VERSION.zip manifests/ plugins/ artifacts/
```
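Before zipping, it can be worth failing fast on malformed manifests rather than discovering the problem at deploy time. A minimal sketch, assuming `python3` is available on the CI runner; the `validate_manifests` helper and the demo manifest are illustrative, not part of the product:

```shell
# Illustrative helper: reject any *.json manifest that is not valid JSON.
validate_manifests() {
  local dir="$1" f status=0
  for f in "$dir"/*.json; do
    [ -e "$f" ] || continue                      # directory may be empty
    if ! python3 -m json.tool "$f" > /dev/null 2>&1; then
      echo "invalid manifest: $f" >&2
      status=1
    fi
  done
  return "$status"
}

# Demo manifest (stand-in for the files exported in Step 1)
mkdir -p bundle/manifests
printf '{"name": "demo-adapter", "version": "1.0"}\n' > bundle/manifests/demo.json
validate_manifests bundle/manifests && echo "manifests OK"
```

Running this as the first CI step means a bad commit fails in seconds instead of producing a broken bundle.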
### Step 3: Upload to a Hosting Location
Upload the bundle to a URL reachable from your Kubernetes cluster:
```bash
aws s3 cp custom-bundle-$VERSION.zip s3://your-conncentric-bundles/custom-bundle-$VERSION.zip
```
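Publishing a checksum alongside the bundle lets the consuming side verify the download before applying it. A sketch under stated assumptions: the placeholder file stands in for the real zip from Step 2, and the bucket name in the comments is the example bucket from above:

```shell
VERSION="${VERSION:-dev}"
BUNDLE="custom-bundle-$VERSION.zip"
[ -f "$BUNDLE" ] || echo "placeholder" > "$BUNDLE"   # stand-in for the real bundle

# Record the digest next to the bundle, then upload both objects:
sha256sum "$BUNDLE" > "$BUNDLE.sha256"
# aws s3 cp "$BUNDLE"        "s3://your-conncentric-bundles/$BUNDLE"
# aws s3 cp "$BUNDLE.sha256" "s3://your-conncentric-bundles/$BUNDLE.sha256"

# After download, the receiving side can verify the bundle was not corrupted:
sha256sum -c "$BUNDLE.sha256"
```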
### Step 4: Deploy via Helm
Update the Helm values with the new bundle URL and deploy:
```bash
helm upgrade conncentric ./deployment/charts/conncentric \
  -n conncentric \
  -f values-production.yaml \
  --set installer.customBundleUrls[0]="https://your-conncentric-bundles.s3.amazonaws.com/custom-bundle-$VERSION.zip"
```
The Two-Pass Installer downloads and applies the bundle automatically after the platform pods start.
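Deriving the bundle URL from the bucket and version in one place keeps the staging and production jobs symmetric and avoids copy-paste drift between the upload and deploy steps. A sketch with placeholder bucket and version values:

```shell
BUCKET="${BUCKET:-your-conncentric-bundles}"
VERSION="${VERSION:-1.4.2}"                     # illustrative version tag
BUNDLE_URL="https://${BUCKET}.s3.amazonaws.com/custom-bundle-${VERSION}.zip"
echo "$BUNDLE_URL"

# The same variable then feeds both the upload and the Helm deploy:
# aws s3 cp "custom-bundle-${VERSION}.zip" "s3://${BUCKET}/custom-bundle-${VERSION}.zip"
# helm upgrade conncentric ./deployment/charts/conncentric \
#   -n conncentric -f values-production.yaml \
#   --set installer.customBundleUrls[0]="$BUNDLE_URL"
```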
## GitHub Actions Example
```yaml
name: Promote to Production

on:
  push:
    branches: [main]
    paths:
      - 'iac/manifests/**'

jobs:
  promote:
    runs-on: ubuntu-latest
    environment: production
    steps:
      - uses: actions/checkout@v4

      - name: Package bundle
        run: |
          mkdir -p bundle/manifests bundle/plugins bundle/artifacts
          cp iac/manifests/*.json bundle/manifests/
          cd bundle && zip -r ../custom-bundle-${{ github.sha }}.zip manifests/ plugins/ artifacts/

      - name: Upload to S3
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        run: |
          aws s3 cp custom-bundle-${{ github.sha }}.zip \
            s3://${{ vars.BUNDLE_BUCKET }}/custom-bundle-${{ github.sha }}.zip

      - name: Deploy via Helm
        run: |
          # KUBECONFIG must point at a file, so write the secret's contents to disk first
          echo "${{ secrets.KUBECONFIG }}" > kubeconfig
          export KUBECONFIG="$PWD/kubeconfig"
          helm upgrade conncentric ./deployment/charts/conncentric \
            -n conncentric \
            -f values-production.yaml \
            --set installer.customBundleUrls[0]="https://${{ vars.BUNDLE_BUCKET }}.s3.amazonaws.com/custom-bundle-${{ github.sha }}.zip"
```
Use GitHub Environments to require manual approval before production deployments.
## Authentication
The CI pipeline needs two sets of credentials:
| Credential | Purpose | Where to Store |
|---|---|---|
| Kubernetes credentials (`KUBECONFIG`) | Run `helm upgrade` against the target cluster | CI secrets manager |
| Cloud credentials (e.g., AWS keys) | Upload the bundle to S3 or equivalent hosting | CI secrets manager |
Never hardcode credentials in scripts or commit them to version control.
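One detail worth calling out: `KUBECONFIG` is a *file path*, not file contents, so a CI secret holding the kubeconfig must be written to disk before `helm` can use it. A hedged sketch of the common pattern; `KUBECONFIG_DATA` is a placeholder for however your CI system exposes the secret:

```shell
umask 077                                  # keep the credentials file private to the CI user
KUBECONFIG_DATA="${KUBECONFIG_DATA:-apiVersion: v1
kind: Config}"                             # illustrative stand-in for the real secret contents
printf '%s\n' "$KUBECONFIG_DATA" > kubeconfig
export KUBECONFIG="$PWD/kubeconfig"
# helm and kubectl now read cluster credentials from $KUBECONFIG
echo "kubeconfig written to $KUBECONFIG"
```

Most CI systems also let you clean the file up in a post-step so credentials never outlive the job.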
## Environment-Specific Values

Maintain separate Helm values files per environment. The bundle stays the same; only the `installer.env` block changes:
```yaml
# values-production.yaml
installer:
  enabled: true
  env:
    FIX_HOST: "prod-fix-gateway.exchange.com"
    FIX_PORT: "5100"
    KAFKA_BOOTSTRAP_SERVERS: "prod-kafka-1:9092,prod-kafka-2:9092"
```

```yaml
# values-staging.yaml
installer:
  enabled: true
  env:
    FIX_HOST: "staging-fix-gateway.internal"
    FIX_PORT: "5100"
    KAFKA_BOOTSTRAP_SERVERS: "staging-kafka:9092"
```
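A simple CI guard can confirm that each environment's values file defines every env key the adapters expect, catching drift between staging and production before a deploy. A sketch: the required-key list is illustrative, and the block generates a demo `values-staging.yaml` only if one is not already present:

```shell
REQUIRED_KEYS="FIX_HOST FIX_PORT KAFKA_BOOTSTRAP_SERVERS"
FILE="${FILE:-values-staging.yaml}"

# Demo file so the sketch is self-contained; CI would use the committed file.
[ -f "$FILE" ] || cat > "$FILE" <<'EOF'
installer:
  enabled: true
  env:
    FIX_HOST: "staging-fix-gateway.internal"
    FIX_PORT: "5100"
    KAFKA_BOOTSTRAP_SERVERS: "staging-kafka:9092"
EOF

missing=0
for key in $REQUIRED_KEYS; do
  # Match the key at any indentation level in the YAML
  grep -q "^[[:space:]]*$key:" "$FILE" || { echo "missing $key in $FILE" >&2; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all required env keys present in $FILE"
```

Run the same check against every `values-*.yaml` so a key added in one environment cannot silently be forgotten in another.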
## Next Steps
- Custom Bundles & Extensibility: Bundle structure reference and the full deploy workflow
- AWS Deployment: S3 IAM policy and IRSA configuration for bundle hosting