Custom Bundles & Extensibility
Conncentric is designed to be extended without touching Docker, container registries, or infrastructure tooling. If you have custom adapter configurations, compiled Java plugins, or auxiliary files (such as custom FIX data dictionaries), you deploy them using a Custom Bundle, a standard .zip file hosted at any URL your Kubernetes cluster can reach.
How It Works
The platform uses a Two-Pass Installer that runs automatically after every Helm install or upgrade:
- Pass 1, Essentials: The installer applies the base Conncentric distribution. This is built into the platform image and always runs first, ensuring the core components and default configurations are present.
- Pass 2+, Custom Bundles (optional): If custom bundle URLs are configured, the installer downloads each .zip file in order and applies its contents on top of the base. Custom configurations are layered additively: they extend the base without replacing it.
This separation guarantees that the platform foundation is always consistent, regardless of what customizations a client applies.
The Custom Bundle Structure
A custom bundle is a .zip archive containing up to three directories:
my-custom-bundle/
    manifests/     # Adapter configuration files (JSON)
    plugins/       # Compiled Java plugin JARs
    artifacts/     # Auxiliary files (FIX dictionaries, schemas)
| Directory | Purpose | Required |
|---|---|---|
| manifests/ | JSON adapter configurations. Each file defines one adapter and its pipeline. These are the same files you export from the Portal or write by hand following the Manifest Schema. | At least one of the three directories must be present |
| plugins/ | Custom Java .jar files built with the Plugin SDK. These are uploaded to the Orchestrator and made available to all adapter pods at runtime. | Optional |
| artifacts/ | Non-sensitive reference files used by your adapters, for example custom FIX data dictionaries (.xml) or schema definitions. These are uploaded to the Orchestrator's artifact store. Do not include certificates or private keys. | Optional |
The directory names must be exact: manifests, plugins, and artifacts. The installer ignores any other top-level entries in the zip.
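As a pre-flight check, you can verify these layout rules before uploading. The script below is an illustrative sketch of our own (not a platform tool); it builds a tiny in-memory bundle to demonstrate which top-level entries the installer would recognize:

```python
# Illustrative pre-flight check: confirm a bundle zip uses the exact
# top-level directory names the installer recognizes. The function name
# and the sample bundle contents are our own, not part of the platform.
import io
import zipfile

RECOGNIZED = {"manifests", "plugins", "artifacts"}

def bundle_layout(zip_bytes: bytes) -> dict:
    """Report which recognized top-level directories the zip contains."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        top_level = {name.split("/")[0] for name in zf.namelist() if "/" in name}
    return {
        "recognized": sorted(top_level & RECOGNIZED),
        "ignored": sorted(top_level - RECOGNIZED),  # installer skips these
    }

# Build a tiny in-memory bundle to demonstrate.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("manifests/example-adapter.json", '{"name": "example"}')
    zf.writestr("notes/README.txt", "ignored by the installer")

print(bundle_layout(buf.getvalue()))
# → {'recognized': ['manifests'], 'ignored': ['notes']}
```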
Step-by-Step Deployment Guide
Step 1: Export Your Configuration
If you have been building and testing adapters in a development or UAT environment, export the configurations using the Portal's Export Bundle feature (available under Settings). This downloads a .zip containing one JSON file per adapter.
Unzip the export and copy the JSON files into the manifests/ folder. These files are your source of truth for adapter configuration.
If you maintain configurations in version control (recommended), skip this step and use your committed JSON files directly.
Step 2: Add Plugins and Artifacts
If you have custom Java plugins, copy the compiled JARs:
cp build/libs/my-custom-processor.jar my-custom-bundle/plugins/
If your adapters reference custom FIX data dictionaries or other auxiliary files, add them to artifacts:
cp FIX44-CUSTOM.xml my-custom-bundle/artifacts/
Step 3: Package the Bundle
cd my-custom-bundle
zip -r ../custom-bundle.zip manifests/ plugins/ artifacts/
The resulting custom-bundle.zip is a portable, self-contained deployment package.
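If a CI runner lacks the zip CLI, the same package can be built with Python's standard library. The file names and contents below are demo placeholders; what matters is that archive entries are stored relative to the bundle root:

```python
# Build custom-bundle.zip from the bundle directory using only the
# standard library; equivalent to the zip command above.
import pathlib
import zipfile

bundle_dir = pathlib.Path("my-custom-bundle")
# Demo setup only: create a placeholder manifest so the script is runnable.
bundle_dir.joinpath("manifests").mkdir(parents=True, exist_ok=True)
bundle_dir.joinpath("manifests/example-adapter.json").write_text('{"name": "example"}')

with zipfile.ZipFile("custom-bundle.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    for path in sorted(bundle_dir.rglob("*")):
        if path.is_file():
            # Store entries relative to the bundle root, e.g. manifests/foo.json
            zf.write(path, path.relative_to(bundle_dir))

print(zipfile.ZipFile("custom-bundle.zip").namelist())
# → ['manifests/example-adapter.json']
```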
Step 4: Host the Bundle
Upload the .zip to any URL that your Kubernetes cluster can access over HTTP or HTTPS:
- AWS S3: Upload to a bucket and generate a pre-signed URL, or use a public bucket with restricted IP access.
- GitHub Releases: Attach the .zip as a release asset. Use a personal access token for private repositories.
- GitLab Package Registry: Publish as a generic package.
- Internal HTTP server: Any web server on your network that the cluster nodes can reach.
Step 5: Configure Helm
In your Helm values.yaml, enable the installer and set the custom bundle URL:
installer:
  enabled: true
  customBundleUrls:
    - "https://your-domain.com/releases/custom-bundle-v1.2.0.zip"
  env:
    KAFKA_BOOTSTRAP_SERVERS: "kafka:9092"
The env block defines environment variables that are substituted into your manifest files at apply time. Use ${VAR_NAME} placeholders in your JSON configurations for values that change between environments (hostnames, ports, credentials).
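The substitution semantics can be sketched in a few lines. This mimics the behavior described above; how the real installer treats an unmatched variable is an assumption here (this sketch leaves it untouched):

```python
# A minimal sketch of the ${VAR_NAME} substitution the installer performs
# at apply time. Illustrative only; the installer's implementation may differ.
import re

def substitute(manifest_text: str, env: dict) -> str:
    """Replace ${VAR_NAME} placeholders with values from the env block."""
    return re.sub(
        r"\$\{([A-Z0-9_]+)\}",
        lambda m: env.get(m.group(1), m.group(0)),  # assumption: unknown vars left as-is
        manifest_text,
    )

manifest = '{"bootstrap": "${KAFKA_BOOTSTRAP_SERVERS}", "port": "${FIX_PORT}"}'
env = {"KAFKA_BOOTSTRAP_SERVERS": "kafka:9092"}
print(substitute(manifest, env))
# ${FIX_PORT} stays in place because no value was provided for it.
```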
Step 6: Deploy
helm upgrade conncentric ./deployment/charts/conncentric \
  -n conncentric \
  -f values.yaml
After the main platform pods are running, the installer job starts automatically and logs its progress:
Pass 1: Applying Conncentric Essentials...
Pass 1: Complete.
Pass 2: Downloading Custom Bundle from https://your-domain.com/releases/custom-bundle-v1.2.0.zip...
Pass 2: Applying Custom Bundle...
Pass 2: Complete.
Installer finished successfully.
You can follow the installer logs in real time:
kubectl logs -f job/conncentric-installer-<revision> -n conncentric
The Enterprise GitOps Lifecycle
Promoting changes from a staging or UAT environment to production follows a four-stage pipeline:
1. Build in the Portal: Business analysts configure adapters, pipelines, and connections in the UAT Portal using the visual Pipeline Designer. No code or command-line tools required.
2. Export to Code: When ready for promotion, use the Portal's Export Bundle button or the Export API to generate a complete snapshot of the environment as a .zip. Commit the exported JSON manifests to your Git repository. These become your versioned source of truth.
3. Review & Package: A CI pipeline validates the manifests, packages them into a versioned .zip bundle (e.g., custom-bundle-v1.3.0.zip), and publishes it to a hosted URL (AWS S3, GitHub Releases, or any HTTP server your cluster can reach).
4. Deploy via Helm: Update the production Helm values with the new bundle URL and run helm upgrade. The Two-Pass Installer downloads and applies the bundle automatically.
This lifecycle ensures zero-touch production deploys (Helm handles everything) and complete separation between configuration authoring (Portal) and deployment (GitOps).
The installer is idempotent. Unchanged adapters are left untouched, new ones are created, and modified ones are updated in place.
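That idempotent behavior can be illustrated with a conceptual sketch. This is our own model of the create/update/skip decision, not the installer's actual code:

```python
# Conceptual sketch of idempotent apply semantics: unchanged adapters are
# skipped, new ones created, changed ones updated. Adapter names and
# configs below are invented for the demonstration.
def plan_apply(existing: dict, desired: dict) -> dict:
    """Map each adapter name to the action an idempotent installer would take."""
    actions = {}
    for name, config in desired.items():
        if name not in existing:
            actions[name] = "create"
        elif existing[name] != config:
            actions[name] = "update"
        else:
            actions[name] = "skip"
    return actions

existing = {"fix-in": {"port": 5100}, "kafka-out": {"topic": "orders"}}
desired = {"fix-in": {"port": 5100}, "kafka-out": {"topic": "trades"}, "rest-in": {"path": "/v1"}}
print(plan_apply(existing, desired))
# → {'fix-in': 'skip', 'kafka-out': 'update', 'rest-in': 'create'}
```

Running the same plan twice against the resulting state yields only "skip" actions, which is what makes repeated helm upgrade runs safe.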
Environment-Specific Deployments
Use the same bundle across environments by parameterizing environment-specific values. In your JSON manifests, use ${VAR_NAME} placeholders:
{
  "host": "${FIX_HOST}",
  "port": "${FIX_PORT}"
}
Then set different values per environment in Helm:
# Production values
installer:
  enabled: true
  customBundleUrls:
    - "https://releases.yourcompany.com/custom-bundle-v1.2.0.zip"
  env:
    FIX_HOST: "prod-fix-gateway.exchange.com"
    FIX_PORT: "5100"
    KAFKA_BOOTSTRAP_SERVERS: "prod-kafka-1:9092,prod-kafka-2:9092"
# Staging values
installer:
  enabled: true
  customBundleUrls:
    - "https://releases.yourcompany.com/custom-bundle-v1.2.0.zip"
  env:
    FIX_HOST: "staging-fix-gateway.internal"
    FIX_PORT: "5100"
    KAFKA_BOOTSTRAP_SERVERS: "staging-kafka:9092"
The same bundle, applied to different environments, produces the correct environment-specific configuration.
Troubleshooting
| Symptom | Likely Cause | Resolution |
|---|---|---|
| Installer job fails at Pass 2 with a download error | The bundle URL is unreachable from inside the cluster | Verify the URL is accessible from a pod in the same namespace (kubectl run curl --image=curlimages/curl -- curl -I <URL>) |
| Pass 2 succeeds but adapters do not appear in the Portal | The manifests/ directory is missing or empty in the zip | Unzip the bundle locally and verify the folder structure matches the specification above |
| Adapters appear but fail to start with "Plugin not found" | The plugin JAR is missing from the plugins/ directory | Ensure the JAR is in plugins/ (not a subdirectory) and that the plugin ID in the manifest matches the JAR's metadata |
| Environment variables are not substituted | Variable names in the manifest do not match the installer.env keys | Confirm the placeholder syntax is ${VAR_NAME} (with curly braces) and the key exists in the Helm values |
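For the last row in particular, a quick local check can catch placeholder/key mismatches before a deploy. The helper below is our own illustration, matching the ${VAR_NAME} syntax described in this guide:

```python
# Scan a manifest for ${VAR_NAME} placeholders that have no matching key
# in the installer.env block. Helper name and sample data are our own.
import re

def unresolved_placeholders(manifest_text: str, env_keys: set) -> list:
    """Return placeholder names that would not be substituted."""
    found = set(re.findall(r"\$\{([A-Z0-9_]+)\}", manifest_text))
    return sorted(found - env_keys)

manifest = '{"host": "${FIX_HOST}", "port": "${FIX_PORT}"}'
print(unresolved_placeholders(manifest, {"FIX_HOST"}))
# → ['FIX_PORT']  (placeholder present, but no env key defined)
```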