
AWS Deployment & Plugin Pipeline

This guide covers how to deploy Conncentric on Amazon Web Services (AWS), set up the Plugin SDK for your developers, and build a CI/CD pipeline that delivers custom plugins into your running cluster.

What Is Conncentric in the Cloud?

Think of Conncentric as a team of workers in an office building. Each worker has a specific job:

  • Amazon EKS (Elastic Kubernetes Service) is the building itself. It provides the rooms (containers) where the workers live and makes sure they have power and air conditioning (compute and networking). EKS runs the Orchestrator, Adapter, and Portal pods.
  • Amazon RDS (Relational Database Service) is the filing cabinet. Every adapter configuration, pipeline definition, and plugin record is stored here. If a worker forgets what they were doing, they check the filing cabinet and pick up right where they left off.
  • Amazon MSK (Managed Streaming for Apache Kafka) is the mailroom. When adapters need to pass messages between systems, MSK sorts and delivers those messages reliably, even when traffic spikes.
  • AWS ALB (Application Load Balancer) is the front desk receptionist. When someone visits the Portal or an external system sends a request, the ALB greets them and directs them to the right worker inside the building.

The Orchestrator is the manager who assigns work. The Adapters are the workers who carry out tasks. The Portal is the whiteboard where everyone can see what is happening. Together, they form a complete platform running inside AWS.

Architecture Diagram


Cloud Prerequisites

AWS Services

Before deploying, provision the following AWS resources. These are common, sensible defaults for a production environment.

| Service | Purpose | Recommended Configuration |
| --- | --- | --- |
| EKS | Runs all Conncentric pods | Kubernetes 1.30+, managed node group with m6i.xlarge instances (minimum 2 nodes) |
| RDS (PostgreSQL) | Stores all platform state | PostgreSQL 15+, db.r6g.large, Multi-AZ enabled, automated backups |
| MSK | Message streaming between adapters | Apache Kafka 3.5+, kafka.m5.large, 3 brokers across availability zones |
| ALB | Ingress to Portal and Orchestrator API | Provisioned automatically by the AWS Load Balancer Controller |
| S3 | Hosts custom bundles for the installer | Standard bucket with versioning enabled |
| ECR (optional) | Private container registry for platform images | One repository per image (orchestrator, adapter, portal) |

Networking

  • EKS nodes must have network access to the RDS instance (same VPC or peered VPC, security group rules allowing port 5432).
  • EKS nodes must have network access to MSK brokers (security group rules allowing port 9092 or 9094 for TLS).
  • The ALB must be reachable from your users' network (public or internal, depending on your security posture).
  • If hosting custom bundles on S3, the installer pod must have IAM permissions to access the bucket. See Custom Bundle S3 IAM Policy below.

Custom Bundle S3 IAM Policy

If you host your custom bundle .zip on S3 and use direct S3 access (rather than pre-signed URLs), the installer job needs an IAM policy granting least-privilege access to the bundle bucket. Attach this policy via IAM Roles for Service Accounts (IRSA) to the installer's service account in the conncentric namespace, or to the EKS node IAM role if IRSA is not available.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowBundleDownload",
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::your-conncentric-bundles/*"
    },
    {
      "Sid": "AllowBucketListing",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::your-conncentric-bundles"
    },
    {
      "Sid": "AllowKMSDecrypt",
      "Effect": "Allow",
      "Action": [
        "kms:Decrypt",
        "kms:GenerateDataKey"
      ],
      "Resource": "arn:aws:kms:us-east-1:123456789012:key/your-kms-key-id"
    }
  ]
}

Replace your-conncentric-bundles with the name of your S3 bucket. If your bucket uses SSE-KMS encryption (recommended), replace the KMS key ARN with the ARN of the key used to encrypt the bucket. If your bucket uses SSE-S3 (AES-256) encryption instead, you can remove the AllowKMSDecrypt statement.
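To avoid hand-editing the policy JSON, it can be rendered from shell variables and syntax-checked before you attach it. This is a sketch; BUCKET and KMS_KEY_ARN are placeholders for your own values:

```shell
# Placeholders: substitute your bucket name and KMS key ARN.
BUCKET="your-conncentric-bundles"
KMS_KEY_ARN="arn:aws:kms:us-east-1:123456789012:key/your-kms-key-id"

# Render the policy with the values expanded by the heredoc.
cat > bundle-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowBundleDownload",
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::${BUCKET}/*"
    },
    {
      "Sid": "AllowBucketListing",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::${BUCKET}"
    },
    {
      "Sid": "AllowKMSDecrypt",
      "Effect": "Allow",
      "Action": ["kms:Decrypt", "kms:GenerateDataKey"],
      "Resource": "${KMS_KEY_ARN}"
    }
  ]
}
EOF

# Fail fast if the rendered document is not valid JSON.
python3 -m json.tool bundle-policy.json > /dev/null && echo "policy OK"
```

The rendered file can then be attached with aws iam create-policy --policy-name conncentric-installer-s3-read --policy-document file://bundle-policy.json (the policy name here is a placeholder).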

IRSA (recommended): Create an IAM role with this policy and annotate the installer's Kubernetes service account:

kubectl annotate serviceaccount conncentric-installer \
  -n conncentric \
  eks.amazonaws.com/role-arn=arn:aws:iam::123456789012:role/conncentric-installer-s3-read

Node role (fallback): If IRSA is not configured, attach the policy directly to the IAM role used by your EKS managed node group. This grants all pods on the node access to the bucket, so IRSA is preferred for tighter scoping.

If you prefer not to grant IAM access, use pre-signed URLs instead. See the Plugin Pipeline section for details on generating pre-signed URLs.

Kubernetes Tooling

  • kubectl configured for your EKS cluster
  • helm 4.1+
  • AWS Load Balancer Controller installed in the cluster (for automatic ALB provisioning via Ingress resources)

Work in Progress

Google Cloud Platform (GCP) deployment guidance (GKE, Cloud SQL, Confluent/Strimzi on GKE) is planned and will be added in a future release.

Work in Progress

Microsoft Azure deployment guidance (AKS, Azure Database for PostgreSQL, Azure Event Hubs or Confluent on AKS) is planned and will be added in a future release.


The SDK and Your Developers

The Conncentric Plugin SDK is how developers build custom processors, connectors, and transformers. The SDK JAR ships inside the Conncentric distribution archive, but it should never be committed directly to a Git repository. Binary files in Git cause repository bloat, make diffs unreadable, and create merge conflicts that are impossible to resolve.

Instead, extract the SDK JAR and publish it to a private artifact registry that your developers reference from their Gradle builds.

Step 1: Extract the SDK JAR

The SDK JAR is located inside the Conncentric distribution archive:

unzip conncentric-distribution-*.zip -d conncentric-dist
ls conncentric-dist/sdk/
# conncentric-sdk.jar

Step 2: Publish to a Private Registry

Choose one of the following registries. Both are common in AWS environments.

Option A: AWS CodeArtifact

Create a CodeArtifact repository if you do not already have one:

aws codeartifact create-repository \
  --domain your-domain \
  --repository conncentric \
  --region us-east-1

Retrieve an authorization token and publish the JAR using Maven coordinates:

export CODEARTIFACT_AUTH_TOKEN=$(aws codeartifact get-authorization-token \
  --domain your-domain \
  --query authorizationToken \
  --output text)

mvn deploy:deploy-file \
  -DgroupId=com.connamara \
  -DartifactId=conncentric-sdk \
  -Dversion=1.0.0 \
  -Dpackaging=jar \
  -Dfile=conncentric-dist/sdk/conncentric-sdk.jar \
  -DrepositoryId=codeartifact \
  -Durl=https://your-domain-123456789.d.codeartifact.us-east-1.amazonaws.com/maven/conncentric/

Option B: GitHub Packages

If your organization uses GitHub, publish the JAR to GitHub Packages:

mvn deploy:deploy-file \
  -DgroupId=com.connamara \
  -DartifactId=conncentric-sdk \
  -Dversion=1.0.0 \
  -Dpackaging=jar \
  -Dfile=conncentric-dist/sdk/conncentric-sdk.jar \
  -DrepositoryId=github \
  -Durl=https://maven.pkg.github.com/YOUR-ORG/conncentric-sdk

Authenticate with a personal access token that has the write:packages scope.
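Maven resolves the credentials for -DrepositoryId=github from a matching server entry in ~/.m2/settings.xml. A sketch of that entry follows; YOUR-USERNAME is a placeholder, and the token is read from an environment variable rather than stored in the file:

```xml
<settings>
  <servers>
    <server>
      <!-- The id must match the -DrepositoryId passed to deploy:deploy-file -->
      <id>github</id>
      <username>YOUR-USERNAME</username>
      <password>${env.GITHUB_TOKEN}</password>
    </server>
  </servers>
</settings>
```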

Step 3: Reference the Registry in Gradle

Once published, developers reference the SDK from the private registry instead of a local file. In build.gradle.kts:

repositories {
    maven {
        url = uri("https://your-domain-123456789.d.codeartifact.us-east-1.amazonaws.com/maven/conncentric/")
        credentials {
            username = "aws"
            password = System.getenv("CODEARTIFACT_AUTH_TOKEN")
        }
    }
}

dependencies {
    compileOnly("com.connamara:conncentric-sdk:1.0.0")
}

For GitHub Packages, replace the url and credentials with your GitHub repository URL and token.

Do Not Commit the JAR to Git

Committing conncentric-sdk.jar directly to your Git repository causes repository bloat and makes updates painful. Always consume the SDK from a registry. If a developer needs offline access, they can cache the artifact locally with ./gradlew dependencies, which stores it in the standard Gradle cache.


The Plugin Pipeline

This section explains how a developer's custom plugin goes from source code on their laptop to a running component inside your production cluster. The story has four chapters.

Chapter 1: The Developer Builds a Plugin

A developer writes a custom processor, connector, or transformer using the Plugin SDK. They test it locally, then build the JAR:

./gradlew build

The output is a single JAR file, for example my-custom-processor-1.0.0.jar, located in build/libs/.
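CI steps later in this pipeline name the bundle after the plugin version, so it can help to derive the version from the JAR filename rather than hard-coding it. A sketch using shell parameter expansion, with the filename matching the example above:

```shell
# In CI this would be: JAR=$(basename build/libs/*.jar)
JAR="my-custom-processor-1.0.0.jar"

# Strip everything up to the last '-' and the .jar suffix to get the version.
VERSION="${JAR##*-}"
VERSION="${VERSION%.jar}"

echo "custom-bundle-v${VERSION}.zip"   # prints custom-bundle-v1.0.0.zip
```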

Chapter 2: CI Packages the Custom Bundle

When the developer pushes code to the main branch, a CI pipeline (GitHub Actions, GitLab CI, Jenkins, or any automation system) performs the following steps:

  1. Build the plugin JAR:

# Example: GitHub Actions step
- name: Build plugin
  run: ./gradlew build

  2. Assemble the custom bundle using the structure described in Custom Bundles & Extensibility:

mkdir -p custom-bundle/manifests custom-bundle/plugins custom-bundle/artifacts

# Copy plugin JARs
cp build/libs/my-custom-processor-1.0.0.jar custom-bundle/plugins/

# Copy adapter manifests (from version control or Portal export)
cp manifests/*.json custom-bundle/manifests/

# Copy any auxiliary files
cp dictionaries/*.xml custom-bundle/artifacts/

# Package
cd custom-bundle && zip -r ../custom-bundle-v1.0.0.zip manifests/ plugins/ artifacts/

  3. Upload to S3:

aws s3 cp custom-bundle-v1.0.0.zip s3://your-conncentric-bundles/custom-bundle-v1.0.0.zip

Generate a pre-signed URL if the bucket is not publicly accessible:

aws s3 presign s3://your-conncentric-bundles/custom-bundle-v1.0.0.zip \
  --expires-in 604800

This URL is valid for 7 days (604800 seconds). For production, consider using IAM Roles for Service Accounts so the installer pod can access S3 directly without pre-signed URLs.

Chapter 3: Helm Deploys the Bundle

Update your Helm values.yaml with the bundle URL:

installer:
  enabled: true
  customBundleUrls:
    - "https://your-conncentric-bundles.s3.amazonaws.com/custom-bundle-v1.0.0.zip"
env:
  KAFKA_BOOTSTRAP_SERVERS: "your-msk-broker-1:9092,your-msk-broker-2:9092"

Then deploy:

helm upgrade conncentric ./deployment/charts/conncentric \
  -n conncentric \
  -f values.yaml

Chapter 4: The Two-Pass Installer Applies It

After the platform pods start, the installer job runs automatically:

  1. Pass 1 (Essentials): Applies the base Conncentric distribution.
  2. Pass 2 (Custom Bundle): Downloads the .zip from the configured URL, extracts it, and applies manifests, plugins, and artifacts on top of the base.

The installer is idempotent. Unchanged items are left alone, new items are created, and modified items are updated in place.

Monitor the installer job:

kubectl logs -f job/conncentric-installer-<revision> -n conncentric

You should see output like:

Pass 1: Applying Conncentric Essentials...
Pass 1: Complete.
Pass 2: Downloading Custom Bundle from https://your-conncentric-bundles.s3.amazonaws.com/custom-bundle-v1.0.0.zip...
Pass 2: Applying Custom Bundle...
Pass 2: Complete.
Installer finished successfully.
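In CI it is useful to fail the deploy stage when that final line never appears. A minimal gate can grep the captured job log. This is a sketch: here the log is written from a local heredoc so the snippet is self-contained, whereas in CI it would be captured from the kubectl logs command shown earlier:

```shell
# In CI: kubectl logs job/conncentric-installer-<revision> -n conncentric > installer.log
cat > installer.log <<'EOF'
Pass 1: Applying Conncentric Essentials...
Pass 1: Complete.
Pass 2: Applying Custom Bundle...
Pass 2: Complete.
Installer finished successfully.
EOF

# Gate the pipeline on the installer's success marker.
if grep -q '^Installer finished successfully\.$' installer.log; then
  echo "deploy OK"
else
  echo "deploy FAILED" >&2
  exit 1
fi
```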

Pipeline Flow Diagram


Next Steps