
Use Cases

Conncentric handles connectivity patterns that range from single-venue integrations to multi-protocol enterprise topologies. This page describes the most common deployment patterns and what each one involves.


Venue Connectivity

Inbound: Receive from a venue

Your firm connects to a trading venue, exchange, or ECN. The venue sends execution reports, trade confirmations, or market data. The platform receives those messages and delivers them to your internal systems.

What the platform handles: Session establishment, heartbeat monitoring, automatic reconnection on failure, message transformation, and delivery acknowledgment.

What you configure: Venue connection details (host, port, session identifiers), message filtering rules, output format, and delivery destination.
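As a sketch, an inbound adapter configuration might look like the following. Every key name here (adapter, connection, pipeline, target, and so on) is illustrative, not the platform's actual schema; the host, ports, and session identifiers are placeholders.

```yaml
# Hypothetical inbound venue adapter -- key names are illustrative only.
adapter:
  name: venue-drop-inbound
  mode: initiator              # the platform connects out to the venue
  connection:
    host: fix.venue.example.com
    port: 9443
    senderCompId: MYFIRM
    targetCompId: VENUE
  pipeline:
    filter: "msgType in [ExecutionReport, TradeCaptureReport]"
    outputFormat: json
  target:
    kafka:
      topic: venue.executions.inbound
```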

Outbound: Send to a venue

Your order management system or trading platform produces orders. The platform picks them up, formats them for the venue's protocol, and delivers them.

What the platform handles: Protocol formatting, session management, sequence number tracking, delivery confirmation, and failover. If the active pod fails, a standby takes over the session without losing messages.

What you configure: Source topic, message transformation rules, venue connection details, and redundancy mode.
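A corresponding outbound sketch, again with hypothetical key names, showing the source topic, a transformation rule, and a redundancy mode:

```yaml
# Hypothetical outbound adapter -- names illustrative.
adapter:
  name: oms-to-venue
  source:
    kafka:
      topic: oms.orders.outbound      # where the OMS publishes orders
  transform:
    - map: "internal.Order -> fix.NewOrderSingle"
  connection:
    host: fix.venue.example.com
    port: 9443
  redundancy:
    mode: active-standby              # standby pod resumes the session on failover
```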

Acceptor: Receive inbound connections

A counterparty, prime broker, or client connects to your firm. Instead of building and maintaining a protocol server, the platform listens for their connection and routes their messages into your systems.

What the platform handles: Port management, session acceptance, counterparty authentication via session identifiers, inbound message routing, and failover.

What you configure: Listening port, expected session identifiers, message routing rules, and firewall approvals for the inbound port.
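An acceptor sketch under the same caveats: the platform listens on a port, and only connections presenting the expected session identifiers are accepted.

```yaml
# Hypothetical acceptor configuration -- names illustrative.
adapter:
  name: client-gateway
  mode: acceptor               # the counterparty connects in
  listen:
    port: 8443
  sessions:
    - senderCompId: CLIENTA    # only these identifiers are accepted
      targetCompId: MYFIRM
  routing:
    target:
      kafka:
        topic: clients.clienta.inbound
```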


Protocol Bridging

Connect two systems that speak different protocols

One system sends FIX. Another expects messages on a Kafka topic. A third speaks a proprietary binary format. The platform sits in the middle, translating and routing between them.

Each adapter handles one leg of the connection. Kafka acts as the durable buffer between them, decoupling the two sides so that a failure on one side does not affect the other.

What the platform handles: Protocol translation in both directions, message buffering, independent failover per leg, and format conversion through the pipeline.

What you configure: Two adapters (one per leg), the shared Kafka topic, and the transformation rules for each direction.
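The two-adapter shape described above could be sketched like this, with a shared Kafka topic acting as the durable buffer between the legs; all names are illustrative:

```yaml
# Hypothetical bridge: two adapters joined by one Kafka topic.
adapters:
  - name: fix-leg              # leg 1: terminates the FIX session
    mode: acceptor
    target:
      kafka:
        topic: bridge.orders   # shared durable buffer
  - name: binary-leg           # leg 2: speaks the proprietary format
    source:
      kafka:
        topic: bridge.orders
    transform:
      - map: "fix.NewOrderSingle -> proprietary.Order"
```

Because each leg acknowledges against the topic rather than against the other leg, either side can fail and recover without the other noticing.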

Bridge with fan-out

A single inbound feed can be routed to multiple destinations. The pipeline splits the message stream using conditions, delivering different message types to different systems.

What the platform handles: Condition evaluation per message, parallel delivery to multiple targets, and independent acknowledgment per route.

What you configure: Pipeline routes with conditions (e.g., message type filters), one target per route.
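A fan-out pipeline might be sketched as below; the condition syntax and key names are illustrative, not the platform's actual expression language:

```yaml
# Hypothetical fan-out pipeline -- route conditions are illustrative.
pipeline:
  routes:
    - condition: "msgType == ExecutionReport"
      target: { kafka: { topic: executions } }
    - condition: "msgType == OrderCancelReject"
      target: { kafka: { topic: rejects } }
```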


Multi-Venue Aggregation

Consolidate feeds from many venues into one place

Firms that connect to multiple exchanges, ECNs, or liquidity providers often need a single normalized feed for downstream consumption. Each venue has its own protocol quirks, session rules, and message formats.

Each venue gets its own adapter with its own session configuration and transformation pipeline. All adapters output a normalized format to the same destination. Adding a new venue is a new adapter configuration, not a new integration project.

What the platform handles: Per-venue session management, per-venue transformation pipelines, independent failover, and venue onboarding without code changes.

What you configure: One adapter per venue. Each adapter's pipeline normalizes the venue's message format to your internal standard.
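The per-venue pattern could look like the following sketch: two adapters with different session details and mapping rules, both writing the normalized format to one topic. Hosts, ports, and mapping names are placeholders.

```yaml
# Hypothetical multi-venue setup: one adapter per venue, one shared output.
adapters:
  - name: venue-a
    connection: { host: fix.venue-a.example.com, port: 9443 }
    transform:
      - map: "venueA.ExecReport -> internal.Execution"
    target: { kafka: { topic: executions.normalized } }
  - name: venue-b
    connection: { host: fix.venue-b.example.com, port: 9876 }
    transform:
      - map: "venueB.ExecReport -> internal.Execution"
    target: { kafka: { topic: executions.normalized } }
```

Onboarding a third venue means adding a third entry with its own session details and mapping, nothing more.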


Drop Copy and Compliance

Duplicate a feed for compliance or surveillance

Regulatory requirements often mandate that a copy of all trading activity be captured independently of the primary trading path. The platform can duplicate any message stream to a separate compliance destination without affecting the primary flow.

The pipeline routes every message to both the primary destination and the compliance archive. Each route operates independently: a failure in the compliance path does not block the trading path.

What the platform handles: Parallel delivery, independent acknowledgment, and message immutability (the compliance copy is identical to the original).

What you configure: Two routes in the pipeline: one for the primary destination, one for the compliance archive. No conditions needed if you want a full copy; add conditions to capture only specific message types.
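A drop-copy pipeline with an unconditional compliance route might be sketched as follows (key names illustrative):

```yaml
# Hypothetical drop-copy pipeline: every message goes to both routes.
pipeline:
  routes:
    - name: trading            # primary path
      target: { kafka: { topic: executions } }
    - name: compliance         # full, unfiltered copy; no condition
      target: { kafka: { topic: compliance.archive } }
```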


Market Data Distribution

Fan out a market data feed to multiple consumers

A single market data feed from an exchange contains equities, options, indices, and reference data. Different teams need different slices. The platform receives the full feed once and distributes filtered subsets to each consumer.

What the platform handles: Single connection to the exchange (minimizing session count), per-route filtering, parallel delivery, and isolation between consumers.

What you configure: One adapter, multiple pipeline routes with message type filters, one target per route.
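As a sketch, one market data adapter with filtered routes per consumer team; the filter fields and topic names are illustrative:

```yaml
# Hypothetical market data fan-out: one venue session, filtered routes.
adapter:
  name: exchange-md
  pipeline:
    routes:
      - condition: "assetClass == Equity"
        target: { kafka: { topic: md.equities } }
      - condition: "assetClass == Option"
        target: { kafka: { topic: md.options } }
      - condition: "msgType == ReferenceData"
        target: { kafka: { topic: md.reference } }
```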


Multi-Environment Promotion

Manage connections across dev, staging, and production

Enterprise deployments typically maintain multiple environments. The platform supports a GitOps workflow where adapter configurations are authored once and promoted through environments using Helm and environment variables.

The same bundle is deployed to every environment. Only the environment variables change (hostnames, ports, broker addresses). Configuration drift between environments is eliminated because the source of truth is a single versioned artifact.

What the platform handles: Environment variable substitution at deploy time, idempotent installation (unchanged adapters are left alone), and automated rollout via Helm.

What you configure: One bundle, one Helm values file per environment with the environment-specific variables. See Custom Bundles & Extensibility and CI/CD Integration.
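The per-environment values file might look like the sketch below, assuming the bundle reads connection details from environment variables; the file name and variable names are illustrative:

```yaml
# values-prod.yaml -- hypothetical per-environment Helm values.
# The same bundle is deployed everywhere; only these values differ
# between values-dev.yaml, values-staging.yaml, and values-prod.yaml.
env:
  VENUE_HOST: fix.venue.example.com
  VENUE_PORT: "9443"
  KAFKA_BROKERS: kafka-prod.internal:9092
```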


Custom Protocol Integration

Connect to a system the platform doesn't natively support

Not every system speaks FIX or Kafka. Internal binary feeds, proprietary APIs, legacy message queues, and custom TCP protocols are common in financial firms. The Plugin SDK lets your developers build a custom connector that integrates with the platform's pipeline, monitoring, and failover, the same way the built-in plugins do.

What the platform handles: Plugin loading, configuration form generation in the Portal, lifecycle management (start, stop, failover), metrics collection, and log aggregation. Your plugin handles the protocol; the platform handles everything around it.

What you build: A Processor, Transformer, or Connector using the Plugin SDK. See Getting Started with the Plugin SDK.


Choosing a Pattern

I need to... | Pattern | Adapters
Receive from one venue | Venue Connectivity (Inbound) | 1
Send to one venue | Venue Connectivity (Outbound) | 1
Accept a counterparty's connection | Venue Connectivity (Acceptor) | 1
Connect two systems via Kafka | Protocol Bridging | 2
Route different message types to different places | Fan-out | 1
Aggregate feeds from many venues | Multi-Venue Aggregation | 1 per venue
Duplicate a feed for compliance | Drop Copy | 1
Fan out market data to teams | Market Data Distribution | 1
Promote configs across environments | Multi-Environment Promotion | N/A (ops workflow)
Connect to a proprietary protocol | Custom Protocol Integration | 1 + custom plugin

For step-by-step setup instructions for FIX-specific patterns, see Common FIX Use Cases.