
Logging & Log Aggregation

All Conncentric components (Orchestrator, Adapter) write structured JSON logs to stdout, captured by the standard Kubernetes container runtime log driver. This page defines the log schema, shows example log lines, and explains how to configure your log aggregation stack to parse and index Conncentric logs.


JSON Log Schema

Every log line is a single JSON object. The following fields are present in all log entries:

| Field | Type | Description |
|---|---|---|
| @timestamp | string (ISO 8601) | Time the log event was produced, e.g. 2026-04-14T18:32:01.482Z |
| level | string | Log severity: TRACE, DEBUG, INFO, WARN, ERROR |
| logger | string | The internal component that produced the log entry |
| message | string | Human-readable description of the event |
| thread | string | The name of the thread that produced the entry |
| component | string | Which platform component emitted the log: orchestrator or adapter |

Adapter-Specific Fields

Log entries produced by an Adapter pod include additional contextual fields when an adapter session is active:

| Field | Type | Description |
|---|---|---|
| adapter_id | string (UUID) | The logical adapter ID. Use this field to filter logs for a specific adapter. |
| adapter_name | string | The human-readable display name of the adapter |
| plugin_key | string | The plugin that produced the log entry (e.g., the connector or processor key). Present only for plugin-originated log lines. |

Error Fields

When an exception occurs, two additional fields appear:

| Field | Type | Description |
|---|---|---|
| error_type | string | The exception class name |
| stack_trace | string | The full stack trace, newline-delimited |

Example Log Lines

Normal Adapter Event (INFO)

{
  "@timestamp": "2026-04-14T18:32:01.482Z",
  "level": "INFO",
  "logger": "com.connamara.adapter",
  "message": "Pipeline started successfully",
  "thread": "pool-3-thread-1",
  "component": "adapter",
  "adapter_id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
  "adapter_name": "Production Gateway"
}

Orchestrator Lease Event (INFO)

{
  "@timestamp": "2026-04-14T18:32:05.110Z",
  "level": "INFO",
  "logger": "com.connamara.orchestrator",
  "message": "Lease granted to instance conncentric-adapter-7b9f4 for adapter a1b2c3d4",
  "thread": "pool-1-thread-1",
  "component": "orchestrator"
}

Adapter Error with Stack Trace

{
  "@timestamp": "2026-04-14T18:32:12.903Z",
  "level": "ERROR",
  "logger": "com.connamara.adapter",
  "message": "Connection refused by remote host",
  "thread": "pool-3-thread-1",
  "component": "adapter",
  "adapter_id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
  "adapter_name": "Production Gateway",
  "plugin_key": "fix-initiator",
  "error_type": "ConnectException",
  "stack_trace": "ConnectException: Connection refused\n\tat ..."
}
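
Because every log line is a self-contained JSON object, ad-hoc analysis outside your aggregation stack is straightforward. The sketch below (an illustrative example, not part of Conncentric) parses a captured log line and extracts the fields described above; the sample data mirrors the error entry shown in this section:

```python
import json

def parse_log_line(line: str) -> dict:
    """Parse one Conncentric JSON log line into a dict of its structured fields."""
    return json.loads(line)

# Example: the adapter error entry shown above, as a single captured line.
raw = ('{"@timestamp": "2026-04-14T18:32:12.903Z", "level": "ERROR", '
       '"logger": "com.connamara.adapter", "message": "Connection refused by remote host", '
       '"thread": "pool-3-thread-1", "component": "adapter", '
       '"adapter_id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890", '
       '"adapter_name": "Production Gateway", "plugin_key": "fix-initiator", '
       '"error_type": "ConnectException", '
       '"stack_trace": "ConnectException: Connection refused\\n\\tat ..."}')

entry = parse_log_line(raw)
print(entry["adapter_id"])  # a1b2c3d4-e5f6-7890-abcd-ef1234567890
print(entry["error_type"])  # ConnectException
```

A script like this can be fed directly from kubectl logs output for quick triage when the aggregation stack is unavailable.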

Collecting Logs in Kubernetes

Conncentric does not ship a log forwarder. Use your cluster's standard log collection mechanism. Because all logs are structured JSON written to stdout, any Kubernetes-native log collector will capture them without custom parsing.

Common options:

  • Fluent Bit / Fluentd: Most common choice for forwarding to Elasticsearch, OpenSearch, CloudWatch, or S3.
  • Promtail: If you use Grafana Loki.
  • Datadog Agent: If you use Datadog log management.
  • Splunk Connect for Kubernetes: If you use Splunk.

Log Aggregation Configuration

Fluent Bit Example

The following Fluent Bit configuration collects JSON logs from the conncentric namespace, parses the structured fields, and forwards them to an Elasticsearch cluster. Deploy Fluent Bit as a DaemonSet in your cluster.

[INPUT]
    Name                tail
    Tag                 conncentric.*
    Path                /var/log/containers/*conncentric*.log
    Parser              docker
    Refresh_Interval    5

[FILTER]
    Name                kubernetes
    Match               conncentric.*
    Kube_Tag_Prefix     conncentric.var.log.containers.
    Merge_Log           On
    Keep_Log            Off
    K8S-Logging.Parser  On

[FILTER]
    Name                grep
    Match               conncentric.*
    Regex               $adapter_id .+

[OUTPUT]
    Name                es
    Match               conncentric.*
    Host                elasticsearch.logging.svc.cluster.local
    Port                9200
    Index               conncentric-logs
    Type                _doc
    Logstash_Format     Off

Key points:

  • The Merge_Log On directive tells Fluent Bit to parse the JSON payload from stdout and promote the structured fields (adapter_id, level, component, etc.) to top-level keys in the output record.
  • The grep filter in the example above selects only log lines that contain an adapter_id field, which isolates adapter session logs from general platform logs. Remove this filter if you want all logs.
  • Replace the [OUTPUT] block with your backend of choice (CloudWatch, Loki, Datadog, Splunk).
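
As one illustration of swapping the backend, a Grafana Loki output might look like the fragment below. This is a sketch, not a tested configuration: it assumes Fluent Bit's built-in loki output plugin and an in-cluster Loki service reachable at loki.logging.svc.cluster.local; adjust the host, port, and labels for your environment.

```
[OUTPUT]
    Name    loki
    Match   conncentric.*
    Host    loki.logging.svc.cluster.local
    Port    3100
    Labels  job=conncentric, namespace=conncentric
```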

Datadog Agent

If you use the Datadog Agent as a DaemonSet, Conncentric logs are collected automatically from stdout. Add the following pod annotations to enable JSON parsing and source tagging:

annotations:
  ad.datadoghq.com/conncentric-adapter.logs: |
    [{
      "source": "conncentric",
      "service": "conncentric-adapter",
      "log_processing_rules": [{
        "type": "multi_line",
        "name": "stack_traces",
        "pattern": "\\{\"@timestamp"
      }]
    }]

The Datadog Agent automatically parses JSON logs. Once ingested, use @adapter_id as a facet in Log Explorer to filter by adapter.


Filtering by Adapter ID

The adapter_id field is the primary key for isolating logs related to a specific adapter session. Use it in your log aggregation platform:

| Platform | Query Example |
|---|---|
| Elasticsearch / Kibana | adapter_id: "a1b2c3d4-e5f6-7890-abcd-ef1234567890" |
| Grafana Loki | See Loki example below |
| Datadog | @adapter_id:a1b2c3d4-e5f6-7890-abcd-ef1234567890 |
| Splunk | index=conncentric adapter_id="a1b2c3d4-e5f6-7890-abcd-ef1234567890" |
| CloudWatch Logs Insights | fields @timestamp, message \| filter adapter_id = "a1b2c3d4" |

Grafana Loki example:

{namespace="conncentric"} | json | adapter_id="a1b2c3d4-e5f6-7890-abcd-ef1234567890"

Adjusting Log Levels

Log verbosity can be configured via Helm values:

orchestrator:
  env:
    - name: LOGGING_LEVEL_COM_CONNAMARA
      value: DEBUG

adapter:
  env:
    - name: LOGGING_LEVEL_COM_CONNAMARA
      value: DEBUG

| Level | When to Use |
|---|---|
| ERROR | Production default if you only want failures. May miss important context. |
| WARN | Captures warnings and errors. Useful for production monitoring. |
| INFO | Recommended production default. Includes lifecycle events, lease transitions, and pipeline status changes. |
| DEBUG | Includes detailed internal state transitions. Use for troubleshooting only, as it produces high log volume. |
| TRACE | Maximum verbosity. Use only when directed by Connamara support. |

Best Practices

  • Set the production log level to INFO. This provides visibility into adapter lifecycle events without excessive volume.
  • Index the adapter_id, component, and level fields as facets in your log platform. These three fields cover the most common filtering patterns.
  • Set up alerts on level:ERROR with component:adapter to catch adapter failures. Cross-reference with the adapter_id to identify which adapter is affected.
  • Retain logs for at least 30 days to support incident investigation. Adjust based on your organization's retention policy.
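
The alerting recommendation above can be prototyped offline before committing to a platform-specific alert rule. A minimal sketch (illustrative only; the field names match the schema on this page) that counts ERROR-level adapter entries per adapter_id from a stream of log lines:

```python
import json
from collections import Counter

def error_counts(lines):
    """Count ERROR-level adapter log entries per adapter_id."""
    counts = Counter()
    for line in lines:
        try:
            entry = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip non-JSON lines (e.g., runtime noise)
        if entry.get("level") == "ERROR" and entry.get("component") == "adapter":
            counts[entry.get("adapter_id", "unknown")] += 1
    return counts

# Abbreviated sample entries; real lines carry the full schema.
sample = [
    '{"level": "ERROR", "component": "adapter", "adapter_id": "a1b2c3d4"}',
    '{"level": "INFO", "component": "adapter", "adapter_id": "a1b2c3d4"}',
    '{"level": "ERROR", "component": "orchestrator"}',
    'not json',
]
print(error_counts(sample))  # Counter({'a1b2c3d4': 1})
```

Note that the orchestrator error is deliberately excluded: the alert targets component:adapter, and orchestrator failures typically warrant a separate alert with different routing.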