Logging & Log Aggregation
All Conncentric components (Orchestrator, Adapter) write structured JSON logs to stdout, captured by the standard Kubernetes container runtime log driver. This page defines the log schema, shows example log lines, and explains how to configure your log aggregation stack to parse and index Conncentric logs.
JSON Log Schema
Every log line is a single JSON object. The following fields are present in all log entries:
| Field | Type | Description |
|---|---|---|
| `@timestamp` | string (ISO 8601) | Time the log event was produced, e.g. `2026-04-14T18:32:01.482Z` |
| `level` | string | Log severity: `TRACE`, `DEBUG`, `INFO`, `WARN`, `ERROR` |
| `logger` | string | The internal component that produced the log entry |
| `message` | string | Human-readable description of the event |
| `thread` | string | The thread name that produced the entry |
| `component` | string | Which platform component emitted the log: `orchestrator` or `adapter` |
Adapter-Specific Fields
Log entries produced by an Adapter pod include additional contextual fields when an adapter session is active:
| Field | Type | Description |
|---|---|---|
| `adapter_id` | string (UUID) | The logical adapter ID. Use this field to filter logs for a specific adapter. |
| `adapter_name` | string | The human-readable display name of the adapter |
| `plugin_key` | string | The plugin that produced the log entry (e.g., the connector or processor key). Present only for plugin-originated log lines. |
Error Fields
When an exception occurs, two additional fields appear:
| Field | Type | Description |
|---|---|---|
| `error_type` | string | The exception class name |
| `stack_trace` | string | The full stack trace, newline-delimited |
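To confirm a consumer can rely on these fields, a log line can be parsed with any JSON library. The following is a minimal Python sketch (the sample line reuses the schema above; it is illustrative, not captured output):

```python
import json

def summarize(line: str) -> str:
    """Render one Conncentric JSON log line as a short human-readable summary."""
    entry = json.loads(line)
    # Core fields are always present; error fields appear only when an exception occurred.
    summary = f"{entry['@timestamp']} [{entry['level']}] {entry['component']}: {entry['message']}"
    if "error_type" in entry:
        summary += f" ({entry['error_type']})"
    return summary

line = '{"@timestamp": "2026-04-14T18:32:12.903Z", "level": "ERROR", "logger": "com.connamara.adapter", "message": "Connection refused by remote host", "thread": "pool-3-thread-1", "component": "adapter", "error_type": "ConnectException", "stack_trace": "ConnectException: Connection refused"}'
print(summarize(line))
# → 2026-04-14T18:32:12.903Z [ERROR] adapter: Connection refused by remote host (ConnectException)
```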
Example Log Lines
Normal Adapter Event (INFO)
```json
{
  "@timestamp": "2026-04-14T18:32:01.482Z",
  "level": "INFO",
  "logger": "com.connamara.adapter",
  "message": "Pipeline started successfully",
  "thread": "pool-3-thread-1",
  "component": "adapter",
  "adapter_id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
  "adapter_name": "Production Gateway"
}
```
Orchestrator Lease Event (INFO)
```json
{
  "@timestamp": "2026-04-14T18:32:05.110Z",
  "level": "INFO",
  "logger": "com.connamara.orchestrator",
  "message": "Lease granted to instance conncentric-adapter-7b9f4 for adapter a1b2c3d4",
  "thread": "pool-1-thread-1",
  "component": "orchestrator"
}
```
Adapter Error with Stack Trace
```json
{
  "@timestamp": "2026-04-14T18:32:12.903Z",
  "level": "ERROR",
  "logger": "com.connamara.adapter",
  "message": "Connection refused by remote host",
  "thread": "pool-3-thread-1",
  "component": "adapter",
  "adapter_id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
  "adapter_name": "Production Gateway",
  "plugin_key": "fix-initiator",
  "error_type": "ConnectException",
  "stack_trace": "ConnectException: Connection refused\n\tat ..."
}
```
Collecting Logs in Kubernetes
Conncentric does not ship a log forwarder. Use your cluster's standard log collection mechanism. Because all logs are structured JSON written to stdout, any Kubernetes-native log collector will capture them without custom parsing.
Common options:
- Fluent Bit / Fluentd: Most common choice for forwarding to Elasticsearch, OpenSearch, CloudWatch, or S3.
- Promtail: If you use Grafana Loki.
- Datadog Agent: If you use Datadog log management.
- Splunk Connect for Kubernetes: If you use Splunk.
Log Aggregation Configuration
Fluent Bit Example
The following Fluent Bit configuration collects JSON logs from the conncentric namespace, parses the structured fields, and forwards them to an Elasticsearch cluster. Deploy Fluent Bit as a DaemonSet in your cluster. The example uses the `docker` parser; on containerd-based clusters, use the built-in `cri` parser instead.
```
[INPUT]
    Name              tail
    Tag               conncentric.*
    Path              /var/log/containers/*conncentric*.log
    Parser            docker
    Refresh_Interval  5

[FILTER]
    Name              kubernetes
    Match             conncentric.*
    Kube_Tag_Prefix   conncentric.var.log.containers.
    Merge_Log         On
    Keep_Log          Off
    K8S-Logging.Parser On

[FILTER]
    Name              grep
    Match             conncentric.*
    Regex             $adapter_id .+

[OUTPUT]
    Name              es
    Match             conncentric.*
    Host              elasticsearch.logging.svc.cluster.local
    Port              9200
    Index             conncentric-logs
    Type              _doc
    Logstash_Format   Off
```
Key points:
- The `Merge_Log On` directive tells Fluent Bit to parse the JSON payload from stdout and promote the structured fields (`adapter_id`, `level`, `component`, etc.) to top-level keys in the output record.
- The `grep` filter in the example above selects only log lines that contain an `adapter_id` field, which isolates adapter session logs from general platform logs. Remove this filter if you want all logs.
- Replace the `[OUTPUT]` block with your backend of choice (CloudWatch, Loki, Datadog, Splunk).
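If you forward to Grafana Loki with Promtail instead, an equivalent configuration promotes the same fields with a `json` pipeline stage. The sketch below is illustrative; the namespace selector and the choice of which fields become labels are assumptions, not shipped defaults:

```yaml
scrape_configs:
  - job_name: conncentric
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Keep only pods in the conncentric namespace (assumed namespace name).
      - source_labels: [__meta_kubernetes_namespace]
        regex: conncentric
        action: keep
      - source_labels: [__meta_kubernetes_namespace]
        target_label: namespace
    pipeline_stages:
      - cri: {}            # use `docker: {}` on Docker-runtime nodes
      - json:
          expressions:
            level: level
            component: component
            adapter_id: adapter_id
      # Promote only low-cardinality fields to labels; leave adapter_id
      # unindexed (high cardinality) and query it with `| json` instead.
      - labels:
          level:
          component:
```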
Datadog Agent
If you use the Datadog Agent as a DaemonSet, Conncentric logs are collected automatically from stdout. Add the following pod annotations to enable JSON parsing and source tagging:
```yaml
annotations:
  ad.datadoghq.com/conncentric-adapter.logs: |
    [{
      "source": "conncentric",
      "service": "conncentric-adapter",
      "log_processing_rules": [{
        "type": "multi_line",
        "name": "stack_traces",
        "pattern": "\\{\"@timestamp"
      }]
    }]
```
The Datadog Agent automatically parses JSON logs. Once ingested, use `@adapter_id` as a facet in Log Explorer to filter by adapter.
Filtering by Adapter ID
The `adapter_id` field is the primary key for isolating logs related to a specific adapter session. Use it in your log aggregation platform:
| Platform | Query Example |
|---|---|
| Elasticsearch / Kibana | `adapter_id: "a1b2c3d4-e5f6-7890-abcd-ef1234567890"` |
| Grafana Loki | See Loki example below |
| Datadog | `@adapter_id:a1b2c3d4-e5f6-7890-abcd-ef1234567890` |
| Splunk | `index=conncentric adapter_id="a1b2c3d4-e5f6-7890-abcd-ef1234567890"` |
| CloudWatch Logs Insights | `fields @timestamp, message \| filter adapter_id = "a1b2c3d4"` |
Grafana Loki example:

```
{namespace="conncentric"} | json | adapter_id="a1b2c3d4-e5f6-7890-abcd-ef1234567890"
```
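For ad-hoc investigation without an aggregation stack, the same filter can be applied directly to raw pod logs. A hedged Python sketch (the `kubectl` invocation and pod name in the comment are illustrative):

```python
import json

def filter_by_adapter(lines, adapter_id):
    """Yield parsed log entries whose adapter_id matches.

    Non-JSON lines (container runtime noise, partial lines) are skipped.
    """
    for line in lines:
        try:
            entry = json.loads(line)
        except json.JSONDecodeError:
            continue
        if entry.get("adapter_id") == adapter_id:
            yield entry

# Typical use: pipe pod logs through the filter, e.g.
#   kubectl logs -n conncentric <adapter-pod> | python filter_logs.py
# Demonstrated here with two in-memory lines instead.
sample = [
    '{"@timestamp": "2026-04-14T18:32:01.482Z", "level": "INFO", "message": "Pipeline started successfully", "adapter_id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890"}',
    '{"@timestamp": "2026-04-14T18:32:05.110Z", "level": "INFO", "message": "Lease granted", "component": "orchestrator"}',
]
matches = list(filter_by_adapter(sample, "a1b2c3d4-e5f6-7890-abcd-ef1234567890"))
print(len(matches))  # → 1
```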
Adjusting Log Levels
Log verbosity can be configured via Helm values:
```yaml
orchestrator:
  env:
    - name: LOGGING_LEVEL_COM_CONNAMARA
      value: DEBUG

adapter:
  env:
    - name: LOGGING_LEVEL_COM_CONNAMARA
      value: DEBUG
```
| Level | When to Use |
|---|---|
| `ERROR` | Production default if you only want failures. May miss important context. |
| `WARN` | Captures warnings and errors. Useful for production monitoring. |
| `INFO` | Recommended production default. Includes lifecycle events, lease transitions, and pipeline status changes. |
| `DEBUG` | Includes detailed internal state transitions. Use for troubleshooting only, as it produces high log volume. |
| `TRACE` | Maximum verbosity. Use only when directed by Connamara support. |
Recommended Practices
- Set the production log level to `INFO`. This provides visibility into adapter lifecycle events without excessive volume.
- Index the `adapter_id`, `component`, and `level` fields as facets in your log platform. These three fields cover the most common filtering patterns.
- Set up alerts on `level:ERROR` with `component:adapter` to catch adapter failures. Cross-reference with the `adapter_id` to identify which adapter is affected.
- Retain logs for at least 30 days to support incident investigation. Adjust based on your organization's retention policy.
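The alerting practice above can be sketched as a Loki ruler rule. The rule name, threshold, evaluation window, and namespace label are assumptions; adapt the query to whichever backend you use:

```yaml
groups:
  - name: conncentric-alerts
    rules:
      - alert: ConncentricAdapterErrors
        # Fire when any adapter logs an ERROR entry in the last 5 minutes.
        expr: |
          sum(count_over_time(
            {namespace="conncentric"} | json | level="ERROR" | component="adapter" [5m]
          )) > 0
        labels:
          severity: warning
        annotations:
          summary: "Conncentric adapter logged ERROR events"
```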