Troubleshooting
Structured Diagnostic Workflow
If you encounter an issue, follow this checklist in order to isolate the root cause:
1. The "Golden Signals" of the Platform
Before diving deep, verify these health signals:
- Orchestrator Liveness: `curl -f http://orchestrator/health` should return `UP`.
- Database Connectivity: Check Orchestrator logs for successful database connection messages.
- Adapter Heartbeats: The Portal's Adapters list should show adapters as `Active` with no entries in `ERROR` state.
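The first two signals can be scripted as a quick preflight check. This is a minimal sketch: the health URL and namespace defaults are taken from the examples in this guide and may differ in your deployment, and the adapter heartbeat still has to be checked in the Portal.

```shell
#!/usr/bin/env bash
# Quick "golden signals" preflight. NAMESPACE and HEALTH_URL are
# illustrative defaults; override them for your environment.
NAMESPACE="${NAMESPACE:-conncentric}"
HEALTH_URL="${HEALTH_URL:-http://orchestrator/health}"

check_orchestrator() {
  # -f makes curl exit non-zero on HTTP errors, -s silences progress output
  curl -fs "$HEALTH_URL" && echo "orchestrator: UP"
}

check_pods() {
  # Print any pod that is not in the Running state (column 3 of the listing)
  kubectl get pods -n "$NAMESPACE" --no-headers | awk '$3 != "Running"'
}
```

Run `check_orchestrator && check_pods` from a machine with cluster access; an empty `check_pods` output means every pod is `Running`.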
Where to Look First
Check What's Running
kubectl get pods -n conncentric
All three pods (Orchestrator, Adapter, and Portal) should show `Running`. If one is in `CrashLoopBackOff` or `Error`, check its logs:
kubectl logs <pod-name> -n conncentric --tail=200
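When a log tail is long, it helps to surface the likely failure lines first. A small helper, sketched here with illustrative grep keywords (adjust them to whatever your logs actually emit):

```shell
# Pull recent logs from a pod and filter for common failure keywords.
# The keyword list is an assumption, not an exhaustive pattern.
triage_logs() {
  local pod="$1" ns="${2:-conncentric}"
  kubectl logs "$pod" -n "$ns" --tail=500 \
    | grep -iE 'error|exception|refused|timeout'
}
# Usage: triage_logs <pod-name>
```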
Check Adapter Status
Open the Portal and navigate to the Adapters list. This shows a summary of all adapters: total count, how many are `ACTIVE`, and any in `ERROR` state. You can see per-adapter operational status. An adapter stuck in `CLAIMING` for more than a few seconds means something is preventing a pod from picking it up.
Common Issues
Adapter stuck in CLAIMING
What it means: No adapter pod has successfully started this session.
Common causes:
- All adapter pods are down (check `kubectl get pods -n conncentric`)
- The adapter is set to `disabled` (check the admin state in the Portal)
- There aren't enough adapter pods to take on more sessions
Fix: Make sure at least one adapter pod is running and the adapter is enabled. If pods are saturated, increase `adapter.replicaCount`.
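If the deployment is managed with Helm, the replica count can be raised in place. A sketch, assuming a release named `conncentric` and a chart reference of `conncentric/conncentric` (substitute your own):

```shell
# Raise the adapter replica count without touching other settings.
# Release name and chart reference below are assumptions.
scale_adapters() {
  local replicas="${1:-3}"
  # --reuse-values keeps your existing overrides and changes only this key
  helm upgrade conncentric conncentric/conncentric \
    --reuse-values \
    --set "adapter.replicaCount=${replicas}" \
    -n conncentric
}
# Usage: scale_adapters 3
```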
Connection established but session never activates
What it means: The adapter is running and can reach the external system, but the protocol session never completes initialization.
Common causes:
- Incorrect session credentials or identifiers
- Protocol version mismatch
- Encryption misconfiguration at the load balancer or network layer
Fix:
- Double-check the connector settings in the Portal
- Test connectivity from inside the pod: `kubectl exec -it <adapter-pod> -n conncentric -- nc -zv <host> <port>`
- Check the event log for the specific rejection message from the external system
- See the Official Plugins documentation for protocol-specific troubleshooting
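For the encryption-misconfiguration case, it can help to inspect what the endpoint actually presents during the TLS handshake. A sketch using `openssl`, with host and port as placeholders:

```shell
# Show the certificate subject, issuer, and validity window that a TLS
# endpoint presents. Host and port are placeholders.
check_tls() {
  local host="$1" port="${2:-443}"
  # </dev/null closes stdin so s_client exits right after the handshake
  openssl s_client -connect "${host}:${port}" -servername "$host" </dev/null \
    | openssl x509 -noout -subject -issuer -dates
}
# Usage: check_tls broker.example.com 9093
```

A certificate issued for the wrong hostname, or one that has expired, points at the load balancer rather than the adapter.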
Database connection refused at startup
What it means: The Orchestrator can't connect to PostgreSQL. You'll see `Connection refused` in its logs.
Common causes:
- The hostname in `database.host` is wrong
- PostgreSQL isn't reachable from inside the cluster
- Wrong credentials
Fix:
- Confirm DNS resolves: `kubectl exec -it <orchestrator-pod> -n conncentric -- nslookup <db-host>`
- Test the connection: `kubectl exec -it <orchestrator-pod> -n conncentric -- psql -h <db-host> -U <username> -d <database>`
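The interactive `psql` test above can also be wrapped as a non-interactive check that exits non-zero when the host, credentials, or database name are wrong. A sketch; pod, host, user, and database names are placeholders, and the password is read from `PGPASSWORD` in your shell:

```shell
# Non-interactive connectivity check: runs SELECT 1 and exits non-zero
# if the connection or credentials fail. All arguments are placeholders.
check_db() {
  local pod="$1" host="$2" user="$3" db="$4"
  # PGPASSWORD and PGCONNECT_TIMEOUT are standard libpq environment variables
  kubectl exec "$pod" -n conncentric -- \
    env PGPASSWORD="$PGPASSWORD" PGCONNECT_TIMEOUT=5 \
    psql -h "$host" -U "$user" -d "$db" -c 'SELECT 1;'
}
# Usage: check_db <orchestrator-pod> <db-host> <username> <database>
```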
Portal shows "Unable to reach the server"
What it means: The Portal loaded but can't talk to the Orchestrator API.
Common causes:
- The Orchestrator pod is down
- The Ingress isn't routing `/api` traffic to the Orchestrator service
Fix:
- Check Orchestrator status: `kubectl get pods -l app=conncentric-orchestrator -n conncentric`
- Check the Ingress config: `kubectl describe ingress -n conncentric`
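You can also probe the route the Portal uses from outside the cluster. A sketch, assuming an HTTPS Ingress and a health endpoint under `/api` (both the hostname and the exact path are assumptions):

```shell
# Probe the /api route through the Ingress. A 404 usually means the path
# rule is missing; a 502/503 means the backend service is unreachable.
# Hostname and path are placeholders.
check_api_route() {
  local host="$1"
  # -i includes the status line and headers in the output, -s is quiet
  curl -is "https://${host}/api/health" | head -n 5
}
# Usage: check_api_route portal.example.com
```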
Adapter active but not processing messages
What it means: The adapter shows `ACTIVE` but no messages are flowing.
Common causes:
- The source has no new messages
- The source endpoint is misconfigured or unreachable
- Pipeline filters are dropping all messages
Fix:
- Verify the source has messages available using the external system's admin interface
- Test connectivity: `kubectl exec -it <adapter-pod> -n conncentric -- nc -zv <host> <port>`
- Check pipeline conditions in the Portal, as a filter may be discarding everything
- See the Official Plugins documentation for protocol-specific troubleshooting
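If a single `nc` probe succeeds but messages still don't flow, the problem may be intermittent connectivity. A repeated probe from inside the adapter pod can catch that; this is a sketch with placeholder names and an arbitrary probe interval:

```shell
# Probe the source endpoint several times to catch flapping connectivity.
# Pod, host, and port are placeholders; the count and interval are arbitrary.
probe_source() {
  local pod="$1" host="$2" port="$3"
  for _ in 1 2 3; do
    # -z: scan without sending data, -w 3: three-second timeout per attempt
    kubectl exec "$pod" -n conncentric -- nc -zv -w 3 "$host" "$port"
    sleep 5
  done
}
# Usage: probe_source <adapter-pod> <host> <port>
```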
Getting Help
When contacting support, include:
- Pod description: `kubectl describe pod <pod-name> -n conncentric`
- Full logs with timestamps: `kubectl logs <pod-name> -n conncentric --timestamps > pod.log`
- Adapter summary from the Portal's Adapters list (screenshot or export)
- Your Helm values (with passwords removed)
- Kubernetes version: `kubectl version`
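The kubectl items above can be gathered in one pass. A sketch of a collection helper; the namespace default and output file names are illustrative, and the Helm values and Portal export still have to be added (and redacted) by hand:

```shell
# Gather pod descriptions, timestamped logs, and version info into one
# directory for a support ticket. Namespace and file names are illustrative.
collect_bundle() {
  local ns="${1:-conncentric}" out="support-$(date +%Y%m%d-%H%M%S)"
  mkdir -p "$out"
  kubectl version > "$out/kubectl-version.txt" 2>&1
  kubectl get pods -n "$ns" -o wide > "$out/pods.txt" 2>&1
  for pod in $(kubectl get pods -n "$ns" -o name); do
    # ${pod##*/} strips the "pod/" prefix that -o name produces
    kubectl describe "$pod" -n "$ns" > "$out/${pod##*/}-describe.txt" 2>&1
    kubectl logs "$pod" -n "$ns" --timestamps > "$out/${pod##*/}.log" 2>&1
  done
  echo "Bundle written to $out/ -- remember to redact credentials"
}
# Usage: collect_bundle conncentric
```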