# support
v
I am trying to start the SigNoz cluster via the Helm chart, but the signoz-otel-collector pod keeps crashing with the following error: `collector server run finished with error: failed to get config: cannot unmarshal the configuration: error reading exporters configuration for "clickhouse": 1 error(s) decoding: * '' has invalid keys: datasource`. Any idea what might be wrong?
The error seems to mean there is something wrong in the otelCollector config. Here is what Helm generates when rendering the chart with
helm template debug
The error about an invalid `datasource` key doesn't make sense to me; everything looks fine here:
---
# Source: signoz/templates/otel-collector/config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: release-name-signoz-otel-collector
  labels:
    helm.sh/chart: signoz-0.0.17
    app.kubernetes.io/name: signoz
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/component: otel-collector
    app.kubernetes.io/version: "0.8.1"
    app.kubernetes.io/managed-by: Helm
data:
  otel-collector-config.yaml: |-
    exporters:
      clickhouse:
        datasource: tcp://${CLICKHOUSE_HOST}:${CLICKHOUSE_PORT}/?database=${CLICKHOUSE_TRACE_DATABASE}&username=${CLICKHOUSE_USER}&password=${CLICKHOUSE_PASSWORD}
      clickhousemetricswrite:
        endpoint: tcp://${CLICKHOUSE_HOST}:${CLICKHOUSE_PORT}/?database=${CLICKHOUSE_DATABASE}&username=${CLICKHOUSE_USER}&password=${CLICKHOUSE_PASSWORD}
        resource_to_telemetry_conversion:
          enabled: true
      clickhousetraces:
        datasource: tcp://${CLICKHOUSE_HOST}:${CLICKHOUSE_PORT}/?database=${CLICKHOUSE_TRACE_DATABASE}&username=${CLICKHOUSE_USER}&password=${CLICKHOUSE_PASSWORD}
      prometheus:
        endpoint: 0.0.0.0:8889
    extensions:
      health_check: {}
      oidc:
        audience: https://api.everysens.com/
        issuer_url: https://auth.review.everysens.com/
      zpages: {}
    processors:
      batch:
        send_batch_size: 1000
        timeout: 10s
      signozspanmetrics/prometheus:
        dimensions:
        - default: default
          name: service.namespace
        - default: default
          name: deployment.environment
        dimensions_cache_size: 10000
        latency_histogram_buckets:
        - 100us
        - 1ms
        - 2ms
        - 6ms
        - 10ms
        - 50ms
        - 100ms
        - 250ms
        - 500ms
        - 1000ms
        - 1400ms
        - 2000ms
        - 5s
        - 10s
        - 20s
        - 40s
        - 60s
        metrics_exporter: prometheus
    receivers:
      hostmetrics:
        collection_interval: 30s
        scrapers:
          cpu: {}
          disk: {}
          filesystem: {}
          load: {}
          memory: {}
          network: {}
      jaeger:
        protocols:
          grpc:
            endpoint: 0.0.0.0:14250
          thrift_http:
            endpoint: 0.0.0.0:14268
      otlp:
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317
          http:
            endpoint: 0.0.0.0:4318
      otlp/auth:
        protocols:
          http:
            auth:
              authenticator: oidc
            endpoint: 0.0.0.0:4317
      otlp/spanmetrics:
        protocols:
          grpc:
            endpoint: localhost:12345
    service:
      extensions:
      - health_check
      - zpages
      - oidc
      pipelines:
        metrics:
          exporters:
          - clickhousemetricswrite
          processors:
          - batch
          receivers:
          - otlp/auth
          - hostmetrics
        metrics/spanmetrics:
          exporters:
          - prometheus
          receivers:
          - otlp/spanmetrics
        traces:
          exporters:
          - clickhouse
          processors:
          - signozspanmetrics/prometheus
          - batch
          receivers:
          - jaeger
          - otlp/auth
---
p
Are you using an older version of values.yaml? In a recent release, we migrated the `clickhouse` exporter to `clickhousetraces`. The Otel config should look something like this:
exporters:
      clickhousetraces:
        datasource: tcp://${CLICKHOUSE_HOST}:${CLICKHOUSE_PORT}/?database=${CLICKHOUSE_TRACE_DATABASE}&username=${CLICKHOUSE_USER}&password=${CLICKHOUSE_PASSWORD}
      clickhousemetricswrite:
        endpoint: tcp://${CLICKHOUSE_HOST}:${CLICKHOUSE_PORT}/?database=${CLICKHOUSE_DATABASE}&username=${CLICKHOUSE_USER}&password=${CLICKHOUSE_PASSWORD}
        resource_to_telemetry_conversion:
          enabled: true
      prometheus:
        endpoint: "0.0.0.0:8889"
    service:
      extensions: [health_check, zpages]
      pipelines:
        traces:
          receivers: [jaeger, otlp]
          processors: [signozspanmetrics/prometheus, batch]
          exporters: [clickhousetraces]
        metrics:
          receivers: [otlp, hostmetrics]
          processors: [batch]
          exporters: [clickhousemetricswrite]
        metrics/spanmetrics:
          receivers: [otlp/spanmetrics]
          exporters: [prometheus]
v
Thanks @Prashant Shahi, this was the cause of my issue.
👍 1