# support
k
Hi, just testing out the Signoz integration with Vector. We use Vector for log shipping, and I currently have Signoz self-hosted on our K8s cluster. Vector recently added an opentelemetry sink (https://vector.dev/docs/reference/configuration/sinks/opentelemetry/), and I'm trying to get it working, but it doesn't appear to be sending to Signoz successfully. This is what I'm trying for my Vector sink. Any ideas?
```yaml
service-signoz-sink:
  type: opentelemetry
  inputs: [ "service-transform" ]
  protocol:
    type: http
    uri: http://signoz-otel-collector.signoz.svc.cluster.local:4318/v1/logs
    method: post
    encoding:
      codec: json
    framing:
      method: newline_delimited
    headers:
      content-type: application/json
  healthcheck:
    enabled: true
```
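One thing that may be worth checking: as far as I can tell from the Vector docs, the opentelemetry sink is a thin wrapper over the http sink and does not reshape events into OTLP, so the JSON body has to already look like an OTLP/HTTP logs payload (a top-level `resourceLogs` array). Below is a rough sketch of a remap transform that wraps each event that way before it reaches the sink; the transform name `service-otlp-shape`, the `service.name` value, and the severity are placeholders, not anything from this thread:

```yaml
service-otlp-shape:
  type: remap
  inputs: [ "service-transform" ]
  source: |
    # Keep the original event so it can be embedded in the OTLP body.
    event = .
    # Rebuild the event as an OTLP/JSON logs request (resourceLogs > scopeLogs > logRecords).
    . = {
      "resourceLogs": [{
        "resource": {
          "attributes": [
            { "key": "service.name", "value": { "stringValue": "vector" } }
          ]
        },
        "scopeLogs": [{
          "logRecords": [{
            "timeUnixNano": to_string(to_unix_timestamp(now(), unit: "nanoseconds")),
            "severityText": "INFO",
            "body": { "stringValue": encode_json(event) }
          }]
        }]
      }]
    }
```

With something like this in place, the sink's `inputs` would point at `service-otlp-shape` instead of `service-transform`. If the pipeline is already producing OTLP-shaped events upstream, this obviously doesn't apply.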
d
I'm also using the Vector aggregator and facing the same issue: no data is being ingested into the signoz-otel-collector.
@Prashant Shahi @Srikanth Chekuri Can we push directly from the Vector aggregator to Signoz ClickHouse? If so, which fields need to be present in the payload we send?
I can see the signoz-otel-collector receiving the requests and even returning status 200, yet no logs are getting ingested into signoz-clickhouse.
```
trace_id: ea054fa5b90019b9ee037cadafe8c14d span_id: c5165c0a3c9c0f8a
Status{Code=Unset, description=""}
Attributes:{http.method=POST, http.request_content_length=10172423, http.response_content_length=21, http.scheme=http, http.status_code=200, http.target=/v1/logs, net.host.name=signoz-otel-collector.platform.svc.cluster.local, net.host.port=4318, net.protocol.version=1.1, net.sock.peer.addr=10.10.243.43, net.sock.peer.port=55318, user_agent.original=Vector/0.44.0 (aarch64-unknown-linux-gnu 3cdc7c3 2025-01-13 21:26:04.735691656)}
```
I also tried adding the debug exporter, but it is not printing anything to the console. I'm using this config for the otel-collector:
```yaml
config:
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317
            # max_recv_msg_size_mib: 500
          http:
            endpoint: 0.0.0.0:4318
      jaeger:
        protocols:
          grpc:
            endpoint: 0.0.0.0:14250
          thrift_http:
            endpoint: 0.0.0.0:14268
            # Uncomment to enable the thrift_compact receiver.
            # You will also have to enable it in `otelCollector.ports`.
            # thrift_compact:
            #   endpoint: 0.0.0.0:6831
      httplogreceiver/heroku:
        # endpoint specifies the network interface and port which will receive data
        endpoint: 0.0.0.0:8081
        source: heroku
      httplogreceiver/json:
        # endpoint specifies the network interface and port which will receive data
        endpoint: 0.0.0.0:8082
        source: json
    processors:
      # Batch processor config.
      # ref: <https://github.com/open-telemetry/opentelemetry-collector/blob/main/processor/batchprocessor/README.md>
      batch:
        send_batch_size: 10000
        timeout: 1s
      # Memory Limiter processor.
      # If not set, will be overridden with values based on k8s resource limits.
      # ref: <https://github.com/open-telemetry/opentelemetry-collector/blob/main/processor/memorylimiterprocessor/README.md>
      # memory_limiter: null
      signozspanmetrics/delta:
        metrics_exporter: clickhousemetricswrite
        latency_histogram_buckets: [100us, 1ms, 2ms, 6ms, 10ms, 50ms, 100ms, 250ms, 500ms, 1000ms, 1400ms, 2000ms, 5s, 10s, 20s, 40s, 60s]
        dimensions_cache_size: 100000
        dimensions:
          - name: service.namespace
            default: default
          - name: deployment.environment
            default: default
          - name: signoz.collector.id
        aggregation_temporality: AGGREGATION_TEMPORALITY_DELTA
    extensions:
      health_check:
        endpoint: 0.0.0.0:13133
      zpages:
        endpoint: 0.0.0.0:55679
      pprof:
        endpoint: 0.0.0.0:1777
    exporters:
      debug:
        verbosity: detailed
      clickhousetraces:
        datasource: tcp://${env:CLICKHOUSE_USER}:${env:CLICKHOUSE_PASSWORD}@${env:CLICKHOUSE_HOST}:${env:CLICKHOUSE_PORT}/${env:CLICKHOUSE_TRACE_DATABASE}
        low_cardinal_exception_grouping: ${env:LOW_CARDINAL_EXCEPTION_GROUPING}
        use_new_schema: true
      clickhousemetricswrite:
        endpoint: tcp://${env:CLICKHOUSE_USER}:${env:CLICKHOUSE_PASSWORD}@${env:CLICKHOUSE_HOST}:${env:CLICKHOUSE_PORT}/${env:CLICKHOUSE_DATABASE}
        timeout: 15s
        resource_to_telemetry_conversion:
          enabled: true
      clickhouselogsexporter:
        dsn: tcp://${env:CLICKHOUSE_USER}:${env:CLICKHOUSE_PASSWORD}@${env:CLICKHOUSE_HOST}:${env:CLICKHOUSE_PORT}/${env:CLICKHOUSE_LOG_DATABASE}
        timeout: 10s
        use_new_schema: true
      metadataexporter:
        dsn: tcp://${env:CLICKHOUSE_USER}:${env:CLICKHOUSE_PASSWORD}@${env:CLICKHOUSE_HOST}:${env:CLICKHOUSE_PORT}/signoz_metadata
        timeout: 10s
        tenant_id: ${env:TENANT_ID}
        cache:
          provider: in_memory
    service:
      telemetry:
        logs:
          encoding: json
          level: debug
        metrics:
          address: 0.0.0.0:8888
          level: detailed
      extensions: [health_check, zpages, pprof]
      pipelines:
        traces:
          receivers: [otlp, jaeger]
          processors: [batch]
          exporters: [clickhousetraces, metadataexporter]
        metrics:
          receivers: [otlp]
          processors: [batch]
          exporters: [clickhousemetricswrite, metadataexporter]
        logs:
          receivers: [otlp]
          processors: [batch]
          exporters: [debug, clickhouselogsexporter, metadataexporter]
```
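Another angle that avoids the OTLP payload shape entirely: the config above already defines an `httplogreceiver/json` listening on 0.0.0.0:8082, but it isn't listed under the logs pipeline's `receivers`, so nothing sent to it would be ingested. Assuming it is added to the pipeline and port 8082 is exposed on the collector Service, a plain Vector `http` sink could post JSON logs to it. This is only a sketch (the sink name and hostname are placeholders), and I'm not certain whether the `json` source expects newline-delimited objects or a single JSON array, so the framing is worth checking against the SigNoz docs:

```yaml
# Hypothetical Vector sink pointed at the collector's httplogreceiver/json.
# The collector's logs pipeline would also need:
#   receivers: [otlp, httplogreceiver/json]
service-signoz-json-sink:
  type: http
  inputs: [ "service-transform" ]
  uri: http://signoz-otel-collector.signoz.svc.cluster.local:8082
  method: post
  encoding:
    codec: json
  framing:
    method: newline_delimited
  request:
    headers:
      content-type: application/json
```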