# support
c
Would anybody know if we can use the open source version of the otel-collector/contrib for Clickhouse versus needing to use a Signoz forked version?
h
I believe the fork is what actually knows about the special schema Signoz's UI needs: https://github.com/SigNoz/signoz-otel-collector/commit/a17d0fc65a2e454fce411262a099d32f444891f0 You can always put the open source collector in-between your app and the Signoz one if you need a newer pipeline library.
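To illustrate that layering, here's a minimal sketch of an upstream otel-collector-contrib config that just receives OTLP and forwards everything to the Signoz collector downstream (the endpoint name here is a placeholder, not a real address):

```yaml
# Upstream open source collector: receive OTLP from apps,
# forward unchanged to the downstream Signoz collector.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

processors:
  batch: {}

exporters:
  otlp:
    # Hypothetical address of the Signoz collector; substitute your own.
    endpoint: "signoz-otel-collector:4317"
    tls:
      insecure: true

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
```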
k
@Cory E Adams As Hien Le mentioned, Signoz maintains its own otel collector to receive metrics/logs/traces, process them, and import them into Clickhouse. But on your own servers where your application runs, you use the open source otel-collector/contrib package to collect metrics/logs/traces on your end, and then export them to the Signoz otel instance. Here is a basic otel config that works with the open source otel collector:
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
  filelog:
    include:
      - /var/log/nginx/access.log
      - /var/log/nginx/error.log

processors:
  resource:
    attributes:
      - key: deployment.environment
        value: production
        action: upsert
      - key: service.name
        value: myapp
        action: upsert
  resourcedetection:
    detectors: [env, system, ec2]
    timeout: 2s
    system:
      hostname_sources: [os]
  batch:
    send_batch_size: 1000
    timeout: 5s

extensions:
  health_check: {}
  zpages: {}

exporters:
  otlp:
    endpoint: "SIGNOZ_IP:PORT"
    tls:
      insecure: true
    timeout: 5s

service:
  extensions: [health_check, zpages]
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [resource, resourcedetection, batch]
      exporters: [otlp]
    traces:
      receivers: [otlp]
      processors: [resource, resourcedetection, batch]
      exporters: [otlp]
    logs:
      receivers: [otlp, filelog]
      processors: [resource, resourcedetection, batch]
      exporters: [otlp]
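If it helps, here's one way to run that config with the stock contrib image — a docker-compose sketch with assumed filenames and mounts, so adjust paths to your setup:

```yaml
# Sketch: run the config above with the official contrib image.
# Filenames and mount paths are assumptions, not from the thread.
services:
  otel-collector:
    image: otel/opentelemetry-collector-contrib:latest
    command: ["--config=/etc/otelcol/config.yaml"]
    volumes:
      - ./otel-config.yaml:/etc/otelcol/config.yaml:ro
      - /var/log/nginx:/var/log/nginx:ro   # so the filelog receiver can read the nginx logs
    ports:
      - "4317:4317"   # OTLP gRPC
      - "4318:4318"   # OTLP HTTP
      - "13133:13133" # health_check extension (default port)
```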
c
Thanks for the responses. It would be interesting if Signoz could contribute its data access code back to the otel-collector project, as we're hoping to avoid maintaining a 3rd layer of otel-collectors. I noticed that the Signoz forked version does not keep up with many of the contrib items that the main otel-collector contrib project has. This means we will maintain otel-collectors with our apps for host metrics, then a centralized otel-collector cluster at scale which will send to kafka, etc., as well as the Signoz otel-collectors which are acting as gateways to Clickhouse. I would have liked the ability to collapse the collector from 3 layers down to 2.
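For what it's worth, that centralized middle layer can use contrib's kafka exporter directly — a sketch, with made-up broker and topic names:

```yaml
# Middle-layer gateway collector: receive OTLP, fan out to Kafka.
# Broker address and topic name are hypothetical.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

processors:
  batch: {}

exporters:
  kafka:
    brokers:
      - kafka-0.example.internal:9092
    topic: otlp_spans
    encoding: otlp_proto

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [kafka]
```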
h
It's a schema specifically for a commercial product (Signoz) with its own release cycle, e.g. various Signoz versions have required migrating Clickhouse or piping to two versions of a schema. I don't think OTel could meaningfully manage the signoz-clickhouse-exporter release cycles or provide any stewardship over pull requests.
c
Understood Hien. Thank you for the explanation.
h
I've always been confused about the layout of the OTel Collector Contrib repo; if it were anything like the client side, I would've expected each processor to be a standalone Go library, but that doesn't seem to be the case. The Signoz version seems extremely stripped down, and maybe even forked for some of the basic processors?
but if you skim the readme of those processors it hints at Collector Contrib not being modularized enough or missing some public APIs
I'm using the OTel Operator with sidecar pods so the pod -> otel-contrib has just always been a part of life. Tracking versions isn't as painful anymore since we switched to Signoz Cloud.
c
I did not know that the otel-collectors had a k8s operator? We are planning on using the Altinity operator for Clickhouse.
We noticed that the Signoz collector was stripped down.
h
We may be running an oddball config, we use this Operator. So we just annotate deployments:
values:
    podAnnotations:
      instrumentation.opentelemetry.io/inject-nodejs: "true"
      sidecar.opentelemetry.io/inject: "otel-sidecard-dev"
That mounts autoinstrumentation into all our pods, then starts an `otc-container` sidecar with an otel-collector-contrib. So we can write standard processor pipelines there, and that sidecar is what forwards to the signoz collector (Cloud in our case).
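For anyone following along, the annotation points at an `OpenTelemetryCollector` custom resource with `mode: sidecar`. A sketch of what ours roughly looks like — the Cloud endpoint and token are placeholders, use the values from your own Signoz account:

```yaml
# Sketch of the sidecar collector the inject annotation references.
# Endpoint region and token below are placeholders, not real values.
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otel-sidecard-dev
spec:
  mode: sidecar
  config: |
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317
    processors:
      batch: {}
    exporters:
      otlp:
        # Placeholder Signoz Cloud ingest endpoint; fill in your region.
        endpoint: "ingest.<region>.signoz.cloud:443"
        headers:
          signoz-access-token: "${SIGNOZ_TOKEN}"
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [batch]
          exporters: [otlp]
```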