# support
I am using SigNoz for the first time. My setup is running Docker standalone locally. I created a sample Kotlin application to test the integration and am currently trying to publish a gauge metric from the application. Here is the code: https://github.com/iamsubratp/Distributed-tracing-ktor/blob/main/src/main/kotlin/org/humbleshuttler/plugins/Routing.kt#LL31C33-L31C33 For some reason, the gauge metric isn't being published.
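For context, registering a gauge with the OpenTelemetry Java SDK from Kotlin looks roughly like the minimal sketch below. This is not the exact code in Routing.kt; the endpoint, export interval, and the `sample-ktor-app` / `queue.depth` names are stand-ins.

```kotlin
import io.opentelemetry.api.common.Attributes
import io.opentelemetry.exporter.otlp.metrics.OtlpGrpcMetricExporter
import io.opentelemetry.sdk.metrics.SdkMeterProvider
import io.opentelemetry.sdk.metrics.export.PeriodicMetricReader
import java.time.Duration

fun main() {
    // Exporter pointing at the otel-collector's OTLP gRPC endpoint
    // (4317 is the default port in the SigNoz docker setup).
    val exporter = OtlpGrpcMetricExporter.builder()
        .setEndpoint("http://localhost:4317")
        .build()

    // Without a registered metric reader, gauge callbacks are never collected.
    val meterProvider = SdkMeterProvider.builder()
        .registerMetricReader(
            PeriodicMetricReader.builder(exporter)
                .setInterval(Duration.ofSeconds(10))
                .build()
        )
        .build()

    val meter = meterProvider.get("sample-ktor-app")

    // Observable gauge: the callback runs once per collection cycle.
    meter.gaugeBuilder("queue.depth")
        .setDescription("placeholder gauge for testing the pipeline")
        .buildWithCallback { it.record(42.0, Attributes.empty()) }

    Thread.sleep(30_000)      // keep the process alive for a few export cycles
    meterProvider.shutdown()  // triggers a final collection and flush
}
```

One thing the sketch makes explicit: an observable gauge is only exported when a reader such as `PeriodicMetricReader` is registered, and pending points are flushed on `shutdown()`. If no reader is configured, or the process exits before the first interval elapses, nothing reaches the collector.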
If they are not getting published, it might be a problem with exporting. Can you share the collector logs?
I see 2 collectors. The sad part is, I don't see any relevant logs in either collector when emitting metrics.

clickhouse-setup-otel-collector-1
```
2023-06-21 21:21:09 2023-06-22T01:21:09.124Z    warn    internal/warning.go:51  Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks {"kind": "receiver", "name": "otlp", "data_type": "logs", "documentation": "<https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks>"}
2023-06-21 21:21:09 2023-06-22T01:21:09.124Z    info    otlpreceiver@v0.76.1/otlp.go:112        Starting HTTP server    {"kind": "receiver", "name": "otlp", "data_type": "logs", "endpoint": "0.0.0.0:4318"}
2023-06-21 21:21:09 2023-06-22T01:21:09.124Z    warn    internal/warning.go:51  Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks {"kind": "exporter", "data_type": "metrics", "name": "prometheus", "documentation": "<https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks>"}
2023-06-21 21:21:09 2023-06-22T01:21:09.126Z    info    adapter/receiver.go:56  Starting stanza receiver        {"kind": "receiver", "name": "filelog/dockercontainers", "data_type": "logs"}
2023-06-21 21:21:09 2023-06-22T01:21:09.127Z    info    otlpreceiver@v0.76.1/otlp.go:94 Starting GRPC server    {"kind": "receiver", "name": "otlp/spanmetrics", "data_type": "metrics", "endpoint": "localhost:12345"}
2023-06-21 21:21:09 2023-06-22T01:21:09.127Z    info    prometheusreceiver@v0.76.3/metrics_receiver.go:243      Scrape job added        {"kind": "receiver", "name": "prometheus", "data_type": "metrics", "jobName": "otel-collector"}
2023-06-21 21:21:09 2023-06-22T01:21:09.127Z    warn    internal/warning.go:51  Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks {"kind": "receiver", "name": "jaeger", "data_type": "traces", "documentation": "<https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks>"}
2023-06-21 21:21:09 2023-06-22T01:21:09.127Z    warn    internal/warning.go:51  Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks {"kind": "receiver", "name": "jaeger", "data_type": "traces", "documentation": "<https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks>"}
2023-06-21 21:21:09 2023-06-22T01:21:09.127Z    info    healthcheck/handler.go:129      Health Check state change       {"kind": "extension", "name": "health_check", "status": "ready"}
2023-06-21 21:21:09 2023-06-22T01:21:09.127Z    info    service/service.go:146  Everything is ready. Begin running and processing data.
2023-06-21 21:21:09 2023-06-22T01:21:09.127Z    info    prometheusreceiver@v0.76.3/metrics_receiver.go:255      Starting discovery manager      {"kind": "receiver", "name": "prometheus", "data_type": "metrics"}
2023-06-21 21:21:09 2023-06-22T01:21:09.127Z    info    prometheusreceiver@v0.76.3/metrics_receiver.go:289      Starting scrape manager {"kind": "receiver", "name": "prometheus", "data_type": "metrics"}
2023-06-21 21:21:09 2023-06-22T01:21:09.329Z    info    fileconsumer/file.go:198        Started watching file from end. To read preexisting logs, configure the argument 'start_at' to 'beginning'       {"kind": "receiver", "name": "filelog/dockercontainers", "data_type": "logs", "component": "fileconsumer", "path": "/var/lib/docker/containers/34df5e2ce183e81331dd34a79fcb02ed9753127fe9a9406fa20920327803d3a4/34df5e2ce183e81331dd34a79fcb02ed9753127fe9a9406fa20920327803d3a4-json.log"}
2023-06-21 21:21:35 2023-06-22T01:21:35.327Z    info    fileconsumer/file.go:196        Started watching file   {"kind": "receiver", "name": "filelog/dockercontainers", "data_type": "logs", "component": "fileconsumer", "path": "/var/lib/docker/containers/deb53ac2a861c0b95ac6332380bcbbf5a0775d1d911b05f3438fe39805045730/deb53ac2a861c0b95ac6332380bcbbf5a0775d1d911b05f3438fe39805045730-json.log"}
```
clickhouse-setup-otel-collector-metrics-1
```
2023-06-21 21:21:05 time="2023-06-22T01:21:05Z" level=info msg="Executing:\nALTER TABLE signoz_metrics.time_series_v2 ON CLUSTER cluster MODIFY SETTING ttl_only_drop_parts = 1;\n" component=clickhouse
2023-06-21 21:21:06 2023-06-22T01:21:06.251Z    info    service/service.go:129  Starting signoz-otel-collector...       {"Version": "latest", "NumCPU": 4}
2023-06-21 21:21:06 2023-06-22T01:21:06.251Z    info    extensions/extensions.go:41     Starting extensions...
2023-06-21 21:21:06 2023-06-22T01:21:06.251Z    info    extensions/extensions.go:44     Extension is starting...        {"kind": "extension", "name": "health_check"}
2023-06-21 21:21:06 2023-06-22T01:21:06.251Z    info    healthcheckextension@v0.76.3/healthcheckextension.go:45 Starting health_check extension {"kind": "extension", "name": "health_check", "config": {"Endpoint":"0.0.0.0:13133","TLSSetting":null,"CORS":null,"Auth":null,"MaxRequestBodySize":0,"IncludeMetadata":false,"Path":"/","ResponseBody":null,"CheckCollectorPipeline":{"Enabled":false,"Interval":"5m","ExporterFailureThreshold":5}}}
2023-06-21 21:21:06 2023-06-22T01:21:06.252Z    warn    internal/warning.go:51  Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks {"kind": "extension", "name": "health_check", "documentation": "<https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks>"}
2023-06-21 21:21:06 2023-06-22T01:21:06.253Z    info    extensions/extensions.go:48     Extension started.      {"kind": "extension", "name": "health_check"}
2023-06-21 21:21:06 2023-06-22T01:21:06.253Z    info    extensions/extensions.go:44     Extension is starting...        {"kind": "extension", "name": "zpages"}
2023-06-21 21:21:06 2023-06-22T01:21:06.253Z    info    zpagesextension@v0.76.1/zpagesextension.go:64   Registered zPages span processor on tracer provider     {"kind": "extension", "name": "zpages"}
2023-06-21 21:21:06 2023-06-22T01:21:06.253Z    info    zpagesextension@v0.76.1/zpagesextension.go:74   Registered Host's zPages        {"kind": "extension", "name": "zpages"}
2023-06-21 21:21:06 2023-06-22T01:21:06.253Z    info    zpagesextension@v0.76.1/zpagesextension.go:86   Starting zPages extension       {"kind": "extension", "name": "zpages", "config": {"TCPAddr":{"Endpoint":"0.0.0.0:55679"}}}
2023-06-21 21:21:06 2023-06-22T01:21:06.253Z    info    extensions/extensions.go:48     Extension started.      {"kind": "extension", "name": "zpages"}
2023-06-21 21:21:06 2023-06-22T01:21:06.253Z    info    extensions/extensions.go:44     Extension is starting...        {"kind": "extension", "name": "pprof"}
2023-06-21 21:21:06 2023-06-22T01:21:06.253Z    info    pprofextension@v0.76.3/pprofextension.go:71     Starting net/http/pprof server  {"kind": "extension", "name": "pprof", "config": {"TCPAddr":{"Endpoint":"0.0.0.0:1777"},"BlockProfileFraction":0,"MutexProfileFraction":0,"SaveToFile":""}}
2023-06-21 21:21:06 2023-06-22T01:21:06.255Z    info    extensions/extensions.go:48     Extension started.      {"kind": "extension", "name": "pprof"}
2023-06-21 21:21:06 2023-06-22T01:21:06.256Z    info    prometheusreceiver@v0.76.3/metrics_receiver.go:243      Scrape job added        {"kind": "receiver", "name": "prometheus", "data_type": "metrics", "jobName": "otel-collector-metrics"}
2023-06-21 21:21:06 2023-06-22T01:21:06.256Z    info    prometheusreceiver@v0.76.3/metrics_receiver.go:255      Starting discovery manager      {"kind": "receiver", "name": "prometheus", "data_type": "metrics"}
2023-06-21 21:21:06 2023-06-22T01:21:06.256Z    info    prometheusreceiver@v0.76.3/metrics_receiver.go:243      Scrape job added        {"kind": "receiver", "name": "prometheus", "data_type": "metrics", "jobName": "signozspanmetrics-collector"}
2023-06-21 21:21:06 2023-06-22T01:21:06.256Z    info    healthcheck/handler.go:129      Health Check state change       {"kind": "extension", "name": "health_check", "status": "ready"}
2023-06-21 21:21:06 2023-06-22T01:21:06.257Z    info    service/service.go:146  Everything is ready. Begin running and processing data.
2023-06-21 21:21:06 2023-06-22T01:21:06.257Z    info    prometheusreceiver@v0.76.3/metrics_receiver.go:289      Starting scrape manager {"kind": "receiver", "name": "prometheus", "data_type": "metrics"}
```
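One way to isolate whether the collector side works: post a minimal OTLP/JSON gauge straight to the HTTP receiver that the first collector's log shows listening on 0.0.0.0:4318. Below is a sketch assuming the default port mapping; `debug.gauge` and `otlp-http-test` are placeholder names.

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

fun main() {
    val nowNanos = System.currentTimeMillis() * 1_000_000

    // Minimal OTLP/JSON payload carrying a single gauge data point.
    val body = """
        {"resourceMetrics":[{
          "resource":{"attributes":[{"key":"service.name","value":{"stringValue":"otlp-http-test"}}]},
          "scopeMetrics":[{"metrics":[{
            "name":"debug.gauge",
            "gauge":{"dataPoints":[{"asDouble":1.0,"timeUnixNano":"$nowNanos"}]}
          }]}]
        }]}
    """.trimIndent()

    val request = HttpRequest.newBuilder()
        .uri(URI.create("http://localhost:4318/v1/metrics"))
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(body))
        .build()

    val response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString())

    // 200 with an empty JSON object means the receiver accepted the point.
    println("${response.statusCode()} ${response.body()}")
}
```

If this test point shows up in the SigNoz UI while the application's gauge does not, the problem is on the SDK side (no reader registered, wrong endpoint, or the process exiting before the first export) rather than in the collector pipeline.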
Bumping this up as I still could not resolve the issue.
Also, I am a little confused as there are 2 SigNoz OTel collectors running. Can someone provide an architecture doc that explains the flow?