Dhairya Patel
01/04/2025, 9:40 AM

extensions:
  health_check: {}
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
  hostmetrics:
    collection_interval: 60s
    scrapers:
      cpu: {}
      disk: {}
      load: {}
      filesystem: {}
      memory: {}
      network: {}
      process:
        mute_process_name_error: true
        mute_process_exe_error: true
        mute_process_io_error: true
processors:
  batch:
    send_batch_size: 500
    send_batch_max_size: 600
    timeout: 10s
  memory_limiter:
    check_interval: 1s
    limit_percentage: 75
    spike_limit_percentage: 15
  resourcedetection:
    detectors: [env, system]
    timeout: 2s
exporters:
  otlp:
    endpoint: 0.0.0.0:9000
    tls:
      insecure: true
  clickhouse:
    endpoint: tcp://localhost:9000
    database: signoz_traces
    username: default
    password: default123
    timeout: 10s
service:
  telemetry:
    metrics:
      address: 0.0.0.0:8888
    logs:
      level: info
  pipelines:
    traces:
      receivers: [otlp]
      processors: [memory_limiter, batch]
      exporters: [otlp, clickhouse]
    metrics:
      receivers: [otlp, hostmetrics]
      processors: [memory_limiter, resourcedetection, batch]
      exporters: [otlp, clickhouse]
    logs:
      receivers: [otlp]
      processors: [memory_limiter, batch]
      exporters: [otlp, clickhouse]
Main questions:
1. What's the correct data flow path? Should it be:
Option A: App → Local OTel collector → SigNoz
OR
Option B: App → Local OTel collector → SigNoz's Docker OTel collector → SigNoz
2. If Option A is correct, what changes do I need in my local OTel collector config to send data directly to SigNoz's ClickHouse?
3. If Option B is recommended, how should I configure the collectors to avoid port conflicts and ensure proper data flow?
Current errors I'm seeing:
Exporting failed. Will retry the request after interval. {"kind": "exporter", "data_type": "metrics", "name": "otlp", "error": "rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: http2: frame too large\"", "interval": "3.79851812s"}
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419808]: panic: runtime error: invalid memory address or nil pointer dereference
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419808]: [signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x625b9613ca3c]
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419808]: goroutine 185 [running]:
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419808]: go.opentelemetry.io/collector/pdata/pcommon.Value.Type(...)
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419808]: go.opentelemetry.io/collector/pdata@v1.22.0/pcommon/value.go:183
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419808]: go.opentelemetry.io/collector/pdata/pcommon.Value.AsString({0x0?, 0xc002684380?})
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419808]: go.opentelemetry.io/collector/pdata@v1.22.0/pcommon/value.go:370 +0x1c
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419808]: github.com/open-telemetry/opentelemetry-collector-contrib/exporter/clickhouseexporter/internal.(*sumMetrics).insert.func1(0x0?)
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419808]: github.com/open-telemetry/opentelemetry-collector-contrib/exporter/clickhouseexporter@v0.116.0/internal/sum_metrics.go:129 +0x365
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419808]: github.com/open-telemetry/opentelemetry-collector-contrib/exporter/clickhouseexporter/internal.doWithTx({0x625ba27ea808?, 0xc0006cb8f0?}, 0x0?, 0xc0017baf40)
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419808]: github.com/open-telemetry/opentelemetry-collector-contrib/exporter/clickhouseexporter@v0.116.0/internal/metrics_model.go:210 +0xcd
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419808]: github.com/open-telemetry/opentelemetry-collector-contrib/exporter/clickhouseexporter/internal.(*sumMetrics).insert(0xc0014a3b60, {0x625ba27ea808, 0xc0006cb8f0}, 0xc001655520)
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419808]: github.com/open-telemetry/opentelemetry-collector-contrib/exporter/clickhouseexporter@v0.116.0/internal/sum_metrics.go:105 +0xc9
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419808]: github.com/open-telemetry/opentelemetry-collector-contrib/exporter/clickhouseexporter/internal.InsertMetrics.func1({0x625ba2797e98?, 0xc0014a3b60?}, 0xc002684390)
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419808]: github.com/open-telemetry/opentelemetry-collector-contrib/exporter/clickhouseexporter@v0.116.0/internal/metrics_model.go:100 +0x43
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419808]: created by github.com/open-telemetry/opentelemetry-collector-contrib/exporter/clickhouseexporter/internal.InsertMetrics in goroutine 193
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419808]: github.com/open-telemetry/opentelemetry-collector-contrib/exporter/clickhouseexporter@v0.116.0/internal/metrics_model.go:99 +0xcc
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC systemd[1]: otelcol-contrib.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC systemd[1]: otelcol-contrib.service: Failed with result 'exit-code'.
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC systemd[1]: otelcol-contrib.service: Scheduled restart job, restart counter is at 2.
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC systemd[1]: Started otelcol-contrib.service - OpenTelemetry Collector.
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419831]: 2025-01-04T15:08:02.290+0530 info service@v0.116.0/service.go:164 Setting up own telemetry...
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419831]: 2025-01-04T15:08:02.290+0530 warn service@v0.116.0/service.go:213 service::telemetry::metrics::address is being deprecated in favor of service::telemetry::metrics::readers
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419831]: 2025-01-04T15:08:02.290+0530 info telemetry/metrics.go:70 Serving metrics {"address": "0.0.0.0:8888", "metrics level": "Normal"}
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419831]: 2025-01-04T15:08:02.291+0530 info memorylimiter@v0.116.0/memorylimiter.go:151 Using percentage memory limiter {"kind": "processor", "name": "memory_limiter", "pipeline": "logs", "total_memory_mib": 31459, "limit_percentage": 75, "spike_limit_percentage": 15}
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419831]: 2025-01-04T15:08:02.291+0530 info memorylimiter@v0.116.0/memorylimiter.go:75 Memory limiter configured {"kind": "processor", "name": "memory_limiter", "pipeline": "logs", "limit_mib": 23594, "spike_limit_mib": 4718, "check_interval": 1}
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419831]: 2025-01-04T15:08:02.293+0530 info service@v0.116.0/service.go:230 Starting otelcol-contrib... {"Version": "0.116.0", "NumCPU": 16}
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419831]: 2025-01-04T15:08:02.293+0530 info extensions/extensions.go:39 Starting extensions...
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419831]: 2025-01-04T15:08:02.295+0530 warn grpc@v1.68.1/clientconn.go:1384 [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "0.0.0.0:9000", ServerName: "0.0.0.0:9000", }. Err: connection error: desc = "error reading server preface: http2: frame too large" {"grpc_log": true}
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419831]: 2025-01-04T15:08:02.309+0530 info internal/resourcedetection.go:126 began detecting resource information {"kind": "processor", "name": "resourcedetection", "pipeline": "metrics"}
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419831]: 2025-01-04T15:08:02.310+0530 info internal/resourcedetection.go:140 detected resource information {"kind": "processor", "name": "resourcedetection", "pipeline": "metrics", "resource": {"host.name":"abhiyanta-HP-285-Pro-G6-Microtower-PC","os.type":"linux"}}
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419831]: 2025-01-04T15:08:02.310+0530 warn internal@v0.116.0/warning.go:40 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks. {"kind": "receiver", "name": "otlp", "data_type": "traces", "documentation": "<https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks>"}
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419831]: 2025-01-04T15:08:02.310+0530 info otlpreceiver@v0.116.0/otlp.go:112 Starting GRPC server {"kind": "receiver", "name": "otlp", "data_type": "traces", "endpoint": "0.0.0.0:4317"}
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419831]: 2025-01-04T15:08:02.310+0530 warn internal@v0.116.0/warning.go:40 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks. {"kind": "receiver", "name": "otlp", "data_type": "traces", "documentation": "<https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks>"}
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419831]: 2025-01-04T15:08:02.310+0530 info otlpreceiver@v0.116.0/otlp.go:169 Starting HTTP server {"kind": "receiver", "name": "otlp", "data_type": "traces", "endpoint": "0.0.0.0:4318"}
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419831]: 2025-01-04T15:08:02.316+0530 warn grpc@v1.68.1/clientconn.go:1384 [core] [Channel #6 SubChannel #7]grpc: addrConn.createTransport failed to connect to {Addr: "0.0.0.0:9000", ServerName: "0.0.0.0:9000", }. Err: connection error: desc = "error reading server preface: http2: frame too large" {"grpc_log": true}
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419831]: 2025-01-04T15:08:02.317+0530 warn grpc@v1.68.1/clientconn.go:1384 [core] [Channel #8 SubChannel #10]grpc: addrConn.createTransport failed to connect to {Addr: "0.0.0.0:9000", ServerName: "0.0.0.0:9000", }. Err: connection error: desc = "error reading server preface: http2: frame too large" {"grpc_log": true}
Jan 04 15:08:02 abhiyanta-HP-285-Pro-G6-Microtower-PC otelcol-contrib[419831]: 2025-01-04T15:08:02.323+0530 info service@v0.116.0/service.go:253 Everything is ready. Begin running and processing data.
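The "error reading server preface: http2: frame too large" lines above are a telltale sign of a gRPC client connecting to a port that does not speak gRPC: on a default SigNoz install, port 9000 is ClickHouse's native TCP protocol, while OTLP/gRPC is served by the signoz-otel-collector on 4317. A minimal sketch of a corrected `exporters` section (the hostname is a placeholder; adjust to your setup):

```yaml
exporters:
  otlp:
    # Point at the SigNoz collector's OTLP/gRPC port, not ClickHouse's
    # native port 9000 (a gRPC handshake cannot complete there).
    endpoint: "localhost:4317"  # or "<signoz-host>:4317" from another machine
    tls:
      insecure: true
```

The crash that follows is separate: the contrib `clickhouse` exporter writes its own table schema, which does not match the tables SigNoz creates, so pointing it at `signoz_traces` is unlikely to work even once the connection succeeds.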
Any guidance on the correct approach would be really helpful! 🙏

Gr
01/05/2025, 4:50 PM

Shubhendra Kushwaha
01/05/2025, 5:52 PM
If the application runs on the same machine as SigNoz, it can send data directly to the signoz-otel-collector. However, if the application is hosted on a different machine or VM, you can set up a local OTel Collector on that machine. The application will then send data to its local collector, which can forward the traces in batches to the signoz-otel-collector on the SigNoz machine.

Dhairya Patel
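The forwarding setup described above can be sketched as a minimal local-collector config that only receives OTLP from local apps and relays it onward (the hostname `signoz-machine` is a placeholder for the SigNoz host):

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317   # apps on this machine send here
processors:
  batch: {}                      # batch spans before forwarding over the network
exporters:
  otlp:
    endpoint: "signoz-machine:4317"  # signoz-otel-collector's OTLP/gRPC port
    tls:
      insecure: true             # enable TLS for production traffic
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
```

If both collectors end up on the same host, move the local collector's receiver ports (e.g. to 14317/14318) so they don't conflict with the SigNoz collector's 4317/4318.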
01/06/2025, 4:56 AM

Dhairya Patel
01/06/2025, 5:02 AM
{"level":"info","timestamp":"2025-01-06T05:01:29.973Z","caller":"opamp/server_client.go:171","msg":"Waiting for initial remote config","component":"opamp-server-client"}
{"level":"info","timestamp":"2025-01-06T05:01:30.974Z","caller":"opamp/server_client.go:171","msg":"Waiting for initial remote config","component":"opamp-server-client"}
{"level":"info","timestamp":"2025-01-06T05:01:31.974Z","caller":"opamp/server_client.go:171","msg":"Waiting for initial remote config","component":"opamp-server-client"}
{"level":"info","timestamp":"2025-01-06T05:01:32.975Z","caller":"opamp/server_client.go:171","msg":"Waiting for initial remote config","component":"opamp-server-client"}
{"level":"info","timestamp":"2025-01-06T05:01:33.975Z","caller":"opamp/server_client.go:171","msg":"Waiting for initial remote config","component":"opamp-server-client"}
Shubhendra Kushwaha
01/06/2025, 5:02 AM

Dhairya Patel
01/06/2025, 5:02 AM

Dhairya Patel
01/06/2025, 5:03 AM

Shubhendra Kushwaha
01/06/2025, 5:07 AM

Dhairya Patel
01/06/2025, 5:10 AM

receivers:
  # tcplog/docker:
  #   listen_address: "0.0.0.0:2255"
  #   operators:
  #     - type: regex_parser
  #       regex: '^<([0-9]+)>(?P<timestamp>[A-Z][a-z]{2}\s+\d+\s+\d{2}:\d{2}:\d{2})\s+(?P<container_name>[^\s]+)\s+(?P<container_id>[^\s]+)\s+(?P<body>.*)$'
  #       timestamp:
  #         parse_from: attributes.timestamp
  #         layout_type: strptime
  #         layout: '%b %d %H:%M:%S'
  #     - type: move
  #       from: attributes.body
  #       to: body
  #     - type: remove
  #       field: attributes.timestamp
  #     # please remove names from below if you want to collect logs from them
  #     - type: filter
  #       id: signoz_logs_filter
  #       expr: 'attributes.container_name matches "^signoz-(logspout|frontend|alertmanager|query-service|otel-collector|clickhouse|zookeeper)"'
  # opencensus:
  #   endpoint: 0.0.0.0:55678
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
  # jaeger:
  #   protocols:
  #     grpc:
  #       endpoint: 0.0.0.0:14250
  #     thrift_http:
  #       endpoint: 0.0.0.0:14268
  #     thrift_compact:
  #       endpoint: 0.0.0.0:6831
  #     thrift_binary:
  #       endpoint: 0.0.0.0:6832
  hostmetrics:
    collection_interval: 30s
    root_path: /hostfs
    scrapers:
      cpu: {}
      load: {}
      memory: {}
      disk: {}
      filesystem: {}
      network: {}
  prometheus:
    config:
      global:
        scrape_interval: 60s
      scrape_configs:
        # otel-collector internal metrics
        - job_name: otel-collector
          static_configs:
            - targets:
                - localhost:8888
              labels:
                job_name: otel-collector
processors:
  batch:
    send_batch_size: 10000
    send_batch_max_size: 11000
    timeout: 10s
  resourcedetection:
    # Using OTEL_RESOURCE_ATTRIBUTES envvar, env detector adds custom labels.
    detectors: [env, system] # include ec2 for AWS, gcp for GCP and azure for Azure.
    timeout: 2s
  # signozspanmetrics/delta:
  #   metrics_exporter: clickhousemetricswrite
  #   metrics_flush_interval: 60s
  #   latency_histogram_buckets: [100us, 1ms, 2ms, 6ms, 10ms, 50ms, 100ms, 250ms, 500ms, 1000ms, 1400ms, 2000ms, 5s, 10s, 20s, 40s, 60s]
  #   dimensions_cache_size: 100000
  #   aggregation_temporality: AGGREGATION_TEMPORALITY_DELTA
  #   enable_exp_histogram: true
  #   dimensions:
  #     - name: service.namespace
  #       default: default
  #     - name: deployment.environment
  #       default: default
  #     # This is added to ensure the uniqueness of the timeseries
  #     # Otherwise, identical timeseries produced by multiple replicas of
  #     # collectors result in incorrect APM metrics
  #     - name: signoz.collector.id
  #     - name: service.version
  #     - name: browser.platform
  #     - name: browser.mobile
  #     - name: k8s.cluster.name
  #     - name: k8s.node.name
  #     - name: k8s.namespace.name
  #     - name: host.name
  #     - name: host.type
  #     - name: container.name
# extensions:
#   health_check:
#     endpoint: 0.0.0.0:13133
#   zpages:
#     endpoint: 0.0.0.0:55679
#   pprof:
#     endpoint: 0.0.0.0:1777
# exporters:
#   clickhousetraces:
#     datasource: tcp://default:default123@clickhouse:9000/signoz_traces
#     low_cardinal_exception_grouping: ${env:LOW_CARDINAL_EXCEPTION_GROUPING}
#     use_new_schema: true
#   clickhousemetricswrite:
#     endpoint: tcp://default:default123@clickhouse:9000/signoz_metrics
#     resource_to_telemetry_conversion:
#       enabled: true
#   clickhousemetricswrite/prometheus:
#     endpoint: tcp://default:default123@clickhouse:9000/signoz_metrics
#   clickhousemetricswritev2:
#     dsn: tcp://default:default123@clickhouse:9000/signoz_metrics
#   clickhouselogsexporter:
#     dsn: tcp://default:default123@clickhouse:9000/signoz_logs
#     timeout: 10s
#     use_new_schema: true
#   logging: {}
exporters:
  otlp:
    # endpoint: "signoz-otel-collector:4317" # Using Docker service name
    endpoint: "localhost:4317"
    tls:
      insecure: true
service:
  telemetry:
    logs:
      encoding: json
    metrics:
      address: 0.0.0.0:8888
  # extensions:
  #   - health_check
  #   - zpages
  #   - pprof
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
    metrics/hostmetrics:
      receivers: [hostmetrics]
      processors: [resourcedetection, batch]
      exporters: [otlp]
  # pipelines:
  #   traces:
  #     receivers: [jaeger, otlp]
  #     processors: [signozspanmetrics/delta, batch]
  #     exporters: [clickhousetraces]
  #   metrics:
  #     receivers: [otlp]
  #     processors: [batch]
  #     exporters: [clickhousemetricswrite, clickhousemetricswritev2]
  #   metrics/hostmetrics:
  #     receivers: [hostmetrics]
  #     processors: [resourcedetection, batch]
  #     exporters: [clickhousemetricswrite, clickhousemetricswritev2]
  #   metrics/prometheus:
  #     receivers: [prometheus]
  #     processors: [batch]
  #     exporters: [clickhousemetricswrite/prometheus, clickhousemetricswritev2]
  #   logs:
  #     receivers: [otlp, tcplog/docker]
  #     processors: [batch]
  #     exporters: [clickhouselogsexporter]
Shubhendra Kushwaha
01/06/2025, 5:13 AM

Dhairya Patel
01/06/2025, 5:14 AM

Dhairya Patel
01/06/2025, 5:14 AM

Shubhendra Kushwaha
01/06/2025, 5:22 AM

Dhairya Patel
01/06/2025, 5:34 AM

Dhairya Patel
01/06/2025, 6:29 AM