Slackbot
01/13/2023, 1:07 AM

Travis Chambers
01/13/2023, 1:09 AM
when i hit the <hostname>:9100/metrics endpoint from the machine SigNoz is running on, i see all the metrics output:
# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 1.464e-05
go_gc_duration_seconds{quantile="0.25"} 2.888e-05
go_gc_duration_seconds{quantile="0.5"} 3.2991e-05
go_gc_duration_seconds{quantile="0.75"} 3.8511e-05
go_gc_duration_seconds{quantile="1"} 7.2621e-05
go_gc_duration_seconds_sum 0.129597327
go_gc_duration_seconds_count 3379
...
...
so i know the Node Exporter is working and is accessible to SigNoz.
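For reference, that check can be reproduced from the SigNoz host with curl (hostname as used later in this thread; 9100 is node_exporter's default port):
$ curl -s http://pacific:9100/metrics | head
If this returns the # HELP / # TYPE lines shown above, the exporter itself is fine and any remaining problem sits between the collector and the target.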
Travis Chambers
01/13/2023, 1:10 AM

Srikanth Chekuri
01/13/2023, 1:12 AM

Travis Chambers
01/13/2023, 1:15 AM

Srikanth Chekuri
01/13/2023, 1:15 AM

Travis Chambers
01/13/2023, 1:18 AM
receivers:
  otlp:
    protocols:
      grpc:
      http:
  prometheus:
    config:
      scrape_configs:
        # otel-collector-metrics internal metrics
        - job_name: otel-collector-metrics
          scrape_interval: 60s
          static_configs:
            - targets: ["localhost:8888", "pacific:9100"]
              labels:
                job_name: otel-collector-metrics
        # SigNoz span metrics
        - job_name: signozspanmetrics-collector
          scrape_interval: 60s
          static_configs:
            - targets:
                - otel-collector:8889
pacific:9100 is the machine i want to hit. it's using tailscale's MagicDNS.
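One MagicDNS caveat worth keeping in mind: the otel-collector-metrics container resolves names through Docker's DNS, not necessarily through the host's Tailscale resolver, so a short MagicDNS name may fail inside the container even when it works on the host. A rough check, assuming the image ships busybox's nslookup:
$ docker exec -it clickhouse-setup_otel-collector-metrics_1 nslookup pacific
If the short name doesn't resolve there, the fully qualified name or the Tailscale IP (both appear later in this thread) can be used as the scrape target instead.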
OH. well now i know i should have checked the clickhouse-setup_otel-collector-metrics_1 logs.
$ docker logs clickhouse-setup_otel-collector-metrics_1
...
...
2023-01-13T01:03:18.173Z info prometheusreceiver@v0.66.0/metrics_receiver.go:288 Starting scrape manager {"kind": "receiver", "name": "prometheus", "pipeline": "metrics"}
2023-01-13T01:04:19.900Z warn internal/transaction.go:120 Failed to scrape Prometheus endpoint {"kind": "receiver", "name": "prometheus", "pipeline": "metrics", "scrape_timestamp": 1673571859900, "target_labels": "{__name__=\"up\", instance=\"pacific:9100\", job=\"otel-collector-metrics\", job_name=\"otel-collector-metrics\"}"}
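To watch just these failures while iterating on the config, the same logs can be filtered (docker logs writes to stderr, hence the redirect):
$ docker logs -f clickhouse-setup_otel-collector-metrics_1 2>&1 | grep -i "failed to scrape"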
Srikanth Chekuri
01/13/2023, 1:19 AM

Travis Chambers
01/13/2023, 1:20 AM

Srikanth Chekuri
01/13/2023, 1:23 AM

Srikanth Chekuri
01/13/2023, 1:28 AM

Travis Chambers
01/13/2023, 1:30 AM

Travis Chambers
01/13/2023, 9:07 PM
i'm still seeing failed to scrape prometheus endpoint in the logs.
2023-01-13T21:06:31.894Z warn internal/transaction.go:120 Failed to scrape Prometheus endpoint {"kind": "receiver", "name": "prometheus", "pipeline": "metrics", "scrape_timestamp": 1673643981893, "target_labels": "{__name__=\"up\", hostname=\"pacific\", instance=\"pacific.numbersstation.ai:9090\", job=\"otel-collector-metrics\", job_name=\"node_exporter\"}"}
however, i've verified the clickhouse-setup_otel-collector-metrics_1 container has access to the endpoint.
$ docker exec -it clickhouse-setup_otel-collector-metrics_1 sh
/ $ ping pacific.numbersstation.ai:9090
PING pacific.numbersstation.ai:9090 (100.87.98.118): 56 data bytes
64 bytes from 100.87.98.118: seq=0 ttl=42 time=0.068 ms
64 bytes from 100.87.98.118: seq=1 ttl=42 time=0.075 ms
64 bytes from 100.87.98.118: seq=2 ttl=42 time=0.065 ms
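Worth noting: ping only exercises name resolution and ICMP, so it doesn't prove the scrape port itself is reachable. A TCP-level check from the same shell, assuming busybox's wget is available and using the same host:port as the scrape target:
/ $ wget -q -O - http://pacific.numbersstation.ai:9090/metrics | head
If this times out or is refused while ping succeeds, a host firewall on the target is a likely culprit.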
Srikanth Chekuri
01/14/2023, 2:29 AM
logging:
  loglevel: debug
Travis Chambers
01/17/2023, 4:57 PM
where does the logging key go? i keep getting 'service.telemetry' has invalid keys: logging
2023/01/17 16:56:13 application run finished with error: failed to get config: cannot unmarshal the configuration: 1 error(s) decoding:
* 'service.telemetry' has invalid keys: logging
i think i've tried nesting it under all of them -- service, telemetry, or metrics, with no luck.
service:
  telemetry:
    logging:
      loglevel: debug
    metrics:
      address: 0.0.0.0:8888
Srikanth Chekuri
01/18/2023, 4:45 AM
service:
  telemetry:
    logs:
      level: debug
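Putting that together with the metrics address from the earlier attempt, a minimal sketch of the service section (pipelines omitted) would be:
service:
  telemetry:
    logs:
      level: debug
    metrics:
      address: 0.0.0.0:8888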
Travis Chambers
03/09/2023, 8:04 PM
$ sudo ufw allow 9100
🤦‍♂️
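A quick way to confirm the rule is in place and the port is now reachable from the SigNoz host (nc is just one option for a TCP check):
$ sudo ufw status | grep 9100
$ nc -zv pacific 9100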