# support
d
🚨 Hi Team, since yesterday, when I updated the app to the latest version, the data has not been visible on our dashboard for any service. Need some help debugging this.
Here is the `docker ps` output:
s
The collector container is restarting. What do its logs show?
d
2023-05-11T02:40:20.274Z	info	pipelines/pipelines.go:106	Receiver started.	{"kind": "receiver", "name": "prometheus", "pipeline": "metrics"}
2023-05-11T02:40:20.274Z	info	pipelines/pipelines.go:102	Receiver is starting...	{"kind": "receiver", "name": "hostmetrics", "pipeline": "metrics"}
2023-05-11T02:40:20.274Z	info	pipelines/pipelines.go:106	Receiver started.	{"kind": "receiver", "name": "hostmetrics", "pipeline": "metrics"}
2023-05-11T02:40:20.274Z	info	pipelines/pipelines.go:102	Receiver is starting...	{"kind": "receiver", "name": "otlp", "pipeline": "metrics"}
2023-05-11T02:40:20.274Z	warn	internal/warning.go:51	Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks	{"kind": "receiver", "name": "otlp", "pipeline": "traces", "documentation": "<https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks>"}
2023-05-11T02:40:20.274Z	info	otlpreceiver@v0.66.0/otlp.go:71	Starting GRPC server	{"kind": "receiver", "name": "otlp", "pipeline": "traces", "endpoint": "0.0.0.0:4317"}
2023-05-11T02:40:20.274Z	warn	internal/warning.go:51	Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks	{"kind": "receiver", "name": "otlp", "pipeline": "traces", "documentation": "<https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks>"}
2023-05-11T02:40:20.274Z	info	otlpreceiver@v0.66.0/otlp.go:89	Starting HTTP server	{"kind": "receiver", "name": "otlp", "pipeline": "traces", "endpoint": "0.0.0.0:4318"}
2023-05-11T02:40:20.275Z	info	pipelines/pipelines.go:106	Receiver started.	{"kind": "receiver", "name": "otlp", "pipeline": "metrics"}
2023-05-11T02:40:20.275Z	info	pipelines/pipelines.go:102	Receiver is starting...	{"kind": "receiver", "name": "jaeger", "pipeline": "traces"}
2023-05-11T02:40:20.275Z	warn	internal/warning.go:51	Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks	{"kind": "receiver", "name": "jaeger", "pipeline": "traces", "documentation": "<https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks>"}
2023-05-11T02:40:20.275Z	warn	internal/warning.go:51	Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks	{"kind": "receiver", "name": "jaeger", "pipeline": "traces", "documentation": "<https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks>"}
2023-05-11T02:40:20.275Z	info	pipelines/pipelines.go:106	Receiver started.	{"kind": "receiver", "name": "jaeger", "pipeline": "traces"}
2023-05-11T02:40:20.275Z	info	pipelines/pipelines.go:102	Receiver is starting...	{"kind": "receiver", "name": "otlp", "pipeline": "traces"}
2023-05-11T02:40:20.275Z	info	pipelines/pipelines.go:106	Receiver started.	{"kind": "receiver", "name": "otlp", "pipeline": "traces"}
2023-05-11T02:40:20.275Z	info	pipelines/pipelines.go:102	Receiver is starting...	{"kind": "receiver", "name": "otlp", "pipeline": "logs"}
2023-05-11T02:40:20.275Z	info	pipelines/pipelines.go:106	Receiver started.	{"kind": "receiver", "name": "otlp", "pipeline": "logs"}
2023-05-11T02:40:20.275Z	info	pipelines/pipelines.go:102	Receiver is starting...	{"kind": "receiver", "name": "filelog/dockercontainers", "pipeline": "logs"}
2023-05-11T02:40:20.275Z	info	adapter/receiver.go:55	Starting stanza receiver	{"kind": "receiver", "name": "filelog/dockercontainers", "pipeline": "logs"}
2023-05-11T02:40:20.276Z	info	prometheusreceiver@v0.66.0/metrics_receiver.go:254	Starting discovery manager	{"kind": "receiver", "name": "prometheus", "pipeline": "metrics"}
2023-05-11T02:40:20.276Z	info	prometheusreceiver@v0.66.0/metrics_receiver.go:288	Starting scrape manager	{"kind": "receiver", "name": "prometheus", "pipeline": "metrics"}
2023-05-11T02:40:20.290Z	info	pipelines/pipelines.go:106	Receiver started.	{"kind": "receiver", "name": "filelog/dockercontainers", "pipeline": "logs"}
2023-05-11T02:40:20.290Z	info	healthcheck/handler.go:129	Health Check state change	{"kind": "extension", "name": "health_check", "status": "ready"}
2023-05-11T02:40:20.290Z	info	service/service.go:106	Everything is ready. Begin running and processing data.
2023-05-11T02:40:20.511Z	info	fileconsumer/file.go:161	Started watching file from end. To read preexisting logs, configure the argument 'start_at' to 'beginning'	{"kind": "receiver", "name": "filelog/dockercontainers", "pipeline": "logs", "component": "fileconsumer", "path": "/var/lib/docker/containers/32a433a1545cda0552822daab9aed14f2c5c9cd829a0214585a6f2502fa68c0f/32a433a1545cda0552822daab9aed14f2c5c9cd829a0214585a6f2502fa68c0f-json.log"}
This is the log from the `otelcollector` agent.
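The startup log above mentions the `start_at` setting ("To read preexisting logs, configure the argument 'start_at' to 'beginning'"). A sketch of what that would look like in the collector config, assuming a filelog receiver watching the Docker container log path shown in the log line:

```yaml
receivers:
  filelog/dockercontainers:
    include:
      - /var/lib/docker/containers/*/*-json.log
    # Default is "end" (only new lines); "beginning" also reads logs
    # that existed before the collector started watching the file.
    start_at: beginning
```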
I just ran `install.sh` again and it seems to have worked. I can see the data now, but last night's data is lost.
s
It was restarting for some reason, and the log you shared doesn't show anything unusual. It would help if you could point to the error message explaining why it is exiting.
d
When I filter for only errors in that container, this is the only error:
docker logs --tail=100 -f container_id 2>&1 | grep --color=always -E "error|ERROR|Error"
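The filter above spells out three capitalizations explicitly; a case-insensitive match is simpler and also catches other severities worth watching for. A sketch (the `container_id` is a placeholder, as in the command above):

```shell
# Case-insensitive filter for errors and fatal conditions (sketch):
# docker logs --tail=1000 container_id 2>&1 | grep -iE --color=always "error|fatal|panic"

# Demonstration of the pattern itself: -i ignores case, -c counts matching lines
printf 'info ok\nError: bad\nERROR again\n' | grep -icE "error"   # → 2
```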
info	clickhousetracesexporter/clickhouse_factory.go:128	Clickhouse Migrate finished	{"kind": "exporter", "data_type": "traces", "name": "clickhousetraces", "error": "no change"}
Here are more, after increasing the tail to 1000:
2023/05/11 02:32:09 application run finished with error: cannot build pipelines: failed to create "clickhousetraces" exporter, in pipeline "traces": error connecting to primary db: code: 999, message: Cannot use any of provided ZooKeeper nodes (Bad arguments)
2023/05/11 02:33:09 Error creating clickhouse client: code: 999, message: Cannot use any of provided ZooKeeper nodes (Bad arguments)
Error: cannot build pipelines: failed to create "clickhousetraces" exporter, in pipeline "traces": error connecting to primary db: code: 999, message: Cannot use any of provided ZooKeeper nodes (Bad arguments)
2023/05/11 02:34:10 application run finished with error: cannot build pipelines: failed to create "clickhousetraces" exporter, in pipeline "traces": error connecting to primary db: code: 999, message: Cannot use any of provided ZooKeeper nodes (Bad arguments)
Error: cannot build pipelines: failed to create "clickhousetraces" exporter, in pipeline "traces": error connecting to primary db: code: 999, message: Cannot use any of provided ZooKeeper nodes (Bad arguments)
2023/05/11 02:35:11 application run finished with error: cannot build pipelines: failed to create "clickhousetraces" exporter, in pipeline "traces": error connecting to primary db: code: 999, message: Cannot use any of provided ZooKeeper nodes (Bad arguments)
2023/05/11 02:36:12 Error creating clickhouse client: code: 999, message: Cannot use any of provided ZooKeeper nodes (Bad arguments)
2023/05/11 02:37:12 Error creating clickhouse client: code: 999, message: Cannot use any of provided ZooKeeper nodes (Bad arguments)
Error: cannot build pipelines: failed to create "clickhousetraces" exporter, in pipeline "traces": error connecting to primary db: code: 999, message: Cannot use any of provided ZooKeeper nodes (Bad arguments)
2023/05/11 02:38:13 application run finished with error: cannot build pipelines: failed to create "clickhousetraces" exporter, in pipeline "traces": error connecting to primary db: code: 999, message: Cannot use any of provided ZooKeeper nodes (Bad arguments)
2023/05/11 02:39:14 Error creating clickhouse client: code: 999, message: All connection tries failed while connecting to ZooKeeper. nodes: 172.18.0.4:2181
2023-05-11T02:40:17.598Z	info	clickhousetracesexporter/clickhouse_factory.go:128	Clickhouse Migrate finished	{"kind": "exporter", "data_type": "traces", "name": "clickhousetraces", "error": "no change"}
2023-05-11T02:53:50.094Z	info	clickhousetracesexporter/clickhouse_factory.go:128	Clickhouse Migrate finished	{"kind": "exporter", "data_type": "traces", "name": "clickhousetraces", "error": "no change"}
s
Yes, this is helpful. ClickHouse had issues reaching your ZooKeeper instance, and since the collector is unable to create a connection to ClickHouse, it exits.
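One way to confirm the ZooKeeper node from the error (`172.18.0.4:2181`) is reachable is ZooKeeper's four-letter admin command `ruok`, which a healthy node answers with `imok`. A sketch, assuming `nc` is available and the address from the log is still current:

```shell
# Hedged sketch: probe the ZooKeeper endpoint from the error log.
# The address defaults below come from the log above; adjust for your setup.
zk_check() {
  # ZooKeeper replies "imok" to the four-letter command "ruok" when healthy
  echo ruok | nc -w 2 "${1:-172.18.0.4}" "${2:-2181}"
}
# zk_check            # expect "imok" if ZooKeeper is reachable
```

Note that `ruok` must be whitelisted on newer ZooKeeper versions (the `4lw.commands.whitelist` setting); if it isn't, checking the ZooKeeper container's own logs is the fallback.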