# support
s
Hi, seeing something interesting with the SigNoz Helm chart applied to our cluster:
• our SigNoz deployment stopped receiving all OpenTelemetry traces
• we see some interesting missing-resource notifications here:
```
$ kubectl -n signoz get pod
E0120 11:51:42.372328  337080 memcache.go:255] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0120 11:51:42.486643  337080 memcache.go:106] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0120 11:51:42.592106  337080 memcache.go:106] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0120 11:51:42.694443  337080 memcache.go:106] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
NAME                                                READY   STATUS                   RESTARTS       AGE
chi-signoz-clickhouse-cluster-0-0-0                 1/1     Running                  3 (2d9h ago)   3d
signoz-alertmanager-0                               1/1     Running                  2 (2d2h ago)   3d
signoz-clickhouse-operator-65c47bc974-v2nbh         2/2     Running                  0              3d
signoz-frontend-5457df5cbf-vnxvr                    1/1     Running                  1 (2d2h ago)   3d
signoz-k8s-infra-otel-agent-swq2t                   1/1     Running                  0              3d1h
signoz-k8s-infra-otel-agent-w86pc                   1/1     Running                  1 (2d2h ago)   3d
signoz-k8s-infra-otel-deployment-56c49b84bb-m25nq   1/1     Running                  1 (2d2h ago)   3d
signoz-otel-collector-78c665499-g9dvh               1/1     Running                  4 (2d2h ago)   3d
signoz-otel-collector-metrics-5894fb8cc6-5gkwh      0/1     Error                    0              2d9h
signoz-otel-collector-metrics-5894fb8cc6-fgc5r      0/1     ContainerStatusUnknown   1              3d
signoz-otel-collector-metrics-5894fb8cc6-jc8m6      0/1     Completed                7 (2d9h ago)   2d10h
signoz-otel-collector-metrics-5894fb8cc6-rrpr9      1/1     Running                  0              25h
signoz-otel-collector-metrics-5894fb8cc6-sxvsg      0/1     ContainerStatusUnknown   1 (26h ago)    2d2h
signoz-query-service-0                              1/1     Running                  1 (2d2h ago)   3d
signoz-zookeeper-0                                  1/1     Running                  0              3d
```
Re-applying the SigNoz Helm chart fixed it for now... but has this sort of thing been seen before? Any ideas what was going on?
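For anyone landing here with the same symptom: the memcache errors above come from kubectl's API discovery, and they point at the metrics.k8s.io aggregated API (normally backed by metrics-server) being unavailable; they are separate from the SigNoz pod statuses listed under them. A quick way to check that side of things, assuming metrics-server runs in kube-system (the namespace and deployment name may differ on your distribution):

```sh
# Is the aggregated metrics API registered and reporting Available?
kubectl get apiservice v1beta1.metrics.k8s.io

# Inspect the backing deployment (commonly metrics-server in kube-system;
# adjust the namespace/name for your cluster)
kubectl -n kube-system get deploy metrics-server
kubectl -n kube-system logs deploy/metrics-server --tail=50
```

And for reference, "re-applying the chart" would typically be a Helm upgrade of the existing release; the release name, namespace, and values file below are placeholders, not taken from the thread:

```sh
helm repo update
helm -n signoz upgrade --install signoz signoz/signoz -f values.yaml
```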
s
AFAIK this is the first time someone has brought this to our notice. This seems more like a Kubernetes issue than a signoz/collector one.
p
@Prashant Shahi FYI
p
@Shaun Daley That's a strange one. It is most likely a temporary issue with the K8s cluster. More container logs or events would have been helpful for debugging further.
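If it does recur, the previous container's logs and the namespace events are the most useful things to capture before re-applying the chart. Something along these lines (the pod name is just one of the failing collector-metrics pods from the listing above):

```sh
# Namespace events, oldest first
kubectl -n signoz get events --sort-by=.metadata.creationTimestamp

# Container state, exit codes, and pod-level events for a failing pod
kubectl -n signoz describe pod signoz-otel-collector-metrics-5894fb8cc6-5gkwh

# Logs from the previous (crashed) container instance
kubectl -n signoz logs signoz-otel-collector-metrics-5894fb8cc6-5gkwh --previous
```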
s
Thanks for that! We haven't yet seen it recur, but we'll share here if we do find out more.