# contributing-frontend

rakesh shah

04/15/2022, 10:05 AM
Copy code
NAME                                             READY   STATUS             RESTARTS   AGE
chi-signoz-cluster-0-0-0                         1/1     Running            0          46h
clickhouse-operator-8cff468-t4p9x                2/2     Running            0          46h
signoz-alertmanager-0                            1/1     Running            0          46h
signoz-frontend-7d79d95cc4-62nf9                 1/1     Running            0          46h
signoz-otel-collector-8487c9f7b4-mgwsp           0/1     CrashLoopBackOff   8          19m
signoz-otel-collector-metrics-7797bcc95b-9c6jv   1/1     Running            0          46h
signoz-query-service-0                           1/1     Running            0          46h
signoz-zookeeper-0                               1/1     Running            0          46h
Name:         signoz-otel-collector-8487c9f7b4-mgwsp
Namespace:    spx-test
Priority:     0
Node:         ip-172-26-61-142.ec2.internal/172.26.61.142
Start Time:   Fri, 15 Apr 2022 15:13:55 +0530
Labels:       app.kubernetes.io/component=otel-collector
              app.kubernetes.io/instance=signoz
              app.kubernetes.io/name=signoz
              pod-template-hash=8487c9f7b4
Annotations:  checksum/config: 59e5817602f2e1b47a6523f9270d9f90da6a89940abcfeeee3fc343efccb1da4
              kubernetes.io/psp: eks.privileged
Status:       Running
IP:           10.34.0.6
IPs:
  IP:           10.34.0.6
Controlled By:  ReplicaSet/signoz-otel-collector-8487c9f7b4
Init Containers:
  signoz-otel-collector-init:
    Container ID:  docker://8ba5e7a13ad55ef6b80ac9a7e0587cd72cb1c0971365a20d4694fed580fff862
    Image:         docker.io/busybox:1.35
    Image ID:      docker-pullable://busybox@sha256:59603d4d14e08e4a60643588af127f3c3dddf8417bf6b575dd93a3cbe3e48593
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -c
      until wget --spider -q signoz-clickhouse:8123/ping; do echo -e "waiting for clickhouseDB"; sleep 5; done; echo -e "clickhouse ready, starting otel collector now";
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Fri, 15 Apr 2022 15:13:56 +0530
      Finished:     Fri, 15 Apr 2022 15:13:56 +0530
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gthcs (ro)
Containers:
  signoz-otel-collector:
    Container ID:  docker://9051d7b6ae4a8b85bd781f038cbe022a08465e215fe8520fb2f9b50bb4a0b4b9
    Image:         docker.io/signoz/otelcontribcol:0.43.0
    Image ID:      docker-pullable://signoz/otelcontribcol@sha256:c2fab65133bfa4c95bd50240aa7cf703599bc33697fa84c3f8fa94a1aa7e6357
    Port:          <none>
    Host Port:     <none>
    Command:
      /otelcontribcol
      --config=/conf/otel-collector-config.yaml
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Fri, 15 Apr 2022 15:30:02 +0530
      Finished:     Fri, 15 Apr 2022 15:30:02 +0530
    Ready:          False
    Restart Count:  8
    Limits:
      cpu:     1
      memory:  2Gi
    Requests:
      cpu:        200m
      memory:     400Mi
    Environment:  <none>
    Mounts:
      /conf from otel-collector-config-vol (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gthcs (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  otel-collector-config-vol:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      signoz-otel-collector
    Optional:  false
  kube-api-access-gthcs:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                 From                                    Message
  ----     ------     ----                ----                                    -------
  Normal   Scheduled  21m                 default-scheduler                       Successfully assigned spx-test/signoz-otel-collector-8487c9f7b4-mgwsp to ip-172-26-61-142.ec2.internal
  Normal   Pulled     21m                 kubelet, ip-172-26-61-142.ec2.internal  Container image "docker.io/busybox:1.35" already present on machine
  Normal   Created    21m                 kubelet, ip-172-26-61-142.ec2.internal  Created container signoz-otel-collector-init
  Normal   Started    21m                 kubelet, ip-172-26-61-142.ec2.internal  Started container signoz-otel-collector-init
  Normal   Pulled     21m                 kubelet, ip-172-26-61-142.ec2.internal  Successfully pulled image "docker.io/signoz/otelcontribcol:0.43.0" in 141.855009ms
  Normal   Pulled     21m                 kubelet, ip-172-26-61-142.ec2.internal  Successfully pulled image "docker.io/signoz/otelcontribcol:0.43.0" in 107.908758ms
  Normal   Pulled     20m                 kubelet, ip-172-26-61-142.ec2.internal  Successfully pulled image "docker.io/signoz/otelcontribcol:0.43.0" in 95.917437ms
  Normal   Created    20m (x4 over 21m)   kubelet, ip-172-26-61-142.ec2.internal  Created container signoz-otel-collector
  Normal   Started    20m (x4 over 21m)   kubelet, ip-172-26-61-142.ec2.internal  Started container signoz-otel-collector
  Normal   Pulling    20m (x4 over 21m)   kubelet, ip-172-26-61-142.ec2.internal  Pulling image "docker.io/signoz/otelcontribcol:0.43.0"
  Normal   Pulled     20m                 kubelet, ip-172-26-61-142.ec2.internal  Successfully pulled image "docker.io/signoz/otelcontribcol:0.43.0" in 105.808476ms
  Warning  BackOff    55s (x94 over 21m)  kubelet, ip-172-26-61-142.ec2.internal  Back-off restarting failed container
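For reference, a pod listing and pod description like the above would typically come from commands along these lines (the namespace spx-test and the pod name are taken from the output above; this is a sketch, not quoted from the thread):
Copy code
kubectl -n spx-test get pods
kubectl -n spx-test describe pod signoz-otel-collector-8487c9f7b4-mgwsp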

Prashant Shahi

04/15/2022, 10:22 AM
Hi @User ! That's strange - everything except the otel-collector is up. Did you update any otel-collector configuration? Either way, could you please share the logs of the previously exited otel-collector pod, if possible?
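Assuming the crashing pod has not been deleted, the logs of the previously exited container can usually be pulled with the --previous flag (pod name and namespace taken from the describe output above; a sketch, not quoted from the thread):
Copy code
kubectl -n spx-test logs signoz-otel-collector-8487c9f7b4-mgwsp --previous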

rakesh shah

04/15/2022, 10:23 AM
Let me try
We deleted the pod that was in CrashLoopBackOff
So this output is from after we deleted the otel-collector pod
We tried to check whether any condition needs to be met before the pod starts.
Could there be a challenge due to a resource crunch or port unavailability?
We have only changed the namespace - @User it never started for us

Prashant Shahi

04/15/2022, 11:51 AM
Could there be a challenge due to a resource crunch or port unavailability?
Port unavailability - I don't think that is the problem. A resource crunch - yes, that's possible. Could you share some info about the k8s cluster?
We deleted the pod that was in CrashLoopBackOff
The previous pod's logs would have helped debug the root cause of this.
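A quick way to check for a resource crunch: kubectl top needs metrics-server in the cluster (an assumption, not confirmed in the thread), while describe node works everywhere (node name taken from the describe output above):
Copy code
kubectl top nodes
kubectl describe node ip-172-26-61-142.ec2.internal   # shows allocated vs allocatable resources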

rakesh shah

04/15/2022, 11:52 AM
ok
Is kubectl cluster-info dump what you are looking for?
It's a 4 MB file @User

Prashant Shahi

04/15/2022, 12:04 PM
We have only changed the namespace - it never started for us
So none of the pods spin up in the other namespace? That would indicate insufficient resources.
I was curious about k8s cluster details like size, version, and cloud provider, if any. But yeah, the cluster-info dump helps as well. Can you share the dump from the following command? Replace platform with the namespace where SigNoz was installed:
Copy code
kubectl cluster-info dump --namespace=platform
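Since the dump can be large, it can also be written to a directory and compressed before sharing (a sketch; --output-directory is a standard kubectl flag, and the archive name is arbitrary):
Copy code
kubectl cluster-info dump --namespace=platform --output-directory=./signoz-dump
tar czf signoz-dump.tar.gz ./signoz-dump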

rakesh shah

04/15/2022, 1:08 PM
Sure
This should also give you the node info within the cluster

Prashant Shahi

04/16/2022, 12:01 PM
@User there seem to be sufficient resources on the cluster to run SigNoz and other applications. I see the otel-collector being deployed on an arm64 node. We recommend running on a k8s cluster with amd64 nodes.
@User can you share the log output of
kubectl -n platform logs <otel-collector-pod-name>
if any?
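To confirm which nodes are arm64 and which are amd64, something like this would show the architecture label on each node (a sketch, not quoted from the thread):
Copy code
kubectl get nodes -L kubernetes.io/arch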

rakesh shah

04/18/2022, 4:26 AM
OK
Copy code
standard_init_linux.go:228: exec user process caused: exec format error
This is the error we get while executing the command
Is it feasible to have a screen-sharing call today?
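An exec format error usually means the image architecture does not match the node architecture. One way to check which architectures an image is published for (a sketch; requires the docker CLI and is not from the thread):
Copy code
docker manifest inspect docker.io/signoz/otelcontribcol:0.43.0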

Prashant Shahi

04/18/2022, 9:22 AM
Yup, that error is caused by the ARM node. We haven't tested SigNoz on an ARM-node k8s cluster, hence we recommend running on amd64 nodes.
If you cannot or do not want to create a new k8s cluster with all amd64 nodes, I suggest that you make use of nodeSelector.
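A minimal sketch of pinning the collector onto amd64 nodes with a nodeSelector; the otelCollector.nodeSelector values path and the chart/release names are assumptions about the SigNoz Helm chart, not confirmed in this thread:
Copy code
# values path below is an assumption about the SigNoz Helm chart
cat > override-values.yaml <<'EOF'
otelCollector:
  nodeSelector:
    kubernetes.io/arch: amd64
EOF
helm -n platform upgrade signoz signoz/signoz -f override-values.yaml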

rakesh shah

04/18/2022, 11:29 AM
OK - we finally installed it successfully; we used the ARM image for the otel collector - FYI
from Docker Hub
Thank you for your persistent support.

Prashant Shahi

04/18/2022, 2:42 PM
finally installed successfully, we used the ARM image for the otel collector
That's great to hear!! 🎉 @rakesh shah it would be nice if you could share how exactly you resolved it. That would help others in the community as well.

rakesh shah

04/19/2022, 9:04 AM
Hi Prashant, you mentioned that the ARM architecture is not tested, but in our environment the nodes are configured with ARM architecture by default. Hence, instead of changing the node, we searched for a SigNoz image built for the ARM architecture. We found it on Docker Hub and applied the new image.
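A rough sketch of how such an image swap could be applied; the deployment and container names are taken from the describe output above, while <arm64-image-tag> is a placeholder for whatever ARM tag was found on Docker Hub:
Copy code
# <arm64-image-tag> is a placeholder; substitute the actual arm64 tag of signoz/otelcontribcol
kubectl -n spx-test set image deployment/signoz-otel-collector \
  signoz-otel-collector=docker.io/signoz/otelcontribcol:<arm64-image-tag>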

Prashant Shahi

04/19/2022, 11:10 AM
Oh, I see! Thanks for sharing this. Are you sure SigNoz is running properly? SigNoz otel-collector version 0.43.0 is synced to OpenTelemetry Collector version 0.43.0. Though you can use the OpenTelemetry Collector of the same version and send data to SigNoz, it cannot be used as a replacement for the SigNoz otel-collector. Here is the SigNoz image: https://hub.docker.com/layers/otelcontribcol/signoz/otelcontribcol/0.43.0/images/sha[…]719aee83fe9a4240bf13a8ce84e138b5408a007f87616?context=explore
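A simple sanity check after the image swap (namespace taken from earlier in the thread; a sketch, not quoted from the thread):
Copy code
kubectl -n spx-test get pods
kubectl -n spx-test logs deploy/signoz-otel-collector --tail=50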