# support

Narasimhamurthi Kota

02/13/2023, 11:01 AM
Hi SigNoz, I am trying to install SigNoz in DigitalOcean k8s. A few pods are stuck in Init and Pending status. Could someone help us?

Ankit Nayan

02/13/2023, 11:16 AM
@Prashant Shahi please look into this when you get time.

Narasimhamurthi Kota

02/13/2023, 12:30 PM
@Prashant Shahi I see that the problem is while creating the PVC in DO.

Ankit Nayan

02/13/2023, 2:54 PM
@Narasimhamurthi Kota Is it possible to share DO creds with @Prashant Shahi for debugging? A new DO account says it will take 2 days to verify and confirm.

Prashant Shahi

02/13/2023, 10:54 PM
@Narasimhamurthi Kota I tested with DO K8s v1.25 and it works as expected. We just need to make sure there are enough resources in the cluster. Or you could reduce the default `resources.requests` for all components.

Narasimhamurthi Kota

02/14/2023, 1:28 PM
Sure. I will run it again and update you.
@Prashant Shahi have you updated any values in the chart while deploying in DO?

Prashant Shahi

02/14/2023, 4:24 PM
```yaml
global:
  storageClass: do-block-storage-retain

frontend:
  resources:
    requests:
      cpu: 50m
      memory: 50Mi

queryService:
  resources:
    requests:
      cpu: 50m
      memory: 50Mi

otelCollector:
  resources:
    requests:
      cpu: 100m
      memory: 100Mi

otelCollectorMetrics:
  resources:
    requests:
      cpu: 50m
      memory: 50Mi

alertmanager:
  resources:
    requests:
      cpu: 50m
      memory: 50Mi

clickhouse:
  persistence:
    size: 20Gi
  
  resources:
    requests:
      cpu: 50m
      memory: 50Mi
  
  zookeeper:
    resources:
      requests:
        cpu: 50m
        memory: 50Mi

# optionally disable k8s-infra
k8s-infra:
  enabled: false
```
@Narasimhamurthi Kota I used the values above. You could do the same. But if you have enough resources in your cluster, you should not face any issue with the default installation. The `do-block-storage-retain` storage class is not mandatory; it just makes sure the volumes are retained even after the PVs and PVCs are removed.
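For reference, a minimal sketch of applying overrides like these, assuming they are saved in a local `override-values.yaml` (hypothetical file name) and using the release name and namespace that appear later in this thread (`realoq-dev` in `monitoring`):

```sh
# Add the SigNoz Helm repo and install/upgrade the release with the reduced requests
helm repo add signoz https://charts.signoz.io
helm repo update
helm upgrade --install realoq-dev signoz/signoz \
  --namespace monitoring --create-namespace \
  -f override-values.yaml
```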

Narasimhamurthi Kota

02/15/2023, 6:03 AM
@Prashant Shahi it is not working with the default values due to an issue while creating the PVC.
IMG_3849.png, IMG_3848.jpg

Prashant Shahi

02/15/2023, 6:06 AM
@Narasimhamurthi Kota Can you share output of the following?
```sh
kubectl -n monitoring describe pod chi-realoq-dev-clickhouse-cluster-0-0-0
```
I see that zookeeper is running properly now.
It's likely the same resource issue.

Narasimhamurthi Kota

02/15/2023, 6:15 AM
ClickHouse log:
```
[root@jenkins ~]# kubectl describe pod chi-realoq-dev-clickhouse-cluster-0-0-0 -n monitoring
Name:             chi-realoq-dev-clickhouse-cluster-0-0-0
Namespace:        monitoring
Priority:         0
Service Account:  realoq-dev-clickhouse
Node:             <none>
Labels:           app.kubernetes.io/component=clickhouse
                  app.kubernetes.io/instance=realoq-dev
                  app.kubernetes.io/managed-by=Helm
                  app.kubernetes.io/name=clickhouse
                  app.kubernetes.io/version=22.8.8
                  clickhouse.altinity.com/app=chop
                  clickhouse.altinity.com/chi=realoq-dev-clickhouse
                  clickhouse.altinity.com/cluster=cluster
                  clickhouse.altinity.com/namespace=monitoring
                  clickhouse.altinity.com/ready=yes
                  clickhouse.altinity.com/replica=0
                  clickhouse.altinity.com/settings-version=a0b5649f5ea9121accf6ecc528db9f761f7f1768
                  clickhouse.altinity.com/shard=0
                  clickhouse.altinity.com/zookeeper-version=b77cf4e8cc66db2fcc814cafedf597b67675d420
                  controller-revision-hash=chi-realoq-dev-clickhouse-cluster-0-0-5659fcbf67
                  helm.sh/chart=clickhouse-23.8.6
                  statefulset.kubernetes.io/pod-name=chi-realoq-dev-clickhouse-cluster-0-0-0
Annotations:      meta.helm.sh/release-name: realoq-dev
                  meta.helm.sh/release-namespace: monitoring
                  signoz.io/path: /metrics
                  signoz.io/port: 9363
                  signoz.io/scrape: true
Status:           Pending
IP:
IPs:              <none>
Controlled By:    StatefulSet/chi-realoq-dev-clickhouse-cluster-0-0
Init Containers:
  realoq-dev-clickhouse-init:
    Image:      docker.io/busybox:1.35
    Port:       <none>
    Host Port:  <none>
    Command:
      sh
      -c
      set -x
      wget -O /tmp/histogramQuantile https://github.com/SigNoz/signoz/raw/develop/deploy/docker/clickhouse-setup/user_scripts/histogramQuantile
      mv /tmp/histogramQuantile  /var/lib/clickhouse/user_scripts/histogramQuantile
      chmod +x /var/lib/clickhouse/user_scripts/histogramQuantile
    Environment:  <none>
    Mounts:
      /var/lib/clickhouse/user_scripts from shared-binary-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vl25m (ro)
Containers:
  clickhouse:
    Image:       docker.io/clickhouse/clickhouse-server:22.8.8-alpine
    Ports:       8123/TCP, 9000/TCP, 9009/TCP, 9000/TCP
    Host Ports:  0/TCP, 0/TCP, 0/TCP, 0/TCP
    Command:
      /bin/bash
      -c
      /usr/bin/clickhouse-server --config-file=/etc/clickhouse-server/config.xml
    Requests:
      cpu:        100m
      memory:     200Mi
    Liveness:     http-get http://:http/ping delay=60s timeout=1s period=3s #success=1 #failure=10
    Readiness:    http-get http://:http/ping delay=10s timeout=1s period=3s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /etc/clickhouse-server/conf.d/ from chi-realoq-dev-clickhouse-deploy-confd-cluster-0-0 (rw)
      /etc/clickhouse-server/config.d/ from chi-realoq-dev-clickhouse-common-configd (rw)
      /etc/clickhouse-server/functions from custom-functions-volume (rw)
      /etc/clickhouse-server/users.d/ from chi-realoq-dev-clickhouse-common-usersd (rw)
      /var/lib/clickhouse from data-volumeclaim-template (rw)
      /var/lib/clickhouse/user_scripts from shared-binary-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vl25m (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  data-volumeclaim-template:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data-volumeclaim-template-chi-realoq-dev-clickhouse-cluster-0-0-0
    ReadOnly:   false
  shared-binary-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  custom-functions-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      realoq-dev-clickhouse-custom-functions
    Optional:  false
  chi-realoq-dev-clickhouse-common-configd:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      chi-realoq-dev-clickhouse-common-configd
    Optional:  false
  chi-realoq-dev-clickhouse-common-usersd:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      chi-realoq-dev-clickhouse-common-usersd
    Optional:  false
  chi-realoq-dev-clickhouse-deploy-confd-cluster-0-0:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      chi-realoq-dev-clickhouse-deploy-confd-cluster-0-0
    Optional:  false
  kube-api-access-vl25m:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason             Age                   From                Message
  ----     ------             ----                  ----                -------
  Warning  FailedScheduling   13m                   default-scheduler   0/2 nodes are available: 2 pod has unbound immediate PersistentVolumeClaims. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
  Warning  FailedScheduling   13m                   default-scheduler   0/2 nodes are available: 2 pod has unbound immediate PersistentVolumeClaims. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
  Warning  FailedScheduling   8m47s                 default-scheduler   0/2 nodes are available: 2 pod has unbound immediate PersistentVolumeClaims. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
  Normal   NotTriggerScaleUp  3m23s (x11 over 13m)  cluster-autoscaler  pod didn't trigger scale-up:
[root@jenkins ~]#
```
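The `FailedScheduling` events above point at unbound PVCs rather than CPU or memory pressure. A quick way to confirm, sketched here for the `monitoring` namespace used in this thread, is to check the claims and the available storage classes:

```sh
# See whether the ClickHouse data claim is Pending or Bound
kubectl -n monitoring get pvc

# Inspect why a claim is not binding (missing/invalid storage class, quota, etc.)
kubectl -n monitoring describe pvc data-volumeclaim-template-chi-realoq-dev-clickhouse-cluster-0-0-0

# Confirm which storage classes exist and which one is the default
kubectl get storageclass
```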

Prashant Shahi

02/15/2023, 6:16 AM
It works fine for me in DO.

Narasimhamurthi Kota

02/15/2023, 6:17 AM
I am using a 16 GB RAM and 8 vCPU node.

Prashant Shahi

02/15/2023, 6:18 AM
I am using 2 nodes with 2vCPU 4 GB RAM.
@Narasimhamurthi Kota Perhaps we could get on a call to get this sorted out.

Narasimhamurthi Kota

02/15/2023, 6:18 AM
Shall we connect now?

Prashant Shahi

02/15/2023, 6:18 AM
Can you share your email over DM?

Narasimhamurthi Kota

02/15/2023, 6:19 AM

Prashant Shahi

02/15/2023, 6:19 AM
It will not be possible to connect now. Does 3pm IST work for you?

Narasimhamurthi Kota

02/15/2023, 6:19 AM
Sure
Hi @Prashant Shahi
Shall we connect at 03:30 PM? My present meeting may take a few more minutes.

Prashant Shahi

02/15/2023, 9:24 AM
okay

Narasimhamurthi Kota

02/15/2023, 9:26 AM
Thanks @Prashant Shahi
@Prashant Shahi
Shall we connect now?
@Prashant Shahi I am in the call
@Prashant Shahi I am in the call, awaiting you to join

Prashant Shahi

02/15/2023, 10:07 AM
joining
I was in another call

Narasimhamurthi Kota

02/15/2023, 12:17 PM
Hi @Prashant Shahi, I have raised a case with DO and am awaiting a response. What is the service name that we need to expose to access SigNoz?

Prashant Shahi

02/15/2023, 1:46 PM
> what is the service name that we need to expose to access signoz
SigNoz UI? That would be the `frontend` service.
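For reference, a minimal sketch of reaching the UI without exposing it externally, using `kubectl port-forward` against the frontend service (the exact service name depends on the release and chart version; the label filter and port 3301 are assumptions based on the defaults):

```sh
# Find the frontend service created by the SigNoz release
kubectl -n monitoring get svc -l app.kubernetes.io/component=frontend

# Forward it to localhost:3301 (SigNoz UI default port); substitute the name printed above
kubectl -n monitoring port-forward svc/realoq-dev-frontend 3301:3301
```
Then open http://localhost:3301 in the browser.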

Narasimhamurthi Kota

02/15/2023, 6:56 PM
Perfect. I can hit the SigNoz frontend in the webpage.
@Prashant Shahi do we have a default admin user/password to log in, or do we need to create admin creds?
I can log in now, but why am I not able to see all the pod logs? I can see only the SigNoz pods' logs, which are in the signoz namespace only. How do I get all the pod logs from the other namespaces in k8s?

Prashant Shahi

02/15/2023, 7:31 PM
By default, logs of all containers in the same K8s cluster are collected.
For collecting logs from any other external K8s cluster, you will have to install the `k8s-infra` chart in that cluster. Instructions: https://signoz.io/docs/tutorial/kubernetes-infra-metrics/
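For the external-cluster case, a minimal sketch following the linked instructions (the value key and the 4317 OTLP gRPC port are taken from those docs; verify against your chart version, and replace the placeholder with an address reachable from that cluster):

```sh
helm repo add signoz https://charts.signoz.io
# Install the agent chart in the other cluster, pointing it at the SigNoz OTel collector
helm install k8s-infra signoz/k8s-infra \
  --set otelCollectorEndpoint=<SigNoz-otel-collector-address>:4317
```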

Narasimhamurthi Kota

02/16/2023, 4:23 AM
@Prashant Shahi I would like to have a discussion on API access for SigNoz and more. Shall we have a 30-minute discussion before 3 PM IST, if possible for you?
@Prashant Shahi May I create multiple users?
@Prashant Shahi you there?

Prashant Shahi

02/16/2023, 9:42 AM
> May I create multiple users?
You can create new users from Settings.

Narasimhamurthi Kota

02/16/2023, 10:15 AM
@Prashant Shahi I would like to discuss a few features in SigNoz. Shall we have a 30-minute discussion?

Prashant Shahi

02/16/2023, 10:49 AM
Unfortunately, I am caught up with work and not available for any calls.
Feel free to post your queries in this thread or in the #support channel. I or someone from the team should be able to respond.

Narasimhamurthi Kota

02/16/2023, 11:34 AM
Can we get custom metrics from SigNoz and use them for horizontal pod autoscaling? @Prashant Shahi

Prashant Shahi

02/16/2023, 12:09 PM
That is possible using KEDA
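For illustration, a minimal sketch of a KEDA `ScaledObject` using its Prometheus scaler; the deployment name, server address, query, and threshold below are all hypothetical and would need to point at a PromQL-compatible endpoint serving your metrics:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-app-scaler              # hypothetical
  namespace: default
spec:
  scaleTargetRef:
    name: my-app                   # hypothetical Deployment to scale
  minReplicaCount: 1
  maxReplicaCount: 5
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://prometheus.example.svc:9090          # hypothetical PromQL endpoint
        query: 'sum(rate(http_requests_total{app="my-app"}[2m]))'  # hypothetical metric
        threshold: "100"
```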

Narasimhamurthi Kota

02/16/2023, 12:19 PM
KEDA?
Got it. So we don't need to install Prometheus if we use KEDA, is it?
@Prashant Shahi pod logs in SigNoz are showing multiple events. Why is it not showing the logs in a single view?

Prashant Shahi

02/16/2023, 6:56 PM
cc @nitya-signoz

Narasimhamurthi Kota

02/17/2023, 4:27 AM
Hi @nitya-signoz, pod logs in SigNoz are showing multiple events. Why is it not showing the logs in a single view?

nitya-signoz

02/17/2023, 4:40 AM
Do you mean you are getting duplicate logs?

Narasimhamurthi Kota

02/17/2023, 7:32 AM
@nitya-signoz in the below screenshot, I see 2 rows for the same log file.

nitya-signoz

02/17/2023, 9:40 AM
Hey, from the screenshot it seems that the logs are not parsed properly in your case. If you have multiline logs, please configure your filelog receiver accordingly: https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/filelogreceiver#multiline-configuration
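For illustration, a minimal sketch of the multiline setting from the linked README, assuming log lines start with an ISO-like timestamp (the pattern and include path are placeholders); in a SigNoz install this would go into the otel collector/agent config that tails the pod logs:

```yaml
receivers:
  filelog:
    include:
      - /var/log/pods/*/*/*.log
    start_at: beginning
    multiline:
      # Treat a line starting with "YYYY-MM-DD" as the start of a new log entry;
      # everything until the next such line is merged into one record.
      line_start_pattern: '^\d{4}-\d{2}-\d{2}'
```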

Narasimhamurthi Kota

02/17/2023, 10:24 AM
@nitya-signoz May I know in which component I need to configure the filelog receiver?
@nitya-signoz Can you give the multiline key details in the SigNoz Helm chart?

nitya-signoz

02/18/2023, 4:46 PM
Let me know if you were able to solve the issue.

Narasimhamurthi Kota

02/20/2023, 6:16 AM
Hi @nitya-signoz It is not working.
@nitya-signoz it is not solved

nitya-signoz

02/20/2023, 8:47 AM
By not working, do you mean you are getting an error, or that nothing changed in the logs? You can also create a local repo with sample logs and try to parse them using the operator; it will be easier to debug that way.

Narasimhamurthi Kota

02/20/2023, 11:21 AM

nitya-signoz

02/21/2023, 5:25 AM
Yeah, but it will be easier to test with a small set of sample logs in a local deployment and then add it to the k8s deployment.
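For illustration, a minimal sketch of such a local test: a collector config that tails a sample file with the multiline pattern and prints the parsed records (file path and pattern are placeholders; the `logging` exporter may be named `debug` in newer collector versions):

```yaml
receivers:
  filelog:
    include:
      - /tmp/sample-logs/app.log      # hypothetical sample file
    start_at: beginning
    multiline:
      line_start_pattern: '^\d{4}-\d{2}-\d{2}'

exporters:
  logging:
    verbosity: detailed

service:
  pipelines:
    logs:
      receivers: [filelog]
      exporters: [logging]
```
Run it with the otelcol-contrib binary or Docker image, pointing `--config` at this file, and check that each multiline entry shows up as a single record.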