# support
r
Hello, I'm trying to upgrade from 0.75.0 to 0.76.0 on Kubernetes and I'm following the migration guide. At the point where I install the new Helm chart with migration enabled, my otel-collector pod doesn't seem to start properly. I'd be happy to provide more detail, but I'm not sure what to look for.
60m         Normal    Pulled                   pod/prod-signoz-schema-migrator-sync-g5xdd                 Successfully pulled image "docker.io/signoz/signoz-schema-migrator:0.111.30" in 936ms (936ms including waiting). Image size: 9813293 bytes.
60m         Normal    Created                  pod/prod-signoz-schema-migrator-sync-g5xdd                 Created container: schema-migrator
60m         Normal    Started                  pod/prod-signoz-schema-migrator-sync-g5xdd                 Started container schema-migrator
60m         Normal    Scheduled                pod/prod-signoz-0                                          Successfully assigned platform/prod-signoz-0 to ip-172-16-10-236.ec2.internal
60m         Normal    SuccessfulAttachVolume   pod/prod-signoz-0                                          AttachVolume.Attach succeeded for volume "pvc-4df4dc25-0167-4890-85c8-f6e179334294"
60m         Normal    Killing                  pod/prod-signoz-alertmanager-0                             Stopping container alertmanager
60m         Normal    Killing                  pod/prod-signoz-frontend-74585649f6-j8td8                  Stopping container frontend
60m         Normal    Killing                  pod/prod-signoz-otel-collector-58c8b9bb65-k52rd            Stopping container collector
60m         Normal    SuccessfulDelete         replicaset/prod-signoz-otel-collector-58c8b9bb65           Deleted pod: prod-signoz-otel-collector-58c8b9bb65-k52rd
60m         Normal    Scheduled                pod/prod-signoz-otel-collector-67c6cd4779-v8q2s            Successfully assigned platform/prod-signoz-otel-collector-67c6cd4779-v8q2s to ip-172-16-10-236.ec2.internal
60m         Normal    SuccessfulCreate         replicaset/prod-signoz-otel-collector-67c6cd4779           Created pod: prod-signoz-otel-collector-67c6cd4779-v8q2s
60m         Normal    Killing                  pod/prod-signoz-otel-collector-metrics-5d7c7fb7b8-bzq5k    Stopping container collector
60m         Normal    ScalingReplicaSet        deployment/prod-signoz-otel-collector                      Scaled down replica set prod-signoz-otel-collector-58c8b9bb65 from 1 to 0
60m         Normal    Killing                  pod/prod-signoz-query-service-0                            Stopping container query-service
60m         Normal    Scheduled                pod/prod-signoz-schema-migrator-async-kp4sf                Successfully assigned platform/prod-signoz-schema-migrator-async-kp4sf to ip-172-16-10-236.ec2.internal
60m         Normal    SuccessfulCreate         job/prod-signoz-schema-migrator-async                      Created pod: prod-signoz-schema-migrator-async-kp4sf
60m         Normal    Completed                job/prod-signoz-schema-migrator-async                      Job completed
61m         Normal    Scheduled                pod/prod-signoz-schema-migrator-sync-g5xdd                 Successfully assigned platform/prod-signoz-schema-migrator-sync-g5xdd to ip-172-16-10-236.ec2.internal
61m         Normal    SuccessfulCreate         job/prod-signoz-schema-migrator-sync                       Created pod: prod-signoz-schema-migrator-sync-g5xdd
60m         Normal    Completed                job/prod-signoz-schema-migrator-sync                       Job completed
60m         Normal    SuccessfulCreate         statefulset/prod-signoz                                    create Pod prod-signoz-0 in StatefulSet prod-signoz successful
s
Hi @Richard Cooney, it's not clear what issue you are running into. Can you share more detail?
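For example, the output of these would help (the collector pod name below is copied from the events you pasted, so adjust it if the pod has been recreated):
kubectl -n platform get pods
kubectl -n platform describe pod prod-signoz-otel-collector-67c6cd4779-v8q2s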
r
Whenever I run
helm upgrade --install prod signoz/signoz --namespace platform --values values.yaml
with my new values, including the migration container, otel-collector fails and prevents the deployment. I'm not sure why:
60m         Warning   Unhealthy                pod/prod-signoz-otel-collector-67c6cd4779-v8q2s            Liveness probe failed: Get "http://172.16.10.34:13133/": dial tcp 172.16.10.34:13133: connect: connection refused
My values (not complete):
signoz:
  service:
    # -- Annotations to use by service associated to Frontend
    annotations: {}
  initContainers:
    migration:
      enabled: true
      image:
        registry: docker.io
        repository: busybox
        tag: 1.35
      command:
        - /bin/sh
        - -c
        - |
          echo "Running migration..."
          cp -pv /var/lib/old-signoz/signoz.db /var/lib/signoz/signoz.db
          echo "Migration complete..."
      additionalVolumes:
        - name: old-signoz-db
          persistentVolumeClaim:
            claimName: signoz-db-prod-signoz-query-service-0
      additionalVolumeMounts:
        - name: old-signoz-db
          mountPath: /var/lib/old-signoz

otelCollector:
  ports:
    syslog:
      enabled: true
      containerPort: 54527
      servicePort: 54527
      nodePort: ""
      protocol: TCP
    otlp:
      # -- Whether to enable service port for OTLP gRPC
      enabled: false
      # -- Container port for OTLP gRPC
#      containerPort: 4317
      # -- Service port for OTLP gRPC
#      servicePort: 4317
      # -- Node port for OTLP gRPC
#      nodePort: ""
      # -- Protocol to use for OTLP gRPC
#      protocol: TCP
    otlp-http:
      # -- Whether to enable service port for OTLP HTTP
      enabled: true
      # -- Container port for OTLP HTTP
      containerPort: 4318
      # -- Service port for OTLP HTTP
      servicePort: 4318
      # -- Node port for OTLP HTTP
      nodePort: ""
      # -- Protocol to use for OTLP HTTP
      protocol: TCP
s
Can you share the logs of the otel-collector?
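Something like this, using the pod name from your events (adjust if it has changed):
kubectl -n platform logs prod-signoz-otel-collector-67c6cd4779-v8q2s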
r
{"level":"error","timestamp":"2025-06-04T16:39:12.301Z","caller":"opamp/server_client.go:130","msg":"Failed to connect to the server: %v","component":"opamp-server-client","error":"dial tcp 10.100.130.14:4320: connect: connection refused","stacktrace":"<http://github.com/SigNoz/signoz-otel-collector/opamp.(*serverClient).Start.func2|github.com/SigNoz/signoz-otel-collector/opamp.(*serverClient).Start.func2>\n\t/home/runner/work/signoz-otel-collector/signoz-otel-collector/opamp/server_client.go:130\ngithub.com/open-telemetry/opamp-go/client/types.CallbacksStruct.OnConnectFailed\n\t/home/runner/go/pkg/mod/github.com/open-telemetry/opamp-go@v0.5.0/client/types/callbacks.go:150\ngithub.com/open-telemetry/opamp-go/client.(*wsClient).tryConnectOnce\n\t/home/runner/go/pkg/mod/github.com/open-telemetry/opamp-go@v0.5.0/client/wsclient.go:127\ngithub.com/open-telemetry/opamp-go/client.(*wsClient).ensureConnected\n\t/home/runner/go/pkg/mod/github.com/open-telemetry/opamp-go@v0.5.0/client/wsclient.go:165\ngithub.com/open-telemetry/opamp-go/client.(*wsClient).runOneCycle\n\t/home/runner/go/pkg/mod/github.com/open-telemetry/opamp-go@v0.5.0/client/wsclient.go:202\ngithub.com/open-telemetry/opamp-go/client.(*wsClient).runUntilStopped\n\t/home/runner/go/pkg/mod/github.com/open-telemetry/opamp-go@v0.5.0/client/wsclient.go:265\ngithub.com/open-telemetry/opamp-go/client/internal.(*ClientCommon).StartConnectAndRun.func1\n\t/home/runner/go/pkg/mod/github.com/open-telemetry/opamp-go@v0.5.0/client/internal/clientcommon.go:197"}
{"level":"error","timestamp":"2025-06-04T16:39:12.301Z","caller":"client/wsclient.go:170","msg":"Connection failed (dial tcp 10.100.130.14:4320: connect: connection refused), will retry.","component":"opamp-server-client","stacktrace":"<http://github.com/open-telemetry/opamp-go/client.(*wsClient).ensureConnected|github.com/open-telemetry/opamp-go/client.(*wsClient).ensureConnected>\n\t/home/runner/go/pkg/mod/github.com/open-telemetry/opamp-go@v0.5.0/client/wsclient.go:170\ngithub.com/open-telemetry/opamp-go/client.(*wsClient).runOneCycle\n\t/home/runner/go/pkg/mod/github.com/open-telemetry/opamp-go@v0.5.0/client/wsclient.go:202\ngithub.com/open-telemetry/opamp-go/client.(*wsClient).runUntilStopped\n\t/home/runner/go/pkg/mod/github.com/open-telemetry/opamp-go@v0.5.0/client/wsclient.go:265\ngithub.com/open-telemetry/opamp-go/client/internal.(*ClientCommon).StartConnectAndRun.func1\n\t/home/runner/go/pkg/mod/github.com/open-telemetry/opamp-go@v0.5.0/client/internal/clientcommon.go:197"}
{"level":"info","timestamp":"2025-06-04T16:39:13.219Z","caller":"opamp/server_client.go:171","msg":"Waiting for initial remote config","component":"opamp-server-client"}
{"level":"info","timestamp":"2025-06-04T16:39:14.219Z","caller":"opamp/server_client.go:171","msg":"Waiting for initial remote config","component":"opamp-server-client"}
{"level":"info","timestamp":"2025-06-04T16:39:15.219Z","caller":"opamp/server_client.go:171","msg":"Waiting for initial remote config","component":"opamp-server-client"}
{"level":"info","timestamp":"2025-06-04T16:39:16.220Z","caller":"opamp/server_client.go:171","msg":"Waiting for initial remote config","component":"opamp-server-client"}
{"level":"info","timestamp":"2025-06-04T16:39:17.221Z","caller":"opamp/server_client.go:171","msg":"Waiting for initial remote config","component":"opamp-server-client"}
{"level":"info","timestamp":"2025-06-04T16:39:18.221Z","caller":"opamp/server_client.go:171","msg":"Waiting for initial remote config","component":"opamp-server-client"}
{"level":"info","timestamp":"2025-06-04T16:39:19.221Z","caller":"opamp/server_client.go:171","msg":"Waiting for initial remote config","component":"opamp-server-client"}
{"level":"info","timestamp":"2025-06-04T16:39:20.223Z","caller":"opamp/server_client.go:171","msg":"Waiting for initial remote config","component":"opamp-server-client"}
{"level":"error","timestamp":"2025-06-04T16:39:20.242Z","caller":"opamp/server_client.go:130","msg":"Failed to connect to the server: %v","component":"opamp-server-client","error":"dial tcp 10.100.130.14:4320: connect: connection refused","stacktrace":"<http://github.com/SigNoz/signoz-otel-collector/opamp.(*serverClient).Start.func2|github.com/SigNoz/signoz-otel-collector/opamp.(*serverClient).Start.func2>\n\t/home/runner/work/signoz-otel-collector/signoz-otel-collector/opamp/server_client.go:130\ngithub.com/open-telemetry/opamp-go/client/types.CallbacksStruct.OnConnectFailed\n\t/home/runner/go/pkg/mod/github.com/open-telemetry/opamp-go@v0.5.0/client/types/callbacks.go:150\ngithub.com/open-telemetry/opamp-go/client.(*wsClient).tryConnectOnce\n\t/home/runner/go/pkg/mod/github.com/open-telemetry/opamp-go@v0.5.0/client/wsclient.go:127\ngithub.com/open-telemetry/opamp-go/client.(*wsClient).ensureConnected\n\t/home/runner/go/pkg/mod/github.com/open-telemetry/opamp-go@v0.5.0/client/wsclient.go:165\ngithub.com/open-telemetry/opamp-go/client.(*wsClient).runOneCycle\n\t/home/runner/go/pkg/mod/github.com/open-telemetry/opamp-go@v0.5.0/client/wsclient.go:202\ngithub.com/open-telemetry/opamp-go/client.(*wsClient).runUntilStopped\n\t/home/runner/go/pkg/mod/github.com/open-telemetry/opamp-go@v0.5.0/client/wsclient.go:265\ngithub.com/open-telemetry/opamp-go/client/internal.(*ClientCommon).StartConnectAndRun.func1\n\t/home/runner/go/pkg/mod/github.com/open-telemetry/opamp-go@v0.5.0/client/internal/clientcommon.go:197"}
{"level":"error","timestamp":"2025-06-04T16:39:20.242Z","caller":"client/wsclient.go:170","msg":"Connection failed (dial tcp 10.100.130.14:4320: connect: connection refused), will retry.","component":"opamp-server-client","stacktrace":"<http://github.com/open-telemetry/opamp-go/client.(*wsClient).ensureConnected|github.com/open-telemetry/opamp-go/client.(*wsClient).ensureConnected>\n\t/home/runner/go/pkg/mod/github.com/open-telemetry/opamp-go@v0.5.0/client/wsclient.go:170\ngithub.com/open-telemetry/opamp-go/client.(*wsClient).runOneCycle\n\t/home/runner/go/pkg/mod/github.com/open-telemetry/opamp-go@v0.5.0/client/wsclient.go:202\ngithub.com/open-telemetry/opamp-go/client.(*wsClient).runUntilStopped\n\t/home/runner/go/pkg/mod/github.com/open-telemetry/opamp-go@v0.5.0/client/wsclient.go:265\ngithub.com/open-telemetry/opamp-go/client/internal.(*ClientCommon).StartConnectAndRun.func1\n\t/home/runner/go/pkg/mod/github.com/open-telemetry/opamp-go@v0.5.0/client/internal/clientcommon.go:197"}
{"level":"info","timestamp":"2025-06-04T16:39:21.224Z","caller":"opamp/server_client.go:171","msg":"Waiting for initial remote config","component":"opamp-server-client"}
{"level":"info","timestamp":"2025-06-04T16:39:22.224Z","caller":"opamp/server_client.go:171","msg":"Waiting for initial remote config","component":"opamp-server-client"}
{"level":"info","timestamp":"2025-06-04T16:39:23.224Z","caller":"opamp/server_client.go:171","msg":"Waiting for initial remote config","component":"opamp-server-client"}
s
Can you share the logs of the `signoz` pod? This pod runs a server which the collector connects to, but it doesn't seem to be reachable.
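You can also check whether the service exposing the OpAMP port (4320, from the collector error above) has healthy endpoints; the grep is just a filter, the exact service names may differ:
kubectl -n platform get svc | grep signoz
kubectl -n platform get endpoints | grep signoz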
r
I don't seem to have any:
ssm-user@ip-172-16-8-43:~$ kubectl logs -f prod-signoz-0
Defaulted container "signoz" out of: signoz, prod-signoz-init (init), prod-signoz-migration (init)
Sorry, I guess the dependency is that signoz-0 isn't starting.
s
right
can you look at the container logs for migration?
r
Here are my pods
NAME                                          READY   STATUS                   RESTARTS        AGE
chi-prod-clickhouse-cluster-0-0-0             1/1     Running                  0               18h
prod-clickhouse-operator-cf747649-vg4ff       2/2     Running                  2 (18h ago)     18h
prod-signoz-0                                 0/1     Pending                  0               4m52s
prod-signoz-otel-collector-58c8b9bb65-nnc44   1/1     Running                  0               96m
prod-signoz-otel-collector-5b88f84cf8-ppw25   0/1     ContainerStatusUnknown   1 (3h24m ago)   18h
prod-signoz-otel-collector-67c6cd4779-vhrg6   0/1     Running                  3 (22s ago)     4m52s
prod-signoz-schema-migrator-async-wnrh5       0/1     Completed                0               4m52s
prod-signoz-schema-migrator-sync-nllpn        0/1     Completed                0               5m9s
prod-zookeeper-0                              1/1     Running                  1 (3h24m ago)   18h
ok
the `schema-migrator`? or the container in the signoz pod?
s
no, the init container for `prod-signoz-0`
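Something like this, assuming the namespace and the init container name from the kubectl logs output you shared:
kubectl -n platform logs prod-signoz-0 -c prod-signoz-migration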
r
ok
No logs. Nothing in `prod-signoz-init` either.
I just saw this event, if that helps:
0s (x3 over 10m)         Warning   FailedScheduling         Pod/prod-signoz-0                                          0/2 nodes are available: 2 node(s) didn't match PersistentVolume's node affinity. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
s
can you list and describe the pv(c)s associated with signoz?
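For example (namespace assumed to be platform, as in your events):
kubectl -n platform get pvc
kubectl get pv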
r
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM                                                                  STORAGECLASS    VOLUMEATTRIBUTESCLASS   REASON   AGE
pvc-436ef6e6-eec9-4264-b8cb-56e86567687f   8Gi        RWO            Retain           Released   platform/data-prod-zookeeper-0                                         gp2-resizable   <unset>                          19h
pvc-4df4dc25-0167-4890-85c8-f6e179334294   1Gi        RWO            Retain           Bound      platform/signoz-db-prod-signoz-0                                       gp2-resizable   <unset>                          158m
pvc-4e65e00f-caa9-46c5-b1dc-e3067d57f737   8Gi        RWO            Retain           Bound      platform/data-prod-zookeeper-0                                         gp2-resizable   <unset>                          18h
pvc-4fe663f6-73a6-43f7-bc84-7edd98c77b96   1Gi        RWO            Retain           Released   platform/signoz-db-prod-signoz-0                                       gp2-resizable   <unset>                          18h
pvc-8d72dda5-7926-4dfa-9ce2-d51976660d70   1Gi        RWO            Retain           Bound      platform/signoz-db-prod-signoz-query-service-0                         gp2-resizable   <unset>                          18h
pvc-9865e271-6250-499f-a0fb-ca0a654372ee   1Gi        RWO            Retain           Released   platform/signoz-db-prod-signoz-0                                       gp2-resizable   <unset>                          19h
pvc-9a43b33c-cd80-4766-8ef6-7abfae182cfd   512Gi      RWO            Retain           Released   platform/data-volumeclaim-template-chi-prod-clickhouse-cluster-0-0-0   gp2-resizable   <unset>                          18h
pvc-9c1f36b0-2101-43b7-8c5c-d186a2f20bb5   512Gi      RWO            Retain           Released   platform/data-volumeclaim-template-chi-prod-clickhouse-cluster-0-0-0   gp2-resizable   <unset>                          21h
pvc-a9f4926c-9a48-4ddc-ab0d-e9deb1fa8609   8Gi        RWO            Retain           Released   platform/data-prod-zookeeper-0                                         gp2-resizable   <unset>                          18h
pvc-bf2a249e-8fc2-4035-aa4c-1abdef83e5b6   1Gi        RWO            Retain           Released   platform/signoz-db-prod-signoz-0                                       gp2-resizable   <unset>                          21h
pvc-cda5fdaa-fd5b-43cd-9955-7ace213613fe   512Gi      RWO            Retain           Bound      platform/data-volumeclaim-template-chi-prod-clickhouse-cluster-0-0-0   gp2-resizable   <unset>                          18h
pvc-d2b55787-0b25-44f2-8eda-fceefe45c10c   1Gi        RWO            Retain           Bound      platform/storage-prod-signoz-alertmanager-0                            gp2-resizable   <unset>                          18h
pvc-f3cb1dd6-c138-4269-8a1c-f6be0fb9586c   8Gi        RWO            Retain           Released   platform/data-prod-zookeeper-0                                         gp2-resizable   <unset>                          21h
ssm-user@ip-172-16-8-43:~$ kubectl get pvc
NAME                                                          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS    VOLUMEATTRIBUTESCLASS   AGE
data-prod-zookeeper-0                                         Bound    pvc-4e65e00f-caa9-46c5-b1dc-e3067d57f737   8Gi        RWO            gp2-resizable   <unset>                 18h
data-volumeclaim-template-chi-prod-clickhouse-cluster-0-0-0   Bound    pvc-cda5fdaa-fd5b-43cd-9955-7ace213613fe   512Gi      RWO            gp2-resizable   <unset>                 18h
signoz-db-prod-signoz-0                                       Bound    pvc-4df4dc25-0167-4890-85c8-f6e179334294   1Gi        RWO            gp2-resizable   <unset>                 159m
signoz-db-prod-signoz-query-service-0                         Bound    pvc-8d72dda5-7926-4dfa-9ce2-d51976660d70   1Gi        RWO            gp2-resizable   <unset>                 18h
storage-prod-signoz-alertmanager-0                            Bound    pvc-d2b55787-0b25-44f2-8eda-fceefe45c10c   1Gi        RWO            gp2-resizable   <unset>                 18h
s
kubectl describe pv pvc-4df4dc25-0167-4890-85c8-f6e179334294
And this:
kubectl get nodes --show-labels
r
ssm-user@ip-172-16-8-43:~$ kubectl describe pv pvc-4df4dc25-0167-4890-85c8-f6e179334294
Name:              pvc-4df4dc25-0167-4890-85c8-f6e179334294
Labels:            topology.kubernetes.io/region=us-east-1
                   topology.kubernetes.io/zone=us-east-1d
Annotations:       pv.kubernetes.io/migrated-to: ebs.csi.aws.com
                   pv.kubernetes.io/provisioned-by: kubernetes.io/aws-ebs
                   volume.kubernetes.io/provisioner-deletion-secret-name:
                   volume.kubernetes.io/provisioner-deletion-secret-namespace:
Finalizers:        [kubernetes.io/pv-protection external-attacher/ebs-csi-aws-com]
StorageClass:      gp2-resizable
Status:            Bound
Claim:             platform/signoz-db-prod-signoz-0
Reclaim Policy:    Retain
Access Modes:      RWO
VolumeMode:        Filesystem
Capacity:          1Gi
Node Affinity:
  Required Terms:
    Term 0:        topology.kubernetes.io/zone in [us-east-1d]
                   topology.kubernetes.io/region in [us-east-1]
Message:
Source:
    Type:       AWSElasticBlockStore (a Persistent Disk resource in AWS)
    VolumeID:   vol-0858b31400c1f6f34
    FSType:     ext4
    Partition:  0
    ReadOnly:   false
Events:         <none>
ssm-user@ip-172-16-8-43:~$ kubectl get nodes --show-labels
NAME                            STATUS   ROLES    AGE   VERSION               LABELS
ip-172-16-10-236.ec2.internal   Ready    <none>   21h   v1.32.3-eks-473151a   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=t3.large,beta.kubernetes.io/os=linux,eks.amazonaws.com/capacityType=ON_DEMAND,eks.amazonaws.com/nodegroup-image=ami-0f2e4735b924be9d0,eks.amazonaws.com/nodegroup=signoz-node-group,eks.amazonaws.com/sourceLaunchTemplateId=lt-0f19d30b2825b0ba5,eks.amazonaws.com/sourceLaunchTemplateVersion=1,failure-domain.beta.kubernetes.io/region=us-east-1,failure-domain.beta.kubernetes.io/zone=us-east-1d,k8s.io/cloud-provider-aws=efb450a0099cb7ec4710ee5a8ad64b56,kubernetes.io/arch=amd64,kubernetes.io/hostname=ip-172-16-10-236.ec2.internal,kubernetes.io/os=linux,node.kubernetes.io/instance-type=t3.large,topology.ebs.csi.aws.com/zone=us-east-1d,topology.k8s.aws/zone-id=use1-az4,topology.kubernetes.io/region=us-east-1,topology.kubernetes.io/zone=us-east-1d
ip-172-16-9-147.ec2.internal    Ready    <none>   21h   v1.32.3-eks-473151a   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=t3.large,beta.kubernetes.io/os=linux,eks.amazonaws.com/capacityType=ON_DEMAND,eks.amazonaws.com/nodegroup-image=ami-0f2e4735b924be9d0,eks.amazonaws.com/nodegroup=signoz-node-group,eks.amazonaws.com/sourceLaunchTemplateId=lt-0f19d30b2825b0ba5,eks.amazonaws.com/sourceLaunchTemplateVersion=1,failure-domain.beta.kubernetes.io/region=us-east-1,failure-domain.beta.kubernetes.io/zone=us-east-1a,k8s.io/cloud-provider-aws=efb450a0099cb7ec4710ee5a8ad64b56,kubernetes.io/arch=amd64,kubernetes.io/hostname=ip-172-16-9-147.ec2.internal,kubernetes.io/os=linux,node.kubernetes.io/instance-type=t3.large,topology.ebs.csi.aws.com/zone=us-east-1a,topology.k8s.aws/zone-id=use1-az6,topology.kubernetes.io/region=us-east-1,topology.kubernetes.io/zone=us-east-1a
s
signoz-db-prod-signoz-0                                       Bound    pvc-4df4dc25-0167-4890-85c8-f6e179334294   1Gi        RWO            gp2-resizable   <unset>                 159m
has node affinity requiring `us-east-1d`. I guess the pod is trying to schedule on the `us-east-1a` node, but the PV can only attach to nodes in `us-east-1d`.
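One way to unblock it, as a sketch only: pin the signoz pod to the zone the volume lives in. The exact values key depends on the chart version, so treat `signoz.nodeSelector` as an assumption and check the chart's values.yaml:
signoz:
  nodeSelector:
    topology.kubernetes.io/zone: us-east-1d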
r
Oh yes, I see that now.
Thank you. I will try to resolve this.