in the signoz k8s chart, how do i run the `job/sig...
# support
h
in the SigNoz k8s chart, how do I run the
job/signoz-schema-migrator-upgrade
again? It seems it didn't run properly, and
Copy code
kubectl create job signoz-schema-migrator-upgrade-lolol --from=job/signoz-schema-migrator-upgrade
is not working, giving
error: unknown object type *v1.Job
the reason for the error is
but I need to run the migrator job again
but how?
because I'm getting
Copy code
PrepareBatch:code: 16, message: No such column scope_name in table signoz_logs.distributed_logs
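(to confirm the column really is missing, something like this can be run against ClickHouse; the pod name and namespace below are assumptions, adjust for your cluster:)
Copy code
# check whether the scope_name column exists in signoz_logs.distributed_logs
# pod name "chi-signoz-clickhouse-cluster-0-0-0" and namespace "signoz-apm" are assumptions;
# find your ClickHouse pod with `kubectl get pods -n <namespace> | grep clickhouse`
kubectl exec -n signoz-apm chi-signoz-clickhouse-cluster-0-0-0 -- \
  clickhouse-client --user admin --password "$CLICKHOUSE_PASSWORD" \
  -q "DESCRIBE TABLE signoz_logs.distributed_logs" | grep scope_name \
  || echo "scope_name column not found"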
a
@Prashant Shahi please take a look
p
@hans Can you share the version of your Kubernetes cluster/client? It is most definitely due to an older version.
It says so in the official docs. I'll check the version in a bit, but it's an EKS cluster that's not more than a few months old.
Copy code
❯ kubectl create job signoz-schema-migrator-upgrade-lolol --from=job/signoz-schema-migrator-upgrade

error: unknown object type *v1.Job

❯ kubectl version                                                                 
Client Version: v1.28.4
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.9-eks-036c24b
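a possible workaround when kubectl create job --from fails like this: dump the existing Job, strip the server-populated fields, rename it, and apply it again. A rough sketch, assuming yq v4 is installed:
Copy code
# re-create the migrator job under a new name without `kubectl create job --from`
# strips status, uid, the selector, and the controller-managed labels so the
# API server can regenerate them for the new Job (yq v4 syntax)
kubectl get job -n signoz-apm signoz-schema-migrator-upgrade -o yaml \
  | yq '
      del(.status) |
      del(.metadata.uid) |
      del(.metadata.resourceVersion) |
      del(.metadata.creationTimestamp) |
      del(.spec.selector) |
      del(.spec.template.metadata.labels."controller-uid") |
      del(.spec.template.metadata.labels."batch.kubernetes.io/controller-uid") |
      del(.spec.template.metadata.labels."job-name") |
      del(.spec.template.metadata.labels."batch.kubernetes.io/job-name") |
      .metadata.name = "signoz-schema-migrator-upgrade-rerun"
    ' \
  | kubectl apply -n signoz-apm -f -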
if I change
docker.io/signoz/signoz-otel-collector:0.102.2
to
0.102.0
ingesting logs works again... but that's not a nice solution
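(to see the mismatch, it can help to compare the image the running collector uses with the image the migrator job ran with; resource names below are assumed from the chart defaults:)
Copy code
# image currently used by the collector deployment
kubectl get deploy -n signoz-apm signoz-otel-collector \
  -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
# image the schema migrator job was created with
kubectl get job -n signoz-apm signoz-schema-migrator-upgrade \
  -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'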
p
That's strange.
Did the job previously work in the same cluster?
And have you made any changes to the K8s Job resource?
h
Copy code
at 12:28:50 ❯ kubectl get jobs
NAME                             COMPLETIONS   DURATION   AGE
signoz-schema-migrator-upgrade   1/1           19m        3d17h
p
can you share this one?
Copy code
kubectl get jobs -o yaml signoz-schema-migrator-upgrade
And also the one that you are trying to create manually?
h
Copy code
apiVersion: batch/v1
kind: Job
metadata:
  annotations:
    helm.sh/hook: post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation
    helm.sh/hook-weight: "1"
  creationTimestamp: "2024-07-15T16:47:55Z"
  generation: 1
  labels:
    app.kubernetes.io/component: schema-migrator-upgrade
    app.kubernetes.io/instance: signoz
    app.kubernetes.io/name: signoz
  name: signoz-schema-migrator-upgrade
  namespace: signoz-apm
  resourceVersion: "66370160"
  uid: 70970f20-4b8d-4590-a3bc-18cffca121ec
spec:
  backoffLimit: 6
  completionMode: NonIndexed
  completions: 1
  parallelism: 1
  selector:
    matchLabels:
      batch.kubernetes.io/controller-uid: 70970f20-4b8d-4590-a3bc-18cffca121ec
  suspend: false
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/component: schema-migrator-upgrade
        app.kubernetes.io/instance: signoz
        app.kubernetes.io/name: signoz
        batch.kubernetes.io/controller-uid: 70970f20-4b8d-4590-a3bc-18cffca121ec
        batch.kubernetes.io/job-name: signoz-schema-migrator-upgrade
        controller-uid: 70970f20-4b8d-4590-a3bc-18cffca121ec
        job-name: signoz-schema-migrator-upgrade
    spec:
      containers:
      - args:
        - --dsn
        - tcp://$(CLICKHOUSE_USER):$(CLICKHOUSE_PASSWORD)@signoz-clickhouse:9000
        env:
        - name: CLICKHOUSE_HOST
          value: signoz-clickhouse
        - name: CLICKHOUSE_PORT
          value: "9000"
        - name: CLICKHOUSE_HTTP_PORT
          value: "8123"
        - name: CLICKHOUSE_CLUSTER
          value: cluster
        - name: CLICKHOUSE_USER
          value: admin
        - name: CLICKHOUSE_PASSWORD
          value: 27ff0399-0d3a-4bd8-919d-17c2181e6fb9
        - name: CLICKHOUSE_SECURE
          value: "false"
        image: docker.io/signoz/signoz-schema-migrator:0.102.0
        imagePullPolicy: IfNotPresent
        name: schema-migrator
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      initContainers:
      - command:
        - sh
        - -c
        - until wget --user "$(CLICKHOUSE_USER):$(CLICKHOUSE_PASSWORD)" --spider -q
          signoz-clickhouse:8123/ping; do echo -e "waiting for clickhouseDB"; sleep
          5; done; echo -e "clickhouse ready, starting schema migrator now";
        env:
        - name: CLICKHOUSE_HOST
          value: signoz-clickhouse
        - name: CLICKHOUSE_PORT
          value: "9000"
        - name: CLICKHOUSE_HTTP_PORT
          value: "8123"
        - name: CLICKHOUSE_CLUSTER
          value: cluster
        - name: CLICKHOUSE_USER
          value: admin
        - name: CLICKHOUSE_PASSWORD
          value: 27ff0399-0d3a-4bd8-919d-17c2181e6fb9
        - name: CLICKHOUSE_SECURE
          value: "false"
        image: docker.io/busybox:1.35
        imagePullPolicy: IfNotPresent
        name: signoz-schema-migrator-init
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      - command:
        - sh
        - -c
        - |
          echo "Running clickhouse ready check"
          while true
          do
            version="$(CLICKHOUSE_VERSION)"
            shards="$(CLICKHOUSE_SHARDS)"
            replicas="$(CLICKHOUSE_REPLICAS)"
            current_version="$(clickhouse client --host ${CLICKHOUSE_HOST} --port ${CLICKHOUSE_PORT} --user "${CLICKHOUSE_USER}" --password "${CLICKHOUSE_PASSWORD}" -q "SELECT version()")"
            if [ -z "$current_version" ]; then
              echo "waiting for clickhouse to be ready"
              sleep 5
              continue
            fi
            if [ -z "$(echo "$current_version" | grep "$version")" ]; then
              echo "expected version: $version, current version: $current_version"
              echo "waiting for clickhouse with correct version"
              sleep 5
              continue
            fi
            current_shards="$(clickhouse client --host ${CLICKHOUSE_HOST} --port ${CLICKHOUSE_PORT} --user "${CLICKHOUSE_USER}" --password "${CLICKHOUSE_PASSWORD}" -q "SELECT count(DISTINCT(shard_num)) FROM system.clusters WHERE cluster = '${CLICKHOUSE_CLUSTER}'")"
            if [ -z "$current_shards" ]; then
              echo "waiting for clickhouse to be ready"
              sleep 5
              continue
            fi
            if [ "$current_shards" -ne "$shards" ]; then
              echo "expected shard count: $shards, current shard count: $current_shards"
              echo "waiting for clickhouse with correct shard count"
              sleep 5
              continue
            fi
            current_replicas="$(clickhouse client --host ${CLICKHOUSE_HOST} --port ${CLICKHOUSE_PORT} --user "${CLICKHOUSE_USER}" --password "${CLICKHOUSE_PASSWORD}" -q "SELECT count(DISTINCT(replica_num)) FROM system.clusters WHERE cluster = '${CLICKHOUSE_CLUSTER}'")"
            if [ -z "$current_replicas" ]; then
              echo "waiting for clickhouse to be ready"
              sleep 5
              continue
            fi
            if [ "$current_replicas" -ne "$replicas" ]; then
              echo "expected replica count: $replicas, current replica count: $current_replicas"
              echo "waiting for clickhouse with correct replica count"
              sleep 5
              continue
            fi
            break
          done
          echo "clickhouse ready, starting schema migrator now"
        env:
        - name: CLICKHOUSE_HOST
          value: signoz-clickhouse
        - name: CLICKHOUSE_PORT
          value: "9000"
        - name: CLICKHOUSE_HTTP_PORT
          value: "8123"
        - name: CLICKHOUSE_CLUSTER
          value: cluster
        - name: CLICKHOUSE_USER
          value: admin
        - name: CLICKHOUSE_PASSWORD
          value: 27ff0399-0d3a-4bd8-919d-17c2181e6fb9
        - name: CLICKHOUSE_SECURE
          value: "false"
        - name: CLICKHOUSE_VERSION
          value: 24.6.2.17
        - name: CLICKHOUSE_SHARDS
          value: "1"
        - name: CLICKHOUSE_REPLICAS
          value: "1"
        image: docker.io/clickhouse/clickhouse-server:24.1.2-alpine
        imagePullPolicy: IfNotPresent
        name: signoz-schema-migrator-ch-ready
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      restartPolicy: OnFailure
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  completionTime: "2024-07-15T17:07:34Z"
  conditions:
  - lastProbeTime: "2024-07-15T17:07:34Z"
    lastTransitionTime: "2024-07-15T17:07:34Z"
    status: "True"
    type: Complete
  failed: 1
  ready: 0
  startTime: "2024-07-15T16:47:55Z"
  succeeded: 1
  uncountedTerminatedPods: {}
I'm just trying to restart that job
to make sure it creates the missing column
p
Helm hooks don't work in your case?
h
it worked on 1 of 3 clusters
we have SigNoz on 3 clusters
I mean, the migration worked
p
you can just delete the existing job and run
helm upgrade ...
command to re-create the job
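something like this, assuming the release is named signoz and the chart comes from the signoz Helm repo:
Copy code
# delete the hook-created job, then upgrade so the post-upgrade hook recreates it
kubectl delete job -n signoz-apm signoz-schema-migrator-upgrade
helm upgrade signoz signoz/signoz -n signoz-apm -f values.yaml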
h
I don't like to run Helm manually, but maybe I can. We try to use 100% Pulumi (like Terraform)
p
you can delete the job and re-create it using the following upgrade job:
Copy code
kubectl delete job -n signoz-apm signoz-schema-migrator-upgrade
kubectl apply -n signoz-apm -f custom-upgrade-job.yaml
custom-upgrade-job.yaml
Copy code
apiVersion: batch/v1
kind: Job
metadata:
  annotations:
    helm.sh/hook: post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation
    helm.sh/hook-weight: "1"
  creationTimestamp: null
  generation: 1
  labels:
    app.kubernetes.io/component: schema-migrator-upgrade
    app.kubernetes.io/instance: signoz
    app.kubernetes.io/name: signoz
  name: signoz-schema-migrator-upgrade
  namespace: signoz-apm
spec:
  backoffLimit: 6
  completionMode: NonIndexed
  completions: 1
  parallelism: 1
  suspend: false
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/component: schema-migrator-upgrade
        app.kubernetes.io/instance: signoz
        app.kubernetes.io/name: signoz
        batch.kubernetes.io/job-name: signoz-schema-migrator-upgrade
        job-name: signoz-schema-migrator-upgrade
    spec:
      containers:
      - args:
        - --dsn
        - tcp://$(CLICKHOUSE_USER):$(CLICKHOUSE_PASSWORD)@signoz-clickhouse:9000
        env:
        - name: CLICKHOUSE_HOST
          value: signoz-clickhouse
        - name: CLICKHOUSE_PORT
          value: "9000"
        - name: CLICKHOUSE_HTTP_PORT
          value: "8123"
        - name: CLICKHOUSE_CLUSTER
          value: cluster
        - name: CLICKHOUSE_USER
          value: admin
        - name: CLICKHOUSE_PASSWORD
          value: 27ff0399-0d3a-4bd8-919d-17c2181e6fb9
        - name: CLICKHOUSE_SECURE
          value: "false"
        image: docker.io/signoz/signoz-schema-migrator:0.102.2
        imagePullPolicy: IfNotPresent
        name: schema-migrator
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      initContainers:
      - command:
        - sh
        - -c
        - until wget --user "$(CLICKHOUSE_USER):$(CLICKHOUSE_PASSWORD)" --spider -q
          signoz-clickhouse:8123/ping; do echo -e "waiting for clickhouseDB"; sleep
          5; done; echo -e "clickhouse ready, starting schema migrator now";
        env:
        - name: CLICKHOUSE_HOST
          value: signoz-clickhouse
        - name: CLICKHOUSE_PORT
          value: "9000"
        - name: CLICKHOUSE_HTTP_PORT
          value: "8123"
        - name: CLICKHOUSE_CLUSTER
          value: cluster
        - name: CLICKHOUSE_USER
          value: admin
        - name: CLICKHOUSE_PASSWORD
          value: 27ff0399-0d3a-4bd8-919d-17c2181e6fb9
        - name: CLICKHOUSE_SECURE
          value: "false"
        image: docker.io/busybox:1.35
        imagePullPolicy: IfNotPresent
        name: signoz-schema-migrator-init
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      - command:
        - sh
        - -c
        - |
          echo "Running clickhouse ready check"
          while true
          do
            version="$(CLICKHOUSE_VERSION)"
            shards="$(CLICKHOUSE_SHARDS)"
            replicas="$(CLICKHOUSE_REPLICAS)"
            current_version="$(clickhouse client --host ${CLICKHOUSE_HOST} --port ${CLICKHOUSE_PORT} --user "${CLICKHOUSE_USER}" --password "${CLICKHOUSE_PASSWORD}" -q "SELECT version()")"
            if [ -z "$current_version" ]; then
              echo "waiting for clickhouse to be ready"
              sleep 5
              continue
            fi
            if [ -z "$(echo "$current_version" | grep "$version")" ]; then
              echo "expected version: $version, current version: $current_version"
              echo "waiting for clickhouse with correct version"
              sleep 5
              continue
            fi
            current_shards="$(clickhouse client --host ${CLICKHOUSE_HOST} --port ${CLICKHOUSE_PORT} --user "${CLICKHOUSE_USER}" --password "${CLICKHOUSE_PASSWORD}" -q "SELECT count(DISTINCT(shard_num)) FROM system.clusters WHERE cluster = '${CLICKHOUSE_CLUSTER}'")"
            if [ -z "$current_shards" ]; then
              echo "waiting for clickhouse to be ready"
              sleep 5
              continue
            fi
            if [ "$current_shards" -ne "$shards" ]; then
              echo "expected shard count: $shards, current shard count: $current_shards"
              echo "waiting for clickhouse with correct shard count"
              sleep 5
              continue
            fi
            current_replicas="$(clickhouse client --host ${CLICKHOUSE_HOST} --port ${CLICKHOUSE_PORT} --user "${CLICKHOUSE_USER}" --password "${CLICKHOUSE_PASSWORD}" -q "SELECT count(DISTINCT(replica_num)) FROM system.clusters WHERE cluster = '${CLICKHOUSE_CLUSTER}'")"
            if [ -z "$current_replicas" ]; then
              echo "waiting for clickhouse to be ready"
              sleep 5
              continue
            fi
            if [ "$current_replicas" -ne "$replicas" ]; then
              echo "expected replica count: $replicas, current replica count: $current_replicas"
              echo "waiting for clickhouse with correct replica count"
              sleep 5
              continue
            fi
            break
          done
          echo "clickhouse ready, starting schema migrator now"
        env:
        - name: CLICKHOUSE_HOST
          value: signoz-clickhouse
        - name: CLICKHOUSE_PORT
          value: "9000"
        - name: CLICKHOUSE_HTTP_PORT
          value: "8123"
        - name: CLICKHOUSE_CLUSTER
          value: cluster
        - name: CLICKHOUSE_USER
          value: admin
        - name: CLICKHOUSE_PASSWORD
          value: 27ff0399-0d3a-4bd8-919d-17c2181e6fb9
        - name: CLICKHOUSE_SECURE
          value: "false"
        - name: CLICKHOUSE_VERSION
          value: 24.6.2.17
        - name: CLICKHOUSE_SHARDS
          value: "1"
        - name: CLICKHOUSE_REPLICAS
          value: "1"
        image: docker.io/clickhouse/clickhouse-server:24.1.2-alpine
        imagePullPolicy: IfNotPresent
        name: signoz-schema-migrator-ch-ready
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      restartPolicy: OnFailure
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
I have bumped up the schema migrator version to 0.102.2,
so make sure to bump up signoz-otel-collector to 0.102.2 as well.
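for example, roughly like this with plain Helm; the otelCollector.image.tag values key is an assumption based on the chart defaults:
Copy code
# watch the re-created migrator job
kubectl logs -n signoz-apm -f job/signoz-schema-migrator-upgrade
# then bump the collector to the matching version
helm upgrade signoz signoz/signoz -n signoz-apm --reuse-values \
  --set otelCollector.image.tag=0.102.2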
h
ooh, so that's wrong in the signoz chart?
or hmm, the helm upgrade failed
it was 0.102.0 here
but otel-collector is running 0.102.2
p
yes, most likely that caused the issue.
In the usual scenario where the Helm hook works as expected, the schema migrator job is re-created and applied before signoz-otel-collector.
h
that job seems to create a lot of issues; it also caused a "waiting for migrator" issue for us where the otel-collector didn't want to start
p
Do check with the Pulumi folks regarding Helm hooks support for Jobs.
Yeah, that might require you to intervene manually.
You could disable the schema migrator temporarily to get around that, but this should ideally be resolved with Helm hook support for Jobs.
Copy code
schemaMigrator:
  enabled: false
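for reference, the same override applied with plain Helm (release and chart names assumed):
Copy code
# temporarily disable the schema migrator hook jobs; re-enable once Job hooks work in Pulumi
helm upgrade signoz signoz/signoz -n signoz-apm --reuse-values \
  --set schemaMigrator.enabled=false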
h
I would need the migrator now though, because of
No such column scope_name in table signoz_logs.distributed_logs
right?
I'll look into Pulumi and Helm hooks though
p
cool, do let us know if you find a permanent solution that works with Pulumi
> I would need the migrator now though, because of
No such column scope_name in table signoz_logs.distributed_logs
right? Yes, you are right. In this scenario, you do need it. It is not always the case with every new collector version release. So I meant it can be useful in those cases, or when you deploy any changes that cause those pods to restart.
some additional info: when I upgrade the chart through Pulumi, I get
* Helm release "signoz-apm/signoz" failed to initialize completely. Use Helm CLI to investigate.: failed to become available within allocated timeout. Error: Helm Release signoz-apm/signoz: cannot patch "signoz-query-service" with kind StatefulSet: StatefulSet.apps "signoz-query-service" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'ordinals', 'template', 'updateStrategy', 'persistentVolumeClaimRetentionPolicy' and 'minReadySeconds' are forbidden
this error, and then I rerun it and it disappears
but it seems that in the cluster where that error happens, the migrator fails
And I have no values related to storageClass
wish it said which field it's trying to update
p
yeah, that's a common issue if you update immutable fields.
you can get the current query-service StatefulSet YAML and compare it with the new one that is to be applied.
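a sketch of that comparison; the --show-only template path is an assumption about the chart layout, so check the chart source if it differs:
Copy code
# dump the live StatefulSet and render the chart's version, then diff them
kubectl get statefulset -n signoz-apm signoz-query-service -o yaml > live-query-service.yaml
helm template signoz signoz/signoz -n signoz-apm -f values.yaml \
  --show-only templates/query-service/statefulset.yaml > rendered-query-service.yaml
diff live-query-service.yaml rendered-query-service.yaml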
h
smart, I compared it with the cluster where it works; it seems I at some point tried to use
signoz-db-existing-claim
and that's causing trouble, because the Helm values don't have it but the live version does
p
Perhaps you set it previously.
Try setting it now:
Copy code
queryService:
  persistence:
    existingClaim: signoz-db-existing-claim
h
yeah, and then removed it, but it never actually got removed
yeah, what does that actually do? I can't really remember
I'm updating with that now
I wanted to keep users when I delete an old cluster and create a new one, so I wanted it to use the same AWS volume
but I found another solution for that: just export the SQLite DB before I delete the cluster
p
Btw, it's just a single SQLite DB file in query-service. So you just need to keep a backup of that and use it across different SigNoz clusters.
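for example, copying it out before tearing a cluster down; the in-pod path /var/lib/signoz/signoz.db and the pod name are assumptions based on the defaults:
Copy code
# back up the query-service SQLite DB (kubectl cp needs tar inside the container;
# fall back to `kubectl exec ... -- cat` if it's missing)
kubectl cp -n signoz-apm signoz-query-service-0:/var/lib/signoz/signoz.db ./signoz.db
# and restore it into the new cluster later
kubectl cp -n signoz-apm ./signoz.db signoz-query-service-0:/var/lib/signoz/signoz.db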
h
in the last cluster, it's a different issue
Copy code
✦ at 13:50:35 ❯ kubectl logs signoz-schema-migrator-upgrade-9ddh5
Defaulted container "schema-migrator" out of: schema-migrator, signoz-schema-migrator-init (init), signoz-schema-migrator-ch-ready (init)
{"level":"info","timestamp":"2024-07-19T11:50:26.822Z","caller":"signozschemamigrator/migrate.go:89","msg":"Setting env var SIGNOZ_CLUSTER","component":"migrate cli","cluster-name":"cluster"}
{"level":"info","timestamp":"2024-07-19T11:50:26.822Z","caller":"signozschemamigrator/migrate.go:106","msg":"Successfully set env var SIGNOZ_CLUSTER ","component":"migrate cli","cluster-name":"cluster"}
{"level":"info","timestamp":"2024-07-19T11:50:26.822Z","caller":"signozschemamigrator/migrate.go:111","msg":"Setting env var SIGNOZ_REPLICATED","component":"migrate cli","replication":false}
{"level":"error","timestamp":"2024-07-19T11:50:26.823Z","caller":"basemigrator/migrator.go:26","msg":"Failed to create clickhouse connection","migrator":"logs","error":"failed to ping clickhouse: dial tcp 172.20.82.18:9000: connect: connection refused","stacktrace":"<http://github.com/SigNoz/signoz-otel-collector/migrationmanager/migrators/basemigrator.New|github.com/SigNoz/signoz-otel-collector/migrationmanager/migrators/basemigrator.New>\n\t/home/runner/work/signoz-otel-collector/signoz-otel-collector/migrationmanager/migrators/basemigrator/migrator.go:26\ngithub.com/SigNoz/signoz-otel-collector/migrationmanager.createNewMigrator\n\t/home/runner/work/signoz-otel-collector/signoz-otel-collector/migrationmanager/manager.go:58\ngithub.com/SigNoz/signoz-otel-collector/migrationmanager.New\n\t/home/runner/work/signoz-otel-collector/signoz-otel-collector/migrationmanager/manager.go:31\nmain.main\n\t/home/runner/work/signoz-otel-collector/signoz-otel-collector/cmd/signozschemamigrator/migrate.go:120\nruntime.main\n\t/opt/hostedtoolcache/go/1.21.11/x64/src/runtime/proc.go:267"}
{"level":"error","timestamp":"2024-07-19T11:50:26.823Z","caller":"migrationmanager/manager.go:60","msg":"Failed to create base migrator","migrator":"logs","error":"failed to ping clickhouse: dial tcp 172.20.82.18:9000: connect: connection refused","stacktrace":"<http://github.com/SigNoz/signoz-otel-collector/migrationmanager.createNewMigrator|github.com/SigNoz/signoz-otel-collector/migrationmanager.createNewMigrator>\n\t/home/runner/work/signoz-otel-collector/signoz-otel-collector/migrationmanager/manager.go:60\ngithub.com/SigNoz/signoz-otel-collector/migrationmanager.New\n\t/home/runner/work/signoz-otel-collector/signoz-otel-collector/migrationmanager/manager.go:31\nmain.main\n\t/home/runner/work/signoz-otel-collector/signoz-otel-collector/cmd/signozschemamigrator/migrate.go:120\nruntime.main\n\t/opt/hostedtoolcache/go/1.21.11/x64/src/runtime/proc.go:267"}
{"level":"error","timestamp":"2024-07-19T11:50:26.823Z","caller":"migrationmanager/manager.go:33","msg":"Failed to create logs migrator","component":"migrationmanager","error":"failed to ping clickhouse: dial tcp 172.20.82.18:9000: connect: connection refused","stacktrace":"<http://github.com/SigNoz/signoz-otel-collector/migrationmanager.New|github.com/SigNoz/signoz-otel-collector/migrationmanager.New>\n\t/home/runner/work/signoz-otel-collector/signoz-otel-collector/migrationmanager/manager.go:33\nmain.main\n\t/home/runner/work/signoz-otel-collector/signoz-otel-collector/cmd/signozschemamigrator/migrate.go:120\nruntime.main\n\t/opt/hostedtoolcache/go/1.21.11/x64/src/runtime/proc.go:267"}
{"level":"fatal","timestamp":"2024-07-19T11:50:26.823Z","caller":"signozschemamigrator/migrate.go:122","msg":"Failed to create migration manager","component":"migrate cli","error":"failed to ping clickhouse: dial tcp 172.20.82.18:9000: connect: connection refused","stacktrace":"main.main\n\t/home/runner/work/signoz-otel-collector/signoz-otel-collector/cmd/signozschemamigrator/migrate.go:122\nruntime.main\n\t/opt/hostedtoolcache/go/1.21.11/x64/src/runtime/proc.go:267"}
but it seemed it succeeded the next time it ran
also, I had to
kubectl scale deploy signoz-otel-collector --replicas=0
during the deployment, otherwise it was just
Waiting for job signoz-schema-migrator-upgrade...
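roughly, the manual sequence described above; the original replica count of 1 is an assumption:
Copy code
# scale the collector down during the upgrade, wait for the migration job to
# complete, then scale it back up (original replica count assumed to be 1)
kubectl -n signoz-apm scale deploy signoz-otel-collector --replicas=0
kubectl -n signoz-apm wait --for=condition=complete job/signoz-schema-migrator-upgrade --timeout=15m
kubectl -n signoz-apm scale deploy signoz-otel-collector --replicas=1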