Al
11/20/2023, 9:48 PM
helm upgrade signoz signoz/signoz --version '0.30.1'
It seems the signoz-schema-migrator-647928f90a35 job failed with:
Pods Statuses: 0 Active (1 Ready) / 0 Succeeded / 1 Failed
Attaching the first portion of the ClickHouse logs post-upgrade.
signoz-otel-collector-585cc94dd-8fz9r --> signoz-otel-collector-migrate-init is busy looping with (similar for otel-collector-metrics):
Waiting for job signoz-schema-migrator-647928f90a35...
Waiting for job signoz-schema-migrator-647928f90a35...
Waiting for job signoz-schema-migrator-647928f90a35...
Waiting for job signoz-schema-migrator-647928f90a35...
Waiting for job signoz-schema-migrator-647928f90a35...
Al
11/20/2023, 10:51 PM
signoz-otel-collector-metrics is unable to parse otel-collector-metrics-config.yaml and is failing with the following:
{"level":"info","timestamp":"2023-11-21T15:56:54.625Z","caller":"service/service.go:69","msg":"Starting service"}
{"level":"info","timestamp":"2023-11-21T15:56:54.625Z","caller":"opamp/simple_client.go:26","msg":"Starting simple client","component":"simple-client"}
{"level":"fatal","timestamp":"2023-11-21T15:56:54.634Z","caller":"signozcollector/main.go:78","msg":"failed to run service:","error":"failed to start collector service: failed to start : failed to get config: cannot unmarshal the configuration: 1 error(s) decoding:\n\n* error decoding 'receivers': error reading configuration for \"httpcheck/tatertot\": 1 error(s) decoding:\n\n* '' has invalid keys: endpoint, method","stacktrace":"main.main\n\t/home/runner/work/signoz-otel-collector/signoz-otel-collector/cmd/signozcollector/main.go:78\nruntime.main\n\t/opt/hostedtoolcache/go/1.20.10/x64/src/runtime/proc.go:250"}
I have the following definition:
httpcheck/tatertot:
  collection_interval: 60s
  endpoint: http://tatertot.com
  method: GET
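A hedged aside: the "invalid keys: endpoint, method" error above is consistent with newer versions of the httpcheck receiver, where the endpoint and method settings moved under a targets list. A sketch of the same check in that form (the targets layout is an assumption based on recent collector-contrib releases, not taken from this thread):

httpcheck/tatertot:
  targets:
    # same endpoint and method as before, now listed under targets
    - endpoint: http://tatertot.com
      method: GET
  collection_interval: 60s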
Saad Ansari
11/21/2023, 6:42 AM
otelCollector:
  nodeSelector: {karpenter.sh/capacity-type: spot}
  tolerations:
    - key: "gajigesa.com/spot"
      operator: "Equal"
      value: "true"
      effect: "NoSchedule"
  config:
    processors:
      # default parsing of logs
      logstransform/internal:
        operators:
          - type: trace_parser
            trace_id:
              parse_from: attributes.trace_id
            span_id:
              parse_from: attributes.span_id
            trace_flags:
              parse_from: attributes.trace_flags
But we're still unable to see the IDs exported as attributes. We need help understanding where exactly we need to add the trace_parser.
We also tried adding it under k8s-infra.presets.logsCollection.operators, but that didn't work either.
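A hedged aside on trace_parser placement: the operator only reads from the fields named in parse_from (here attributes.trace_id and friends), so those attributes have to be populated before it runs - either by the SDK/receiver or by a preceding parser operator. A minimal sketch, assuming JSON-formatted log bodies (the json_parser step is an assumption, not something confirmed in this thread):

logstransform/internal:
  operators:
    # assumption: the log body is JSON, so parse it into attributes first
    - type: json_parser
      parse_from: body
      parse_to: attributes
    # now the trace context fields exist as attributes for trace_parser to pick up
    - type: trace_parser
      trace_id:
        parse_from: attributes.trace_id
      span_id:
        parse_from: attributes.span_id
      trace_flags:
        parse_from: attributes.trace_flags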
Douglas Lara
11/21/2023, 6:00 PM
Alex Bowers
11/22/2023, 9:01 AM
annotations.description and annotations.summary.
My setup is that I have a percentage line graph, the alert threshold is set to 0.75 percent (0.0-1.0), and the Y-axis unit is Percent (0.0 - 1.0).
The summary and description that come through have different units (see attached image).
My description is set to the following: This alert is fired when the defined metric (current value: {{$value}}) crosses the threshold ({{$threshold}})
Alex Bowers
11/22/2023, 10:23 AM
OK, but if you go to disable it, it says Enable. I'm not sure if this is a UI bug or if the alert itself is actually enabled / disabled.
Kashyap Rajendra Kathrani
11/22/2023, 2:14 PM
Al
11/22/2023, 3:19 PM
opentelemetry-collector-contrib:0.88.0) in remote clusters?
Krishna Teja
11/22/2023, 4:27 PM
Nicolas Rakover
11/22/2023, 10:41 PM
Pishang Ujeniya
11/23/2023, 1:38 AM
signoz-0.30.2.tgz
3. Tried installing by converting tgz to yaml template and applying it manually using the command helm template signoz-app ./signoz-0.30.2.tgz --namespace signoz-platform --create-namespace --include-crds > ./signoz.v0.30.2.yaml
Please, someone help me.
Even the link mentioned in the error message is not working either, so I guess that is the reason. (Attached screenshot in thread)
ruoyu shen
11/23/2023, 4:03 AM
telemetry:
  metrics:
    address: 0.0.0.0:8888
    level: detailed
  logs:
    level: "debug"
Despite these measures, I'm still unable to view the service data in the pod logs. I'm seeking advice on additional troubleshooting steps I might pursue.
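For reference, a hedged sketch of where such a telemetry block normally sits in an OpenTelemetry Collector configuration - nested under the service section (the nesting shown here is an assumption about the poster's full config, not something confirmed in the thread):

service:
  telemetry:
    # collector's own self-observability settings
    metrics:
      address: 0.0.0.0:8888
      level: detailed
    logs:
      level: "debug"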
Shashwat Mohite
11/23/2023, 9:10 AM
External Metrics from traces if I implement the OTel SDK with just a tracing.js file in Node.js.
Oliver
11/23/2023, 2:06 PM
Log.Logger = new LoggerConfiguration()
    .WriteTo.OpenTelemetry(options =>
    {
        options.Endpoint = "http://192.168.xx.yy:4317";
        options.ResourceAttributes = new Dictionary<string, object>
        {
            ["service.name"] = "test-service"
        };
    })
    .CreateLogger();
• My log message is reaching SigNoz, and if I search for test-service I can see messages in the logs - but nothing shows up in the Services.
The documentation suggests that this should happen automatically if service.name is declared (which I believe I am doing, in line with the Serilog->OpenTelemetry docs).
Here's the SigNoz detail for a typical log entry (below). Hopefully someone can help point me to why these don't get recognised under Services or Traces?
Saad Ansari
11/24/2023, 5:11 AM
otelCollector:
  config:
    processors:
      # default parsing of logs
      logstransform/internal:
        operators:
          - type: move
            field: body.level
            to: attributes.level
          - type: regex_parser
            id: traceid
            # https://regex101.com/r/yFW5UC/1
            regex: '(?i)(^trace|(("| )+trace))((-|_||)id("|=| |-|:)*)(?P<trace_id>[A-Fa-f0-9]+)'
            parse_from: body
            parse_to: attributes.temp_trace
            if: 'body matches "(?i)(^trace|((\"| )+trace))((-|_||)id(\"|=| |-|:)*)(?P<trace_id>[A-Fa-f0-9]+)"'
            output: spanid
          - type: regex_parser
            id: spanid
            # https://regex101.com/r/DZ2gng/1
            regex: '(?i)(^span|(("| )+span))((-|_||)id("|=| |-|:)*)(?P<span_id>[A-Fa-f0-9]+)'
            parse_from: body
            parse_to: attributes.temp_trace
            if: 'body matches "(?i)(^span|((\"| )+span))((-|_||)id(\"|=| |-|:)*)(?P<span_id>[A-Fa-f0-9]+)"'
            output: trace_parser
          - type: trace_parser
            id: trace_parser
            trace_id:
              parse_from: attributes.temp_trace.trace_id
            span_id:
              parse_from: attributes.temp_trace.span_id
            output: remove_temp
          - type: remove
            id: remove_temp
            field: attributes.temp_trace
            if: '"temp_trace" in attributes'
but I am getting an error like the one below:
{"level":"error","timestamp":"2023-11-24T05:05:53.667Z","caller":"opamp/server_client.go:261","msg":"Collector failed for restart during rollback","component":"opamp-server-client","error":"failed to get config: cannot unmarshal the configuration: 1 error(s) decoding:\n\n* error decoding 'processors': error reading configuration for \"logstransform/internal\": 1 error(s) decoding:\n\n* error decoding 'operators[4]': unmarshal to move: 1 error(s) decoding:\n\n* '' has invalid keys: field","stacktrace":"github.com/SigNoz/signoz-otel-collector/opamp.(*serverClient).reload\n\t/home/runner/work/signoz-otel-collector/signoz-otel-collector/opamp/server_client.go:261\ngithub.com/SigNoz/signoz-otel-collector/opamp.(*agentConfigManager).applyRemoteConfig\n\t/home/runner/work/signoz-otel-collector/signoz-otel-collector/opamp/config_manager.go:173\ngithub.com/SigNoz/signoz-otel-collector/opamp.(*agentConfigManager).Apply\n\t/home/runner/work/signoz-otel-collector/signoz-otel-collector/opamp/config_manager.go:159\ngithub.com/SigNoz/signoz-otel-collector/opamp.(*serverClient).onRemoteConfigHandler\n\t/home/runner/work/signoz-otel-collector/signoz-otel-collector/opamp/server_client.go:209\ngithub.com/SigNoz/signoz-otel-collector/opamp.(*serverClient).onMessageFuncHandler\n\t/home/runner/work/signoz-otel-collector/signoz-otel-collector/opamp/server_client.go:199\ngithub.com/open-telemetry/opamp-go/client/types.CallbacksStruct.OnMessage\n\t/home/runner/go/pkg/mod/github.com/open-telemetry/opamp-go@v0.5.0/client/types/callbacks.go:162\ngithub.com/open-telemetry/opamp-go/client/internal.(*receivedProcessor).ProcessReceivedMessage\n\t/home/runner/go/pkg/mod/github.com/open-telemetry/opamp-go@v0.5.0/client/internal/receivedprocessor.go:131\ngithub.com/open-telemetry/opamp-go/client/internal.(*wsReceiver).ReceiverLoop\n\t/home/runner/go/pkg/mod/github.com/open-telemetry/opamp-go@v0.5.0/client/internal/wsreceiver.go:57\ngithub.com/open-telemetry/opamp-go/client.(*wsClient).runOneCycle\n\t/home/runner/go/pkg/mod/github.com/open-telemetry/opamp-go@v0.5.0/client/wsclient.go:243\ngithub.com/open-telemetry/opamp-go/client.(*wsClient).runUntilStopped\n\t/home/runner/go/pkg/mod/github.com/open-telemetry/opamp-go@v0.5.0/client/wsclient.go:265\ngithub.com/open-telemetry/opamp-go/client/internal.(*ClientCommon).StartConnectAndRun.func1\n\t/home/runner/go/pkg/mod/github.com/open-telemetry/opamp-go@v0.5.0/client/internal/clientcommon.go:197"}
{"level":"error","timestamp":"2023-11-24T05:05:53.667Z","caller":"opamp/server_client.go:216","msg":"failed to apply config","component":"opamp-server-client","error":"failed to reload config: /var/tmp/collector-config.yaml: collector failed to restart: failed to get config: cannot unmarshal the configuration: 1 error(s) decoding:\n\n* error decoding 'processors': error reading configuration for \"logstransform/internal\": 1 error(s) decoding:\n\n* error decoding 'operators[4]': unmarshal to move: 1 error(s) decoding:\n\n* '' has invalid keys: field","stacktrace":"github.com/SigNoz/signoz-otel-collector/opamp.(*serverClient).onRemoteConfigHandler\n\t/home/runner/work/signoz-otel-collector/signoz-otel-collector/opamp/server_client.go:216\ngithub.com/SigNoz/signoz-otel-collector/opamp.(*serverClient).onMessageFuncHandler\n\t/home/runner/work/signoz-otel-collector/signoz-otel-collector/opamp/server_client.go:199\ngithub.com/open-telemetry/opamp-go/client/types.CallbacksStruct.OnMessage\n\t/home/runner/go/pkg/mod/github.com/open-telemetry/opamp-go@v0.5.0/client/types/callbacks.go:162\ngithub.com/open-telemetry/opamp-go/client/internal.(*receivedProcessor).ProcessReceivedMessage\n\t/home/runner/go/pkg/mod/github.com/open-telemetry/opamp-go@v0.5.0/client/internal/receivedprocessor.go:131\ngithub.com/open-telemetry/opamp-go/client/internal.(*wsReceiver).ReceiverLoop\n\t/home/runner/go/pkg/mod/github.com/open-telemetry/opamp-go@v0.5.0/client/internal/wsreceiver.go:57\ngithub.com/open-telemetry/opamp-go/client.(*wsClient).runOneCycle\n\t/home/runner/go/pkg/mod/github.com/open-telemetry/opamp-go@v0.5.0/client/wsclient.go:243\ngithub.com/open-telemetry/opamp-go/client.(*wsClient).runUntilStopped\n\t/home/runner/go/pkg/mod/github.com/open-telemetry/opamp-go@v0.5.0/client/wsclient.go:265\ngithub.com/open-telemetry/opamp-go/client/internal.(*ClientCommon).StartConnectAndRun.func1\n\t/home/runner/go/pkg/mod/github.com/open-telemetry/opamp-go@v0.5.0/client/internal/clientcommon.go:197"}
{"level":"info","timestamp":"2023-11-24T05:05:53.667Z","logger":"agent-config-manager","caller":"opamp/config_manager.go:172","msg":"Config has changed, reloading","path":"/var/tmp/collector-config.yaml"}
Can someone help?
This happened only after adding the new operator:
- type: move
  field: body.level
  to: attributes.level
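A hedged note on the error above: the "unmarshal to move: '' has invalid keys: field" message suggests the move operator does not accept a field key; in the stanza operator set it takes from and to. A sketch of the corrected operator (inferred from that error, not a fix confirmed in this thread):

- type: move
  # "from" instead of "field"
  from: body.level
  to: attributes.level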
Siddhartha
11/24/2023, 7:54 AM
v0.32.0 to v0.34.3 and I keep getting the following error (docker standalone installation):
2023.11.24 07:34:14.800963 [ 50 ] {64a16eef-ad10-4312-a240-8248b27fc777} <Error> TCPHandler: Code: 36. DB::Exception: There was an error on [clickhouse:9000]: Code: 36. DB::Exception: Table doesn't have any table TTL expression, cannot remove. (BAD_ARGUMENTS) (version 23.7.3.14 (official build)). (BAD_ARGUMENTS), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000e91fbb7 in /usr/bin/clickhouse
1. DB::DDLQueryStatusSource::generate() @ 0x000000001441a12f in /usr/bin/clickhouse
2. DB::ISource::tryGenerate() @ 0x00000000152c6ef5 in /usr/bin/clickhouse
3. DB::ISource::work() @ 0x00000000152c6a46 in /usr/bin/clickhouse
4. DB::ExecutionThreadContext::executeTask() @ 0x00000000152de39a in /usr/bin/clickhouse
5. DB::PipelineExecutor::executeStepImpl(unsigned long, std::atomic<bool>*) @ 0x00000000152d4f10 in /usr/bin/clickhouse
6. DB::PipelineExecutor::executeImpl(unsigned long) @ 0x00000000152d4430 in /usr/bin/clickhouse
7. DB::PipelineExecutor::execute(unsigned long) @ 0x00000000152d40d1 in /usr/bin/clickhouse
8. ? @ 0x00000000152e1f0d in /usr/bin/clickhouse
9. ThreadPoolImpl<std::thread>::worker(std::__list_iterator<std::thread, void*>) @ 0x000000000ea0704f in /usr/bin/clickhouse
10. ? @ 0x000000000ea0d041 in /usr/bin/clickhouse
11. ? @ 0x00007ff03daf7609 in ?
12. __clone @ 0x00007ff03da1c133 in ?
Henri Ropponen
11/24/2023, 9:19 AM
Ola Ahlman
11/24/2023, 2:30 PM
Johan Watkins
11/24/2023, 3:25 PM
Green Warrior
11/26/2023, 2:23 AM
surya prakash
11/27/2023, 5:54 AM
Mingyang Zhou
11/27/2023, 7:29 AM
Maitryy
11/27/2023, 8:03 AM
platform chi-my-release-clickhouse-cluster-0-0-0 1/1 Running 0 10h
platform my-release-clickhouse-operator-657986696-mtgdq 2/2 Running 0 12h
platform my-release-k8s-infra-otel-agent-pvf29 1/1 Running 0 12h
platform my-release-k8s-infra-otel-deployment-65767679c6-llgmg 1/1 Running 0 12h
platform my-release-signoz-alertmanager-0 0/1 Init:0/1 0 9h
platform my-release-signoz-frontend-5fc8679d4b-zd5c9 0/1 Init:0/1 0 15h
platform my-release-signoz-frontend-775b95894-rl5pm 0/1 Init:0/1 0 11h
platform my-release-signoz-otel-collector-577f7cc9c6-jswbm 0/1 Init:0/1 0 12h
platform my-release-signoz-otel-collector-7b7784c866-hr754 0/1 Init:0/1 0 15h
platform my-release-signoz-otel-collector-metrics-54d75b67c7-5ccx9 0/1 Init:0/1 0 12h
platform my-release-signoz-otel-collector-metrics-7f9fcd767-tqqxv 0/1 Init:0/1 0 15h
platform my-release-signoz-query-service-0 0/1 Init:0/1 0 10h
platform my-release-signoz-schema-migrator-56769c434706-mzm2s 0/1 Init:0/1 0 12h
platform my-release-zookeeper-0 1/1 Running 0 15h
In the init container logs I see:
wget: bad address 'my-release-signoz-query-service:8080'
waiting for query-service
wget: bad address 'my-release-signoz-query-service:8080'
waiting for query-service
---> init queryservice logs
wget: bad address 'my-release-clickhouse:8123'
waiting for clickhouseDB
wget: bad address 'my-release-clickhouse:8123'
waiting for clickhouseDB
I was thinking it might be a CoreDNS issue.
CoreDNS logs:
[INFO] 10.244.0.46:45441 - 56699 "AAAA IN my-release-signoz-query-service. udp 49 false 512" SERVFAIL qr,aa,rd,ra 49 0.000118854s
[INFO] 10.244.0.46:45441 - 31615 "A IN my-release-signoz-query-service. udp 49 false 512" SERVFAIL qr,aa,rd,ra 49 0.000033334s
[INFO] 10.244.0.56:54189 - 49629 "A IN my-release-clickhouse. udp 39 false 512" SERVFAIL qr,rd,ra 39 0.025739722s
[INFO] 10.244.0.56:54189 - 54743 "AAAA IN my-release-clickhouse. udp 39 false 512" SERVFAIL qr,rd,ra 39 0.025829025s
[INFO] 10.244.0.48:49914 - 36437 "AAAA IN my-release-clickhouse.platform-1.svc.cluster.local. udp 68 false 512" NOERROR qr,aa,rd 161 0.000295586s
[INFO] 10.244.0.48:49914 - 43859 "A IN my-release-clickhouse.platform-1.svc.cluster.local. udp 68 false 512" NOERROR qr,aa,rd 134 0.000323042s
[INFO] 10.244.0.43:34472 - 30113 "AAAA IN my-release-signoz-otel-collector.platform-1.svc.cluster.local. udp 90 false 1232" NOERROR qr,aa,rd 172 0.000309202s
[INFO] 10.244.0.43:43698 - 24992 "A IN my-release-signoz-otel-collector.platform-1.svc.cluster.local. udp 90 false 1232" NOERROR qr,aa,rd 156 0.000315525s
Please let me know what the problem is and how to resolve it.
Abhishek pandey
11/27/2023, 10:44 AM
Abel Hristodor
11/27/2023, 3:35 PM
clickhousetraceexporter/writer.go throws Cannot reserve 1.00 MiB, not enough space.
SigNoz is deployed to Kubernetes, and following this guide https://signoz.io/docs/operate/clickhouse/increase-clickhouse-pv/ I also increased the space to 25Gi. The error persists. Can anyone help?
Saad Ansari
11/27/2023, 6:04 PM
coalesce.go:175: warning: skipped value for zookeeper.initContainers: Not a table.
Error: UPGRADE FAILED: cannot patch "gg-signoz-schema-migrator-ecb41750145b" with kind Job: Job.batch "gg-signoz-schema-migrator-ecb41750145b" is invalid: spec.template: Invalid value: core.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/component":"schema-migrator", "app.kubernetes.io/instance":"gg-signoz", "app.kubernetes.io/name":"signoz", "batch.kubernetes.io/controller-uid":"953acf58-9ead-4cd9-913b-ded5d232fdc7", "batch.kubernetes.io/job-name":"gg-signoz-schema-migrator-ecb41750145b", "controller-uid":"953acf58-9ead-4cd9-913b-ded5d232fdc7", "job-name":"gg-signoz-schema-migrator-ecb41750145b"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:core.PodSpec{Volumes:[]core.Volume(nil), InitContainers:[]core.Container{core.Container{Name:"gg-signoz-schema-migrator-init", Image:"docker.io/busybox:1.35", Command:[]string{"sh", "-c", "until wget --user \"${CLICKHOUSE_USER}:${CLICKHOUSE_PASSWORD}\" --spider -q gg-signoz-clickhouse:8123/ping; do echo -e \"waiting for clickhouseDB\"; sleep 5; done; echo -e \"clickhouse ready, starting schema migrator now\";"}, Args:[]string(nil), WorkingDir:"", Ports:[]core.ContainerPort(nil), EnvFrom:[]core.EnvFromSource(nil), Env:[]core.EnvVar{core.EnvVar{Name:"CLICKHOUSE_USER", Value:"admin", ValueFrom:(*core.EnvVarSource)(nil)}, core.EnvVar{Name:"CLICKHOUSE_PASSWORD", Value:"27ff0399-0d3a-4bd8-919d-17c2181e6fb9", ValueFrom:(*core.EnvVarSource)(nil)}, core.EnvVar{Name:"CLICKHOUSE_SECURE", Value:"false", ValueFrom:(*core.EnvVarSource)(nil)}}, Resources:core.ResourceRequirements{Limits:core.ResourceList(nil), Requests:core.ResourceList(nil), Claims:[]core.ResourceClaim(nil)}, ResizePolicy:[]core.ContainerResizePolicy(nil), VolumeMounts:[]core.VolumeMount(nil), VolumeDevices:[]core.VolumeDevice(nil), LivenessProbe:(*core.Probe)(nil), ReadinessProbe:(*core.Probe)(nil), StartupProbe:(*core.Probe)(nil), Lifecycle:(*core.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*core.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]core.Container{core.Container{Name:"schema-migrator", Image:"signoz/signoz-schema-migrator:0.88.1", Command:[]string(nil), Args:[]string{"--dsn", "tcp://gg-signoz-clickhouse:9000?username=admin&password=27ff0399-0d3a-4bd8-919d-17c2181e6fb9"}, WorkingDir:"", Ports:[]core.ContainerPort(nil), EnvFrom:[]core.EnvFromSource(nil), Env:[]core.EnvVar(nil), Resources:core.ResourceRequirements{Limits:core.ResourceList(nil), Requests:core.ResourceList(nil), Claims:[]core.ResourceClaim(nil)}, ResizePolicy:[]core.ContainerResizePolicy(nil), VolumeMounts:[]core.VolumeMount(nil), VolumeDevices:[]core.VolumeDevice(nil), LivenessProbe:(*core.Probe)(nil), ReadinessProbe:(*core.Probe)(nil), StartupProbe:(*core.Probe)(nil), Lifecycle:(*core.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File",
ImagePullPolicy:"IfNotPresent", SecurityContext:(*core.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]core.EphemeralContainer(nil), RestartPolicy:"OnFailure", TerminationGracePeriodSeconds:(*int64)(0xc01805b860), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", SecurityContext:(*core.PodSecurityContext)(0xc01c53e480), ImagePullSecrets:[]core.LocalObjectReference(nil), Hostname:"", Subdomain:"", SetHostnameAsFQDN:(*bool)(nil), Affinity:(*core.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]core.Toleration(nil), HostAliases:[]core.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), PreemptionPolicy:(*core.PreemptionPolicy)(nil), DNSConfig:(*core.PodDNSConfig)(nil), ReadinessGates:[]core.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), Overhead:core.ResourceList(nil), EnableServiceLinks:(*bool)(nil), TopologySpreadConstraints:[]core.TopologySpreadConstraint(nil), OS:(*core.PodOS)(nil), SchedulingGates:[]core.PodSchedulingGate(nil), ResourceClaims:[]core.PodResourceClaim(nil)}}: field is immutable
Pierre
11/27/2023, 7:44 PM
Maitry Jadiya
11/28/2023, 12:18 AM
Diogo Baeder
11/28/2023, 2:26 AM
opentelemetry-instrumentation-logging, I can get all the information in the log lines (e.g. 2023-11-28 02:07:35,925 INFO [studioregistration.services.registration] [registration.py:51] [trace_id=15789f18fc808508e6ffae468dd0b655 span_id=8f6067a7de71637c resource.service.name=registration trace_sampled=True]), but when I click "View Details" there's no span ID, no trace ID, nothing, in the log fields. So when I try to arrive at the logs coming from the "Traces" section, I can't find any logs with, say, that trace ID - because these log fields are empty.
I'm sure I must be doing something wrong, but what is it? Any ideas? I saw some people talking about using parsers in the collector, but is it really necessary?
Thank you in advance!
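A hedged sketch of the collector-side parsing approach mentioned above, tailored to the bracketed trace_id=.../span_id=... format in the example log line (the regex, the if guard, and the operator placement are assumptions, not a confirmed fix):

# lift the trace context out of the log body into attributes
- type: regex_parser
  regex: 'trace_id=(?P<trace_id>[A-Fa-f0-9]+) span_id=(?P<span_id>[A-Fa-f0-9]+)'
  parse_from: body
  parse_to: attributes
  if: 'body matches "trace_id="'
# then promote those attributes to the log record's trace context
- type: trace_parser
  trace_id:
    parse_from: attributes.trace_id
  span_id:
    parse_from: attributes.span_id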
Slackbot
11/28/2023, 5:24 AM