# support
Hello. We are currently evaluating SigNoz at our company. We installed SigNoz in our K8s cluster and are sending logs via the k8s-infra Helm chart. Up to this point everything seems to be working (logs are arriving in SigNoz).

We then tried to set up log pipelines in the UI, but none of them seem to deploy correctly. We deleted all pipelines and created a single one with the filter `body != nil` and an Add processor that adds the field `attributes.welcome` with the value `Hello world!`. It works in the processor preview step, but after saving, new logs don't get the new attribute, and the pipeline history always shows the pipelines as dirty.

We checked the logs of both SigNoz and the otel-collector but could not see anything interesting (no errors or warnings). We also checked both GitHub repos (SigNoz and SigNoz Charts) for related open/closed issues and PRs. We are currently running a 2-replica deployment (2 SigNoz, 2 otel-collector). Does anybody have an idea what could be wrong, or where we should continue our investigation? Thanks in advance.
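For context, this is roughly the collector-side equivalent of what we expect the saved pipeline to do, sketched with the standard OpenTelemetry `attributes` processor. It is only a minimal sketch to illustrate the intent; the processor name, receiver, and exporter entries below are placeholders, not what SigNoz actually generates.

```yaml
# Sketch: add attributes.welcome to every log record directly in the
# otel-collector config, using the contrib "attributes" processor.
# Names like attributes/welcome and the receiver/exporter lists are
# placeholders; adjust them to whatever your install already uses.
processors:
  attributes/welcome:
    actions:
      - key: welcome
        value: "Hello world!"
        action: upsert   # insert the attribute, or overwrite it if present

service:
  pipelines:
    logs:
      receivers: [otlp]                   # placeholder receiver
      processors: [attributes/welcome]
      exporters: [clickhouselogsexporter] # placeholder: your existing logs exporter
```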
Our use case may be applicable. I was trying to send all traces with a particular attribute key to Kafka and created a pipeline for that. What we did not expect was how the match rule works: we assumed we would write something like `span: - attributes["my.key"] != nil`, but instead the condition has to be `== nil`. The filter drops anything where the condition is true, so filtering on `== nil` ends up keeping exactly the spans where the key is present. We did this in the otel-collector YAML though, not in the UI (which I wasn't even aware of!).
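In case it's useful, here is a minimal sketch of that inverted condition in the collector config. The processor/pipeline names and the Kafka exporter settings are placeholders rather than our exact setup; the point is only the drop semantics of the filter processor.

```yaml
# Sketch of the "inverted" match semantics: the filter processor DROPS
# spans matching the condition, so dropping `== nil` keeps only spans
# that carry my.key and forwards them to Kafka.
processors:
  filter/has_my_key:
    error_mode: ignore
    traces:
      span:
        - attributes["my.key"] == nil   # drop spans WITHOUT the key

exporters:
  kafka:
    brokers: ["kafka:9092"]   # placeholder broker address

service:
  pipelines:
    traces/kafka:
      receivers: [otlp]       # placeholder receiver
      processors: [filter/has_my_key]
      exporters: [kafka]
```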