# support
Harald Fielker:
Hello everybody. I still have big trouble with slow log messages in signoz (k8s cluster, go application, logrus to console). Is there anything that I can do to speed up things? Should i send the data from logrus directly to open telemetry? Is there any example for this? Where/how do I get the data in the website?
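On the "logrus directly to OpenTelemetry" question, here is a minimal sketch of one way to do it, assuming the otellogrus bridge (go.opentelemetry.io/contrib/bridges/otellogrus) and the OTLP log exporter from recent otel-go releases; the collector endpoint and the "the-app" name are placeholders:

```go
package main

import (
	"context"

	"github.com/sirupsen/logrus"
	"go.opentelemetry.io/contrib/bridges/otellogrus"
	"go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploggrpc"
	sdklog "go.opentelemetry.io/otel/sdk/log"
)

func main() {
	ctx := context.Background()

	// OTLP/gRPC log exporter; the endpoint is a placeholder for the in-cluster collector.
	exporter, err := otlploggrpc.New(ctx,
		otlploggrpc.WithEndpoint("signoz-otel-collector.platform.svc:4317"),
		otlploggrpc.WithInsecure(),
	)
	if err != nil {
		logrus.Fatal(err)
	}

	// Batch the records and build a logger provider from the OTel log SDK.
	provider := sdklog.NewLoggerProvider(
		sdklog.WithProcessor(sdklog.NewBatchProcessor(exporter)),
	)
	defer func() { _ = provider.Shutdown(ctx) }()

	// Bridge hook: every logrus entry is also emitted as an OTLP log record.
	logrus.AddHook(otellogrus.NewHook("the-app",
		otellogrus.WithLoggerProvider(provider),
	))

	logrus.WithField("component", "startup").Info("logs now go straight to the collector")
}
```

With this hook in place the logs bypass stdout scraping entirely, so they show up under the service name given to the hook rather than under the pod's container log file.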
Pranay:
hey @Harald Fielker Can you share what you mean by slow log messages? Are you finding log queries to be slower than usual? Can you give some data points on:
1. Machine size used to run SigNoz (RAM/CPU allocated)
2. Logs data size on which the query is being run
3. What type of query are you writing?
Harald Fielker:
k8s cluster, nearly zero log volume: I even reduced the logs from the readiness /ping requests to the webserver.
```
k8s_namespace_name IN ('the-app') AND k8s_pod_name CONTAINS 'foo-service'
```
The machine is a 12-core i7 or i9 with 128 GB RAM.
The SSD does about 3,500 MB/s.
@Pranay I have an idea, but I can't prove it.
I am logging JSON with logrus (from Go).
It sorts the JSON keys into alphabetical order.
Maybe that's an issue if "level" isn't at position #1 in the message body?
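A quick sketch of what that ordering looks like in practice, assuming the stock logrus JSONFormatter: encoding/json writes map keys alphabetically, and a JSON log parser matches fields by key name rather than by position, so "level" not being first should not matter. The FieldMap renames below are only an illustrative option, not something the pipeline requires.

```go
package main

import "github.com/sirupsen/logrus"

func main() {
	// logrus marshals entries via encoding/json, which emits keys in
	// alphabetical order; parsers on the collector side look fields up by
	// key ("level", "msg", "time"), not by position.
	logrus.SetFormatter(&logrus.JSONFormatter{
		// Optional renames, e.g. if a pipeline expects "message"/"timestamp".
		FieldMap: logrus.FieldMap{
			logrus.FieldKeyLevel: "level",
			logrus.FieldKeyMsg:   "message",
			logrus.FieldKeyTime:  "timestamp",
		},
	})

	logrus.WithField("k8s_pod_name", "foo-service-1").Warn("keys come out sorted")
	// => {"k8s_pod_name":"foo-service-1","level":"warning","message":"keys come out sorted","timestamp":"..."}
}
```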
I have the feeling I haven't tried everything, but I have been trying to fix this for months and it simply doesn't work.
I added patches to the metrics examples, but it's just a random number generator 🙂 Whatever I try, I can't get stable metrics running (the way I used them in DDog).
I'm willing to do whatever it takes and to help; at the moment I'm getting the feeling that it's maybe not just me.
@Pranay do you guys have 100% validation that k3s works with SigNoz?
Now the messages have arrived 😒
How can I debug this?
```
2023-06-21T19:38:38.358Z	warn	batchprocessor@v0.76.1/batch_processor.go:190	Sender failed	{"kind": "processor", "name": "batch", "pipeline": "logs", "error": "sending_queue is full"}
2023-06-21T19:38:41.360Z	warn	batchprocessor@v0.76.1/batch_processor.go:190	Sender failed	{"kind": "processor", "name": "batch", "pipeline": "logs", "error": "sending_queue is full"}
2023-06-21T19:38:44.362Z	warn	batchprocessor@v0.76.1/batch_processor.go:190	Sender failed	{"kind": "processor", "name": "batch", "pipeline": "logs", "error": "sending_queue is full"}
2023-06-21T19:38:47.363Z	warn	batchprocessor@v0.76.1/batch_processor.go:190	Sender failed	{"kind": "processor", "name": "batch", "pipeline": "logs", "error": "sending_queue is full"}
2023-06-21T19:38:50.366Z	error	exporterhelper/queued_retry.go:317	Dropping data because sending_queue is full. Try increasing queue_size.	{"kind": "exporter", "data_type": "logs", "name": "clickhouselogsexporter", "dropped_items": 3}
go.opentelemetry.io/collector/exporter/exporterhelper.(*queuedRetrySender).send
	/go/pkg/mod/go.opentelemetry.io/collector/exporter@v0.76.1/exporterhelper/queued_retry.go:317
```
got this from the otel collector 🥳🥳🥳
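The error itself points at the exporter queue. A sketch of where to raise it, assuming clickhouselogsexporter exposes the standard exporterhelper queue/retry settings (which is the helper emitting the message above); the DSN and numbers are placeholders:

```yaml
exporters:
  clickhouselogsexporter:
    dsn: tcp://clickhouse:9000/?database=signoz_logs   # placeholder
    timeout: 10s
    sending_queue:
      enabled: true
      num_consumers: 10
      queue_size: 5000       # raise from the default if the queue keeps filling up
    retry_on_failure:
      enabled: true
      initial_interval: 5s
      max_interval: 30s
```

A persistently full queue usually also means the backend (ClickHouse here) is slow or unreachable, so it is worth checking the ClickHouse pod at the same time rather than only enlarging the queue.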
After restarting the pod, everything is working fine.
Can I set SigNoz to collect data only from a specific k8s namespace?
OK, that did the trick: just focus on the namespaces that are relevant, and now it's almost realtime.
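One way to do that namespace filtering at the collector, as a sketch: the contrib filter processor with an OTTL condition drops every log record whose resource is not in the namespace you care about. The processor name, receiver, and exporter entries below are placeholders for whatever the existing pipeline already uses.

```yaml
processors:
  filter/namespaces:
    error_mode: ignore
    logs:
      log_record:
        # Drop records from every namespace except 'the-app'.
        - 'resource.attributes["k8s.namespace.name"] != "the-app"'

service:
  pipelines:
    logs:
      receivers: [otlp]                       # placeholder
      processors: [filter/namespaces, batch]
      exporters: [clickhouselogsexporter]
```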