# support
d
we suddenly stopped receiving traces on our k8s SigNoz deployment. I am getting logs but not traces when searching over the past 30 mins; the rate is also coming up but operations are not, please take a look
@nitya-signoz @Srikanth Chekuri please let me know how I can resolve this, thanks.
Copy code
2024-06-20T13:23:45.371Z	warn	batchprocessor@v0.88.0/batch_processor.go:258	Sender failed	{"kind": "processor", "name": "batch", "pipeline": "logs", "error": "sending_queue is full"}
2024-06-20T13:23:46.255Z	error	exporterhelper/queue_sender.go:184	Dropping data because sending_queue is full. Try increasing queue_size.	{"kind": "exporter", "data_type": "traces", "name": "clickhousetraces", "dropped_items": 126}
go.opentelemetry.io/collector/exporter/exporterhelper.(*queueSender).send
	/home/runner/go/pkg/mod/go.opentelemetry.io/collector/exporter@v0.88.0/exporterhelper/queue_sender.go:184
go.opentelemetry.io/collector/exporter/exporterhelper.(*baseExporter).send
	/home/runner/go/pkg/mod/go.opentelemetry.io/collector/exporter@v0.88.0/exporterhelper/common.go:196
go.opentelemetry.io/collector/exporter/exporterhelper.NewTracesExporter.func1
	/home/runner/go/pkg/mod/go.opentelemetry.io/collector/exporter@v0.88.0/exporterhelper/traces.go:100
go.opentelemetry.io/collector/consumer.ConsumeTracesFunc.ConsumeTraces
	/home/runner/go/pkg/mod/go.opentelemetry.io/collector/consumer@v0.88.0/traces.go:25
go.opentelemetry.io/collector/processor/batchprocessor.(*batchTraces).export
	/home/runner/go/pkg/mod/go.opentelemetry.io/collector/processor/batchprocessor@v0.88.0/batch_processor.go:407
go.opentelemetry.io/collector/processor/batchprocessor.(*shard).sendItems
	/home/runner/go/pkg/mod/go.opentelemetry.io/collector/processor/batchprocessor@v0.88.0/batch_processor.go:256
go.opentelemetry.io/collector/processor/batchprocessor.(*shard).start
	/home/runner/go/pkg/mod/go.opentelemetry.io/collector/processor/batchprocessor@v0.88.0/batch_processor.go:218
2024-06-20T13:23:46.255Z	warn	batchprocessor@v0.88.0/batch_processor.go:258	Sender failed	{"kind": "processor", "name": "batch", "pipeline": "traces", "error": "sending_queue is full"}
2024-06-20T13:23:46.372Z	error	exporterhelper/queue_sender.go:184	Dropping data because sending_queue is full. Try increasing queue_size.	{"kind": "exporter", "data_type": "logs", "name": "clickhouselogsexporter", "dropped_items": 273}
go.opentelemetry.io/collector/exporter/exporterhelper.(*queueSender).send
	/home/runner/go/pkg/mod/go.opentelemetry.io/collector/exporter@v0.88.0/exporterhelper/queue_sender.go:184
go.opentelemetry.io/collector/exporter/exporterhelper.(*baseExporter).send
	/home/runner/go/pkg/mod/go.opentelemetry.io/collector/exporter@v0.88.0/exporterhelper/common.go:196
go.opentelemetry.io/collector/exporter/exporterhelper.NewLogsExporter.func1
	/home/runner/go/pkg/mod/go.opentelemetry.io/collector/exporter@v0.88.0/exporterhelper/logs.go:100
go.opentelemetry.io/collector/consumer.ConsumeLogsFunc.ConsumeLogs
	/home/runner/go/pkg/mod/go.opentelemetry.io/collector/consumer@v0.88.0/logs.go:25
go.opentelemetry.io/collector/processor/batchprocessor.(*batchLogs).export
	/home/runner/go/pkg/mod/go.opentelemetry.io/collector/processor/batchprocessor@v0.88.0/batch_processor.go:489
go.opentelemetry.io/collector/processor/batchprocessor.(*shard).sendItems
	/home/runner/go/pkg/mod/go.opentelemetry.io/collector/processor/batchprocessor@v0.88.0/batch_processor.go:256
go.opentelemetry.io/collector/processor/batchprocessor.(*shard).start
	/home/runner/go/pkg/mod/go.opentelemetry.io/collector/processor/batchprocessor@v0.88.0/batch_processor.go:218
2024-06-20T13:23:46.372Z	warn	batchprocessor@v0.88.0/batch_processor.go:258	Sender failed	{"kind": "processor", "name": "batch", "pipeline": "logs", "error": "sending_queue is full"}
2024-06-20T13:23:47.373Z	error	exporterhelper/queue_sender.go:184	Dropping data because sending_queue is full. Try increasing queue_size.	{"kind": "exporter", "data_type": "logs", "name": "clickhouselogsexporter", "dropped_items": 1417}
go.opentelemetry.io/collector/exporter/exporterhelper.(*queueSender).send
	/home/runner/go/pkg/mod/go.opentelemetry.io/collector/exporter@v0.88.0/exporterhelper/queue_sender.go:184
go.opentelemetry.io/collector/exporter/exporterhelper.(*baseExporter).send
	/home/runner/go/pkg/mod/go.opentelemetry.io/collector/exporter@v0.88.0/exporterhelper/common.go:196
go.opentelemetry.io/collector/exporter/exporterhelper.NewLogsExporter.func1
	/home/runner/go/pkg/mod/go.opentelemetry.io/collector/exporter@v0.88.0/exporterhelper/logs.go:100
go.opentelemetry.io/collector/consumer.ConsumeLogsFunc.ConsumeLogs
	/home/runner/go/pkg/mod/go.opentelemetry.io/collector/consumer@v0.88.0/logs.go:25
go.opentelemetry.io/collector/processor/batchprocessor.(*batchLogs).export
	/home/runner/go/pkg/mod/go.opentelemetry.io/collector/processor/batchprocessor@v0.88.0/batch_processor.go:489
go.opentelemetry.io/collector/processor/batchprocessor.(*shard).sendItems
	/home/runner/go/pkg/mod/go.opentelemetry.io/collector/processor/batchprocessor@v0.88.0/batch_processor.go:256
go.opentelemetry.io/collector/processor/batchprocessor.(*shard).start
	/home/runner/go/pkg/mod/go.opentelemetry.io/collector/processor/batchprocessor@v0.88.0/batch_processor.go:218
2024-06-20T13:23:47.373Z	warn	batchprocessor@v0.88.0/batch_processor.go:258	Sender failed	{"kind": "processor", "name": "batch", "pipeline": "logs", "error": "sending_queue is full"}
I am getting these logs
n
Is there any increase in the amount of data being generated? Also check your batch size; you can increase it. Also check if ClickHouse is healthy.
d
you mean increase the batch size on the application side or the collector side?
n
these are your collector logs, right? You can add it in your collector config
d
I was trying to figure out how to increase the queue size of the collector but wasn’t able to find it for the k8s deployment
n
For increasing the queue size you will have to add queue settings in your exporter: https://github.com/SigNoz/signoz-otel-collector/blob/fd180d5dfe7fdf456b00f7e1dd4f7[…]f52222b4a9/exporter/clickhouselogsexporter/testdata/config.yaml . I would suggest configuring your batch processor first before increasing the queue size
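For reference, the sending_queue block on the exporter generally looks like the sketch below (placeholder values, not the SigNoz defaults); queue_size counts queued batches waiting to be written, not individual log records or spans.
exporters:
    clickhouselogsexporter:
        sending_queue:
            enabled: true
            num_consumers: 10   # workers draining the queue in parallel (placeholder value)
            queue_size: 1000    # max number of batches buffered while ClickHouse is slow (placeholder value)
The same keys go under otelCollector.config.exporters when set through the helm values, as shown further down in this thread.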
d
Copy code
/** The maximum batch size of every export. It must be smaller or equal to
 * maxQueueSize. The default value is 512. */
I wasn’t increasing it because of this note
I tried copying the otel-config from the pod, but wasn’t able to find any queue size there either
I am using helm to install SigNoz, so how can I change queue_size there?
n
the default values are used; you will have to add them on your own. You will have to write an override-values.yaml and apply it using helm
Copy code
otelCollector:
    config:
        exporters:
            clickhouselogsexporter:
                sending_queue:
                    queue_size: 100
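Once that is saved as override-values.yaml, it can be applied with something like helm upgrade <release-name> signoz/signoz -n <namespace> -f override-values.yaml; the release name and namespace depend on how SigNoz was installed.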
I would still suggest configuring your batch processor first, if not already done
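For reference, the collector batch processor is tuned under processors in the same override file; the numbers below are purely illustrative, not recommendations:
otelCollector:
    config:
        processors:
            batch:
                send_batch_size: 50000       # items per batch (the default noted later in this thread)
                send_batch_max_size: 60000   # hard upper bound on a single batch (illustrative)
                timeout: 5s                  # flush even if the batch is not full (illustrative)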
d
can you tell me how I can check the default value of queue_size, so that I can configure the batch size based on that?
Copy code
const batchProcessor = new BatchSpanProcessor(exporter, {
	maxExportBatchSize: 512,
	maxQueueSize: 4096,
	scheduledDelayMillis: 3000,
	exportTimeoutMillis: 30000,
});
this is my current batch processor configuration
n
this is your application batch processor https://signoz-community.slack.com/archives/C01HWQ1R0BC/p1718895309867069?thread_ts=1718889715.433979&cid=C01HWQ1R0BC . I was talking about the collector batch processor, since that is what is throwing the errors, right?
d
so the default value is already 50000, assuming 300 req/sec for 5 mins of batching. Should I increase it to a bigger number like 100000?
is there any resource you know of that I can go through to tune these values?
n
Yeah, the default is 50K; you can also try increasing the timeout. You will have to crosscheck and test out why such small batches are getting dropped. You can tune the values like this: https://signoz-community.slack.com/archives/C01HWQ1R0BC/p1718895209821719?thread_ts=1718889715.433979&cid=C01HWQ1R0BC
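As a rough sizing sketch with assumed numbers: if the collector emits about 10 batches per second and you want to ride out roughly a minute of ClickHouse backpressure, queue_size would need to be around 600, since it counts queued batches. A combined override along those lines (illustrative values only):
otelCollector:
    config:
        processors:
            batch:
                send_batch_size: 50000   # default per this thread
                timeout: 5s              # illustrative; a larger timeout means bigger, less frequent batches
        exporters:
            clickhousetraces:
                sending_queue:
                    queue_size: 600      # ~10 batches/sec x 60 s of buffering (assumed rates)
            clickhouselogsexporter:
                sending_queue:
                    queue_size: 600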
d
got it, thanks.
👍 1
is there any resource you know of that I can go through to tune these values?
I actually meant: do you know of any resource where I can read about what generally good values are for these parameters, and how these values should be calculated?
d
got it, let me go through these, thanks again for the help
👍 1