# support
m
Hello SigNoz team, we are getting the below error:

error exporterhelper/queued_retry_inmemory.go:107 Exporting failed. No more retries left. Dropping data. {"kind": "exporter", "data_type": "traces", "name": "kafka", "error": "max elapsed time expired Failed to deliver 1 messages due to kafka server: Message was too large, server rejected it to avoid allocation error", "dropped_items": 15}

On the Kafka topic, max.message.bytes has been set to 5 MB. We have set the below configuration in the otel config file:
producer:
      max_message_bytes: 1000000
I see the below thread related to this: https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/22033 Please guide us on how to handle the big messages.
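One thing worth checking: the exporter above is capped at 1 MB (max_message_bytes: 1000000) even though the topic allows 5 MB. A minimal sketch of aligning the two (broker address and topic name here are assumptions, not taken from the thread):

exporters:
  kafka:
    brokers:
      - kafka:9092             # assumption: your broker address
    topic: otlp_spans          # assumption: your traces topic
    producer:
      max_message_bytes: 5000000   # match the topic's max.message.bytes

Note the broker-level message.max.bytes must also allow this size; the topic override alone is not enough if the broker default (~1 MB) still applies.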
a
Things on top of my mind: • Add a message size limit at the Kafka brokers: https://stackoverflow.com/a/21343878/3243212 • The kafkaexporter is probably used with a batch processor, which can combine on the order of 10K received items into a single Kafka message, far larger than any individual message received by the producers • Check the size of the messages being dropped at Kafka
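To see why batching matters, here is a back-of-envelope estimate (a sketch; avg_span_bytes is a hypothetical value you would measure from your own data):

```python
# Rough size estimate of one Kafka message produced by the
# batch processor + kafka exporter.
avg_span_bytes = 2_000      # assumption: average serialized span size
send_batch_size = 8_192     # batch processor default

estimated_bytes = avg_span_bytes * send_batch_size
print(estimated_bytes)      # 16384000 -- well above a 5 MB topic limit
```

With default batching, even modest spans can produce a message that exceeds both the 1 MB exporter cap and the 5 MB topic limit, which is why lowering send_batch_max_size or raising the size limits helps.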
m
Hi Ankit,
1. We have configured the below parameter on the Kafka topic: max.message.bytes=5000000
2. We have configured the below batch processor parameters to limit the message size being produced: send_batch_size: 10 send_batch_max_size: 15 timeout: 0s
3. How can we check the size of the messages that are getting dropped? Can we get it from the collector logs?
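For reference, the batch processor settings from point 2 would sit in the collector config like this (a sketch; the processor must also be wired into the traces pipeline, which is assumed here):

processors:
  batch:
    send_batch_size: 10        # settings from point 2 above
    send_batch_max_size: 15
    timeout: 0s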