# support
s
How much scale are we talking about? It might be overkill for regular users. The queue alone doesn't guarantee you won't lose data either, since the exporter will eventually drop the data when ClickHouse is not reachable.
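For context, this is roughly what that in-collector queue looks like. It's only a sketch of the standard exporterhelper settings on the ClickHouse exporter; the endpoint is a placeholder and exact keys/defaults vary by collector-contrib version. Once the bounded queue fills up or `max_elapsed_time` is exceeded, data is dropped:
```yaml
exporters:
  clickhouse:
    endpoint: tcp://clickhouse:9000   # placeholder address
    sending_queue:
      enabled: true
      queue_size: 5000          # bounded; overflow is dropped
    retry_on_failure:
      enabled: true
      initial_interval: 5s
      max_interval: 30s
      max_elapsed_time: 300s    # after this, the batch is dropped
```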
a
> It might be overkill for regular users.
Correct. I think the data flow would look like otel-collector => Kafka => ClickHouse, so we expect Kafka to handle bursts in traffic and downtime of ClickHouse.
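Something like the following two-collector setup is what I have in mind. It's only a sketch assuming the contrib Kafka exporter/receiver and ClickHouse exporter; broker addresses, the topic name, and exact keys depend on your collector-contrib version:
```yaml
# Collector A (in front of the apps): receives OTLP over gRPC, writes to Kafka
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
exporters:
  kafka:
    brokers: ["kafka:9092"]       # placeholder broker
    topic: otlp_spans
    protocol_version: 2.0.0       # required by some collector versions
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [kafka]
---
# Collector B (behind Kafka): reads from Kafka, writes to ClickHouse
receivers:
  kafka:
    brokers: ["kafka:9092"]
    topic: otlp_spans
exporters:
  clickhouse:
    endpoint: tcp://clickhouse:9000   # placeholder address
service:
  pipelines:
    traces:
      receivers: [kafka]
      exporters: [clickhouse]
```
This way Kafka absorbs traffic bursts and buffers data while ClickHouse is down, and Collector B drains the backlog once it comes back.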
s
They mentioned they want to use it as a receiver, as a substitute for the gRPC OTLP receiver, and then export the data to ClickHouse.
s
Ok, so is there a way to implement some circuit-breaking mechanism at the microservice level, keeping the transport as it is (gRPC), that can be passed in along with the other env variables, so that the source service does not go down when the SigNoz backend is completely down? Otherwise it keeps sending telemetry events to SigNoz. I just faced this on my basic setup while testing.
a
The source service starts dropping telemetry data if SigNoz is down. It should not affect the application, other than needing a bit more memory to hold a batch, after which it starts dropping data. The service will also print logs about being unable to send data to SigNoz, but the application should work just fine.
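If you want to bound that memory and fail fast rather than block, the standard OpenTelemetry SDK env variables can be set on the service itself (assuming your language SDK supports them). A rough sketch in docker-compose style; the service name and endpoint are placeholders and the numbers are just illustrative:
```yaml
services:
  my-app:                       # hypothetical service name
    environment:
      OTEL_EXPORTER_OTLP_ENDPOINT: http://signoz-otel-collector:4317
      OTEL_EXPORTER_OTLP_TIMEOUT: "5000"      # ms; give up on a send quickly
      OTEL_BSP_MAX_QUEUE_SIZE: "2048"         # bounded in-memory span queue
      OTEL_BSP_MAX_EXPORT_BATCH_SIZE: "512"
      OTEL_BSP_EXPORT_TIMEOUT: "10000"        # ms; drop the batch after this
```
Once the batch processor's queue is full, new spans are simply dropped instead of blocking the application, which is effectively the circuit-breaking behaviour you're describing.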