Hello, I seem to be having issues with "sending queue is full":
{"level":"error","ts":1727180784.2226639,"caller":"exporterhelper/common.go:296","msg":"Exporting failed. Rejecting data.","kind":"exporter","data_type":"traces","name":"clickhousetraces","error":"sending queue is full","rejected_items":30,"stacktrace":"<http://go.opentelemetry.io/collector/exporter/exporterhelper.(*baseExporter).send|go.opentelemetry.io/collector/exporter/exporterhelper.(*baseExporter).send>\n\t/home/runner/go/pkg/mod/go.opentelemetry.io/collector/exporter@v0.102.0/exporterhelper/common.go:296\<http://ngo.opentelemetry.io/collector/exporter/exporterhelper.NewTracesRequestExporter.func1|ngo.opentelemetry.io/collector/exporter/exporterhelper.NewTracesRequestExporter.func1>\n\t/home/runner/go/pkg/mod/go.opentelemetry.io/collector/exporter@v0.102.0/exporterhelper/traces.go:134\<http://ngo.opentelemetry.io/collector/consumer.ConsumeTracesFunc.ConsumeTraces|ngo.opentelemetry.io/collector/consumer.ConsumeTracesFunc.ConsumeTraces>\n\t/home/runner/go/pkg/mod/go.opentelemetry.io/collector/consumer@v0.102.1/traces.go:25\<http://ngo.opentelemetry.io/collector/processor/batchprocessor.(*batchTraces).export|ngo.opentelemetry.io/collector/processor/batchprocessor.(*batchTraces).export>\n\t/home/runner/go/pkg/mod/go.opentelemetry.io/collector/processor/batchprocessor@v0.102.0/batch_processor.go:414\<http://ngo.opentelemetry.io/collector/processor/batchprocessor.(*shard).sendItems|ngo.opentelemetry.io/collector/processor/batchprocessor.(*shard).sendItems>\n\t/home/runner/go/pkg/mod/go.opentelemetry.io/collector/processor/batchprocessor@v0.102.0/batch_processor.go:261\<http://ngo.opentelemetry.io/collector/processor/batchprocessor.(*shard).startLoop|ngo.opentelemetry.io/collector/processor/batchprocessor.(*shard).startLoop>\n\t/home/runner/go/pkg/mod/go.opentelemetry.io/collector/processor/batchprocessor@v0.102.0/batch_processor.go:223"}
However, when I checked the ClickHouse storage I got:
Filesystem                Size      Used Available Use% Mounted on
/dev/sde                884.8G    265.1G    574.7G  32% /var/lib/clickhouse
chi-my-release-clickhouse-cluster-0-0-0:/$
What could be the issue here?
{"level":"info","ts":1727194352.5043101,"caller":"exporterhelper/retry_sender.go:118","msg":"Exporting failed. Will retry the request after interval.","kind":"exporter","data_type":"traces","name":"clickhousetraces","error":"code: 252, message: Too many parts (3024 with average size of 13.42 KiB) in table 'signoz_traces.dependency_graph_minutes_v2 (19e81fb1-fb32-44fa-ad5b-e3228febb9b3)'. Merges are processing significantly slower than inserts: while pushing to view signoz_traces.dependency_graph_minutes_messaging_calls_mv_v2 (83c6b209-46c8-4560-b1a7-64e30f77c651)","interval":"3.311664052s"}
This happens when part merging is slower than the rate of inserts. There are two common reasons: 1. the CPU for ClickHouse is not enough, 2. your inserts are too frequent and you need to batch them before writing to ClickHouse.
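For context, "batch before write" here usually means tuning the batch processor in the OpenTelemetry Collector pipeline that feeds ClickHouse. A minimal sketch, assuming you can edit the collector's pipeline config directly; the receiver, pipeline, and exporter names simply mirror the logs above, and the numbers are illustrative starting points rather than recommendations:

processors:
  batch:
    # flush a batch once this many spans have accumulated...
    send_batch_size: 50000
    # ...or after this much time has passed, whichever comes first
    timeout: 5s
    # upper bound on the size of any single batch handed to the exporter
    send_batch_max_size: 50000

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [clickhousetraces]

Larger, less frequent inserts create fewer parts per minute, which gives ClickHouse merges time to catch up and helps avoid the "Too many parts" error.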
You are right, my CPU was maxed out, so we have decided to deploy an external ClickHouse on a VM. Do we also have to deploy ZooKeeper, since we are planning on creating a sharded ClickHouse? How would you advise we approach this task?
How much CPU did you provision, and how much data are you ingesting? Yes, you need to use ZooKeeper.
About 50 million spans daily.
It's a 6-node k8s cluster with 8 vCPU and 34 GB RAM.
clickhouse:
  layout:
    shardsCount: 2
    replicasCount: 1
  zookeeper:
    replicaCount: 3
  podDistribution:
    - type: ClickHouseAntiAffinity
      topologyKey: kubernetes.io/hostname
    - type: ReplicaAntiAffinity
      topologyKey: kubernetes.io/hostname
    - type: ShardAntiAffinity
      topologyKey: kubernetes.io/hostname

  persistence:
    size: 1100Gi

  clickhouseOperator:
    zookeeperLog:
      ttl: 1

schemaMigrator:
  enableReplication: false
@Srikanth Chekuri In your previous response you mentioned "2. your inserts are too many and you need to batch them before writing to ClickHouse". How do I achieve this?
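For reference, and tying back to the original "sending queue is full" error: the clickhousetraces exporter is built on the collector's exporterhelper (visible in the stack trace above), so in addition to the batch processor sketch earlier, its queue can be enlarged while ClickHouse catches up. This assumes the exporter exposes the standard exporterhelper queue/retry options, which not every exporter does; the values are illustrative:

exporters:
  clickhousetraces:
    # assumption: the standard exporterhelper options are exposed by this exporter
    sending_queue:
      enabled: true
      num_consumers: 10   # concurrent workers draining the queue
      queue_size: 5000    # batches buffered while ClickHouse is slow to accept inserts
    retry_on_failure:
      enabled: true
      initial_interval: 5s
      max_elapsed_time: 300s

A bigger queue only buys headroom; if ClickHouse stays CPU-bound, the queue eventually fills again, so batching and the CPU fix remain the primary levers.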