# support
t
Hi! I'm trying to tune SigNoz for high load, 1M rows/min. I'm getting this error:
```
Code: 252. DB::Exception: Too many partitions for single INSERT block (more than 100)
```
I suppose it comes from the otel collector. Could you point me to the setting I have to change? I use the Helm chart, v0.91.0.
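For reference (not from the thread itself): error code 252 is ClickHouse's `max_partitions_per_insert_block` limit, which defaults to 100 and caps how many distinct table partitions a single INSERT block may touch. It can be raised with a `users.d` profile override; a sketch, with a hypothetical filename and an illustrative value:

```xml
<!-- users.d/partitions-override.xml (hypothetical filename) -->
<yandex>
    <profiles>
        <default>
            <!-- Default is 100; raising it lets one INSERT span more
                 partitions, at the cost of more parts created per insert. -->
            <max_partitions_per_insert_block>1000</max_partitions_per_insert_block>
        </default>
    </profiles>
</yandex>
```

Note that the error below still reports the default of 100, which may indicate that a profile override like the one pasted later in this thread is not actually taking effect.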
```yaml
config.d/concurrent-queries.xml: |
  <yandex>
      <max_concurrent_queries>5000</max_concurrent_queries>
  </yandex>
config.d/merge-settings.xml: |
  <yandex>
      <merge_tree>
          <max_bytes_to_merge_at_max_space_in_pool>10000000000</max_bytes_to_merge_at_max_space_in_pool>
          <max_bytes_to_merge_at_min_space_in_pool>1000000000</max_bytes_to_merge_at_min_space_in_pool>
          <merge_max_block_size>8192</merge_max_block_size>
      </merge_tree>
  </yandex>
users.d/distributed-timeouts-profile.xml: |-
  <yandex>
      <profiles>
          <default>
              <distributed_ddl_task_timeout>600</distributed_ddl_task_timeout>
              <distributed_background_insert_timeout>300</distributed_background_insert_timeout>
              <insert_distributed_timeout>300</insert_distributed_timeout>
              <insert_quorum_timeout>600000</insert_quorum_timeout>
              <max_memory_usage>16106127360</max_memory_usage>
              <max_bytes_before_external_group_by>20000000000</max_bytes_before_external_group_by>
              <max_execution_time>600</max_execution_time>
              <max_partitions_per_insert_block>1000</max_partitions_per_insert_block>
              <max_memory_usage_for_user>32212254720</max_memory_usage_for_user>
              <max_concurrent_queries_for_user>400</max_concurrent_queries_for_user>
              <max_concurrent_queries_for_all_users>600</max_concurrent_queries_for_all_users>
              <max_threads>16</max_threads>
              <max_bytes_before_external_sort>20000000000</max_bytes_before_external_sort>
              <parts_to_delay_insert>1000</parts_to_delay_insert>
              <parts_to_throw_insert>2000</parts_to_throw_insert>
          </default>
      </profiles>
  </yandex>
```
```yaml
otelCollector:
  autoscaling:
    enabled: true
    maxReplicas: 30
  config:
    exporters:
      clickhouselogsexporter:
        retry_on_failure:
          enabled: true
          initial_interval: 5s
          max_elapsed_time: 300s
          max_interval: 30s
        timeout: 30s
      clickhousetraces:
        retry_on_failure:
          enabled: true
          initial_interval: 5s
          max_elapsed_time: 300s
          max_interval: 30s
        timeout: 30s
    processors:
      batch:
        send_batch_max_size: 10000
        send_batch_size: 5000
        timeout: 10s
      k8sattributes:
        passthrough: true
```
v
Please increase your batch size.
```yaml
send_batch_max_size: 100000
send_batch_size: 100000
```
t
That didn't help. I tried 10M.
t
I am running into the same issue. It looks like we need to decrease the batch size instead of increasing it. I reduced it to 1k; so far it seems OK.
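For anyone landing here: the reasoning is that each flushed batch becomes one INSERT block, and a smaller batch tends to span fewer distinct partition keys (e.g. time buckets) when incoming data has scattered timestamps, keeping each INSERT under the partition limit. A reduced batch processor config along the lines described might look like this (values illustrative, not confirmed by the thread):

```yaml
processors:
  batch:
    # Smaller batches mean each INSERT touches fewer distinct partitions.
    send_batch_size: 1000
    send_batch_max_size: 1000
    timeout: 10s
```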