MaoShu SRE
10/28/2024, 2:05 AM
My data write volume is about 1.25 Gb/s (2.5M c/s), and the data is retained for two days. While running I keep hitting the too-many-small-parts problem: "Too many parts (300 with average size of 21.80 KiB) in table 'signoz_metrics.samples_v4'".
MaoShu SRE
10/28/2024, 2:06 AM
ClickHouse runs with 12 shards / 1 replica, and the collector sets send_batch_size: 100000.
After running for a while (about two days in), ClickHouse gets busy with MergeTree merge operations, and then the whole system OOMs and becomes unusable.
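For reference, a minimal sketch of where send_batch_size sits in an OpenTelemetry Collector config. The receiver, exporter, and pipeline names below are illustrative, not taken from this setup; the one thing worth checking is that the batch processor is actually listed under the pipeline's processors, otherwise the setting has no effect.

```yaml
processors:
  batch:
    send_batch_size: 100000   # flush once this many metric points are buffered
    timeout: 200ms            # otherwise flush after this interval (collector default)

service:
  pipelines:
    metrics:
      receivers: [otlp]                    # illustrative
      processors: [batch]                  # batch must appear here to take effect
      exporters: [clickhousemetricswrite]  # illustrative exporter name
```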
MaoShu SRE
10/28/2024, 2:06 AM
Can anyone give me some optimization suggestions?
Srikanth Chekuri
10/28/2024, 5:57 PM
MaoShu SRE
10/29/2024, 1:18 AM
MaoShu SRE
11/01/2024, 3:08 AM
I am still adding logs, and the write volume is now 1.6 GiB/s. Can you give me some performance suggestions? My idea is to have the otel-collector write more rows to ClickHouse per batch, so fewer parts are created and less merging is needed, but setting send_batch_size: 100000 does not seem to have had any effect.
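A minimal sketch of what a larger batch configuration might look like, assuming the standard batch processor options; the numbers are illustrative, not recommendations from this thread:

```yaml
processors:
  batch:
    send_batch_size: 500000       # illustrative: flush once this many points are buffered
    send_batch_max_size: 500000   # hard cap on a single batch (0 means unlimited)
    timeout: 5s                   # give batches more time to fill before flushing
```

Fewer, larger inserts generally mean fewer new parts per partition and less merge work on the ClickHouse side, at the cost of more collector memory and slightly higher ingest latency.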
Srikanth Chekuri
11/01/2024, 4:50 AM
MaoShu SRE
11/01/2024, 6:30 AM
Srikanth Chekuri
11/01/2024, 8:43 AM