# support
e
Hello, we have migrated our observability stack from ClickHouse to SigNoz. First of all, thank you very much for creating such a great product; everything looks fantastic. I have a question: in version 0.75.0, the /services screen takes quite a long time to populate with data. We have to wait for about one minute, and if we select an environment it loads faster, but it still takes around 40 seconds. Is there anything we can do to improve this performance? If I'm not mistaken, it seems to be making a request to the query-service's "/api/v1/services" endpoint in the background.
v
How is your ClickHouse running?
e
We deployed ClickHouse using the SigNoz Helm chart. It is running on EKS with 2 shards, using PVCs with the gp3 volume type.
v
How big are the ClickHouse machines? And how big is the gp3 volume? It's probably a disk bottleneck.
e
Actually, it's not too much; 200 GB disks are currently attached, and their usage is as follows:
v
I don't remember off the top of my head how gp3 throughput scales with size, but if you want it to be faster, your disk needs more throughput. Increasing the size of the disk is one way; moving to a bigger instance size is another.
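For context, gp3 has a fixed baseline of 3,000 IOPS and 125 MiB/s regardless of volume size; anything above that has to be provisioned explicitly. Below is a minimal sketch of what that could look like with the AWS EBS CSI driver. The storage class name and the numbers are assumptions, and you would still need to point the ClickHouse PVCs at this class via the chart's storage class setting (the exact key depends on your chart version):

```yaml
# Hypothetical StorageClass with provisioned gp3 IOPS/throughput (example values)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3-clickhouse          # assumed name
provisioner: ebs.csi.aws.com    # AWS EBS CSI driver
parameters:
  type: gp3
  iops: "8000"                  # above the 3,000 IOPS baseline
  throughput: "500"             # MiB/s, above the 125 MiB/s baseline
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
```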
It seems like a lot of folks are interested in knowing how to do this. @Nagesh Bansal
n
Should we create some FAQs around this?
e
gp3 is not a bad disk in terms of performance; it can be provisioned up to 1,000 MB/s throughput and 16K IOPS. There are higher-performance volumes like io2, but they would be too costly for us. 😕 Would increasing the number of ClickHouse replicas provide any benefit? If multi-replica reads were utilized, it might lead to some improvement.
a
I am also struggling with a disk performance bottleneck on a single ClickHouse instance. @Enis Kollugil I think increasing the number of shards will help more than replicas. @Srikanth Chekuri can you give more detail on how to properly implement sharding? Will just increasing the following in the Helm chart layout do the trick, or is more needed? shardsCount: 2
v
Yes, this should do the trick.
The old data won't get rebalanced automatically.
Sharding will kick in starting with the new data!
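A minimal sketch of the corresponding values.yaml change, based on the `shardsCount` key mentioned in the question above; the exact key path can differ between SigNoz chart versions, so verify it against your chart's default values:

```yaml
# values.yaml for the SigNoz Helm chart -- sketch, key path assumed from the thread
clickhouse:
  layout:
    shardsCount: 2     # newly ingested data is distributed across both shards
    replicasCount: 1   # replicas add redundancy/read capacity, not write distribution
```

As noted above, data written before the change stays on its original shard; only new inserts are spread across the added shard.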