# support
Hi Team, I came across this statement in the SigNoz documentation:

> "SigNoz utilizes ClickHouse, a high-performance columnar database, which allows it to efficiently manage large volumes of data. This architecture supports ingestion rates exceeding 10TB per day, making it capable of handling significant operational loads typical in large-scale environments."

In our environment we are seeing very high ingestion rates, and ClickHouse frequently struggles to keep up. Do we need to set up ClickHouse sharding and clustering manually, rather than relying solely on the out-of-the-box Helm deployment, to handle such high ingestion rates effectively?

We are already using sampling and other optimizations, but we want SigNoz itself to scale to very high ingestion loads. We're on the community version of SigNoz. Could anyone share a production-grade values.yaml sample configuration that helps achieve this kind of scalability? Something along the lines of the sketch below is what we're after.
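(This is a hypothetical sketch for discussion, not a verified config: the `layout.shardsCount`/`layout.replicasCount` keys are assumed from the chart's clickhouse-operator integration, the `zookeeper` nesting and `otelCollector` replica settings are assumptions about the chart layout, and every resource number is a placeholder. Check the default values.yaml of your chart version before using any of it.)

```yaml
# values.yaml sketch for scaling out the SigNoz Helm chart.
# All key names and numbers are assumptions; verify against
# your chart version's default values.yaml.
clickhouse:
  layout:
    shardsCount: 2      # spread inserts across multiple shards
    replicasCount: 2    # per-shard replicas for availability
  resources:
    requests:
      cpu: "4"
      memory: 16Gi
    limits:
      cpu: "8"
      memory: 32Gi
  persistence:
    size: 500Gi         # placeholder; size for your retention window
  zookeeper:
    replicaCount: 3     # quorum for replicated/distributed tables

otelCollector:
  replicaCount: 4       # scale collectors horizontally ahead of ClickHouse
  resources:
    requests:
      cpu: "1"
      memory: 2Gi
    limits:
      cpu: "2"
      memory: 4Gi
```

A common pattern, as I understand it, is to scale the OTel collector replicas first, since they batch writes into ClickHouse, and only add ClickHouse shards once a single shard's insert throughput is actually exhausted.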
I'm curious too