# general
Hey, does anyone have any experience to share on how SigNoz Cloud performs at scale? For context, we're currently using Humio for logs (3TB ingest/day) and VictoriaMetrics for metrics (I don't remember the scale off the top of my head, but will look it up later). We tried out Loki and it really didn't work for us - full-text queries that take single-digit seconds in Humio would be very slow or time out in Loki.
I'm going to check with the team on the max scale we're being used at, @Christian Theilemann, and will see what data we can share. In general, SigNoz uses ClickHouse as its datastore; they've written a bit about optimizing performance for log storage. I should ping Dale to see if he ever wrote the promised follow-up
Hi @Christian Theilemann, it is difficult to do an exact comparison, but we have tested with 5TB/day and 1M active timeseries per minute.
It should definitely scale easily to 5x that
have you done something like a search/query for a random string (like
) over the entire dataset (not just a pre-filtered dataset or a specific column) for the last 2 days - which would, in the worst case, scan about 10TB of data? I know that queries like that are very problematic in Loki and some other systems (in Loki this would almost certainly time out), but Humio optimizes them via bloom filters.
Btw, do you by any chance have a way to ingest log data from vector.dev (which is what we're currently using)? That would make it easy for me to test things out.
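Not an official recipe, but since SigNoz stores logs in ClickHouse, vector's built-in `clickhouse` sink can write to it directly. A minimal sketch - the endpoint, database, table, and source names here are placeholders you'd swap for your actual setup, and the target table schema has to match what you send:

```toml
# Hypothetical vector.dev config: forward logs to a ClickHouse table.
# "my_logs" is assumed to be an existing source in your vector config.
[sinks.clickhouse_out]
type      = "clickhouse"
inputs    = ["my_logs"]
endpoint  = "http://clickhouse.example.internal:8123"  # placeholder host
database  = "default"
table     = "logs"

  [sinks.clickhouse_out.batch]
  max_bytes = 10_485_760   # flush in ~10MB batches

  [sinks.clickhouse_out.buffer]
  type     = "disk"        # survive restarts without dropping logs
  max_size = 268_435_456
```

That writes straight to ClickHouse rather than through SigNoz's OTLP ingestion path, so it's worth checking with the SigNoz folks which route they support for cloud.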
You can also use an inverted index (still experimental in ClickHouse): https://clickhouse.com/docs/en/engines/table-engines/mergetree-family/invertedindexes
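Rough sketch of what both options look like in DDL - the `logs`/`body` names are hypothetical, the inverted index needs the experimental flag per the linked docs, and a `tokenbf_v1` bloom-filter skip index is the non-experimental way to get the Humio-style token bloom filters mentioned above:

```sql
-- Experimental inverted index (ClickHouse docs linked above):
SET allow_experimental_inverted_index = true;
ALTER TABLE logs ADD INDEX inv_idx(body) TYPE inverted(0) GRANULARITY 1;

-- Alternative: token bloom-filter skip index (stable).
-- Parameters: bloom filter size in bytes, hash function count, seed.
ALTER TABLE logs ADD INDEX tok_idx(body) TYPE tokenbf_v1(10240, 3, 0) GRANULARITY 4;

-- Existing data only gets indexed after a materialize:
ALTER TABLE logs MATERIALIZE INDEX tok_idx;

-- Token searches can then skip granules instead of full-scanning:
SELECT count() FROM logs WHERE hasToken(body, 'needle');
```

Skip indexes only prune granules (they don't make every query fast), but for rare-token needle-in-haystack searches over 10TB that pruning is exactly what helps.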