# general

Juha Patrikainen

02/08/2023, 1:12 PM
Installed ClickHouse with the SigNoz chart. Is there a way to limit ClickHouse data persistence? The ClickHouse PV is now 100% full, which of course brings pain.

Pranay

02/08/2023, 1:13 PM
@Juha Patrikainen Have you checked this - https://signoz.io/docs/userguide/retention-period/ Updating the retention period is a resource-heavy operation, so make sure you have some memory and CPU capacity available before you run it; it may also take some time.
What is your current retention period?
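(For readers following this thread: one way to see the retention currently applied is to inspect the TTL clause on the ClickHouse tables directly. A minimal sketch, reusing the pod and namespace names that appear later in this thread; the table name is one of SigNoz's trace tables and may differ across versions.)

```sh
# Print the table definition, including any TTL (retention) clause.
# Pod, namespace, and table names are assumptions - adjust to your install.
kubectl exec chi-signoz-clickhouse-cluster-0-0-0 -n signoz -- \
  clickhouse client -q 'SHOW CREATE TABLE signoz_traces.signoz_index_v2'
```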

Juha Patrikainen

02/08/2023, 1:17 PM
Aah, they are empty at the moment
Thanks for the help @Pranay!

Pranay

02/08/2023, 1:20 PM
You may also want to upgrade to v0.15.0, where we have made the logic for updating the retention period more robust. In earlier versions, it sometimes caused silent failures.
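(A hedged sketch of the upgrade itself, assuming the chart was installed from the SigNoz Helm repo with release name and namespace `signoz`; adjust the names to your install.)

```sh
# Pull the latest chart metadata and upgrade the existing release.
# Repo alias, release name, and namespace are assumptions.
helm repo update
helm upgrade signoz signoz/signoz -n signoz
```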

Juha Patrikainen

02/08/2023, 1:21 PM
@Pranay It would be nice to be able to configure retention via Helm chart values. It seems that is not possible at the moment.

Pranay

02/08/2023, 1:23 PM
Yeah, that's a good idea actually. Can you create an issue for this - https://github.com/SigNoz/signoz/issues/new/choose

Juha Patrikainen

02/08/2023, 1:23 PM
Sure, no prob 👍

Prashant Shahi

02/08/2023, 11:08 PM
That would require changes in query-service to support it, with the help of either environment variables or flags.

Juha Patrikainen

02/09/2023, 8:09 AM
Setting the retention times did not do anything; I guess it could not complete because the PV was already full.

Prashant Shahi

02/09/2023, 8:10 AM
When the PV is full, it is recommended to increase the PV size before updating retention times.
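(A minimal sketch of expanding the ClickHouse PVC first, assuming a StorageClass with allowVolumeExpansion enabled; the PVC name below is a placeholder - list the PVCs in the namespace to find yours.)

```sh
# Find the ClickHouse data PVC, then raise its storage request.
# The PVC name is hypothetical; the StorageClass must allow volume expansion.
kubectl get pvc -n signoz
kubectl patch pvc data-volumeclaim-template-chi-signoz-clickhouse-cluster-0-0-0 -n signoz \
  --type merge -p '{"spec":{"resources":{"requests":{"storage":"200Gi"}}}}'
```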

Juha Patrikainen

02/09/2023, 8:11 AM
@Prashant Shahi @Pranay I deleted the ClickHouse PV and created it again, but now the SigNoz otel-collector gives an error on start:
...
2023-02-09T08:12:09.537Z	info	clickhousetracesexporter/clickhouse_factory.go:142	View does not exist, skipping patch	{"kind": "exporter", "data_type": "traces", "name": "clickhousetraces", "table": "dependency_graph_minutes_db_calls_mv"}
2023-02-09T08:12:09.537Z	info	clickhousetracesexporter/clickhouse_factory.go:116	Running migrations from path: 	{"kind": "exporter", "data_type": "traces", "name": "clickhousetraces", "test": "/migrations"}
2023-02-09T08:12:09.546Z	info	clickhousetracesexporter/clickhouse_factory.go:128	Clickhouse Migrate finished	{"kind": "exporter", "data_type": "traces", "name": "clickhousetraces", "error": "Dirty database version 13. Fix and force version."}
Error: cannot build pipelines: failed to create "clickhousetraces" exporter, in pipeline "traces": code: 60, message: Table signoz_traces.distributed_signoz_index_v2 doesn't exist
2023/02/09 08:12:09 application run finished with error: cannot build pipelines: failed to create "clickhousetraces" exporter, in pipeline "traces": code: 60, message: Table signoz_traces.distributed_signoz_index_v2 doesn't exist
Is there a way to have SigNoz recreate tables?
Ended up doing full reinstall -> problem solved.

Prashant Shahi

02/09/2023, 1:29 PM
@Juha Patrikainen I see. That issue could have been resolved by restarting the OtelCollector container.
After the ClickHouse PV is force-removed like that, the OtelCollector runs the migration script on its next restart.
> Ended up doing full reinstall -> problem solved.
Good to know that the issue is resolved.
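(For reference, a restart of the collector can be triggered as below; the deployment name is an assumption - check `kubectl get deploy -n signoz` for the exact name in your install.)

```sh
# Restart the SigNoz OtelCollector so it re-runs the ClickHouse migrations on startup.
# Deployment name and namespace are assumptions.
kubectl rollout restart deployment signoz-otel-collector -n signoz
```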

Wesley Hartford

04/19/2023, 6:19 PM
Mostly for people who come across this thread in a search: this is what I do when ClickHouse is full - delete all the logs:
kubectl exec chi-signoz-clickhouse-cluster-0-0-0 -n signoz -- clickhouse client -d signoz_logs -q 'truncate table logs'
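(Before truncating, it can help to check which tables actually hold the data; a sketch using ClickHouse's system.parts, with the same pod and namespace names as the command above.)

```sh
# Show on-disk size per table, largest first, to see what is filling the PV.
kubectl exec chi-signoz-clickhouse-cluster-0-0-0 -n signoz -- clickhouse client -q \
  "SELECT database, table, formatReadableSize(sum(bytes_on_disk)) AS size
   FROM system.parts WHERE active GROUP BY database, table ORDER BY sum(bytes_on_disk) DESC"
```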

Pranay

04/25/2023, 3:35 AM
Thanks for sharing @Wesley Hartford