# support
t
Having issues with clickhouse after setting up s3 retention and then removing s3 retention
2023.01.30 15:50:37.313591 [ 7 ] {} <Error> Application: DB::Exception: Unknown storage policy `tiered`: Cannot attach table `signoz_logs`.`logs` from metadata file /var/lib/clickhouse/metadata/signoz_logs/logs.sql from query ATTACH TABLE signoz_logs.logs (`timestamp` UInt64 CODEC(DoubleDelta, LZ4), `observed_timestamp` UInt64 CODEC(DoubleDelta, LZ4), `id` String CODEC(ZSTD(1)), `trace_id` String CODEC(ZSTD(1)), `span_id` String CODEC(ZSTD(1)), `trace_flags` UInt32, `severity_text` LowCardinality(String) CODEC(ZSTD(1)), `severity_number` UInt8, `body` String CODEC(ZSTD(2)), `resources_string_key` Array(String) CODEC(ZSTD(1)), `resources_string_value` Array(String) CODEC(ZSTD(1)), `attributes_string_key` Array(String) CODEC(ZSTD(1)), `attributes_string_value` Array(String) CODEC(ZSTD(1)), `attributes_int64_key` Array(String) CODEC(ZSTD(1)), `attributes_int64_value` Array(Int64) CODEC(ZSTD(1)), `attributes_float64_key` Array(String) CODEC(ZSTD(1)), `attributes_float64_value` Array(Float64) CODEC(ZSTD(1)), `k8s_pod_name` String MATERIALIZED resources_string_value[indexOf(resources_string_key, 'k8s_pod_name')] CODEC(LZ4), `k8s_container_name` String MATERIALIZED resources_string_value[indexOf(resources_string_key, 'k8s_container_name')] CODEC(LZ4), INDEX body_idx body TYPE tokenbf_v1(10240, 3, 0) GRANULARITY 4, INDEX id_minmax id TYPE minmax GRANULARITY 1, INDEX trace_id_idx trace_id TYPE bloom_filter(0.01) GRANULARITY 64, INDEX span_id_idx span_id TYPE bloom_filter(0.01) GRANULARITY 64, INDEX k8s_container_name_idx k8s_container_name TYPE bloom_filter(0.01) GRANULARITY 64) ENGINE = MergeTree PARTITION BY toDate(timestamp / 1000000000) ORDER BY (timestamp, id) TTL toDateTime(timestamp / 1000000000) + toIntervalSecond(31104000), toDateTime(timestamp / 1000000000) + toIntervalSecond(604800) TO VOLUME 's3' SETTINGS index_granularity = 8192, storage_policy = 'tiered'
2023.01.30 15:50:37.313805 [ 7 ] {} <Information> Application: shutting down
2023.01.30 15:50:37.313978 [ 8 ] {} <Information> BaseDaemon: Stop SignalListener thread
2023.01.30 15:50:37.345957 [ 1 ] {} <Information> Application: Child process exited normally with code 70.
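For context, the `Unknown storage policy 'tiered'` error means the table's stored DDL still references a storage policy that no longer exists in the server config after the S3 retention settings were removed. A minimal sketch of what such a `tiered` policy looks like in a ClickHouse storage configuration; the disk name, endpoint, and credentials below are placeholders, not the actual SigNoz config:

```xml
<clickhouse>
  <storage_configuration>
    <disks>
      <!-- Hypothetical S3-backed disk; endpoint and keys are placeholders -->
      <s3_disk>
        <type>s3</type>
        <endpoint>https://my-bucket.s3.amazonaws.com/clickhouse/</endpoint>
        <access_key_id>PLACEHOLDER</access_key_id>
        <secret_access_key>PLACEHOLDER</secret_access_key>
      </s3_disk>
    </disks>
    <policies>
      <!-- The policy name must match storage_policy = 'tiered' in the table DDL -->
      <tiered>
        <volumes>
          <default>
            <disk>default</disk>
          </default>
          <s3>
            <disk>s3_disk</disk>
          </s3>
        </volumes>
      </tiered>
    </policies>
  </storage_configuration>
</clickhouse>
```

Restoring a policy with this name (and the disks it references) should let the table attach again on startup.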
a
I don't think we have a way to disable s3 once it's applied, as it changes the table schema. Removing the s3 config is expected to throw an error. @Timothy Wigginton can you paste the output of
show create signoz_logs.logs
cc: @Vishal Sharma can this be solved by running an alter query on clickhouse?
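For reference, once the server can start again, the "move to S3" rule could in principle be dropped with an ALTER along these lines. This is a sketch, not a verified fix: the TTL expression is taken from the error log above, and note that `storage_policy` itself can generally only be changed to a policy that still contains all of the table's current disks:

```sql
-- Sketch: drop the 7-day "TO VOLUME 's3'" move rule, keeping only the
-- 360-day delete TTL from the original table definition.
ALTER TABLE signoz_logs.logs
    MODIFY TTL toDateTime(timestamp / 1000000000) + toIntervalSecond(31104000);
```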
t
Will recreating the s3 bucket solve the issue? Also, the clickhouse pod isn't able to start because of this, so I'm not sure I can run
show create signoz_logs.logs
a
yeah maybe... if the other configs are the same. Worth trying to recreate the bucket and restart clickhouse. I can see the table info printed in the error log:
.... ENGINE = MergeTree PARTITION BY toDate(timestamp / 1000000000) ORDER BY (timestamp, id) TTL toDateTime(timestamp / 1000000000) + toIntervalSecond(31104000), toDateTime(timestamp / 1000000000) + toIntervalSecond(604800) TO VOLUME 's3' SETTINGS index_granularity = 8192, storage_policy = 'tiered'
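If recreating the bucket doesn't help, another possible (untested, use with a backup) approach is to strip the S3 clauses from the table's metadata file, `/var/lib/clickhouse/metadata/signoz_logs/logs.sql`, so the table attaches with the default policy. The sed patterns below are assumptions based on the DDL in the error log, shown here against an inline copy of that DDL rather than the real file:

```shell
# Hypothetical sketch: remove the "TO VOLUME 's3'" TTL clause and the
# storage_policy setting from the attach DDL quoted in the error log.
# On a real server this would target the metadata .sql file (after backing
# it up), followed by a clickhouse restart.
ddl="ENGINE = MergeTree ORDER BY (timestamp, id) TTL toDateTime(timestamp / 1000000000) + toIntervalSecond(31104000), toDateTime(timestamp / 1000000000) + toIntervalSecond(604800) TO VOLUME 's3' SETTINGS index_granularity = 8192, storage_policy = 'tiered'"
echo "$ddl" \
  | sed "s/, toDateTime(timestamp \/ 1000000000) + toIntervalSecond(604800) TO VOLUME 's3'//" \
  | sed "s/, storage_policy = 'tiered'//"
```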
t
So there is no way to revert s3 retention, but would I have issues if I changed the s3 bucket?
I added the bucket back but I still have the same issue
v
@Timothy Wigginton Are you able to connect to clickhouse?
t
I was spending too much time trying to restore it, so I removed everything and redeployed. It's no big deal since it's just a test environment, but what would be a good way to back up and restore volumes?
v
There are a few backup and restore tools available for clickhouse, but we haven't tested them. I've created an issue for this that you can follow: https://github.com/SigNoz/signoz/issues/2160
t
Thank you