# support

sudhanshu dev

12/22/2022, 10:25 AM
By skipping a few versions?
Below is the error:

```
2022.12.22 10:26:22.674699 [ 7 ] {} <Error> Application: DB::Exception: Suspiciously many (20 parts, 1.63 MiB in total) broken parts to remove while maximum allowed broken parts count is 10. You can change the maximum value with merge tree setting 'max_suspicious_broken_parts' in <merge_tree> configuration section or in table settings in .sql file (don't forget to return setting back to default value): Cannot attach table `signoz_traces`.`durationSort` from metadata file /var/lib/clickhouse/metadata/signoz_traces/durationSort.sql from query:

ATTACH TABLE signoz_traces.durationSort (
    `timestamp` DateTime64(9) CODEC(DoubleDelta, LZ4),
    `traceID` FixedString(32) CODEC(ZSTD(1)),
    `spanID` String CODEC(ZSTD(1)),
    `parentSpanID` String CODEC(ZSTD(1)),
    `serviceName` LowCardinality(String) CODEC(ZSTD(1)),
    `name` LowCardinality(String) CODEC(ZSTD(1)),
    `kind` Int8 CODEC(T64, ZSTD(1)),
    `durationNano` UInt64 CODEC(T64, ZSTD(1)),
    `statusCode` Int16 CODEC(T64, ZSTD(1)),
    `component` LowCardinality(String) CODEC(ZSTD(1)),
    `httpMethod` LowCardinality(String) CODEC(ZSTD(1)),
    `httpUrl` LowCardinality(String) CODEC(ZSTD(1)),
    `httpCode` LowCardinality(String) CODEC(ZSTD(1)),
    `httpRoute` LowCardinality(String) CODEC(ZSTD(1)),
    `httpHost` LowCardinality(String) CODEC(ZSTD(1)),
    `gRPCCode` LowCardinality(String) CODEC(ZSTD(1)),
    `gRPCMethod` LowCardinality(String) CODEC(ZSTD(1)),
    `hasError` Bool CODEC(T64, ZSTD(1)),
    `tagMap` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
    `rpcSystem` LowCardinality(String) CODEC(ZSTD(1)),
    `rpcService` LowCardinality(String) CODEC(ZSTD(1)),
    `rpcMethod` LowCardinality(String) CODEC(ZSTD(1)),
    `responseStatusCode` LowCardinality(String) CODEC(ZSTD(1)),
    INDEX idx_service serviceName TYPE bloom_filter GRANULARITY 4,
    INDEX idx_name name TYPE bloom_filter GRANULARITY 4,
    INDEX idx_kind kind TYPE minmax GRANULARITY 4,
    INDEX idx_duration durationNano TYPE minmax GRANULARITY 1,
    INDEX idx_httpCode httpCode TYPE set(0) GRANULARITY 1,
    INDEX idx_hasError hasError TYPE set(2) GRANULARITY 1,
    INDEX idx_tagMapKeys mapKeys(tagMap) TYPE bloom_filter(0.01) GRANULARITY 64,
    INDEX idx_tagMapValues mapValues(tagMap) TYPE bloom_filter(0.01) GRANULARITY 64,
    INDEX idx_httpRoute httpRoute TYPE bloom_filter GRANULARITY 4,
    INDEX idx_httpUrl httpUrl TYPE bloom_filter GRANULARITY 4,
    INDEX idx_httpHost httpHost TYPE bloom_filter GRANULARITY 4,
    INDEX idx_httpMethod httpMethod TYPE bloom_filter GRANULARITY 4,
    INDEX idx_timestamp timestamp TYPE minmax GRANULARITY 1,
    INDEX idx_rpcMethod rpcMethod TYPE bloom_filter GRANULARITY 4,
    INDEX idx_responseStatusCode responseStatusCode TYPE set(0) GRANULARITY 1
) ENGINE = MergeTree
PARTITION BY toDate(timestamp)
ORDER BY (durationNano, timestamp)
TTL toDateTime(timestamp) + toIntervalSecond(172800),
    toDateTime(timestamp) + toIntervalSecond(86400) TO VOLUME 's3'
SETTINGS index_granularity = 8192, storage_policy = 'tiered'
```
Seems like some schema issue.
Please look and let me know.
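The exception above names its own escape hatch: the `max_suspicious_broken_parts` merge tree setting. A minimal sketch of a server-side override, assuming you can drop a config file into `config.d/` — the file name and the value 100 are illustrative, not from this thread, and the error text itself reminds you to return the setting to its default (10) after recovery:

```xml
<!-- e.g. /etc/clickhouse-server/config.d/broken_parts.xml (illustrative path) -->
<clickhouse>
    <merge_tree>
        <!-- Temporarily allow more broken parts to be dropped at attach time.
             Revert to the default after the tables attach cleanly. -->
        <max_suspicious_broken_parts>100</max_suspicious_broken_parts>
    </merge_tree>
</clickhouse>
```

Note that this tells ClickHouse to silently discard the broken parts, so the data in those 20 parts is lost either way.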

Prashant Shahi

12/22/2022, 10:33 AM
upgrade from 0.11 to 0.12 is non-breaking.
@Ankit Nayan any idea on this?

sudhanshu dev

12/22/2022, 10:34 AM
In the latest build, SigNoz introduced distributed ClickHouse.
Is it related to that?

Ankit Nayan

12/22/2022, 11:31 AM
@sudhanshu dev https://kb.altinity.com/altinity-kb-setup-and-maintenance/suspiciously-many-broken-parts/#cause Seems like the data has been corrupted on the disk

sudhanshu dev

12/22/2022, 11:34 AM
Ok, let me check the link and get back to you.
I went through the doc but did not get any idea how to solve it. Does anyone have any idea?
I understood what to do, but not how to do it in this setup, as we deployed using Helm.
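For a Helm deployment, the same inspection can be done from outside the pod with `kubectl exec`. A sketch — the namespace `platform` and the pod name below are assumptions (typical of the clickhouse-operator naming the SigNoz chart uses); look up your actual pod first:

```shell
# List ClickHouse pods in your release namespace (namespace is an assumption):
kubectl get pods -n platform

# Check the current limit inside the ClickHouse pod
# (pod name chi-signoz-clickhouse-cluster-0-0-0 is an assumption):
kubectl exec -n platform chi-signoz-clickhouse-cluster-0-0-0 -- \
  clickhouse-client --query \
  "SELECT value FROM system.merge_tree_settings WHERE name = 'max_suspicious_broken_parts'"
```

Any config override then has to be mounted into the pod via the chart's ClickHouse configuration values rather than edited in place, since pod-local edits are lost on restart.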

Ankit Nayan

12/22/2022, 3:31 PM
Can you try a fresh installation? Or is past data important?

sudhanshu dev

12/22/2022, 3:32 PM
No.
I am trying to fix it.
I found one more exception, related to S3. I suspect it causes the above exception.

Ankit Nayan

12/22/2022, 3:33 PM
ok

sudhanshu dev

12/22/2022, 3:54 PM
This is a known issue in ClickHouse.
Now getting this error:
```
<Error> Application: DB::Exception: Unknown storage policy `tiered`: Cannot attach table `signoz_logs`.`logs` from metadata file /var/lib/clickhouse/metadata/signoz_logs/logs.sql from query:

ATTACH TABLE signoz_logs.logs (
    `timestamp` UInt64 CODEC(DoubleDelta, LZ4),
    `observed_timestamp` UInt64 CODEC(DoubleDelta, LZ4),
    `id` String CODEC(ZSTD(1)),
    `trace_id` String CODEC(ZSTD(1)),
    `span_id` String CODEC(ZSTD(1)),
    `trace_flags` UInt32,
    `severity_text` LowCardinality(String) CODEC(ZSTD(1)),
    `severity_number` UInt8,
    `body` String CODEC(ZSTD(2)),
    `resources_string_key` Array(String) CODEC(ZSTD(1)),
    `resources_string_value` Array(String) CODEC(ZSTD(1)),
    `attributes_string_key` Array(String) CODEC(ZSTD(1)),
    `attributes_string_value` Array(String) CODEC(ZSTD(1)),
    `attributes_int64_key` Array(String) CODEC(ZSTD(1)),
    `attributes_int64_value` Array(Int64) CODEC(ZSTD(1)),
    `attributes_float64_key` Array(String) CODEC(ZSTD(1)),
    `attributes_float64_value` Array(Float64) CODEC(ZSTD(1)),
    `k8s_namespace_name` String MATERIALIZED attributes_string_value[indexOf(attributes_string_key, 'k8s_namespace_name')],
    `k8s_container_name` String MATERIALIZED attributes_string_value[indexOf(attributes_string_key, 'k8s_container_name')],
    `k8s_pod_name` String MATERIALIZED attributes_string_value[indexOf(attributes_string_key, 'k8s_pod_name')],
    INDEX body_idx body TYPE tokenbf_v1(10240, 3, 0) GRANULARITY 4,
    INDEX id_minmax id TYPE minmax GRANULARITY 1,
    INDEX k8s_namespace_name_idx k8s_namespace_name TYPE bloom_filter(0.01) GRANULARITY 64,
    INDEX k8s_container_name_idx k8s_container_name TYPE bloom_filter(0.01) GRANULARITY 64,
    INDEX k8s_pod_name_idx k8s_pod_name TYPE bloom_filter(0.01) GRANULARITY 64
) ENGINE = MergeTree
PARTITION BY toDate(timestamp / 1000000000)
ORDER BY (timestamp, id)
TTL toDateTime(timestamp / 1000000000) + toIntervalSecond(86400),
    toDateTime(timestamp / 1000000000) + toIntervalSecond(43200) TO VOLUME 's3'
SETTINGS index_granularity = 8192, storage_policy = 'tiered'
```
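The "Unknown storage policy `tiered`" error means the tables were created while an S3 cold-storage policy named `tiered` was configured, but the current server config no longer declares it. A rough sketch of the kind of declaration the tables expect — the endpoint and credentials are placeholders, not values from this thread, and the exact layout in a SigNoz Helm install is driven by the chart's cold-storage values:

```xml
<clickhouse>
    <storage_configuration>
        <disks>
            <s3>
                <type>s3</type>
                <!-- Placeholder bucket/credentials; substitute your own. -->
                <endpoint>https://my-bucket.s3.amazonaws.com/signoz/</endpoint>
                <access_key_id>...</access_key_id>
                <secret_access_key>...</secret_access_key>
            </s3>
        </disks>
        <policies>
            <tiered>
                <volumes>
                    <default>
                        <disk>default</disk>
                    </default>
                    <s3>
                        <disk>s3</disk>
                    </s3>
                </volumes>
            </tiered>
        </policies>
    </storage_configuration>
</clickhouse>
```

Restoring a policy with this name (and the `s3` volume the TTL clauses move data to) should let the attach proceed, which is consistent with the S3-related exception mentioned above.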

Ankit Nayan

12/22/2022, 3:59 PM
I feel it's something specific to you. A fresh install might be much simpler if possible

sudhanshu dev

12/22/2022, 4:00 PM
Ok, I will try that.
My concern was whether similar things could happen in production.
No issues, I will deploy fresh.

Ankit Nayan

12/22/2022, 4:03 PM
> My concern was whether similar things could happen in production.
We have not heard of any such cases yet

sudhanshu dev

12/22/2022, 4:03 PM
Got it.
Then maybe I made some mistake during the upgrade.