Darren Smith
09/12/2024, 5:06 PM
2024-09-12T17:00:59.545Z info exporterhelper/retry_sender.go:118 Exporting failed. Will retry the request after interval. {"kind": "exporter", "data_type": "logs", "name": "clickhouselogsexporter", "error": "StatementSend:code: 10, message: Not found column attributes_bool.keys in block. There are only columns: ts_bucket_start, resource_fingerprint, timestamp, observed_timestamp, id, trace_id, span_id, trace_flags, severity_text, severity_number, body, attributes_string, attributes_number, attributes_bool, resources_string, scope_name, scope_version, scope_string: while pushing to view signoz_logs.attribute_keys_bool_final_mv (211bdbc2-196d-43ad-9f49-a31b13627556)", "interval": "3.047923669s"}
I presumed the migration service handled all that for us.
Darren Smith
09/12/2024, 5:26 PM
SHOW CREATE TABLE signoz_logs.attribute_keys_bool_final_mv
CREATE MATERIALIZED VIEW signoz_logs.attribute_keys_bool_final_mv TO signoz_logs.logs_attribute_keys
(
`name` LowCardinality(String),
`datatype` String
)
AS SELECT DISTINCT
arrayJoin(mapKeys(attributes_bool)) AS name,
'Bool' AS datatype
FROM signoz_logs.logs_v2
ORDER BY name ASC │
that i presume is okay
Darren Smith
09/12/2024, 5:27 PM
DESCRIBE TABLE logs_v2
<snip>
14. │ attributes_bool │ Map(LowCardinality(String), Bool) │ │ │ │ ZSTD(1) │ │
</snip>
Darren Smith
09/12/2024, 5:30 PM
nitya-signoz
09/13/2024, 5:05 AM
select * from system.mutations where is_done=0
Darren Smith
09/13/2024, 7:35 AM
ip-192-168-2-148.eu-west-2.compute.internal :) select * from system.mutations where is_done=0
Ok.
0 rows in set. Elapsed: 0.003 sec.
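(Aside: since no stuck mutations show up, a useful next check is to diff the materialized-view definitions held by each replica. A sketch, using the standard `system.tables` table - a view on one replica that still references the old `attributes_bool.keys`-style columns would explain the missing-column error:)

```sql
-- List every materialized view in signoz_logs, with its UUID and definition.
-- Run on each replica and diff the output; note the UUIDs reported in the
-- exporter errors to spot a stale view on one side.
SELECT name, uuid, create_table_query
FROM system.tables
WHERE database = 'signoz_logs' AND engine = 'MaterializedView'
FORMAT Vertical
```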
Darren Smith
09/13/2024, 7:36 AM
Darren Smith
09/13/2024, 7:37 AM
Darren Smith
09/13/2024, 7:38 AM
Darren Smith
09/13/2024, 7:40 AM
DOCKER_MULTI_NODE_CLUSTER
but i'm not sure - that is only to set up the distributed versions of the schema, right? the exporters don't do any schema migration themselves?
nitya-signoz
09/13/2024, 7:41 AM
Darren Smith
09/13/2024, 7:41 AM
Darren Smith
09/13/2024, 7:41 AM
<remote_servers>
<cluster>
<shard>
<replica>
<host>clickhouse-1.signoz-dev.internal</host>
<port>9000</port>
</replica>
<replica>
<host>clickhouse-2.signoz-dev.internal</host>
<port>9000</port>
</replica>
<internal_replication>true</internal_replication>
</shard>
</cluster>
nitya-signoz
09/13/2024, 7:43 AM
Darren Smith
09/13/2024, 7:45 AM
nitya-signoz
09/13/2024, 7:46 AM
Darren Smith
09/13/2024, 7:47 AM
{
name = "migration_service"
image = var.ecr_image_schema-migrator
command = ["--dsn=tcp://clickhouse-1.${local.fqdn}:9000", "--replication=true"]
logConfiguration = {
Darren Smith
09/13/2024, 7:47 AM
Darren Smith
09/13/2024, 7:47 AM
Darren Smith
09/13/2024, 7:48 AM
nitya-signoz
09/13/2024, 7:48 AM
show create table logs
show create table logs_v2
show create table distributed_logs
show create table distributed_logs_v2
show create table logs_v2_resource
show create table distributed_logs_v2_resource
Darren Smith
09/13/2024, 7:48 AM
4. │ distributed_logs │
5. │ distributed_logs_attribute_keys │
6. │ distributed_logs_resource_keys │
7. │ distributed_logs_v2 │
8. │ distributed_logs_v2_resource │
9. │ distributed_tag_attributes │
10. │ distributed_usage │
Darren Smith
09/13/2024, 7:49 AM
───────────────────────────────────────────────────────────────────────────────────────────┐
1. │ CREATE TABLE signoz_logs.logs
(
`timestamp` UInt64 CODEC(DoubleDelta, LZ4),
`observed_timestamp` UInt64 CODEC(DoubleDelta, LZ4),
`id` String CODEC(ZSTD(1)),
`trace_id` String CODEC(ZSTD(1)),
`span_id` String CODEC(ZSTD(1)),
`trace_flags` UInt32,
`severity_text` LowCardinality(String) CODEC(ZSTD(1)),
`severity_number` UInt8,
`body` String CODEC(ZSTD(2)),
`resources_string_key` Array(String) CODEC(ZSTD(1)),
`resources_string_value` Array(String) CODEC(ZSTD(1)),
`attributes_string_key` Array(String) CODEC(ZSTD(1)),
`attributes_string_value` Array(String) CODEC(ZSTD(1)),
`attributes_int64_key` Array(String) CODEC(ZSTD(1)),
`attributes_int64_value` Array(Int64) CODEC(ZSTD(1)),
`attributes_float64_key` Array(String) CODEC(ZSTD(1)),
`attributes_float64_value` Array(Float64) CODEC(ZSTD(1)),
`attributes_bool_key` Array(String) CODEC(ZSTD(1)),
`attributes_bool_value` Array(Bool) CODEC(ZSTD(1)),
`scope_name` String CODEC(ZSTD(1)),
`scope_version` String CODEC(ZSTD(1)),
`scope_string_key` Array(String) CODEC(ZSTD(1)),
`scope_string_value` Array(String) CODEC(ZSTD(1)),
INDEX body_idx body TYPE ngrambf_v1(4, 60000, 5, 0) GRANULARITY 1,
INDEX id_minmax id TYPE minmax GRANULARITY 1,
INDEX severity_number_idx severity_number TYPE set(25) GRANULARITY 4,
INDEX severity_text_idx severity_text TYPE set(25) GRANULARITY 4,
INDEX trace_flags_idx trace_flags TYPE bloom_filter GRANULARITY 4,
INDEX scope_name_idx scope_name TYPE tokenbf_v1(10240, 3, 0) GRANULARITY 4
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{uuid}/{shard}', '{replica}')
PARTITION BY toDate(timestamp / 1000000000)
ORDER BY (timestamp, id)
TTL toDateTime(timestamp / 1000000000) + toIntervalSecond(1296000)
SETTINGS index_granularity = 8192, ttl_only_drop_parts = 1 │
Darren Smith
09/13/2024, 7:49 AM
1. │ CREATE TABLE signoz_logs.logs_v2
(
`ts_bucket_start` UInt64 CODEC(DoubleDelta, LZ4),
`resource_fingerprint` String CODEC(ZSTD(1)),
`timestamp` UInt64 CODEC(DoubleDelta, LZ4),
`observed_timestamp` UInt64 CODEC(DoubleDelta, LZ4),
`id` String CODEC(ZSTD(1)),
`trace_id` String CODEC(ZSTD(1)),
`span_id` String CODEC(ZSTD(1)),
`trace_flags` UInt32,
`severity_text` LowCardinality(String) CODEC(ZSTD(1)),
`severity_number` UInt8,
`body` String CODEC(ZSTD(2)),
`attributes_string` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
`attributes_number` Map(LowCardinality(String), Float64) CODEC(ZSTD(1)),
`attributes_bool` Map(LowCardinality(String), Bool) CODEC(ZSTD(1)),
`resources_string` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
`scope_name` String CODEC(ZSTD(1)),
`scope_version` String CODEC(ZSTD(1)),
`scope_string` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
INDEX body_idx lower(body) TYPE ngrambf_v1(4, 60000, 5, 0) GRANULARITY 1,
INDEX id_minmax id TYPE minmax GRANULARITY 1,
INDEX severity_number_idx severity_number TYPE set(25) GRANULARITY 4,
INDEX severity_text_idx severity_text TYPE set(25) GRANULARITY 4,
INDEX trace_flags_idx trace_flags TYPE bloom_filter GRANULARITY 4,
INDEX scope_name_idx scope_name TYPE tokenbf_v1(10240, 3, 0) GRANULARITY 4,
INDEX attributes_string_idx_key mapKeys(attributes_string) TYPE tokenbf_v1(1024, 2, 0) GRANULARITY 1,
INDEX attributes_string_idx_val mapValues(attributes_string) TYPE ngrambf_v1(4, 5000, 2, 0) GRANULARITY 1,
INDEX attributes_int64_idx_key mapKeys(attributes_number) TYPE tokenbf_v1(1024, 2, 0) GRANULARITY 1,
INDEX attributes_int64_idx_val mapValues(attributes_number) TYPE bloom_filter GRANULARITY 1,
INDEX attributes_bool_idx_key mapKeys(attributes_bool) TYPE tokenbf_v1(1024, 2, 0) GRANULARITY 1
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{uuid}/{shard}', '{replica}')
PARTITION BY toDate(timestamp / 1000000000)
ORDER BY (ts_bucket_start, resource_fingerprint, severity_text, timestamp, id)
TTL toDateTime(timestamp / 1000000000) + toIntervalSecond(1296000)
SETTINGS ttl_only_drop_parts = 1, index_granularity = 8192 │
Darren Smith
09/13/2024, 7:50 AM
───────────────────────────────────────────────────────────────────────────────────────────┐
1. │ CREATE TABLE signoz_logs.distributed_logs
(
`timestamp` UInt64 CODEC(DoubleDelta, LZ4),
`observed_timestamp` UInt64 CODEC(DoubleDelta, LZ4),
`id` String CODEC(ZSTD(1)),
`trace_id` String CODEC(ZSTD(1)),
`span_id` String CODEC(ZSTD(1)),
`trace_flags` UInt32,
`severity_text` LowCardinality(String) CODEC(ZSTD(1)),
`severity_number` UInt8,
`body` String CODEC(ZSTD(2)),
`resources_string_key` Array(String) CODEC(ZSTD(1)),
`resources_string_value` Array(String) CODEC(ZSTD(1)),
`attributes_string_key` Array(String) CODEC(ZSTD(1)),
`attributes_string_value` Array(String) CODEC(ZSTD(1)),
`attributes_int64_key` Array(String) CODEC(ZSTD(1)),
`attributes_int64_value` Array(Int64) CODEC(ZSTD(1)),
`attributes_float64_key` Array(String) CODEC(ZSTD(1)),
`attributes_float64_value` Array(Float64) CODEC(ZSTD(1)),
`attributes_bool_key` Array(String) CODEC(ZSTD(1)),
`attributes_bool_value` Array(Bool) CODEC(ZSTD(1)),
`scope_name` String CODEC(ZSTD(1)),
`scope_version` String CODEC(ZSTD(1)),
`scope_string_key` Array(String) CODEC(ZSTD(1)),
`scope_string_value` Array(String) CODEC(ZSTD(1))
)
ENGINE = Distributed('cluster', 'signoz_logs', 'logs', cityHash64(id)) │
Darren Smith
09/13/2024, 7:50 AM
1. │ CREATE TABLE signoz_logs.distributed_logs_v2
(
`ts_bucket_start` UInt64 CODEC(DoubleDelta, LZ4),
`resource_fingerprint` String CODEC(ZSTD(1)),
`timestamp` UInt64 CODEC(DoubleDelta, LZ4),
`observed_timestamp` UInt64 CODEC(DoubleDelta, LZ4),
`id` String CODEC(ZSTD(1)),
`trace_id` String CODEC(ZSTD(1)),
`span_id` String CODEC(ZSTD(1)),
`trace_flags` UInt32,
`severity_text` LowCardinality(String) CODEC(ZSTD(1)),
`severity_number` UInt8,
`body` String CODEC(ZSTD(2)),
`attributes_string` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
`attributes_number` Map(LowCardinality(String), Float64) CODEC(ZSTD(1)),
`attributes_bool` Map(LowCardinality(String), Bool) CODEC(ZSTD(1)),
`resources_string` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
`scope_name` String CODEC(ZSTD(1)),
`scope_version` String CODEC(ZSTD(1)),
`scope_string` Map(LowCardinality(String), String) CODEC(ZSTD(1))
)
ENGINE = Distributed('cluster', 'signoz_logs', 'logs_v2', cityHash64(id)) │
Darren Smith
09/13/2024, 7:51 AM
1. │ CREATE TABLE signoz_logs.logs_v2_resource
(
`labels` String CODEC(ZSTD(5)),
`fingerprint` String CODEC(ZSTD(1)),
`seen_at_ts_bucket_start` Int64 CODEC(Delta(8), ZSTD(1)),
INDEX idx_labels lower(labels) TYPE ngrambf_v1(4, 1024, 3, 0) GRANULARITY 1
)
ENGINE = ReplicatedReplacingMergeTree('/clickhouse/tables/{uuid}/{shard}', '{replica}')
PARTITION BY toDate(seen_at_ts_bucket_start / 1000)
ORDER BY (labels, fingerprint, seen_at_ts_bucket_start)
TTL (toDateTime(seen_at_ts_bucket_start) + toIntervalSecond(1296000)) + toIntervalSecond(1800)
SETTINGS ttl_only_drop_parts = 1, index_granularity = 8192 │
Darren Smith
09/13/2024, 7:51 AM
1. │ CREATE TABLE signoz_logs.distributed_logs_v2_resource
(
`labels` String CODEC(ZSTD(5)),
`fingerprint` String CODEC(ZSTD(1)),
`seen_at_ts_bucket_start` Int64 CODEC(Delta(8), ZSTD(1))
)
ENGINE = Distributed('cluster', 'signoz_logs', 'logs_v2_resource', cityHash64(labels, fingerprint)) │
Darren Smith
09/13/2024, 7:53 AM
Darren Smith
09/13/2024, 7:53 AM
Darren Smith
09/13/2024, 7:58 AM
1. │ CREATE TABLE signoz_logs.logs
(
`timestamp` UInt64 CODEC(DoubleDelta, LZ4),
`observed_timestamp` UInt64 CODEC(DoubleDelta, LZ4),
`id` String CODEC(ZSTD(1)),
`trace_id` String CODEC(ZSTD(1)),
`span_id` String CODEC(ZSTD(1)),
`trace_flags` UInt32,
`severity_text` LowCardinality(String) CODEC(ZSTD(1)),
`severity_number` UInt8,
`body` String CODEC(ZSTD(2)),
`resources_string_key` Array(String) CODEC(ZSTD(1)),
`resources_string_value` Array(String) CODEC(ZSTD(1)),
`attributes_string_key` Array(String) CODEC(ZSTD(1)),
`attributes_string_value` Array(String) CODEC(ZSTD(1)),
`attributes_int64_key` Array(String) CODEC(ZSTD(1)),
`attributes_int64_value` Array(Int64) CODEC(ZSTD(1)),
`attributes_float64_key` Array(String) CODEC(ZSTD(1)),
`attributes_float64_value` Array(Float64) CODEC(ZSTD(1)),
`attributes_bool_key` Array(String) CODEC(ZSTD(1)),
`attributes_bool_value` Array(Bool) CODEC(ZSTD(1)),
`scope_name` String CODEC(ZSTD(1)),
`scope_version` String CODEC(ZSTD(1)),
`scope_string_key` Array(String) CODEC(ZSTD(1)),
`scope_string_value` Array(String) CODEC(ZSTD(1)),
INDEX body_idx body TYPE ngrambf_v1(4, 60000, 5, 0) GRANULARITY 1,
INDEX id_minmax id TYPE minmax GRANULARITY 1,
INDEX severity_number_idx severity_number TYPE set(25) GRANULARITY 4,
INDEX severity_text_idx severity_text TYPE set(25) GRANULARITY 4,
INDEX trace_flags_idx trace_flags TYPE bloom_filter GRANULARITY 4,
INDEX scope_name_idx scope_name TYPE tokenbf_v1(10240, 3, 0) GRANULARITY 4
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{uuid}/{shard}', '{replica}')
PARTITION BY toDate(timestamp / 1000000000)
ORDER BY (timestamp, id)
TTL toDateTime(timestamp / 1000000000) + toIntervalSecond(1296000)
SETTINGS index_granularity = 8192, ttl_only_drop_parts = 1 │
Replica 2: logsv2:
1. │ CREATE TABLE signoz_logs.logs_v2
(
`ts_bucket_start` UInt64 CODEC(DoubleDelta, LZ4),
`resource_fingerprint` String CODEC(ZSTD(1)),
`timestamp` UInt64 CODEC(DoubleDelta, LZ4),
`observed_timestamp` UInt64 CODEC(DoubleDelta, LZ4),
`id` String CODEC(ZSTD(1)),
`trace_id` String CODEC(ZSTD(1)),
`span_id` String CODEC(ZSTD(1)),
`trace_flags` UInt32,
`severity_text` LowCardinality(String) CODEC(ZSTD(1)),
`severity_number` UInt8,
`body` String CODEC(ZSTD(2)),
`attributes_string` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
`attributes_number` Map(LowCardinality(String), Float64) CODEC(ZSTD(1)),
`attributes_bool` Map(LowCardinality(String), Bool) CODEC(ZSTD(1)),
`resources_string` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
`scope_name` String CODEC(ZSTD(1)),
`scope_version` String CODEC(ZSTD(1)),
`scope_string` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
INDEX body_idx lower(body) TYPE ngrambf_v1(4, 60000, 5, 0) GRANULARITY 1,
INDEX id_minmax id TYPE minmax GRANULARITY 1,
INDEX severity_number_idx severity_number TYPE set(25) GRANULARITY 4,
INDEX severity_text_idx severity_text TYPE set(25) GRANULARITY 4,
INDEX trace_flags_idx trace_flags TYPE bloom_filter GRANULARITY 4,
INDEX scope_name_idx scope_name TYPE tokenbf_v1(10240, 3, 0) GRANULARITY 4,
INDEX attributes_string_idx_key mapKeys(attributes_string) TYPE tokenbf_v1(1024, 2, 0) GRANULARITY 1,
INDEX attributes_string_idx_val mapValues(attributes_string) TYPE ngrambf_v1(4, 5000, 2, 0) GRANULARITY 1,
INDEX attributes_int64_idx_key mapKeys(attributes_number) TYPE tokenbf_v1(1024, 2, 0) GRANULARITY 1,
INDEX attributes_int64_idx_val mapValues(attributes_number) TYPE bloom_filter GRANULARITY 1,
INDEX attributes_bool_idx_key mapKeys(attributes_bool) TYPE tokenbf_v1(1024, 2, 0) GRANULARITY 1
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{uuid}/{shard}', '{replica}')
PARTITION BY toDate(timestamp / 1000000000)
ORDER BY (ts_bucket_start, resource_fingerprint, severity_text, timestamp, id)
TTL toDateTime(timestamp / 1000000000) + toIntervalSecond(1296000)
SETTINGS ttl_only_drop_parts = 1, index_granularity = 8192 │
Replica Distributed Logs:
1. │ CREATE TABLE signoz_logs.distributed_logs
(
`timestamp` UInt64 CODEC(DoubleDelta, LZ4),
`observed_timestamp` UInt64 CODEC(DoubleDelta, LZ4),
`id` String CODEC(ZSTD(1)),
`trace_id` String CODEC(ZSTD(1)),
`span_id` String CODEC(ZSTD(1)),
`trace_flags` UInt32,
`severity_text` LowCardinality(String) CODEC(ZSTD(1)),
`severity_number` UInt8,
`body` String CODEC(ZSTD(2)),
`resources_string_key` Array(String) CODEC(ZSTD(1)),
`resources_string_value` Array(String) CODEC(ZSTD(1)),
`attributes_string_key` Array(String) CODEC(ZSTD(1)),
`attributes_string_value` Array(String) CODEC(ZSTD(1)),
`attributes_int64_key` Array(String) CODEC(ZSTD(1)),
`attributes_int64_value` Array(Int64) CODEC(ZSTD(1)),
`attributes_float64_key` Array(String) CODEC(ZSTD(1)),
`attributes_float64_value` Array(Float64) CODEC(ZSTD(1)),
`attributes_bool_key` Array(String) CODEC(ZSTD(1)),
`attributes_bool_value` Array(Bool) CODEC(ZSTD(1)),
`scope_name` String CODEC(ZSTD(1)),
`scope_version` String CODEC(ZSTD(1)),
`scope_string_key` Array(String) CODEC(ZSTD(1)),
`scope_string_value` Array(String) CODEC(ZSTD(1))
)
ENGINE = Distributed('cluster', 'signoz_logs', 'logs', cityHash64(id)) │
Replica : distributed_logs_v2:
1. │ CREATE TABLE signoz_logs.distributed_logs_v2
(
`ts_bucket_start` UInt64 CODEC(DoubleDelta, LZ4),
`resource_fingerprint` String CODEC(ZSTD(1)),
`timestamp` UInt64 CODEC(DoubleDelta, LZ4),
`observed_timestamp` UInt64 CODEC(DoubleDelta, LZ4),
`id` String CODEC(ZSTD(1)),
`trace_id` String CODEC(ZSTD(1)),
`span_id` String CODEC(ZSTD(1)),
`trace_flags` UInt32,
`severity_text` LowCardinality(String) CODEC(ZSTD(1)),
`severity_number` UInt8,
`body` String CODEC(ZSTD(2)),
`attributes_string` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
`attributes_number` Map(LowCardinality(String), Float64) CODEC(ZSTD(1)),
`attributes_bool` Map(LowCardinality(String), Bool) CODEC(ZSTD(1)),
`resources_string` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
`scope_name` String CODEC(ZSTD(1)),
`scope_version` String CODEC(ZSTD(1)),
`scope_string` Map(LowCardinality(String), String) CODEC(ZSTD(1))
)
ENGINE = Distributed('cluster', 'signoz_logs', 'logs_v2', cityHash64(id)) │
Replica: logs_v2_resource;
1. │ CREATE TABLE signoz_logs.logs_v2_resource
(
`labels` String CODEC(ZSTD(5)),
`fingerprint` String CODEC(ZSTD(1)),
`seen_at_ts_bucket_start` Int64 CODEC(Delta(8), ZSTD(1)),
INDEX idx_labels lower(labels) TYPE ngrambf_v1(4, 1024, 3, 0) GRANULARITY 1
)
ENGINE = ReplicatedReplacingMergeTree('/clickhouse/tables/{uuid}/{shard}', '{replica}')
PARTITION BY toDate(seen_at_ts_bucket_start / 1000)
ORDER BY (labels, fingerprint, seen_at_ts_bucket_start)
TTL (toDateTime(seen_at_ts_bucket_start) + toIntervalSecond(1296000)) + toIntervalSecond(1800)
SETTINGS ttl_only_drop_parts = 1, index_granularity = 8192 │
Replica: distributed_logs_v2_resource:
1. │ CREATE TABLE signoz_logs.distributed_logs_v2_resource
(
`labels` String CODEC(ZSTD(5)),
`fingerprint` String CODEC(ZSTD(1)),
`seen_at_ts_bucket_start` Int64 CODEC(Delta(8), ZSTD(1))
)
ENGINE = Distributed('cluster', 'signoz_logs', 'logs_v2_resource', cityHash64(labels, fingerprint)) │
nitya-signoz
09/13/2024, 8:03 AM
attribute_keys_bool_final_mv
Can you share for this table as well?
Darren Smith
09/13/2024, 8:05 AM
1. │ CREATE MATERIALIZED VIEW signoz_logs.attribute_keys_bool_final_mv TO signoz_logs.logs_attribute_keys
(
`name` LowCardinality(String),
`datatype` String
)
AS SELECT DISTINCT
arrayJoin(mapKeys(attributes_bool)) AS name,
'Bool' AS datatype
FROM signoz_logs.logs_v2
ORDER BY name ASC │
Darren Smith
09/13/2024, 8:05 AM
1. │ CREATE MATERIALIZED VIEW signoz_logs.attribute_keys_bool_final_mv TO signoz_logs.logs_attribute_keys
(
`name` LowCardinality(String),
`datatype` String
)
AS SELECT DISTINCT
arrayJoin(mapKeys(attributes_bool)) AS name,
'Bool' AS datatype
FROM signoz_logs.logs_v2
ORDER BY name ASC │
Darren Smith
09/13/2024, 8:08 AM
nitya-signoz
09/13/2024, 8:14 AM
nitya-signoz
09/13/2024, 8:14 AM
Darren Smith
09/13/2024, 8:15 AM
Note: currently, whilst we have a second box, the query service and exporter datasources are only pointing to box 1, as is the migration service.
I was going to ask you about the best practices for this, as I couldn't find them - i.e. do I add an NLB and just round-robin? Or fail over? What's best in your opinion with your stack?
Darren Smith
09/13/2024, 8:18 AM
Darren Smith
09/13/2024, 8:26 AM
$ ./otel_gen.sh -t log -c 1 -s abc -m "abc test"
Creating 1 LOG event(s)...
Sending to endpoint: https://ingest.signoz.observationdeck.io
Service name: abc
Successfully sent LOG event. Response: {"partialSuccess":{}}
This resulted in the collector (reduced spam):
LogsExporter {"kind": "exporter", "data_type": "logs", "name": "logging", "resource logs": 1, "log records": 1}
Resource attributes:
collector_service
-> service.name: Str(abc)
LogRecord #0
ObservedTimestamp: 2024-09-13 08:20:25.589427423 +0000 UTC
collector_service
Body: Str(abc test)
.....
So it can see the 1 log event okay.
Your exporter then says:
exporterhelper/retry_sender.go:118 Exporting failed. Will retry the request after interval. {"kind": "exporter", "data_type": "logs", "name": "clickhouselogsexporter", "error": "StatementSend:code: 10, message: Not found column attributes_bool.keys in block. There are only columns: ts_bucket_start, resource_fingerprint, timestamp, observed_timestamp, id, trace_id, span_id, trace_flags, severity_text, severity_number, body, attributes_string, attributes_number, attributes_bool, resources_string, scope_name, scope_version, scope_string: while pushing to view signoz_logs.attribute_keys_bool_final_mv (211bdbc2-196d-43ad-9f49-a31b13627556)", "interval": "5.551237308s"}
And then:
exporterhelper/retry_sender.go:118 Exporting failed. Will retry the request after interval. {"kind": "exporter", "data_type": "logs", "name": "clickhouselogsexporter", "error": "StatementSend:code: 10, message: Not found column attributes_bool.keys in block. There are only columns: ts_bucket_start, resource_fingerprint, timestamp, observed_timestamp, id, trace_id, span_id, trace_flags, severity_text, severity_number, body, attributes_string, attributes_number, attributes_bool, resources_string, scope_name, scope_version, scope_string: while pushing to view signoz_logs.attribute_keys_bool_final_mv (211bdbc2-196d-43ad-9f49-a31b13627556)", "interval": "5.459340989s"}
Then after a minute or two:
2024-09-13T08:25:01.856Z error exporterhelper/queue_sender.go:101 Exporting failed. Dropping data. {"kind": "exporter", "data_type": "logs", "name": "clickhouselogsexporter", "error": "no more retries left: StatementSend:code: 10, message: Not found column attributes_bool.keys in block. There are only columns: ts_bucket_start, resource_fingerprint, timestamp, observed_timestamp, id, trace_id, span_id, trace_flags, severity_text, severity_number, body, attributes_string, attributes_number, attributes_bool, resources_string, scope_name, scope_version, scope_string: while pushing to view signoz_logs.attribute_keys_bool_final_mv (211bdbc2-196d-43ad-9f49-a31b13627556)", "dropped_items": 1}
go.opentelemetry.io/collector/exporter/exporterhelper.newQueueSender.func1
	/home/runner/go/pkg/mod/go.opentelemetry.io/collector/exporter@v0.102.0/exporterhelper/queue_sender.go:101
go.opentelemetry.io/collector/exporter/internal/queue.(*boundedMemoryQueue[...]).Consume
	/home/runner/go/pkg/mod/go.opentelemetry.io/collector/exporter@v0.102.0/internal/queue/bounded_memory_queue.go:52
go.opentelemetry.io/collector/exporter/internal/queue.(*Consumers[...]).Start.func1
	/home/runner/go/pkg/mod/go.opentelemetry.io/collector/exporter@v0.102.0/internal/queue/consumers.go:43
Darren Smith
09/13/2024, 8:27 AM
Darren Smith
09/13/2024, 8:27 AM
Darren Smith
09/13/2024, 8:30 AM
SELECT * FROM distributed_logs_v2 WHERE body = 'abc test'
┌─ts_bucket_start─┬─resource_fingerprint──────────────────────┬───────────timestamp─┬──observed_timestamp─┬─id──────────────────────────┬─trace_id─┬─span_id─┬─trace_flags─┬─severity_text─┬─severity_number─┬─body─────┬─attributes_string──────────┬─attributes_number─┬─attributes_bool─┬─resources_string───────┬─scope_name─┬─scope_version─┬─scope_string─┐
1. │ 1726214400 │ service.name=abc;hash=8639156975360962647 │ 1726215625479421772 │ 1726215625589427423 │ 2m0Zo2JomAkYq8qRkyGpANRjpKP │ │ │ 0 │ INFO │ 0 │ abc test │ {'event.domain':'example'} │ {} │ {} │ {'service.name':'abc'} │ │ │ {} │
2. │ 1726214400 │ service.name=abc;hash=8639156975360962647 │ 1726215625479421772 │ 1726215625589427423 │ 2m0Zo2QNZb1BH8oVS6YB78wiLWX │ │ │ 0 │ INFO │ 0 │ abc test │ {'event.domain':'example'} │ {} │ {} │ {'service.name':'abc'} │ │ │ {} │
3. │ 1726214400 │ service.name=abc;hash=8639156975360962647 │ 1726215625479421772 │ 1726215625589427423 │ 2m0Zo34m6taZGn1U6U0V0xKXUQa │ │ │ 0 │ INFO │ 0 │ abc test │ {'event.domain':'example'} │ {} │ {} │ {'service.name':'abc'} │ │ │ {} │
4. │ 1726214400 │ service.name=abc;hash=8639156975360962647 │ 1726215625479421772 │ 1726215625589427423 │ 2m0Zo3FHkbhTV5hk3SU7pgKtgy1 │ │ │ 0 │ INFO │ 0 │ abc test │ {'event.domain':'example'} │ {} │ {} │ {'service.name':'abc'} │ │ │ {} │
5. │ 1726214400 │ service.name=abc;hash=8639156975360962647 │ 1726215625479421772 │ 1726215625589427423 │ 2m0Zo49rokpXNSlknbyPfJYLe4R │ │ │ 0 │ INFO │ 0 │ abc test │ {'event.domain':'example'} │ {} │ {} │ {'service.name':'abc'} │ │ │ {} │
6. │ 1726214400 │ service.name=abc;hash=8639156975360962647 │ 1726215625479421772 │ 1726215625589427423 │ 2m0Zo5GbpyEMZltQUK5hnVTCsuz │ │ │ 0 │ INFO │ 0 │ abc test │ {'event.domain':'example'} │ {} │ {} │ {'service.name':'abc'} │ │ │ {} │
7. │ 1726214400 │ service.name=abc;hash=8639156975360962647 │ 1726215625479421772 │ 1726215625589427423 │ 2m0Zo6Ska2c4N6uNaQ13kumCIwk │ │ │ 0 │ INFO │ 0 │ abc test │ {'event.domain':'example'} │ {} │ {} │ {'service.name':'abc'} │ │ │ {} │
8. │ 1726214400 │ service.name=abc;hash=8639156975360962647 │ 1726215625479421772 │ 1726215625589427423 │ 2m0Zo7jWcRZkSY5t3v0K34EQq0q │ │ │ 0 │ INFO │ 0 │ abc test │ {'event.domain':'example'} │ {} │ {} │ {'service.name':'abc'} │ │ │ {} │
9. │ 1726214400 │ service.name=abc;hash=8639156975360962647 │ 1726215625479421772 │ 1726215625589427423 │ 2m0Zo98hUhWprwO2YgIuaQLX8On │ │ │ 0 │ INFO │ 0 │ abc test │ {'event.domain':'example'} │ {} │ {} │ {'service.name':'abc'} │ │ │ {} │
10. │ 1726214400 │ service.name=abc;hash=8639156975360962647 │ 1726215625479421772 │ 1726215625589427423 │ 2m0Zo9DgZ3XVLicqXIT0iPTZ90x │ │ │ 0 │ INFO │ 0 │ abc test │ {'event.domain':'example'} │ {} │ {} │ {'service.name':'abc'} │ │ │ {} │
11. │ 1726214400 │ service.name=abc;hash=8639156975360962647 │ 1726215625479421772 │ 1726215625589427423 │ 2m0Zo9o8BX1EW3RjkRsLwDp3lJm │ │ │ 0 │ INFO │ 0 │ abc test │ {'event.domain':'example'} │ {} │ {} │ {'service.name':'abc'} │ │ │ {} │
12. │ 1726214400 │ service.name=abc;hash=8639156975360962647 │ 1726215625479421772 │ 1726215625589427423 │ 2m0Zo9xbIKoMr7kPaPUOQl9yAG5 │ │ │ 0 │ INFO │ 0 │ abc test │ {'event.domain':'example'} │ {} │ {} │ {'service.name':'abc'} │ │ │ {} │
└─────────────────┴───────────────────────────────────────────┴─────────────────────┴─────────────────────┴─────────────────────────────┴──────────┴─────────┴─────────────┴───────────────┴─────────────────┴──────────┴────────────────────────────┴───────────────────┴─────────────────┴────────────────────────┴────────────┴───────────────┴──────────────┘
12 rows in set. Elapsed: 0.014 sec. Processed 11.04 thousand rows, 4.51 MB (804.42 thousand rows/s., 328.47 MB/s.)
Peak memory usage: 4.23 MiB.
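(Aside: the twelve rows are consistent with the exporter's retry loop - each attempt appears to have persisted to the base table with a fresh id before the materialized-view push failed the batch. A sketch to confirm, counting copies per identical timestamp/body pair:)

```sql
-- Each retry inserted another copy of the same record, so copies pile up.
SELECT timestamp, body, count() AS copies
FROM signoz_logs.distributed_logs_v2
WHERE body = 'abc test'
GROUP BY timestamp, body
```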
so 1 log generated 12 rows :S
nitya-signoz
09/13/2024, 8:32 AM
Darren Smith
09/13/2024, 8:36 AM
2024.09.13 08:24:17.296544 [ 516027 ] {2ca0f204-c24b-4340-ad76-345ab6e6a8e2} <Error> executeQuery: Code: 10. DB::Exception: Not found column attributes_bool.keys in block. There are only columns: ts_bucket_start, resource_fingerprint, timestamp, observed_timestamp, id, trace_id, span_id, trace_flags, severity_text, severity_number, body, attributes_string, attributes_number, attributes_bool, resources_string, scope_name, scope_version, scope_string: while pushing to view signoz_logs.attribute_keys_bool_final_mv (211bdbc2-196d-43ad-9f49-a31b13627556). (NOT_FOUND_COLUMN_IN_BLOCK) (version 24.8.2.3 (official build)) (from 192.168.2.50:47028) (in query: INSERT INTO signoz_logs.distributed_logs_v2 ( ts_bucket_start, resource_fingerprint, timestamp, observed_timestamp, id, trace_id, span_id, trace_flags, severity_text, severity_number, body, attributes_string, attributes_number, attributes_bool, resources_string, scope_name, scope_version, scope_string ) VALUES), Stack trace (when copying this message, always include the lines below):
Darren Smith
09/13/2024, 8:45 AM
nitya-signoz
09/13/2024, 8:50 AM
SELECT
database,
table,
error_count,
last_exception,
data_files,
data_compressed_bytes
FROM system.distribution_queue
WHERE error_count > 0
Darren Smith
09/13/2024, 8:50 AM
Darren Smith
09/13/2024, 8:52 AM
nitya-signoz
09/13/2024, 8:53 AM
Darren Smith
09/13/2024, 8:53 AM
clickhouse-server --version
ClickHouse server version 24.8.2.3 (official build).
Darren Smith
09/13/2024, 8:54 AM
Darren Smith
09/13/2024, 8:55 AM
nitya-signoz
09/13/2024, 8:56 AM
Darren Smith
09/13/2024, 8:56 AM
Darren Smith
09/13/2024, 8:56 AM
Darren Smith
09/13/2024, 8:57 AM
Darren Smith
09/13/2024, 8:57 AM
nitya-signoz
09/13/2024, 8:58 AM
nitya-signoz
09/13/2024, 8:59 AM
nitya-signoz
09/13/2024, 8:59 AM
signoz_logs.attribute_keys_bool_final_mv
on both replicas and see if data is ingested properly after that.
Darren Smith
09/13/2024, 9:00 AM
nitya-signoz
09/13/2024, 9:01 AM
Darren Smith
09/13/2024, 9:03 AM
replica 1: DROP TABLE IF EXISTS signoz_logs.attribute_keys_bool_final_mv;
replica 2: DROP TABLE IF EXISTS signoz_logs.attribute_keys_bool_final_mv;
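(Aside: once ingestion is confirmed healthy, the dropped view can be recreated from the definition captured earlier in the thread. A sketch, assuming the cluster name `cluster` from the remote_servers config - alternatively run it on each replica without the ON CLUSTER clause:)

```sql
CREATE MATERIALIZED VIEW IF NOT EXISTS signoz_logs.attribute_keys_bool_final_mv
ON CLUSTER cluster
TO signoz_logs.logs_attribute_keys
AS
SELECT DISTINCT
    arrayJoin(mapKeys(attributes_bool)) AS name,
    'Bool' AS datatype
FROM signoz_logs.logs_v2
ORDER BY name ASC
```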
I did that and tried sending a log, to see what happens without recreating it. Interestingly, it's now moaning about:
2024-09-13T09:02:48.666Z info exporterhelper/retry_sender.go:118 Exporting failed. Will retry the request after interval. {"kind": "exporter", "data_type": "logs", "name": "clickhouselogsexporter", "error": "StatementSend:code: 10, message: Not found column attributes_string.keys in block. There are only columns: ts_bucket_start, resource_fingerprint, timestamp, observed_timestamp, id, trace_id, span_id, trace_flags, severity_text, severity_number, body, attributes_string, attributes_number, attributes_bool, resources_string, scope_name, scope_version, scope_string: while pushing to view signoz_logs.attribute_keys_string_final_mv (44688cda-629a-4b09-a3d1-a564774a84ce)", "interval": "37.013097477s"}
Darren Smith
09/13/2024, 9:04 AM
nitya-signoz
09/13/2024, 9:06 AM
attribute_keys_string_final_mv
attribute_keys_float64_final_mv
these won’t lead to data loss
Darren Smith
09/13/2024, 9:07 AM
Not found column attributes_bool.keys in block. There are only columns: ts_bucket_start, resource_fingerprint, timestamp, observed_timestamp, id, trace_id, span_id, trace_flags, severity_text, severity_number, body, attributes_string, attributes_number, attributes_bool, resources_string, scope_name, scope_version, scope_string: while pushing to view signoz_logs.attribute_keys_bool_final_mv (96565ac5-0c8c-42d8-86fc-8fcce7355841)", "interval": "6.130953205s"}
Interesting. I'll export the schema and delete all 3 now, then.
Darren Smith
09/13/2024, 9:11 AM
2024-09-13T09:10:57.302Z info exporterhelper/retry_sender.go:118 Exporting failed. Will retry the request after interval. {"kind": "exporter", "data_type": "logs", "name": "clickhouselogsexporter", "error": "StatementSend:code: 10, message: Not found column resources_string.keys in block. There are only columns: ts_bucket_start, resource_fingerprint, timestamp, observed_timestamp, id, trace_id, span_id, trace_flags, severity_text, severity_number, body, attributes_string, attributes_number, attributes_bool, resources_string, scope_name, scope_version, scope_string: while pushing to view signoz_logs.resource_keys_string_final_mv (d4857f49-db69-4684-9886-68f75be959e3)", "interval": "6.535330254s"}
Darren Smith
09/13/2024, 9:11 AM
Srikanth Chekuri
09/13/2024, 9:12 AM
nitya-signoz
09/13/2024, 9:14 AM
Darren Smith
09/13/2024, 9:14 AM
Darren Smith
09/13/2024, 9:15 AM
Darren Smith
09/13/2024, 9:18 AM
nitya-signoz
09/13/2024, 9:18 AM
Darren Smith
09/13/2024, 9:18 AM
Srikanth Chekuri
09/13/2024, 9:21 AM
Darren Smith
09/13/2024, 9:21 AM
Darren Smith
09/14/2024, 10:30 AM
SigNoz is an open-source APM. It helps developers monitor their applications & troubleshoot problems, an open-source alternative to DataDog, NewRelic, etc.