# support
m
Hello team, I hope you are doing well. I need your support on this: I'm unable to filter logs based on service.name, and I'm getting this error:
Aw snap :/ Something went wrong. Please try again or contact support.
Looking at the ClickHouse logs, I can see 4 errors, all with the same issue:
Code: 47. DB::Exception: Missing columns: 'resource_string_service$$name' while processing query
I'm using SigNoz 0.54.0 deployed with Helm chart 0.52.0. The full stack trace is in the thread. Thank you in advance.
chi-signoz-clickhouse-cluster-0-0-0 clickhouse {"date_time":"1729501352.090124","thread_name":"TCPServerConnection ([#66])","thread_id":"1011","level":"Error","query_id":"6c9f1d21-7389-4d2c-9d31-935d999a9887","logger_name":"TCPHandler","message":"Code: 47. DB::Exception: Missing columns: 'resource_string_service$$name' while processing query: 'SELECT toStartOfInterval(fromUnixTimestamp64Nano(timestamp), toIntervalSecond(240)) AS ts, toFloat64(count()) AS value FROM signoz_logs.distributed_logs WHERE ((timestamp >= 1729414951000000000) AND (timestamp <= 1729501351000000000)) AND (`resource_string_service$$name` IN ('my-service')) GROUP BY ts ORDER BY value DESC', required columns: 'timestamp' 'resource_string_service$$name', maybe you meant: 'timestamp'. (UNKNOWN_IDENTIFIER), Stack trace (when copying this message, always include the lines below):\n\n0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000c800f1b in \/usr\/bin\/clickhouse\n1. DB::Exception::Exception(PreformattedMessage&&, int) @ 0x0000000007305f8c in \/usr\/bin\/clickhouse\n2. DB::TreeRewriterResult::collectUsedColumns(std::shared_ptr<DB::IAST> const&, bool, bool) @ 0x0000000011855580 in \/usr\/bin\/clickhouse\n3. DB::TreeRewriter::analyzeSelect(std::shared_ptr<DB::IAST>&, DB::TreeRewriterResult&&, DB::SelectQueryOptions const&, std::vector<DB::TableWithColumnNamesAndTypes, std::allocator<DB::TableWithColumnNamesAndTypes>> const&, std::vector<String, std::allocator<String>> const&, std::shared_ptr<DB::TableJoin>) const @ 0x000000001185b6db in \/usr\/bin\/clickhouse\n4. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::shared_ptr<DB::IAST> const&, std::shared_ptr<DB::Context> const&, std::optional<DB::Pipe>, std::shared_ptr<DB::IStorage> const&, DB::SelectQueryOptions const&, std::vector<String, std::allocator<String>> const&, std::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::shared_ptr<DB::PreparedSets>)::$_0::operator()(bool) const @ 0x00000000114dccc5 in \/usr\/bin\/clickhouse\n5. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::shared_ptr<DB::IAST> const&, std::shared_ptr<DB::Context> const&, std::optional<DB::Pipe>, std::shared_ptr<DB::IStorage> const&, DB::SelectQueryOptions const&, std::vector<String, std::allocator<String>> const&, std::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::shared_ptr<DB::PreparedSets>) @ 0x00000000114d2f2c in \/usr\/bin\/clickhouse\n6. DB::InterpreterSelectWithUnionQuery::InterpreterSelectWithUnionQuery(std::shared_ptr<DB::IAST> const&, std::shared_ptr<DB::Context>, DB::SelectQueryOptions const&, std::vector<String, std::allocator<String>> const&) @ 0x0000000011591418 in \/usr\/bin\/clickhouse\n7. std::unique_ptr<DB::IInterpreter, std::default_delete<DB::IInterpreter>> std::__function::__policy_invoker<std::unique_ptr<DB::IInterpreter, std::default_delete<DB::IInterpreter>> (DB::InterpreterFactory::Arguments const&)>::__call_impl<std::__function::__default_alloc_func<DB::registerInterpreterSelectWithUnionQuery(DB::InterpreterFactory&)::$_0, std::unique_ptr<DB::IInterpreter, std::default_delete<DB::IInterpreter>> (DB::InterpreterFactory::Arguments const&)>>(std::__function::__policy_storage const*, DB::InterpreterFactory::Arguments const&) (.llvm.6985886846813747729) @ 0x0000000011597817 in \/usr\/bin\/clickhouse\n8. DB::InterpreterFactory::get(std::shared_ptr<DB::IAST>&, std::shared_ptr<DB::Context>, DB::SelectQueryOptions const&) @ 0x00000000114b8ef9 in \/usr\/bin\/clickhouse\n9. 
DB::executeQueryImpl(char const*, char const*, std::shared_ptr<DB::Context>, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x0000000011902a16 in \/usr\/bin\/clickhouse\n10. DB::executeQuery(String const&, std::shared_ptr<DB::Context>, DB::QueryFlags, DB::QueryProcessingStage::Enum) @ 0x00000000118ff77a in \/usr\/bin\/clickhouse\n11. DB::TCPHandler::runImpl() @ 0x000000001291be29 in \/usr\/bin\/clickhouse\n12. DB::TCPHandler::run() @ 0x0000000012933eb9 in \/usr\/bin\/clickhouse\n13. Poco::Net::TCPServerConnection::start() @ 0x00000000153a5a72 in \/usr\/bin\/clickhouse\n14. Poco::Net::TCPServerDispatcher::run() @ 0x00000000153a6871 in \/usr\/bin\/clickhouse\n15. Poco::PooledThread::run() @ 0x000000001549f047 in \/usr\/bin\/clickhouse\n16. Poco::ThreadImpl::runnableEntry(void*) @ 0x000000001549d67d in \/usr\/bin\/clickhouse\n17. ? @ 0x00007bda79e29609\n18. ? @ 0x00007bda79d4e353\n","source_file":"src\/Server\/TCPHandler.cpp; void DB::TCPHandler::runImpl()","source_line":"686"}
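For reference, whether the materialized column actually exists can be checked straight from ClickHouse; a minimal sketch, assuming the default signoz_logs schema created by the chart:

```sql
-- Sketch: list any service.name materialized columns on the log tables.
-- If the column appears for `logs` but not for `distributed_logs` (or is missing
-- entirely), queries filtering on service.name as a selected field will fail.
SELECT table, name, type, default_expression
FROM system.columns
WHERE database = 'signoz_logs'
  AND name LIKE 'resource_string_service%';
```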
v
@nitya-signoz, can you please take a look? Thanks!
n
Can you go to the old explorer page and check if service.name is a selected field?
Also, are you using multiple shards of ClickHouse?
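The shards question matters because the column behind a selected field has to exist both in the Distributed table definition and in the local logs table on every shard. A hedged sketch of a cluster-wide check, assuming the cluster name 'cluster' used by the chart:

```sql
-- Sketch (only relevant with multiple shards/replicas): confirm every host has the column.
SELECT hostName() AS host, count() AS matching_columns
FROM clusterAllReplicas('cluster', system.columns)
WHERE database = 'signoz_logs'
  AND table = 'logs'
  AND name = 'resource_string_service$$name'
GROUP BY host;
```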
m
Yes, it is part of the selected fields. No, I'm using a single instance of ClickHouse, no shards, no replicas.
n
Can you exec into the ClickHouse container?
clickhouse client

show create table signoz_logs.logs;
show create table signoz_logs.distributed_logs;
Can you run the above SQL queries and share the result?
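If copying the interactive output is awkward, the same DDL can also be pulled from system.tables; a small sketch:

```sql
-- Sketch: fetch the CREATE statements for both log tables in one query.
SELECT name, create_table_query
FROM system.tables
WHERE database = 'signoz_logs'
  AND name IN ('logs', 'distributed_logs')
FORMAT Vertical;
```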
m
SHOW CREATE TABLE signoz_logs.distributed_logs

Query id: a2d3e742-70fa-468d-8ffa-7202dfe331ab

┌─statement──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ CREATE TABLE signoz_logs.distributed_logs
(
    `timestamp` UInt64 CODEC(DoubleDelta, LZ4),
    `observed_timestamp` UInt64 CODEC(DoubleDelta, LZ4),
    `id` String CODEC(ZSTD(1)),
    `trace_id` String CODEC(ZSTD(1)),
    `span_id` String CODEC(ZSTD(1)),
    `trace_flags` UInt32,
    `severity_text` LowCardinality(String) CODEC(ZSTD(1)),
    `severity_number` UInt8,
    `body` String CODEC(ZSTD(2)),
    `resources_string_key` Array(String) CODEC(ZSTD(1)),
    `resources_string_value` Array(String) CODEC(ZSTD(1)),
    `attributes_string_key` Array(String) CODEC(ZSTD(1)),
    `attributes_string_value` Array(String) CODEC(ZSTD(1)),
    `attributes_int64_key` Array(String) CODEC(ZSTD(1)),
    `attributes_int64_value` Array(Int64) CODEC(ZSTD(1)),
    `attributes_float64_key` Array(String) CODEC(ZSTD(1)),
    `attributes_float64_value` Array(Float64) CODEC(ZSTD(1)),
    `attributes_bool_key` Array(String) CODEC(ZSTD(1)),
    `attributes_bool_value` Array(Bool) CODEC(ZSTD(1)),
    `scope_name` String CODEC(ZSTD(1)),
    `scope_version` String CODEC(ZSTD(1)),
    `scope_string_key` Array(String) CODEC(ZSTD(1)),
    `scope_string_value` Array(String) CODEC(ZSTD(1))
)
ENGINE = Distributed('cluster', 'signoz_logs', 'logs', cityHash64(id)) │
└────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘

1 row in set. Elapsed: 0.002 sec.
SHOW CREATE TABLE signoz_logs.logs

Query id: cbe09c16-604e-40ba-a066-60fbca12415d

┌─statement──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ CREATE TABLE signoz_logs.logs
(
    `timestamp` UInt64 CODEC(DoubleDelta, LZ4),
    `observed_timestamp` UInt64 CODEC(DoubleDelta, LZ4),
    `id` String CODEC(ZSTD(1)),
    `trace_id` String CODEC(ZSTD(1)),
    `span_id` String CODEC(ZSTD(1)),
    `trace_flags` UInt32,
    `severity_text` LowCardinality(String) CODEC(ZSTD(1)),
    `severity_number` UInt8,
    `body` String CODEC(ZSTD(2)),
    `resources_string_key` Array(String) CODEC(ZSTD(1)),
    `resources_string_value` Array(String) CODEC(ZSTD(1)),
    `attributes_string_key` Array(String) CODEC(ZSTD(1)),
    `attributes_string_value` Array(String) CODEC(ZSTD(1)),
    `attributes_int64_key` Array(String) CODEC(ZSTD(1)),
    `attributes_int64_value` Array(Int64) CODEC(ZSTD(1)),
    `attributes_float64_key` Array(String) CODEC(ZSTD(1)),
    `attributes_float64_value` Array(Float64) CODEC(ZSTD(1)),
    `attributes_bool_key` Array(String) CODEC(ZSTD(1)),
    `attributes_bool_value` Array(Bool) CODEC(ZSTD(1)),
    `scope_name` String CODEC(ZSTD(1)),
    `scope_version` String CODEC(ZSTD(1)),
    `scope_string_key` Array(String) CODEC(ZSTD(1)),
    `scope_string_value` Array(String) CODEC(ZSTD(1)),
    `resource_string_service$$name` String DEFAULT resources_string_value[indexOf(resources_string_key, 'service.name')] CODEC(ZSTD(1)),
    INDEX body_idx body TYPE ngrambf_v1(4, 60000, 5, 0) GRANULARITY 1,
    INDEX id_minmax id TYPE minmax GRANULARITY 1,
    INDEX severity_number_idx severity_number TYPE set(25) GRANULARITY 4,
    INDEX severity_text_idx severity_text TYPE set(25) GRANULARITY 4,
    INDEX trace_flags_idx trace_flags TYPE bloom_filter GRANULARITY 4,
    INDEX scope_name_idx scope_name TYPE tokenbf_v1(10240, 3, 0) GRANULARITY 4
)
ENGINE = MergeTree
PARTITION BY toDate(timestamp / 1000000000)
ORDER BY (timestamp, id)
TTL toDateTime(timestamp / 1000000000) + toIntervalSecond(432000)
SETTINGS index_granularity = 8192, ttl_only_drop_parts = 1 │
└────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘

1 row in set. Elapsed: 0.002 sec.
n
Seems like the selected field was not added properly. Can you run:
alter table signoz_logs.logs drop column `resource_string_service$$name`
Now you can go to the old explorer and convert it to a selected field again.
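For reference, the UI conversion appears to create roughly the following objects on the local table, judging by the CREATE TABLE output shared further down in this thread. This is a sketch only; the UI path is preferred, since it presumably also updates signoz_logs.distributed_logs:

```sql
-- Sketch, reconstructed from the table definition shown later in the thread:
ALTER TABLE signoz_logs.logs
    ADD COLUMN `resource_string_service$$name` String
        DEFAULT resources_string_value[indexOf(resources_string_key, 'service.name')] CODEC(ZSTD(1)),
    ADD COLUMN `resource_string_service$$name_exists` Bool
        DEFAULT if(indexOf(resources_string_key, 'service.name') != 0, true, false) CODEC(ZSTD(1)),
    ADD INDEX `resource_string_service$$name_idx` `resource_string_service$$name`
        TYPE bloom_filter(0.01) GRANULARITY 64;
```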
m
I did it, here's the table now:
SHOW CREATE TABLE signoz_logs.logs

Query id: 3004f3c9-23f3-4aa3-8646-d3eebe9e488f

┌─statement──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ CREATE TABLE signoz_logs.logs
(
    `timestamp` UInt64 CODEC(DoubleDelta, LZ4),
    `observed_timestamp` UInt64 CODEC(DoubleDelta, LZ4),
    `id` String CODEC(ZSTD(1)),
    `trace_id` String CODEC(ZSTD(1)),
    `span_id` String CODEC(ZSTD(1)),
    `trace_flags` UInt32,
    `severity_text` LowCardinality(String) CODEC(ZSTD(1)),
    `severity_number` UInt8,
    `body` String CODEC(ZSTD(2)),
    `resources_string_key` Array(String) CODEC(ZSTD(1)),
    `resources_string_value` Array(String) CODEC(ZSTD(1)),
    `attributes_string_key` Array(String) CODEC(ZSTD(1)),
    `attributes_string_value` Array(String) CODEC(ZSTD(1)),
    `attributes_int64_key` Array(String) CODEC(ZSTD(1)),
    `attributes_int64_value` Array(Int64) CODEC(ZSTD(1)),
    `attributes_float64_key` Array(String) CODEC(ZSTD(1)),
    `attributes_float64_value` Array(Float64) CODEC(ZSTD(1)),
    `attributes_bool_key` Array(String) CODEC(ZSTD(1)),
    `attributes_bool_value` Array(Bool) CODEC(ZSTD(1)),
    `scope_name` String CODEC(ZSTD(1)),
    `scope_version` String CODEC(ZSTD(1)),
    `scope_string_key` Array(String) CODEC(ZSTD(1)),
    `scope_string_value` Array(String) CODEC(ZSTD(1)),
    `resource_string_service$$name` String DEFAULT resources_string_value[indexOf(resources_string_key, 'service.name')] CODEC(ZSTD(1)),
    `resource_string_service$$name_exists` Bool DEFAULT if(indexOf(resources_string_key, 'service.name') != 0, true, false) CODEC(ZSTD(1)),
    INDEX body_idx body TYPE ngrambf_v1(4, 60000, 5, 0) GRANULARITY 1,
    INDEX id_minmax id TYPE minmax GRANULARITY 1,
    INDEX severity_number_idx severity_number TYPE set(25) GRANULARITY 4,
    INDEX severity_text_idx severity_text TYPE set(25) GRANULARITY 4,
    INDEX trace_flags_idx trace_flags TYPE bloom_filter GRANULARITY 4,
    INDEX scope_name_idx scope_name TYPE tokenbf_v1(10240, 3, 0) GRANULARITY 4,
    INDEX `resource_string_service$$name_idx` `resource_string_service$$name` TYPE bloom_filter(0.01) GRANULARITY 64
)
ENGINE = MergeTree
PARTITION BY toDate(timestamp / 1000000000)
ORDER BY (timestamp, id)
TTL toDateTime(timestamp / 1000000000) + toIntervalSecond(432000)
SETTINGS index_granularity = 8192, ttl_only_drop_parts = 1 │
└────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘

1 row in set. Elapsed: 0.001 sec.
n
Perfect, the query should work now.
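To double-check from ClickHouse itself, the aggregation that originally failed can be re-run; a sketch based on the query in the error above (time-range predicates dropped for brevity, 'my-service' taken from the original error):

```sql
-- Sketch: re-run the failing aggregation against the distributed table to confirm the fix.
SELECT toStartOfInterval(fromUnixTimestamp64Nano(timestamp), toIntervalSecond(240)) AS ts,
       toFloat64(count()) AS value
FROM signoz_logs.distributed_logs
WHERE `resource_string_service$$name` IN ('my-service')
GROUP BY ts
ORDER BY value DESC;
```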
m
I can also confirm that filters are now working in search \o/ Thank you!