# general
s
@Vishal Sharma @Prashant Shahi Any idea about the below error that we are getting in the ClickHouse pods? The pods keep restarting. I understand it is related to insufficient memory.
{"date_time":"1736745070.040729","thread_name":"","thread_id":"580","level":"Error","query_id":"","logger_name":"MergeTreeBackgroundExecutor","message":"Exception while executing background task {7165cb86-e3b3-4832-9778-cf6a91b2273c::20250111_18913_18971_1}: Code: 241. DB::Exception: Memory limit (total) exceeded: would use 6.83 GiB (attempt to allocate chunk of 4224032 bytes), maximum: 6.79 GiB. OvercommitTracker decision: Memory overcommit isn't used. Waiting time or overcommit denominator are set to zero.: (while reading column scope_string): (while reading from part \/var\/lib\/clickhouse\/store\/716\/7165cb86-e3b3-4832-9778-cf6a91b2273c\/20250111_18931_18931_0\/ in table signoz_logs.logs_v2 (7165cb86-e3b3-4832-9778-cf6a91b2273c) located on disk default of type local, from mark 0 with max_rows_to_read = 5888): While executing MergeTreeSequentialSource. (MEMORY_LIMIT_EXCEEDED), Stack trace (when copying this message, always include the lines below):\n\n0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000c800f1b in \/usr\/bin\/clickhouse\n1. DB::Exception::Exception<char const*, char const*, String, long&, String, char const*, std::basic_string_view<char, std::char_traits<char>>>(int, FormatStringHelperImpl<std::type_identity<char const*>::type, std::type_identity<char const*>::type, std::type_identity<String>::type, std::type_identity<long&>::type, std::type_identity<String>::type, std::type_identity<char const*>::type, std::type_identity<std::basic_string_view<char, std::char_traits<char>>>::type>, char const*&&, char const*&&, String&&, long&, String&&, char const*&&, std::basic_string_view<char, std::char_traits<char>>&&) @ 0x000000000c816d0a in \/usr\/bin\/clickhouse\n2. MemoryTracker::allocImpl(long, bool, MemoryTracker*, double) @ 0x000000000c816948 in \/usr\/bin\/clickhouse\n3. MemoryTracker::allocImpl(long, bool, MemoryTracker*, double) @ 0x000000000c816389 in \/usr\/bin\/clickhouse\n4. MemoryTracker::allocImpl(long, bool, MemoryTracker*, double) @ 0x000000000c816389 in \/usr\/bin\/clickhouse\n5. MemoryTracker::allocImpl(long, bool, MemoryTracker*, double) @ 0x000000000c816389 in \/usr\/bin\/clickhouse\n6. Allocator<false, false>::alloc(unsigned long, unsigned long) @ 0x000000000c7d560d in \/usr\/bin\/clickhouse\n7. void DB::PODArrayBase<8ul, 4096ul, Allocator<false, false>, 63ul, 64ul>::resize<>(unsigned long) @ 0x0000000007220183 in \/usr\/bin\/clickhouse\n8. DB::SerializationArray::deserializeBinaryBulkWithMultipleStreams(COW<DB::IColumn>::immutable_ptr<DB::IColumn>&, unsigned long, DB::ISerialization::DeserializeBinaryBulkSettings&, std::shared_ptr<DB::ISerialization::DeserializeBinaryBulkState>&, std::unordered_map<String, COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::hash<String>, std::equal_to<String>, std::allocator<std::pair<String const, COW<DB::IColumn>::immutable_ptr<DB::IColumn>>>>*) const @ 0x00000000108b9585 in \/usr\/bin\/clickhouse\n9. DB::MergeTreeReaderWide::readRows(unsigned long, unsigned long, bool, unsigned long, std::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn>>>&) @ 0x000000001251ab34 in \/usr\/bin\/clickhouse\n10. DB::MergeTreeSequentialSource::generate() @ 0x000000001251ca4d in \/usr\/bin\/clickhouse\n11. DB::ISource::tryGenerate() @ 0x000000001297acf5 in \/usr\/bin\/clickhouse\n12. DB::ISource::work() @ 0x000000001297a743 in \/usr\/bin\/clickhouse\n13. DB::ExecutionThreadContext::executeTask() @ 0x000000001299371a in \/usr\/bin\/clickhouse\n14. DB::PipelineExecutor::executeStepImpl(unsigned long, std::atomic<bool>*) @ 0x000000001298a170 in \/usr\/bin\/clickhouse\n15. DB::PipelineExecutor::executeStep(std::atomic<bool>*) @ 0x0000000012989928 in \/usr\/bin\/clickhouse\n16. DB::PullingPipelineExecutor::pull(DB::Chunk&) @ 0x0000000012998017 in \/usr\/bin\/clickhouse\n17. DB::PullingPipelineExecutor::pull(DB::Block&) @ 0x00000000129981d3 in \/usr\/bin\/clickhouse\n18. DB::MergeTask::ExecuteAndFinalizeHorizontalPart::executeImpl() @ 0x000000001233b6f2 in \/usr\/bin\/clickhouse\n19. DB::MergeTask::ExecuteAndFinalizeHorizontalPart::execute() @ 0x000000001233b64b in \/usr\/bin\/clickhouse\n20. DB::MergeTask::execute() @ 0x0000000012340d99 in \/usr\/bin\/clickhouse\n21. DB::MergePlainMergeTreeTask::executeStep() @ 0x0000000012723517 in \/usr\/bin\/clickhouse\n22. DB::MergeTreeBackgroundExecutor<DB::DynamicRuntimeQueue>::threadFunction() @ 0x00000000123532c4 in \/usr\/bin\/clickhouse\n23. ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::worker(std::__list_iterator<ThreadFromGlobalPoolImpl<false>, void*>) @ 0x000000000c8eb0c1 in \/usr\/bin\/clickhouse\n24. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) @ 0x000000000c8ee8fa in \/usr\/bin\/clickhouse\n25. void* std::__thread_proxy[abi:v15000]<std::tuple<std::unique_ptr<std::__thread_struct, std::default_delete<std::__thread_struct>>, void ThreadPoolImpl<std::thread>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>>(void*) @ 0x000000000c8ed6fe in \/usr\/bin\/clickhouse\n26. ? @ 0x00007f2bbbadb609\n27. ? @ 0x00007f2bbba00353\n (version 24.1.2.5 (official build))","source_file":"src\/Common\/Exception.cpp; void DB::tryLogCurrentExceptionImpl(Poco::Logger *, const std::string &)","source_line":"222"}
{"date_time":"1736745070.040745","thread_name":"","thread_id":"592","level":"Error","query_id":"","logger_name":"MergeTreeBackgroundExecutor","message":"Exception while executing background task {7165cb86-e3b3-4832-9778-cf6a91b2273c::20250111_18664_18732_1}: Code: 241. DB::Exception: Memory limit (total) exceeded: would use 6.83 GiB (attempt to allocate chunk of 4299599 bytes), maximum: 6.79 GiB. OvercommitTracker decision: Memory overcommit isn't used. Waiting time or overcommit denominator are set to zero.: (while reading column attributes_string): (while reading from part \/var\/lib\/clickhouse\/store\/716\/7165cb86-e3b3-4832-9778-cf6a91b2273c\/20250111_18669_18669_0\/ in table signoz_logs.logs_v2 (7165cb86-e3b3-4832-9778-cf6a91b2273c) located on disk default of type local, from mark 0 with max_rows_to_read = 6526): While executing MergeTreeSequentialSource. (MEMORY_LIMIT_EXCEEDED), Stack trace (when copying this message, always include the lines below):\n\n0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000c800f1b in \/usr\/bin\/clickhouse\n1. DB::Exception::Exception<char const*, char const*, String, long&, String, char const*, std::basic_string_view<char, std::char_traits<char>>>(int, FormatStringHelperImpl<std::type_identity<char const*>::type, std::type_identity<char const*>::type, std::type_identity<String>::type, std::type_identity<long&>::type, std::type_identity<String>::type, std::type_identity<char const*>::type, std::type_identity<std::basic_string_view<char, std::char_traits<char>>>::type>, char const*&&, char const*&&, String&&, long&, String&&, char const*&&, std::basic_string_view<char, std::char_traits<char>>&&) @ 0x000000000c816d0a in \/usr\/bin\/clickhouse\n2. MemoryTracker::allocImpl(long, bool, MemoryTracker*, double) @ 0x000000000c816948 in \/usr\/bin\/clickhouse\n3. MemoryTracker::allocImpl(long, bool, MemoryTracker*, double) @ 0x000000000c816389 in \/usr\/bin\/clickhouse\n4. MemoryTracker::allocImpl(long, bool, MemoryTracker*, double) @ 0x000000000c816389 in \/usr\/bin\/clickhouse\n5. MemoryTracker::allocImpl(long, bool, MemoryTracker*, double) @ 0x000000000c816389 in \/usr\/bin\/clickhouse\n6. Allocator<false, false>::realloc(void*, unsigned long, unsigned long, unsigned long) @ 0x000000000c7d5d87 in \/usr\/bin\/clickhouse\n7. void DB::PODArrayBase<1ul, 4096ul, Allocator<false, false>, 63ul, 64ul>::resize_exact<>(unsigned long) @ 0x0000000007226ba6 in \/usr\/bin\/clickhouse\n8. void DB::deserializeBinarySSE2<1>(DB::PODArray<char8_t, 4096ul, Allocator<false, false>, 63ul, 64ul>&, DB::PODArray<unsigned long, 4096ul, Allocator<false, false>, 63ul, 64ul>&, DB::ReadBuffer&, unsigned long) @ 0x00000000108f8836 in \/usr\/bin\/clickhouse\n9. DB::ISerialization::deserializeBinaryBulkWithMultipleStreams(COW<DB::IColumn>::immutable_ptr<DB::IColumn>&, unsigned long, DB::ISerialization::DeserializeBinaryBulkSettings&, std::shared_ptr<DB::ISerialization::DeserializeBinaryBulkState>&, std::unordered_map<String, COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::hash<String>, std::equal_to<String>, std::allocator<std::pair<String const, COW<DB::IColumn>::immutable_ptr<DB::IColumn>>>>*) const @ 0x00000000108b01d9 in \/usr\/bin\/clickhouse\n10. 
DB::SerializationTuple::deserializeBinaryBulkWithMultipleStreams(COW<DB::IColumn>::immutable_ptr<DB::IColumn>&, unsigned long, DB::ISerialization::DeserializeBinaryBulkSettings&, std::shared_ptr<DB::ISerialization::DeserializeBinaryBulkState>&, std::unordered_map<String, COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::hash<String>, std::equal_to<String>, std::allocator<std::pair<String const, COW<DB::IColumn>::immutable_ptr<DB::IColumn>>>>*) const @ 0x0000000010902f5a in \/usr\/bin\/clickhouse\n11. DB::SerializationArray::deserializeBinaryBulkWithMultipleStreams(COW<DB::IColumn>::immutable_ptr<DB::IColumn>&, unsigned long, DB::ISerialization::DeserializeBinaryBulkSettings&, std::shared_ptr<DB::ISerialization::DeserializeBinaryBulkState>&, std::unordered_map<String, COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::hash<String>, std::equal_to<String>, std::allocator<std::pair<String const, COW<DB::IColumn>::immutable_ptr<DB::IColumn>>>>*) const @ 0x00000000108b99c8 in \/usr\/bin\/clickhouse\n12. DB::MergeTreeReaderWide::readRows(unsigned long, unsigned long, bool, unsigned long, std::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn>>>&) @ 0x000000001251ab34 in \/usr\/bin\/clickhouse\n13. DB::MergeTreeSequentialSource::generate() @ 0x000000001251ca4d in \/usr\/bin\/clickhouse\n14. DB::ISource::tryGenerate() @ 0x000000001297acf5 in \/usr\/bin\/clickhouse\n15. DB::ISource::work() @ 0x000000001297a743 in \/usr\/bin\/clickhouse\n16. DB::ExecutionThreadContext::executeTask() @ 0x000000001299371a in \/usr\/bin\/clickhouse\n17. DB::PipelineExecutor::executeStepImpl(unsigned long, std::atomic<bool>*) @ 0x000000001298a170 in \/usr\/bin\/clickhouse\n18. DB::PipelineExecutor::executeStep(std::atomic<bool>*) @ 0x0000000012989928 in \/usr\/bin\/clickhouse\n19. DB::PullingPipelineExecutor::pull(DB::Chunk&) @ 0x0000000012998017 in \/usr\/bin\/clickhouse\n20. DB::PullingPipelineExecutor::pull(DB::Block&) @ 0x00000000129981d3 in \/usr\/bin\/clickhouse\n21. DB::MergeTask::ExecuteAndFinalizeHorizontalPart::executeImpl() @ 0x000000001233b6f2 in \/usr\/bin\/clickhouse\n22. DB::MergeTask::ExecuteAndFinalizeHorizontalPart::execute() @ 0x000000001233b64b in \/usr\/bin\/clickhouse\n23. DB::MergeTask::execute() @ 0x0000000012340d99 in \/usr\/bin\/clickhouse\n24. DB::MergePlainMergeTreeTask::executeStep() @ 0x0000000012723517 in \/usr\/bin\/clickhouse\n25. DB::MergeTreeBackgroundExecutor<DB::DynamicRuntimeQueue>::threadFunction() @ 0x00000000123532c4 in \/usr\/bin\/clickhouse\n26.
Could you please suggest any configuration changes required to fix this?
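For context: the "maximum: 6.79 GiB" in this message is the server-wide memory cap, which ClickHouse derives from the RAM it can see (by default max_server_memory_usage_to_ram_ratio = 0.9), and the failing task is a background merge (MergeTreeBackgroundExecutor), not a user query. A quick sketch to confirm the effective cap and current usage, assuming a version new enough to have system.server_settings (the 24.1.2.5 in these logs qualifies):

-- Effective server-wide memory settings (the source of "maximum: 6.79 GiB")
SELECT name, value
FROM system.server_settings
WHERE name IN ('max_server_memory_usage', 'max_server_memory_usage_to_ram_ratio');

-- Memory currently tracked against that cap
SELECT formatReadableSize(value) AS tracked_memory
FROM system.metrics
WHERE metric = 'MemoryTracking';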
n
This means there are queries running that consume more memory than what is allocated to the ClickHouse server. You can try increasing the memory, or find the query that is using that much memory:
WITH window_cte AS
    (
        SELECT
            normalized_query_hash,
            simpleJSONExtractString(log_comment, 'alertID') AS alert,
            max(`ProfileEvents.Values`[indexOf(`ProfileEvents.Names`, 'UserTimeMicroseconds')]) / 1000 AS userCPUms_max,
            avg(`ProfileEvents.Values`[indexOf(`ProfileEvents.Names`, 'UserTimeMicroseconds')]) / 1000 AS userCPUms_avg,
            formatReadableSize(max(read_bytes)) AS read_bytes_max,
            formatReadableSize(avg(read_bytes)) AS read_bytes_avg,
            formatReadableSize(max(memory_usage)) AS max_memory,
            formatReadableSize(avg(memory_usage)) AS avg_memory,
            avg(query_duration_ms) AS avg_duration,
            median(query_duration_ms) AS median_duration,
            max(query_duration_ms) AS max_duration,
            min(query_duration_ms) AS min_duration
        FROM system.query_log
        WHERE (query LIKE '%signoz_logs.distributed_logs_v2%') AND (type = 'QueryFinish') AND (event_time > (now() - toIntervalHour(1)))
        GROUP BY
            normalized_query_hash,
            alert
    )
SELECT
    normalized_query_hash,
    alert,
    max_memory,
    avg_memory,
    userCPUms_max,
    userCPUms_avg,
    read_bytes_max,
    read_bytes_avg,
    avg_duration,
    median_duration,
    max_duration,
    min_duration
FROM window_cte
ORDER BY max_memory ASC
Try to find the query by the hash:
SELECT * FROM system.query_log WHERE normalized_query_hash = '<hash>' FORMAT Vertical
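One caveat with the ranking query above: formatReadableSize returns a String, so ORDER BY max_memory sorts the formatted text rather than the byte count ("6.56 GiB" sorts before "959.42 MiB"). A minimal variant that ranks by the raw bytes instead, under the same one-hour window:

SELECT
    normalized_query_hash,
    formatReadableSize(max(memory_usage)) AS max_memory,
    formatReadableSize(max(read_bytes)) AS read_bytes_max
FROM system.query_log
WHERE (query LIKE '%signoz_logs.distributed_logs_v2%') AND (type = 'QueryFinish') AND (event_time > (now() - toIntervalHour(1)))
GROUP BY normalized_query_hash
ORDER BY max(memory_usage) DESC
LIMIT 10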
s
@nitya-signoz The collector is also failing even though I gave it enough memory.
{"level":"info","ts":1736798672.3330631,"caller":"internal/retry_sender.go:118","msg":"Exporting failed. Will retry the request after interval.","kind":"exporter","data_type":"logs","name":"clickhouselogsexporter","error":"StatementSend: read: read tcp 10.87.4.205:36676->172.20.188.200:9000: use of closed network connection","interval":"4.780449068s"}
{"level":"info","ts":1736798673.136538,"caller":"internal/retry_sender.go:118","msg":"Exporting failed. Will retry the request after interval.","kind":"exporter","data_type":"logs","name":"clickhouselogsexporter","error":"StatementSend: read: read tcp 10.87.4.205:50180->172.20.188.200:9000: use of closed network connection","interval":"4.254368674s"}
@nitya-signoz Any idea how to fix the above otel collector issue?
Defaulted container "collector" out of: collector, obs-signoz-otel-collector-migrate-init (init)
{"level":"info","ts":1736997206.1215098,"caller":"internal/retry_sender.go:118","msg":"Exporting failed. Will retry the request after interval.","kind":"exporter","data_type":"metrics","name":"clickhousemetricswrite","error":"read: read tcp 10.87.4.151:44134->172.20.188.200:9000: use of closed network connection","errorVerbose":"read:\n github.com/ClickHouse/ch-go/proto.(*Reader).ReadFull\n /home/runner/go/pkg/mod/github.com/!sig!noz/ch-go@v0.61.2-dd/proto/reader.go:62\n - read tcp 10.87.4.151:44134->172.20.188.200:9000: use of closed network connection","interval":"6.376415299s"}
We have been facing this issue for the last week.
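A side note: "use of closed network connection" on the exporter side means the TCP connection to ClickHouse (172.20.188.200:9000) disappeared mid-request, which is consistent with the ClickHouse pod itself being restarted for exceeding its memory limit. A quick way to confirm the server is cycling, since uptime() resets on every restart:

SELECT uptime() AS seconds_since_start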
n
If these are just occasional retry errors, then it's fine, unless you are seeing errors that say data was dropped after multiple retries. Did you give enough resources to ClickHouse?
s
Yes, I gave 7 GB and 2 CPUs.
The issue is that the otel collector is in a CrashLoopBackOff state with the above error.
n
And how much data are you sending, approximately? Also check the ClickHouse logs to see if they say anything.
Also, I think 2 CPUs is very little and 7 GB of RAM is also very little. Try increasing them a bit more?
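Before resizing, it can also help to see where the memory is actually going. A rough sketch using system.asynchronous_metrics; the exact metric names vary a little between versions, so treat the LIKE patterns as a starting point:

SELECT metric, formatReadableSize(value) AS size
FROM system.asynchronous_metrics
WHERE metric LIKE 'Memory%' OR metric LIKE '%CacheBytes'
ORDER BY value DESC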
s
{"date_time":"1736999335.124222","thread_name":"TCPServerConnection ([#6237])","thread_id":"7073","level":"Error","query_id":"57fab5ed-c172-4874-b661-e8a5515b0a9b","logger_name":"executeQuery","message":"Code: 210. DB:NetException I\/O error: Broken pipe, while writing to socket ([:ffff10.87.4.20]:9000 -> [:ffff10.87.4.151]:47552). (NETWORK_ERROR) (version 24.1.2.5 (official build)) (from [:ffff10.87.4.151]:47552) (in query: INSERT INTO signoz_logs.distributed_logs_v2 ( ts_bucket_start, resource_fingerprint, timestamp, observed_timestamp, id, trace_id, span_id, trace_flags, severity_text, severity_number, body, attributes_string, attributes_number, attributes_bool, resources_string, scope_name, scope_version, scope_string ) VALUES), Stack trace (when copying this message, always include the lines below):\n\n0. DB:Exception:Exception(DB:Exception:MessageMasked&&, int, bool) @ 0x000000000c800f1b in \/usr\/bin\/clickhouse\n1. DB:NetException:NetException<String, String, String>(int, FormatStringHelperImpl<std:type identity&lt;String&gt;:type, std:type identity&lt;String&gt;:type, std:type identity&lt;String&gt;:type>, String&&, String&&, String&&) @ 0x000000000caa69a1 in \/usr\/bin\/clickhouse\n2. DB:WriteBufferFromPocoSocket:nextImpl() @ 0x000000000caa733e in \/usr\/bin\/clickhouse\n3. DB:TCPHandler:runImpl() @ 0x000000001292120f in \/usr\/bin\/clickhouse\n4. DB:TCPHandler:run() @ 0x0000000012933eb9 in \/usr\/bin\/clickhouse\n5. Poco:NetTCPServerConnection:start() @ 0x00000000153a5a72 in \/usr\/bin\/clickhouse\n6. Poco:NetTCPServerDispatcher:run() @ 0x00000000153a6871 in \/usr\/bin\/clickhouse\n7. Poco:PooledThread:run() @ 0x000000001549f047 in \/usr\/bin\/clickhouse\n8. Poco:ThreadImpl:runnableEntry(void*) @ 0x000000001549d67d in \/usr\/bin\/clickhouse\n9. ? @ 0x00007f0036ddb609\n10. ? @ 0x00007f0036d00353\n","source_file":"src\/Interpreters\/executeQuery.cpp; void DB::logException(ContextPtr, QueryLogElement &, bool)","source_line":"211"} {"date_time":"1736999335.124364","thread_name":"TCPServerConnection ([#6237])","thread_id":"7073","level":"Error","query_id":"57fab5ed-c172-4874-b661-e8a5515b0a9b","logger_name":"TCPHandler","message":"Code: 210. DB:NetException I\/O error: Broken pipe, while writing to socket ([:ffff10.87.4.20]:9000 -> [:ffff10.87.4.151]:47552). (NETWORK_ERROR), Stack trace (when copying this message, always include the lines below):\n\n0. DB:Exception:Exception(DB:Exception:MessageMasked&&, int, bool) @ 0x000000000c800f1b in \/usr\/bin\/clickhouse\n1. DB:NetException:NetException<String, String, String>(int, FormatStringHelperImpl<std:type identity&lt;String&gt;:type, std:type identity&lt;String&gt;:type, std:type identity&lt;String&gt;:type>, String&&, String&&, String&&) @ 0x000000000caa69a1 in \/usr\/bin\/clickhouse\n2. DB:WriteBufferFromPocoSocket:nextImpl() @ 0x000000000caa733e in \/usr\/bin\/clickhouse\n3. DB:TCPHandler:runImpl() @ 0x000000001292120f in \/usr\/bin\/clickhouse\n4. DB:TCPHandler:run() @ 0x0000000012933eb9 in \/usr\/bin\/clickhouse\n5. Poco:NetTCPServerConnection:start() @ 0x00000000153a5a72 in \/usr\/bin\/clickhouse\n6. Poco:NetTCPServerDispatcher:run() @ 0x00000000153a6871 in \/usr\/bin\/clickhouse\n7. Poco:PooledThread:run() @ 0x000000001549f047 in \/usr\/bin\/clickhouse\n8. Poco:ThreadImpl:runnableEntry(void*) @ 0x000000001549d67d in \/usr\/bin\/clickhouse\n9. ? @ 0x00007f0036ddb609\n10. ? @ 0x00007f0036d00353\n","source_file":"src\/Server\/TCPHandler.cpp; void DB:TCPHandler:runImpl()","source_line":"686"}
Above are the ClickHouse logs; let me increase the resources.
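These broken-pipe entries are the server-side mirror of the collector errors: ClickHouse could not write a reply because the collector's connection (10.87.4.151) vanished mid-INSERT. To gauge how often ingestion actually fails rather than just retries, a sketch against system.query_log:

-- Recent failed INSERTs into the logs table
SELECT event_time, exception_code, substring(exception, 1, 120) AS err
FROM system.query_log
WHERE query_kind = 'Insert'
    AND type IN ('ExceptionBeforeStart', 'ExceptionWhileProcessing')
    AND query LIKE '%distributed_logs_v2%'
ORDER BY event_time DESC
LIMIT 10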
n
Those are just network errors.
s
Can we ignore these errors?
n
Yeah, occasional network errors can be ignored.
s
@nitya-signoz Below is the query result with the largest read_bytes_max (6.56 GiB):
Row 1:
──────
normalized_query_hash: 9813613594975606277
alert:
max_memory:      268.29 MiB
avg_memory:      38.33 MiB
userCPUms_max:   7081.294
userCPUms_avg:   1015.8555714285715
read_bytes_max:  6.56 GiB
read_bytes_avg:  959.42 MiB
avg_duration:    2680.5714285714284
median_duration: 8
max_duration:    18715
min_duration:    5
Once I looked up the query with the above hash, this is the query:
query: SELECT timestamp, id, trace_id, span_id, trace_flags, severity_text, severity_number, scope_name, scope_version, body, attributes_string, attributes_number, attributes_bool, resources_string, scope_string from signoz_logs.distributed_logs_v2 where (timestamp >= 1737028697000000000 AND timestamp <= 1737032297000000000) AND (ts_bucket_start >= 1737026897 AND ts_bucket_start <= 1737032297) order by timestamp desc LIMIT 100
But our issue still persists even after we gave ClickHouse 12 GB of memory and 5 CPUs. The otel collector keeps restarting.
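Worth noting: the heaviest SELECT found here peaks at only 268.29 MiB of memory, while the original MEMORY_LIMIT_EXCEEDED came from a background merge, so queries may not be the culprit at all. A sketch for checking merge memory and part counts instead, using the standard system.merges and system.parts tables:

-- Memory held by in-flight background merges
SELECT database, table, round(elapsed) AS elapsed_s, round(progress, 2) AS progress, formatReadableSize(memory_usage) AS merge_memory
FROM system.merges
ORDER BY memory_usage DESC;

-- Tables with many active parts drive heavier merge activity
SELECT database, table, count() AS active_parts
FROM system.parts
WHERE active
GROUP BY database, table
ORDER BY active_parts DESC
LIMIT 10;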
n
No, this is not the one; try using this query:
WITH window_cte AS
    (
        SELECT
            normalized_query_hash,
            simpleJSONExtractString(log_comment, 'alertID') AS alert,
            max(`ProfileEvents.Values`[indexOf(`ProfileEvents.Names`, 'UserTimeMicroseconds')]) / 1000 AS userCPUms_max,
            avg(`ProfileEvents.Values`[indexOf(`ProfileEvents.Names`, 'UserTimeMicroseconds')]) / 1000 AS userCPUms_avg,
            formatReadableSize(max(read_bytes)) AS read_bytes_max,
            formatReadableSize(avg(read_bytes)) AS read_bytes_avg,
            formatReadableSize(max(memory_usage)) AS max_memory,
            formatReadableSize(avg(memory_usage)) AS avg_memory,
            avg(query_duration_ms) AS avg_duration,
            median(query_duration_ms) AS median_duration,
            max(query_duration_ms) AS max_duration,
            min(query_duration_ms) AS min_duration
        FROM system.query_log
        WHERE (query LIKE '%signoz_logs.distributed_logs_v2%') AND (event_time > (now() - toIntervalHour(200)))
        GROUP BY
            normalized_query_hash,
            alert
    )
SELECT
    normalized_query_hash,
    alert,
    max_memory,
    avg_memory,
    userCPUms_max,
    userCPUms_avg,
    read_bytes_max,
    read_bytes_avg,
    avg_duration,
    median_duration,
    max_duration,
    min_duration
FROM window_cte
ORDER BY max_memory ASC
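Note that this version drops the type = 'QueryFinish' filter, so it also counts queries that never finished. To look directly at the ones the memory limit killed, a sketch like the following helps (exception code 241 is MEMORY_LIMIT_EXCEEDED):

-- Queries that died with MEMORY_LIMIT_EXCEEDED, most recent first
SELECT event_time, query_duration_ms, formatReadableSize(memory_usage) AS mem, normalized_query_hash, substring(exception, 1, 120) AS err
FROM system.query_log
WHERE type = 'ExceptionWhileProcessing' AND exception_code = 241
ORDER BY event_time DESC
LIMIT 10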