# support
Hi. We created a log view (Log Explorer > Save as View) that executes 6 queries to build a chart. With a short time range (15 min, 30 min, 1 hour) the queries complete and the chart is displayed, but with anything above 12 hours (12h, 24h, 48h) the queries never finish, are aborted, and the chart shows "something went wrong, try again or contact support".

The same queries run correctly when executed directly from the clickhouse pod: for a 48h range they process about 470 million rows (~500 GB of data), take ~240 seconds, and run at ~2 GB/s. After checking the query-service pod logs, the clickhouse pod logs, the clickhouse process list, and the currently running queries, we found that the same queries issued from the frontend carry the setting `max_execution_time=64` (seconds), so they are being aborted on timeout. We searched all the configs but couldn't find where this 64-second value is set/configured. The clickhouse default profile `max_execution_time` is at its default value of 0 (checked with `SELECT getSetting('max_execution_time');` from clickhouse-client on the clickhouse pod). How can we change this `max_execution_time` value or disable it?
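For anyone hitting the same problem, one way to confirm which queries arrive with the per-query override is `system.query_log` (a sketch, assuming a reasonably recent ClickHouse where `Settings` is a Map column and the query log is enabled):

```sql
-- List recent queries that arrived with an explicit max_execution_time,
-- to confirm the limit is attached per-query rather than set in a profile.
SELECT
    event_time,
    Settings['max_execution_time'] AS max_exec,
    query
FROM system.query_log
WHERE type = 'QueryStart'
  AND Settings['max_execution_time'] != ''
ORDER BY event_time DESC
LIMIT 10;
```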
Hi, thank you, but changing those values doesn't help. Long queries are still being aborted/canceled after 64 seconds, and every query still arrives with the `'max_execution_time':'64'` setting present. This way, trying to query a large amount of data will always fail 😐
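This behavior is consistent with how ClickHouse resolves settings: a setting sent along with the query itself overrides any server-side profile default. So if the client (here, the query-service) attaches `max_execution_time=64` to every query it sends, raising the profile value on the server has no effect. A minimal illustration (the `logs` table name is a placeholder, not from this setup):

```sql
-- Even if the server profile sets max_execution_time = 0 (unlimited),
-- a per-query SETTINGS clause wins, so this query is still killed
-- after 64 seconds:
SELECT count() FROM logs SETTINGS max_execution_time = 64;
```

If that is what's happening, the limit has to be changed on the client side, wherever the query-service attaches the 64-second value, rather than in the ClickHouse server config.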