# support
hello, we seem to be missing metrics from time to time, and I noticed this in the clickhouse server logs:
```
"message": "Code: 252. DB::Exception: Received from chi-signoz-clickhouse-cluster-0-0:9000. DB::Exception: Too many partitions for single INSERT block (more than 100). The limit is controlled by 'max_partitions_per_insert_block' setting. Large number of partitions is a common misconception. It will lead to severe negative performance impact, including slow server startup, slow INSERT queries and slow SELECT queries. Recommended total number of partitions for a table is under 1000..10000. Please note, that partitioning is not intended to speed up SELECT queries (ORDER BY key is sufficient to make range queries fast). Partitions are intended for data manipulation (DROP PARTITION, etc).. Stack trace:

0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000c800f1b in \/usr\/bin\/clickhouse
1. DB::Exception::Exception<unsigned long&>(int, FormatStringHelperImpl<std::type_identity<unsigned long&>::type>, unsigned long&) @ 0x00000000072fd210 in \/usr\/bin\/clickhouse
2. DB::MergeTreeDataWriter::splitBlockIntoParts(DB::Block const&, unsigned long, std::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::shared_ptr<DB::Context const>, std::shared_ptr<DB::AsyncInsertInfo>) @ 0x00000000125c49ac in \/usr\/bin\/clickhouse
3. DB::ReplicatedMergeTreeSinkImpl<false>::consume(DB::Chunk) @ 0x00000000126a2e37 in \/usr\/bin\/clickhouse
4. DB::SinkToStorage::onConsume(DB::Chunk) @ 0x0000000012ccb7c2 in \/usr\/bin\/clickhouse
5. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::ExceptionKeepingTransform::work()::$_1, void ()>>(std::__function::__policy_storage const*) @ 0x0000000012bfe98b in \/usr\/bin\/clickhouse
6. DB::runStep(std::function<void ()>, DB::ThreadStatus*, std::atomic<unsigned long>*) @ 0x0000000012bfe69c in \/usr\/bin\/clickhouse
7. DB::ExceptionKeepingTransform::work() @ 0x0000000012bfdd73 in \/usr\/bin\/clickhouse
8. DB::ExecutionThreadContext::executeTask() @ 0x000000001299371a in \/usr\/bin\/clickhouse
9. DB::PipelineExecutor::executeStepImpl(unsigned long, std::atomic<bool>*) @ 0x000000001298a170 in \/usr\/bin\/clickhouse
10. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::PipelineExecutor::spawnThreads()::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000001298b258 in \/usr\/bin\/clickhouse
11. ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::worker(std::__list_iterator<ThreadFromGlobalPoolImpl<false>, void*>) @ 0x000000000c8eb0c1 in \/usr\/bin\/clickhouse
12. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) @ 0x000000000c8ee8fa in \/usr\/bin\/clickhouse
13. void* std::__thread_proxy[abi:v15000]<std::tuple<std::unique_ptr<std::__thread_struct, std::default_delete<std::__thread_struct>>, void ThreadPoolImpl<std::thread>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>>(void*) @ 0x000000000c8ed6fe in \/usr\/bin\/clickhouse
14. ? @ 0x00007f826aac5609
15. ? @ 0x00007f826a9ea353
: While sending \/var\/lib\/clickhouse\/store\/8a4\/8a46ec25-a0c2-4fe7-8f8a-fd239916b219\/shard1_replica1\/15162907.bin. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below):

0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000c800f1b in \/usr\/bin\/clickhouse
1. DB::readException(DB::ReadBuffer&, String const&, bool) @ 0x000000000c86bf9f in \/usr\/bin\/clickhouse
2. DB::Connection::receiveException() const @ 0x0000000012820cca in \/usr\/bin\/clickhouse
3. DB::Connection::receivePacket() @ 0x0000000012829d91 in \/usr\/bin\/clickhouse
4. DB::RemoteInserter::onFinish() @ 0x000000001225b2d7 in \/usr\/bin\/clickhouse
5. DB::DistributedAsyncInsertDirectoryQueue::processFile(String&) @ 0x0000000012255bd2 in \/usr\/bin\/clickhouse
6. DB::DistributedAsyncInsertDirectoryQueue::processFiles() @ 0x000000001224e309 in \/usr\/bin\/clickhouse
7. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::DistributedAsyncInsertDirectoryQueue::DistributedAsyncInsertDirectoryQueue(DB::StorageDistributed&, std::shared_ptr<DB::IDisk> const&, String const&, std::shared_ptr<DB::IConnectionPool>, DB::ActionBlocker&, DB::BackgroundSchedulePool&)::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000001225911b in \/usr\/bin\/clickhouse
8. DB::BackgroundSchedulePool::threadFunction() @ 0x000000001051d6ed in \/usr\/bin\/clickhouse
9. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, StrongTypedef<unsigned long, CurrentMetrics::MetricTag>, StrongTypedef<unsigned long, CurrentMetrics::MetricTag>, char const*)::$_0>(DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, StrongTypedef<unsigned long, CurrentMetrics::MetricTag>, StrongTypedef<unsigned long, CurrentMetrics::MetricTag>, char const*)::$_0&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) @ 0x000000001051e7b3 in \/usr\/bin\/clickhouse
10. void* std::__thread_proxy[abi:v15000]<std::tuple<std::unique_ptr<std::__thread_struct, std::default_delete<std::__thread_struct>>, void ThreadPoolImpl<std::thread>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>>(void*) @ 0x000000000c8ed6fe in \/usr\/bin\/clickhouse
11. ? @ 0x00007f2298822609
12. ? @ 0x00007f2298747353
 (version 24.1.2.5 (official build))",
```
Now I was wondering in which direction I should look for the cause of this. I assume we are sending too much data, but it isn't quite clear to me where it originates, so any insights are welcome.
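For context, the limit in that message is per INSERT block: ClickHouse rejects an insert whose rows would land in more than 100 partitions, and the SigNoz tables are typically partitioned by day. A minimal diagnostic sketch in plain ClickHouse SQL over system.parts (only the database-name filter is an assumption) to see how spread out each table's partitions already are:

```sql
-- Show how many day-partitions each table has and how far back they reach;
-- a table that keeps receiving very old timestamps is the usual suspect for
-- the TOO_MANY_PARTS error above.
SELECT
    database,
    table,
    count(DISTINCT partition) AS active_partitions,
    min(partition)            AS oldest_partition,
    max(partition)            AS newest_partition
FROM system.parts
WHERE active
  AND database LIKE 'signoz%'   -- assumption: default SigNoz database names
GROUP BY database, table
ORDER BY active_partitions DESC;
```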
> hello, we seem to be missing metrics from time to time, and I noticed this in the clickhouse server logs:
The error you see in the logs and the missing-metrics issue you mention are independent.
> Now I was wondering in which direction I should look for the cause of this. I assume we are sending too much data, but it isn't quite clear to me where it originates, so any insights are welcome.
This happens when you send logs data that spans more than 100 days, i.e. the timestamps in a single insert batch fall on more than 100 different days. This usually happens when you collect old logs and send them.
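If you want to confirm that, you can check how many distinct days the stored data spans. A rough sketch, assuming the SigNoz logs table `signoz_logs.distributed_logs` with a nanosecond `timestamp` column (both names are assumptions, substitute the table the failing insert targets):

```sql
-- How many distinct days the stored logs span, plus the oldest and newest day.
-- Table and column names are assumptions; adjust them to your schema.
SELECT
    count(DISTINCT toDate(toDateTime(intDiv(timestamp, 1000000000)))) AS distinct_days,
    min(toDate(toDateTime(intDiv(timestamp, 1000000000))))            AS oldest_day,
    max(toDate(toDateTime(intDiv(timestamp, 1000000000))))            AS newest_day
FROM signoz_logs.distributed_logs;

-- Workaround only, not a fix: raise the per-insert partition limit for the
-- session or profile doing the inserts.
SET max_partitions_per_insert_block = 1000;
```

Raising the limit only hides the symptom, so the better direction is to find which collector is shipping old timestamps and limit how far back it reads.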