
Pramod Sharma

over 1 year ago
Hi guys - I work for MSFT on the Azure Kubernetes team and I am working on a POC using SigNoz. I tested it yesterday and I love it, but I am having issues getting logs from another log file. I changed the file name and file type (a .log file containing JSON) in my deployment, but SigNoz is not refreshing the file.name attribute or the logs. It is still showing logs from the old file somehow.
receivers:
  filelog:
    include: [ /var/log/audit/aks.log ]
    start_at: end
    operators:
      - type: json_parser
service:
  pipelines:
    logs:
      receivers: [otlp, filelog, httplogreceiver/heroku, httplogreceiver/json]
      processors: [batch]
      exporters: [clickhouselogsexporter]
extraVolumeMounts:
  - mountPath: /var/log/audit/audit.log
    name: audit-log
  - mountPath: /var/log/audit/aks.log
    name: aks-log
  - mountPath: /mnt/blob
    name: blob-log
extraVolumes:
  - hostPath:
      path: /var/log/audit/audit.log
      type: FileOrCreate
    name: audit-log
  - hostPath:
      path: /var/log/audit/aks.log
      type: FileOrCreate
    name: aks-log
  - name: blob-log
    persistentVolumeClaim:
      claimName: pvc-blob-fuse
When I search for the new log file name, it still shows only the old file name. There are no results for aks.log, which I configured above, even though I can see the file in the pod:
Pod: default/signoz-otel-collector-8cc98f667-b9grm | Container: signoz-otel-collector
~ $ ls /var/log/audit/
aks.log    audit.log
~ $ cat conf/otel-collector-config.yaml
receivers:
  filelog:
    include:
    - /var/log/audit/aks.log
    operators:
    - type: json_parser
    start_at: end
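One possible explanation, offered as a guess: with start_at: end the filelog receiver only tails lines appended after the collector process starts, so content already sitting in /var/log/audit/aks.log at startup is never read, and the explorer keeps showing the older records previously ingested from audit.log. A minimal sketch of the receiver block under that assumption (same paths as above; start_at: beginning is the change, and without a storage extension the read offsets are not persisted, so the file is re-read in full on every collector restart):

receivers:
  filelog:
    include:
      - /var/log/audit/aks.log
    # Assumption: read content already present in the file, not only new lines.
    # Note: without a storage extension, the whole file is re-read after each restart.
    start_at: beginning
    operators:
      - type: json_parser

After changing the values, the otel-collector pod generally needs a restart for the re-rendered ConfigMap to take effect; records in the UI that still carry the old file.name may simply be older entries already stored in ClickHouse rather than proof that the new file is not being tailed.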

Ujjwal Rastogi

12 months ago
I am facing errors in the ClickHouse logs after starting up docker-compose:
2024.09.20 20:24:02.441204 [ 739 ] {} <Information> ZooKeeperClient: Keeper feature flag MULTI_READ: disabled
2024.09.20 20:24:02.441223 [ 739 ] {} <Information> ZooKeeperClient: Keeper feature flag CHECK_NOT_EXISTS: disabled
2024.09.20 20:24:02.441230 [ 739 ] {} <Information> ZooKeeperClient: Keeper feature flag CREATE_IF_NOT_EXISTS: disabled
2024.09.20 20:24:02.441276 [ 1 ] {} <Information> Application: Ready for connections.
2024.09.20 20:24:30.284853 [ 739 ] {fa403a79-7d30-4d8a-804c-efc7059d25e0} <Error> executeQuery: Code: 36. DB::Exception: Table doesn't have any table TTL expression, cannot remove. (BAD_ARGUMENTS) (version 24.1.2.5 (official build)) (from 0.0.0.0:0) (in query: /* ddl_entry=query-0000000458 */ ALTER TABLE signoz_metrics.time_series_v2 REMOVE TTL), Stack trace (when copying this message, always include the lines below):

0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000c800f1b in /usr/bin/clickhouse
1. DB::Exception::Exception<>(int, FormatStringHelperImpl<>) @ 0x000000000721a243 in /usr/bin/clickhouse
2. DB::AlterCommands::validate(std::shared_ptr<DB::IStorage> const&, std::shared_ptr<DB::Context const>) const @ 0x0000000011c97fea in /usr/bin/clickhouse
3. DB::InterpreterAlterQuery::executeToTable(DB::ASTAlterQuery const&) @ 0x0000000011390865 in /usr/bin/clickhouse
4. DB::InterpreterAlterQuery::execute() @ 0x000000001138dd71 in /usr/bin/clickhouse
5. DB::executeQueryImpl(char const*, char const*, std::shared_ptr<DB::Context>, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x0000000011904974 in /usr/bin/clickhouse
6. DB::executeQuery(DB::ReadBuffer&, DB::WriteBuffer&, bool, std::shared_ptr<DB::Context>, std::function<void (DB::QueryResultDetails const&)>, DB::QueryFlags, std::optional<DB::FormatSettings> const&, std::function<void (DB::IOutputFormat&)>) @ 0x000000001190824a in /usr/bin/clickhouse
7. DB::DDLWorker::tryExecuteQuery(DB::DDLTaskBase&, std::shared_ptr<zkutil::ZooKeeper> const&) @ 0x0000000010d02188 in /usr/bin/clickhouse
8. DB::DDLWorker::processTask(DB::DDLTaskBase&, std::shared_ptr<zkutil::ZooKeeper> const&) @ 0x0000000010d00755 in /usr/bin/clickhouse
9. DB::DDLWorker::scheduleTasks(bool) @ 0x0000000010cfdd33 in /usr/bin/clickhouse
10. DB::DDLWorker::runMainThread() @ 0x0000000010cf708e in /usr/bin/clickhouse
11. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<void (DB::DDLWorker::*)(), DB::DDLWorker*>(void (DB::DDLWorker::*&&)(), DB::DDLWorker*&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) @ 0x0000000010d0ebd4 in /usr/bin/clickhouse
12. void* std::__thread_proxy[abi:v15000]<std::tuple<std::unique_ptr<std::__thread_struct, std::default_delete<std::__thread_struct>>, void ThreadPoolImpl<std::thread>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>>(void*) @ 0x000000000c8ed6fe in /usr/bin/clickhouse
13. ? @ 0x000078986404d609
14. ? @ 0x0000789863f72353

2024.09.20 20:24:30.285056 [ 739 ] {fa403a79-7d30-4d8a-804c-efc7059d25e0} <Error> DDLWorker: Query /* ddl_entry=query-0000000458 */ ALTER TABLE signoz_metrics.time_series_v2 REMOVE TTL wasn't finished successfully: Code: 36. DB::Exception: Table doesn't have any table TTL expression, cannot remove. (BAD_ARGUMENTS), Stack trace (when copying this message, always include the lines below):

0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000c800f1b in /usr/bin/clickhouse
1. DB::Exception::Exception<>(int, FormatStringHelperImpl<>) @ 0x000000000721a243 in /usr/bin/clickhouse
2. DB::AlterCommands::validate(std::shared_ptr<DB::IStorage> const&, std::shared_ptr<DB::Context const>) const @ 0x0000000011c97fea in /usr/bin/clickhouse
3. DB::InterpreterAlterQuery::executeToTable(DB::ASTAlterQuery const&) @ 0x0000000011390865 in /usr/bin/clickhouse
4. DB::InterpreterAlterQuery::execute() @ 0x000000001138dd71 in /usr/bin/clickhouse
5. DB::executeQueryImpl(char const*, char const*, std::shared_ptr<DB::Context>, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x0000000011904974 in /usr/bin/clickhouse
6. DB::executeQuery(DB::ReadBuffer&, DB::WriteBuffer&, bool, std::shared_ptr<DB::Context>, std::function<void (DB::QueryResultDetails const&)>, DB::QueryFlags, std::optional<DB::FormatSettings> const&, std::function<void (DB::IOutputFormat&)>) @ 0x000000001190824a in /usr/bin/clickhouse
7. DB::DDLWorker::tryExecuteQuery(DB::DDLTaskBase&, std::shared_ptr<zkutil::ZooKeeper> const&) @ 0x0000000010d02188 in /usr/bin/clickhouse
8. DB::DDLWorker::processTask(DB::DDLTaskBase&, std::shared_ptr<zkutil::ZooKeeper> const&) @ 0x0000000010d00755 in /usr/bin/clickhouse
9. DB::DDLWorker::scheduleTasks(bool) @ 0x0000000010cfdd33 in /usr/bin/clickhouse
10. DB::DDLWorker::runMainThread() @ 0x0000000010cf708e in /usr/bin/clickhouse
11. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<void (DB::DDLWorker::*)(), DB::DDLWorker*>(void (DB::DDLWorker::*&&)(), DB::DDLWorker*&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) @ 0x0000000010d0ebd4 in /usr/bin/clickhouse
12. void* std::__thread_proxy[abi:v15000]<std::tuple<std::unique_ptr<std::__thread_struct, std::default_delete<std::__thread_struct>>, void ThreadPoolImpl<std::thread>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>>(void*) @ 0x000000000c8ed6fe in /usr/bin/clickhouse
13. ? @ 0x000078986404d609
14. ? @ 0x0000789863f72353
 (version 24.1.2.5 (official build))
2024.09.20 20:24:30.334566 [ 49 ] {a5606266-a60e-4f3b-b380-5987f3abef2e} <Error> executeQuery: Code: 36. DB::Exception: There was an error on [clickhouse:9000]: Code: 36. DB::Exception: Table doesn't have any table TTL expression, cannot remove. (BAD_ARGUMENTS) (version 24.1.2.5 (official build)). (BAD_ARGUMENTS) (version 24.1.2.5 (official build)) (from 172.24.0.4:43676) (in query: ALTER TABLE signoz_metrics.time_series_v2 ON CLUSTER cluster REMOVE TTL;), Stack trace (when copying this message, always include the lines below):

0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000c800f1b in /usr/bin/clickhouse
1. DB::DDLQueryStatusSource::generate() @ 0x00000000118efe71 in /usr/bin/clickhouse
2. DB::ISource::tryGenerate() @ 0x000000001297acf5 in /usr/bin/clickhouse
3. DB::ISource::work() @ 0x000000001297a743 in /usr/bin/clickhouse
4. DB::ExecutionThreadContext::executeTask() @ 0x000000001299371a in /usr/bin/clickhouse
5. DB::PipelineExecutor::executeStepImpl(unsigned long, std::atomic<bool>*) @ 0x000000001298a170 in /usr/bin/clickhouse
6. DB::PipelineExecutor::execute(unsigned long, bool) @ 0x0000000012989380 in /usr/bin/clickhouse
7. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) @ 0x00000000129970a3 in /usr/bin/clickhouse
8. void* std::__thread_proxy[abi:v15000]<std::tuple<std::unique_ptr<std::__thread_struct, std::default_delete<std::__thread_struct>>, void ThreadPoolImpl<std::thread>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>>(void*) @ 0x000000000c8ed6fe in /usr/bin/clickhouse
9. ? @ 0x000078986404d609
10. ? @ 0x0000789863f72353

2024.09.20 20:24:30.334683 [ 49 ] {a5606266-a60e-4f3b-b380-5987f3abef2e} <Error> TCPHandler: Code: 36. DB::Exception: There was an error on [clickhouse:9000]: Code: 36. DB::Exception: Table doesn't have any table TTL expression, cannot remove. (BAD_ARGUMENTS) (version 24.1.2.5 (official build)). (BAD_ARGUMENTS), Stack trace (when copying this message, always include the lines below):

0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000c800f1b in /usr/bin/clickhouse
1. DB::DDLQueryStatusSource::generate() @ 0x00000000118efe71 in /usr/bin/clickhouse
2. DB::ISource::tryGenerate() @ 0x000000001297acf5 in /usr/bin/clickhouse
3. DB::ISource::work() @ 0x000000001297a743 in /usr/bin/clickhouse
4. DB::ExecutionThreadContext::executeTask() @ 0x000000001299371a in /usr/bin/clickhouse
5. DB::PipelineExecutor::executeStepImpl(unsigned long, std::atomic<bool>*) @ 0x000000001298a170 in /usr/bin/clickhouse
6. DB::PipelineExecutor::execute(unsigned long, bool) @ 0x0000000012989380 in /usr/bin/clickhouse
7. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) @ 0x00000000129970a3 in /usr/bin/clickhouse
8. void* std::__thread_proxy[abi:v15000]<std::tuple<std::unique_ptr<std::__thread_struct, std::default_delete<std::__thread_struct>>, void ThreadPoolImpl<std::thread>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>>(void*) @ 0x000000000c8ed6fe in /usr/bin/clickhouse
9. ? @ 0x000078986404d609
10. ? @ 0x0000789863f72353

2024.09.20 20:24:59.510841 [ 47 ] {} <Error> ServerErrorHandler: Poco::Exception. Code: 1000, e.code() = 32, I/O error: Broken pipe, Stack trace (when copying this message, always include the lines below):

0. Poco::Net::SocketImpl::error(int, String const&) @ 0x00000000153a1b5f in /usr/bin/clickhouse
1. Poco::Net::SocketImpl::sendBytes(void const*, int, int) @ 0x00000000153a2bbd in /usr/bin/clickhouse
2. Poco::Net::StreamSocketImpl::sendBytes(void const*, int, int) @ 0x00000000153a5296 in /usr/bin/clickhouse
3. Poco::Net::HTTPSession::write(char const*, long) @ 0x00000000153908b3 in /usr/bin/clickhouse
4. Poco::Net::HTTPHeaderIOS::~HTTPHeaderIOS() @ 0x000000001538bbdb in /usr/bin/clickhouse
5. Poco::Net::HTTPHeaderOutputStream::~HTTPHeaderOutputStream() @ 0x000000001538bf1f in /usr/bin/clickhouse
6. DB::HTTPServerResponse::send() @ 0x0000000012942988 in /usr/bin/clickhouse
7. DB::HTTPServerConnection::sendErrorResponse(Poco::Net::HTTPServerSession&, Poco::Net::HTTPResponse::HTTPStatus) @ 0x000000001293ecda in /usr/bin/clickhouse
8. DB::HTTPServerConnection::run() @ 0x000000001293e97b in /usr/bin/clickhouse
9. Poco::Net::TCPServerConnection::start() @ 0x00000000153a5a72 in /usr/bin/clickhouse
10. Poco::Net::TCPServerDispatcher::run() @ 0x00000000153a6871 in /usr/bin/clickhouse
11. Poco::PooledThread::run() @ 0x000000001549f047 in /usr/bin/clickhouse
12. Poco::ThreadImpl::runnableEntry(void*) @ 0x000000001549d67d in /usr/bin/clickhouse
13. ? @ 0x000078986404d609
14. ? @ 0x0000789863f72353
 (version 24.1.2.5 (official build))
2024.09.20 20:25:29.558842 [ 47 ] {} <Error> ServerErrorHandler: Poco::Exception. Code: 1000, e.code() = 32, I/O error: Broken pipe, Stack trace (when copying this message, always include the lines below):

Samuel Olowoyeye

12 months ago
I'm getting this error after running helm upgrade --namespace=platform my-release signoz/k8s-infra --version 0.11.12. I was previously using 0.11.7 and this did not happen:
2024/09/19 23:49:30 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/request.(*RespWriterWrapper).writeHeader (resp_writer_wrapper.go:78)
2024/09/19 23:49:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/request.(*RespWriterWrapper).writeHeader (resp_writer_wrapper.go:78)
2024/09/19 23:49:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/request.(*RespWriterWrapper).writeHeader (resp_writer_wrapper.go:78)
2024/09/19 23:49:57 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/request.(*RespWriterWrapper).writeHeader (resp_writer_wrapper.go:78)
2024/09/19 23:50:06 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/request.(*RespWriterWrapper).writeHeader (resp_writer_wrapper.go:78)
2024/09/19 23:50:15 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/request.(*RespWriterWrapper).writeHeader (resp_writer_wrapper.go:78)
2024/09/19 23:50:24 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/request.(*RespWriterWrapper).writeHeader (resp_writer_wrapper.go:78)
2024/09/19 23:50:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/request.(*RespWriterWrapper).writeHeader (resp_writer_wrapper.go:78)