# support
m
Hello, I am facing this issue whenever I want to see metric detail traces.
ts=2022-08-31T10:22:39.399416813Z caller=log.go:168 level=info msg="Completed loading of configuration file" filename=/root/config/prometheus.yml
2022-08-31T10:22:48.912Z	INFO	app/server.go:189	/api/v1/version	timeTaken: 57.388µs
2022-08-31T10:22:48.913Z	INFO	app/server.go:189	/api/v1/version	timeTaken: 91.352µs
2022-08-31T10:22:58.911Z	INFO	app/server.go:189	/api/v1/version	timeTaken: 28.994µs
2022-08-31T10:22:58.912Z	INFO	app/server.go:189	/api/v1/version	timeTaken: 29.987µs
2022-08-31T10:23:08.911Z	INFO	app/server.go:189	/api/v1/version	timeTaken: 34.044µs
2022-08-31T10:23:08.912Z	INFO	app/server.go:189	/api/v1/version	timeTaken: 68.438µs
2022-08-31T10:23:18.912Z	INFO	app/server.go:189	/api/v1/version	timeTaken: 32.411µs
2022-08-31T10:23:18.912Z	INFO	app/server.go:189	/api/v1/version	timeTaken: 165.58µs
2022-08-31T10:23:28.912Z	INFO	app/server.go:189	/api/v1/version	timeTaken: 52.459µs
2022-08-31T10:23:28.912Z	INFO	app/server.go:189	/api/v1/version	timeTaken: 60.634µs
2022-08-31T10:23:38.912Z	INFO	app/server.go:189	/api/v1/version	timeTaken: 40.206µs
2022-08-31T10:23:38.912Z	INFO	app/server.go:189	/api/v1/version	timeTaken: 15.829µs
2022-08-31T10:23:48.912Z	INFO	app/server.go:189	/api/v1/version	timeTaken: 16.001µs
2022-08-31T10:23:48.912Z	INFO	app/server.go:189	/api/v1/version	timeTaken: 33.143µs
2022-08-31T10:23:58.912Z	INFO	app/server.go:189	/api/v1/version	timeTaken: 32.522µs
2022-08-31T10:23:58.913Z	INFO	app/server.go:189	/api/v1/version	timeTaken: 35.286µs
2022-08-31T10:24:08.912Z	INFO	app/server.go:189	/api/v1/version	timeTaken: 56.676µs
2022-08-31T10:24:08.913Z	INFO	app/server.go:189	/api/v1/version	timeTaken: 19.336µs
2022-08-31T10:24:18.915Z	INFO	app/server.go:189	/api/v1/version	timeTaken: 31.96µs
2022-08-31T10:24:18.916Z	INFO	app/server.go:189	/api/v1/version	timeTaken: 13.445µs
2022-08-31T10:24:28.912Z	INFO	app/server.go:189	/api/v1/version	timeTaken: 31.679µs
2022-08-31T10:24:28.913Z	INFO	app/server.go:189	/api/v1/version	timeTaken: 34.184µs
Readiness probe failed: Get "http://10.42.8.128:8080/api/v1/version": dial tcp 10.42.8.128:8080: connect: connection refused
I am also unable to fetch data from /api/v1/services; the response is:
{
  "data": null,
  "total": 0,
  "limit": 0,
  "offset": 0,
  "errors": [
    {
      "code": 500,
      "msg": "Error in processing sql query"
    }
  ]
}
I reinstalled in k8s multiple times: updated the repo and installed again, but got the same issue. I also increased memory and CPU in the YAML. The problem is still occurring.
s
What are the SigNoz query-service and collector versions you are running? Please make sure the versions are kept in sync if you are running the services independently.
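For reference, one quick way to confirm which image tags are actually running in the cluster (the platform namespace below is the chart default and an assumption; adjust to your install) is:

# list each pod with the container images it is running
kubectl get pods -n platform -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}'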
m
queryService:
  name: "query-service"
  replicaCount: 1
  image:
    registry: docker.io
    repository: signoz/query-service
    tag: 0.11.0

otelCollector:
  name: "otel-collector"
  image:
    registry: docker.io
    repository: signoz/signoz-otel-collector
    tag: 0.55.0
    pullPolicy: Always

signoz version: v0.11.0
s
Is your ClickHouse running properly? Could you share the logs of otel-collector? Is this a fresh installation, or did you upgrade from an old version? If you upgraded from an old version, you need to make sure migrations are run for each version since the last update.
m
I updated from an old version and it was working fine till yesterday, so I installed fresh today.
2022.08.31 11:49:38.813872 [ 9 ] {42099487-15a0-4f61-a076-b29f781995df} <Error> TCPHandler: Code: 81. DB::Exception: Database signoz_metrics doesn't exist. (UNKNOWN_DATABASE), Stack trace (when copying this message, always include the lines below):

0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xb6fc2fa in /usr/bin/clickhouse
1. DB::DatabaseCatalog::assertDatabaseExistsUnlocked(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const @ 0x15d6705a in /usr/bin/clickhouse
2. DB::DatabaseCatalog::getDatabase(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const @ 0x15d69478 in /usr/bin/clickhouse
3. DB::Context::resolveStorageID(DB::StorageID, DB::Context::StorageNamespace) const @ 0x15d00d9e in /usr/bin/clickhouse
4. ? @ 0x1642acc6 in /usr/bin/clickhouse
5. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x16429535 in /usr/bin/clickhouse
6. DB::TCPHandler::runImpl() @ 0x16fe632a in /usr/bin/clickhouse
7. DB::TCPHandler::run() @ 0x16ff6959 in /usr/bin/clickhouse
8. Poco::Net::TCPServerConnection::start() @ 0x1b3eadef in /usr/bin/clickhouse
9. Poco::Net::TCPServerDispatcher::run() @ 0x1b3ed241 in /usr/bin/clickhouse
10. Poco::PooledThread::run() @ 0x1b5b3c89 in /usr/bin/clickhouse
11. Poco::ThreadImpl::runnableEntry(void*) @ 0x1b5b0fe0 in /usr/bin/clickhouse
clickhouse log
s
What was the last version? If upgrading from an old version, one needs to make sure migrations are run properly, one by one: https://signoz.io/docs/operate/migration/
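If you are not sure which version was installed before, Helm can tell you (the release name and namespace below are assumptions; use your own):

# show the currently deployed chart and app version
helm list -n platform
# show earlier revisions of the release, including their chart versions
helm history my-release -n platform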
m
collector log
2022-08-31T11:52:13.198Z	warn	batchprocessor/batch_processor.go:178	Sender failed	{"kind": "processor", "name": "batch", "pipeline": "logs", "error": "sending_queue is full"}
2022-08-31T11:52:13.998Z	warn	batchprocessor/batch_processor.go:178	Sender failed	{"kind": "processor", "name": "batch", "pipeline": "logs", "error": "sending_queue is full"}
time="2022-08-31T11:52:14Z" level=error msg="code: 81, message: Database signoz_metrics doesn't exist" component=clickhouse
2022-08-31T11:52:14.399Z	error	kubeletstatsreceiver@v0.55.0/scraper.go:81	call to /stats/summary endpoint failed	{"kind": "receiver", "name": "kubeletstats", "pipeline": "metrics", "error": "Get \"https://hp-k8s-dev-a6:10250/stats/summary\": dial tcp: lookup hp-k8s-dev-a6 on 10.43.0.10:53: no such host"}
github.com/open-telemetry/opentelemetry-collector-contrib/receiver/kubeletstatsreceiver.(*kubletScraper).scrape
	/go/pkg/mod/github.com/open-telemetry/opentelemetry-collector-contrib/receiver/kubeletstatsreceiver@v0.55.0/scraper.go:81
go.opentelemetry.io/collector/receiver/scraperhelper.ScrapeFunc.Scrape
I removed all the data and did a fresh install of SigNoz.
s
Are you doing a fresh install now? And are you still facing issues after that?
m
Yes, I removed all the resources and namespaces today.
s
When did you do that? It's not clear whether you upgraded from the old version or started a fresh installation.
m
I had upgraded a few days ago. But today, after hitting this issue, I removed all resources and reinstalled fresh.
s
Can you share the full log of the collector since the service started up?
m
What is the command to get that?
s
Just the full log of the pod or container which is running the collector instance.
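Something like the following should work (the deployment name and namespace are assumptions based on the default Helm chart naming; adjust to your release):

# follow the otel-collector logs
kubectl logs -f deployment/my-release-signoz-otel-collector -n platform
# if the pod restarted, you can also grab the previous container's logs
kubectl logs <otel-collector-pod-name> -n platform --previous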
m
/go/pkg/mod/go.opentelemetry.io/collector@v0.55.0/processor/batchprocessor/batch_processor.go:143
2022-08-31T12:02:00.533Z	warn	batchprocessor/batch_processor.go:178	Sender failed	{"kind": "processor", "name": "batch", "pipeline": "logs", "error": "sending_queue is full"}
time="2022-08-31T12:02:04Z" level=error msg="code: 81, message: Database signoz_metrics doesn't exist" component=clickhouse
2022-08-31T12:02:07.867Z	info	exporterhelper/queued_retry.go:215	Exporting failed. Will retry the request after interval.	{"kind": "exporter", "data_type": "metrics", "name": "clickhousemetricswrite", "error": "code: 81, message: Database signoz_metrics doesn't exist", "interval": "175.286846ms"}
time="2022-08-31T12:02:08Z" level=error msg="code: 81, message: Database signoz_metrics doesn't exist" component=clickhouse
2022-08-31T12:02:10.533Z	error	exporterhelper/queued_retry.go:99	Dropping data because sending_queue is full. Try increasing queue_size.	{"kind": "exporter", "data_type": "logs", "name": "clickhouselogsexporter", "dropped_items": 512}
go.opentelemetry.io/collector/exporter/exporterhelper.(*queuedRetrySender).send
	/go/pkg/mod/go.opentelemetry.io/collector@v0.55.0/exporter/exporterhelper/queued_retry.go:99
go.opentelemetry.io/collector/exporter/exporterhelper.NewLogsExporter.func2
	/go/pkg/mod/go.opentelemetry.io/collector@v0.55.0/exporter/exporterhelper/logs.go:111
go.opentelemetry.io/collector/consumer.ConsumeLogsFunc.ConsumeLogs
	/go/pkg/mod/go.opentelemetry.io/collector@v0.55.0/consumer/logs.go:36
go.opentelemetry.io/collector/processor/batchprocessor.(*batchLogs).export
	/go/pkg/mod/go.opentelemetry.io/collector@v0.55.0/processor/batchprocessor/batch_processor.go:343
go.opentelemetry.io/collector/processor/batchprocessor.(*batchProcessor).sendItems
	/go/pkg/mod/go.opentelemetry.io/collector@v0.55.0/processor/batchprocessor/batch_processor.go:176
go.opentelemetry.io/collector/processor/batchprocessor.(*batchProcessor).startProcessingCycle
	/go/pkg/mod/go.opentelemetry.io/collector@v0.55.0/processor/batchprocessor/batch_processor.go:143
2022-08-31T12:02:10.533Z	warn	batchprocessor/batch_processor.go:178	Sender failed	{"kind": "processor", "name": "batch", "pipeline": "logs", "error": "sending_queue is full"}
2022-08-31T12:02:10.734Z	info	fileconsumer/file.go:178	Started watching file	{"kind": "receiver", "name": "filelog/k8s", "pipeline": "logs", "component": "fileconsumer", "path": "/var/log/pods/default_everestdb-search-grpc-service-554d85b9cd-5vpcw_612584d0-2dd5-4add-b9ab-7997b4e2a737/service/16429.log"}
time="2022-08-31T12:02:13Z" level=error msg="code: 81, message: Database signoz_metrics doesn't exist" component=clickhouse
2022-08-31T12:02:14.334Z	warn	batchprocessor/batch_processor.go:178	Sender failed	{"kind": "processor", "name": "batch", "pipeline": "logs", "error": "sending_queue is full"}
2022-08-31T12:02:14.609Z	error	exporterhelper/queued_retry_inmemory.go:107	Exporting failed. No more retries left. Dropping data.	{"kind": "exporter", "data_type": "metrics", "name": "clickhousemetricswrite", "error": "max elapsed time expired code: 81, message: Database signoz_metrics doesn't exist", "dropped_items": 55}
go.opentelemetry.io/collector/exporter/exporterhelper.onTemporaryFailure
	/go/pkg/mod/go.opentelemetry.io/collector@v0.55.0/exporter/exporterhelper/queued_retry_inmemory.go:107
go.opentelemetry.io/collector/exporter/exporterhelper.(*retrySender).send
	/go/pkg/mod/go.opentelemetry.io/collector@v0.55.0/exporter/exporterhelper/queued_retry.go:199
go.opentelemetry.io/collector/exporter/exporterhelper.(*metricsSenderWithObservability).send
	/go/pkg/mod/go.opentelemetry.io/collector@v0.55.0/exporter/exporterhelper/metrics.go:132
go.opentelemetry.io/collector/exporter/exporterhelper.(*queuedRetrySender).start.func1
	/go/pkg/mod/go.opentelemetry.io/collector@v0.55.0/exporter/exporterhelper/queued_retry_inmemory.go:119
go.opentelemetry.io/collector/exporter/exporterhelper/internal.consumerFunc.consume
	/go/pkg/mod/go.opentelemetry.io/collector@v0.55.0/exporter/exporterhelper/internal/bounded_memory_queue.go:82
go.opentelemetry.io/collector/exporter/exporterhelper/internal.(*boundedMemoryQueue).StartConsumers.func2
	/go/pkg/mod/go.opentelemetry.io/collector@v0.55.0/exporter/exporterhelper/internal/bounded_memory_queue.go:69
2022-08-31T12:02:14.628Z	info	exporterhelper/queued_retry.go:215	Exporting failed. Will retry the request after interval.	{"kind": "exporter", "data_type": "logs", "name": "clickhouselogsexporter", "error": "PrepareBatch:code: 81, message: Database signoz_logs doesn't exist", "interval": "19.820436835s"}
2022-08-31T12:02:17.998Z	info	exporterhelper/queued_retry.go:215	Exporting failed. Will retry the request after interval.	{"kind": "exporter", "data_type": "metrics", "name": "clickhousemetricswrite", "error": "code: 81, message: Database signoz_metrics doesn't exist", "interval": "196.650499ms"}
time="2022-08-31T12:02:18Z" level=error msg="code: 81, message: Database signoz_metrics doesn't exist" component=clickhouse
time="2022-08-31T12:02:23Z" level=error msg="code: 81, message: Database signoz_metrics doesn't exist" component=clickhouse
2022-08-31T12:02:24.334Z	error	exporterhelper/queued_retry.go:99	Dropping data because sending_queue is full. Try increasing queue_size.	{"kind": "exporter", "data_type": "logs", "name": "clickhouselogsexporter", "dropped_items": 530}
go.opentelemetry.io/collector/exporter/exporterhelper.(*queuedRetrySender).send
	/go/pkg/mod/go.opentelemetry.io/collector@v0.55.0/exporter/exporterhelper/queued_retry.go:99
go.opentelemetry.io/collector/exporter/exporterhelper.NewLogsExporter.func2
	/go/pkg/mod/go.opentelemetry.io/collector@v0.55.0/exporter/exporterhelper/logs.go:111
go.opentelemetry.io/collector/consumer.ConsumeLogsFunc.ConsumeLogs
	/go/pkg/mod/go.opentelemetry.io/collector@v0.55.0/consumer/logs.go:36
go.opentelemetry.io/collector/processor/batchprocessor.(*batchLogs).export
	/go/pkg/mod/go.opentelemetry.io/collector@v0.55.0/processor/batchprocessor/batch_processor.go:343
go.opentelemetry.io/collector/processor/batchprocessor.(*batchProcessor).sendItems
	/go/pkg/mod/go.opentelemetry.io/collector@v0.55.0/processor/batchprocessor/batch_processor.go:176
go.opentelemetry.io/collector/processor/batchprocessor.(*batchProcessor).startProcessingCycle
	/go/pkg/mod/go.opentelemetry.io/collector@v0.55.0/processor/batchprocessor/batch_processor.go:143
2022-08-31T12:02:24.335Z	warn	batchprocessor/batch_processor.go:178	Sender failed	{"kind": "processor", "name": "batch", "pipeline": "logs", "error": "sending_queue is full"}
2022-08-31T12:02:28.082Z	info	exporterhelper/queued_retry.go:215	Exporting failed. Will retry the request after interval.	{"kind": "exporter", "data_type": "metrics", "name": "clickhousemetricswrite", "error": "code: 81, message: Database signoz_metrics doesn't exist", "interval": "232.248656ms"}
time="2022-08-31T12:02:33Z" level=error msg="code: 81, message: Database signoz_metrics doesn't exist" component=clickhouse
2022-08-31T12:02:34.335Z	error	exporterhelper/queued_retry.go:99	Dropping data because sending_queue is full. Try increasing queue_size.	{"kind": "exporter", "data_type": "logs", "name": "clickhouselogsexporter", "dropped_items": 322}
go.opentelemetry.io/collector/exporter/exporterhelper.(*queuedRetrySender).send
	/go/pkg/mod/go.opentelemetry.io/collector@v0.55.0/exporter/exporterhelper/queued_retry.go:99
go.opentelemetry.io/collector/exporter/exporterhelper.NewLogsExporter.func2
	/go/pkg/mod/go.opentelemetry.io/collector@v0.55.0/exporter/exporterhelper/logs.go:111
go.opentelemetry.io/collector/consumer.ConsumeLogsFunc.ConsumeLogs
	/go/pkg/mod/go.opentelemetry.io/collector@v0.55.0/consumer/logs.go:36
go.opentelemetry.io/collector/processor/batchprocessor.(*batchLogs).export
	/go/pkg/mod/go.opentelemetry.io/collector@v0.55.0/processor/batchprocessor/batch_processor.go:343
go.opentelemetry.io/collector/processor/batchprocessor.(*batchProcessor).sendItems
	/go/pkg/mod/go.opentelemetry.io/collector@v0.55.0/processor/batchprocessor/batch_processor.go:176
go.opentelemetry.io/collector/processor/batchprocessor.(*batchProcessor).startProcessingCycle
	/go/pkg/mod/go.opentelemetry.io/collector@v0.55.0/processor/batchprocessor/batch_processor.go:143
2022-08-31T12:02:34.335Z	warn	batchprocessor/batch_processor.go:178	Sender failed	{"kind": "processor", "name": "batch", "pipeline": "logs", "error": "sending_queue is full"}
2022-08-31T12:02:34.455Z	info	exporterhelper/queued_retry.go:215	Exporting failed. Will retry the request after interval.	{"kind": "exporter", "data_type": "logs", "name": "clickhouselogsexporter", "error": "PrepareBatch:code: 81, message: Database signoz_logs doesn't exist", "interval": "38.486332909s"}
2022-08-31T12:02:38.315Z	info	exporterhelper/queued_retry.go:215	Exporting failed. Will retry the request after interval.	{"kind": "exporter", "data_type": "metrics", "name": "clickhousemetricswrite", "error": "code: 81, message: Database signoz_metrics doesn't exist", "interval": "247.288703ms"}
time="2022-08-31T12:02:43Z" level=error msg="code: 81, message: Database signoz_metrics doesn't exist" component=clickhouse
2022-08-31T12:02:44.335Z	error	exporterhelper/queued_retry.go:99	Dropping data because sending_queue is full. Try increasing queue_size.	{"kind": "exporter", "data_type": "logs", "name": "clickhouselogsexporter", "dropped_items": 278}
go.opentelemetry.io/collector/exporter/exporterhelper.(*queuedRetrySender).send
	/go/pkg/mod/go.opentelemetry.io/collector@v0.55.0/exporter/exporterhelper/queued_retry.go:99
go.opentelemetry.io/collector/exporter/exporterhelper.NewLogsExporter.func2
	/go/pkg/mod/go.opentelemetry.io/collector@v0.55.0/exporter/exporterhelper/logs.go:111
go.opentelemetry.io/collector/consumer.ConsumeLogsFunc.ConsumeLogs
	/go/pkg/mod/go.opentelemetry.io/collector@v0.55.0/consumer/logs.go:36
go.opentelemetry.io/collector/processor/batchprocessor.(*batchLogs).export
	/go/pkg/mod/go.opentelemetry.io/collector@v0.55.0/processor/batchprocessor/batch_processor.go:343
go.opentelemetry.io/collector/processor/batchprocessor.(*batchProcessor).sendItems
	/go/pkg/mod/go.opentelemetry.io/collector@v0.55.0/processor/batchprocessor/batch_processor.go:176
go.opentelemetry.io/collector/processor/batchprocessor.(*batchProcessor).startProcessingCycle
	/go/pkg/mod/go.opentelemetry.io/collector@v0.55.0/processor/batchprocessor/batch_processor.go:143
2022-08-31T12:02:44.335Z	warn	batchprocessor/batch_processor.go:178	Sender failed	{"kind": "processor", "name": "batch", "pipeline": "logs", "error": "sending_queue is full"}
2022-08-31T12:02:48.357Z	info	exporterhelper/queued_retry.go:215	Exporting failed. Will retry the request after interval.	{"kind": "exporter", "data_type": "metrics", "name": "clickhousemetricswrite", "error": "code: 81, message: Database signoz_metrics doesn't exist", "interval": "185.703794ms"}
s
It says the database doesn't exist, but that should not happen. What steps did you take? Did you purge the ClickHouse data volume as well, or did you just reinstall the latest version of SigNoz?
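One way to check whether data from the old install survived is to look for leftover PersistentVolumeClaims (the namespace below is an assumption):

# PVCs left over from a previous install keep the old ClickHouse data around
kubectl get pvc -n platform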
m
I had removed the namespace forcefully using kubectl and reinstalled everything.
s
{
  "kind": "exporter",
  "data_type": "logs",
  "name": "clickhouselogsexporter",
  "error": "PrepareBatch:code: 81, message: Database signoz_logs doesn't exist",
  "interval": "19.820436835s"
}
{
  "kind": "exporter",
  "data_type": "metrics",
  "name": "clickhousemetricswrite",
  "error": "code: 81, message: Database signoz_metrics doesn't exist",
  "interval": "196.650499ms"
}
None of the DBs exist. This is the reason for the issue. I am not sure about your deployment setup and what you did. Maybe @Prashant Shahi can help.
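A quick way to confirm is to list the databases directly in ClickHouse (the pod name below follows the clickhouse-operator naming for a default install and is an assumption):

# signoz_traces, signoz_metrics and signoz_logs should all be listed
kubectl exec -n platform chi-my-release-clickhouse-cluster-0-0-0 -- clickhouse-client -q "SHOW DATABASES"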
m
2022-08-31T12:20:49.680Z	error	exporterhelper/queued_retry.go:99	Dropping data because sending_queue is full. Try increasing queue_size.	{"kind": "exporter", "data_type": "logs", "name": "clickhouselogsexporter", "dropped_items": 1003}
s
That's because the writes to the DB fail since the DB doesn't exist; the items are enqueued for retry, and the queue eventually fills up because none of the writes succeed (there is no DB to write to). Hence they are dropped after some point. The same applies to metrics/traces.
m
Oh, thanks. What should I do now? Should I remove everything again and reinstall?
s
Since you said you already removed the old data, it should be fine to reinstall, but make sure you are doing it right.
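For reference, a clean reinstall with the Helm chart looks roughly like this (the release name and namespace are the documented defaults and may differ in your setup):

helm repo add signoz https://charts.signoz.io
helm repo update
helm install my-release signoz/signoz -n platform --create-namespace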
m
I removed the ClickHouse CRDs, which were not letting me remove the namespace. Once I removed the CRDs and installed SigNoz, it is working fine. Thank you for helping me @Srikanth Chekuri
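For anyone hitting the same thing, the leftover CRDs can be found and removed with something like the following (the CRD name shown comes from the Altinity clickhouse-operator and may differ by version; delete whichever ones the first command lists):

kubectl get crd | grep clickhouse
kubectl delete crd clickhouseinstallations.clickhouse.altinity.com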
p
@Mukesh Chaudhary That's great to know that the issue is resolved.
In case you need to in the future, you can increase queue_size from the default 5000 of the otlp/signoz exporter to avoid dropping data. Refer to this for estimating a suitable value: https://github.com/open-telemetry/opentelemetry-collector/blob/main/exporter/exporterhelper/README.md#configuration
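As a rough sketch, that override goes under the exporter's sending_queue settings in the collector config (the exporter name and numbers below are illustrative; keep your existing endpoint settings as they are):

exporters:
  otlp/signoz:
    # ...existing endpoint/TLS settings unchanged...
    sending_queue:
      enabled: true
      queue_size: 10000  # raise above the 5000 default based on expected throughput and tolerated backlog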
m
Thank you @Prashant Shahi