Hello, I seem to be having issues with "sending queue is full"...

Samuel Olowoyeye

12 months ago
Hello, I seem to be having issues with a "sending queue is full" error:
{"level":"error","ts":1727180784.2226639,"caller":"exporterhelper/common.go:296","msg":"Exporting failed. Rejecting data.","kind":"exporter","data_type":"traces","name":"clickhousetraces","error":"sending queue is full","rejected_items":30,"stacktrace":"<http://go.opentelemetry.io/collector/exporter/exporterhelper.(*baseExporter).send|go.opentelemetry.io/collector/exporter/exporterhelper.(*baseExporter).send>\n\t/home/runner/go/pkg/mod/go.opentelemetry.io/collector/exporter@v0.102.0/exporterhelper/common.go:296\<http://ngo.opentelemetry.io/collector/exporter/exporterhelper.NewTracesRequestExporter.func1|ngo.opentelemetry.io/collector/exporter/exporterhelper.NewTracesRequestExporter.func1>\n\t/home/runner/go/pkg/mod/go.opentelemetry.io/collector/exporter@v0.102.0/exporterhelper/traces.go:134\<http://ngo.opentelemetry.io/collector/consumer.ConsumeTracesFunc.ConsumeTraces|ngo.opentelemetry.io/collector/consumer.ConsumeTracesFunc.ConsumeTraces>\n\t/home/runner/go/pkg/mod/go.opentelemetry.io/collector/consumer@v0.102.1/traces.go:25\<http://ngo.opentelemetry.io/collector/processor/batchprocessor.(*batchTraces).export|ngo.opentelemetry.io/collector/processor/batchprocessor.(*batchTraces).export>\n\t/home/runner/go/pkg/mod/go.opentelemetry.io/collector/processor/batchprocessor@v0.102.0/batch_processor.go:414\<http://ngo.opentelemetry.io/collector/processor/batchprocessor.(*shard).sendItems|ngo.opentelemetry.io/collector/processor/batchprocessor.(*shard).sendItems>\n\t/home/runner/go/pkg/mod/go.opentelemetry.io/collector/processor/batchprocessor@v0.102.0/batch_processor.go:261\<http://ngo.opentelemetry.io/collector/processor/batchprocessor.(*shard).startLoop|ngo.opentelemetry.io/collector/processor/batchprocessor.(*shard).startLoop>\n\t/home/runner/go/pkg/mod/go.opentelemetry.io/collector/processor/batchprocessor@v0.102.0/batch_processor.go:223"}
However, when I checked the ClickHouse storage I got:
Filesystem                Size      Used Available Use% Mounted on
/dev/sde                884.8G    265.1G    574.7G  32% /var/lib/clickhouse
chi-my-release-clickhouse-cluster-0-0-0:/$
What could be the issue here?
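The disk usage above suggests ClickHouse is not out of space; "sending queue is full" means the exporter's in-memory queue in the collector fills up faster than batches can be written to ClickHouse. If your collector build exposes the standard exporterhelper queue and retry options for the clickhousetraces exporter (an assumption, verify against your signoz-otel-collector version and docs), a minimal sketch of raising the queue capacity looks like this:

exporters:
  clickhousetraces:
    datasource: tcp://clickhouse:9000/signoz_traces   # placeholder, keep your existing value
    # Standard exporterhelper options (assumed to be supported by this exporter):
    sending_queue:
      enabled: true
      num_consumers: 20      # parallel workers draining the queue into ClickHouse
      queue_size: 10000      # items buffered before new batches are rejected
    retry_on_failure:
      enabled: true

If a larger queue only delays the error, the bottleneck is usually ClickHouse insert throughput rather than the collector, so checking ClickHouse CPU, memory, and merge pressure would be the next step.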
ok, need some help. 1. with windows server app logs written to a share, what's the preferred way to...

brandon

over 1 year ago
Ok, I need some help. 1. With Windows Server app logs written to a share, what's the preferred way to get those into SigNoz? These systems are VMware VMs in our DC, and our SigNoz systems are in our much larger AWS environment. I was going to simply mount the share (either NFS or SMB) to the SigNoz system for this POC. 2. Following the instructions here (https://signoz.io/docs/userguide/collect_logs_from_file/), I just copied the logs over to the system and configured the docker-compose.yaml and otel-collector-config.yaml files to point to a single log file, but I get the following when I view the logs for the otel-collector container:
{"level":"fatal","timestamp":"2024-03-04T17:29:06.159Z","caller":"signozcollector/main.go:72","msg":"failed to create collector service:","error":"failed to create server client: failed to create collector config: failed to upsert instance id failed to parse config file /var/tmp/collector-config.yaml: yaml: line 166: did not find expected key","stacktrace":"main.main\n\t/home/runner/work/signoz-otel-collector/signoz-otel-collector/cmd/signozcollector/main.go:72\nruntime.main\n\t/opt/hostedtoolcache/go/1.21.7/x64/src/runtime/proc.go:267"}
Here is my corresponding config block:
logs:
      receivers: [otlp, tcplog/docker, filelog]
      processors: [batch]
      exporters: [clickhouselogsexporter]
        filelog:
          include: [/cloudadmins/logs/OTHER/D202306/M5WEB_PRESENTATION_COMMON_CSILOGON.ASPX_638224444396847090_0.txt]
          start_at: beginning
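The YAML error ("did not find expected key") and the snippet above suggest the filelog receiver definition ended up nested inside the logs pipeline. Receivers are defined under the top-level receivers: section and only referenced by name in the pipeline. A minimal sketch of the intended layout, using the path and component names from the snippet above (the surrounding sections are assumed to follow the linked collect-logs-from-file guide, where the include path must be the path as mounted inside the otel-collector container):

receivers:
  filelog:
    # Path visible inside the container, i.e. the docker-compose volume mount target
    include: [/cloudadmins/logs/OTHER/D202306/M5WEB_PRESENTATION_COMMON_CSILOGON.ASPX_638224444396847090_0.txt]
    start_at: beginning

service:
  pipelines:
    logs:
      receivers: [otlp, tcplog/docker, filelog]
      processors: [batch]
      exporters: [clickhouselogsexporter]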