brandon
about 1 year ago
ok, need some help. 1. with windows server app logs written to a share, what's the preferred way to get those into signoz? these systems are vmware VMs in our DC, and our signoz systems are in our much larger AWS environment. i was going to simply mount the share (either NFS or SMB) to the signoz system for this POC. 2. following the instructions here (https://signoz.io/docs/userguide/collect_logs_from_file/), i just copied the logs over to the system and configured the `docker-compose.yaml` and `otel-collector-config.yaml` files to point to a single log file, but i get the following when i view the logs for the otel-collector container:
{"level":"fatal","timestamp":"2024-03-04T17:29:06.159Z","caller":"signozcollector/main.go:72","msg":"failed to create collector service:","error":"failed to create server client: failed to create collector config: failed to upsert instance id failed to parse config file /var/tmp/collector-config.yaml: yaml: line 166: did not find expected key","stacktrace":"main.main\n\t/home/runner/work/signoz-otel-collector/signoz-otel-collector/cmd/signozcollector/main.go:72\nruntime.main\n\t/opt/hostedtoolcache/go/1.21.7/x64/src/runtime/proc.go:267"}
here is my corresponding config block:
logs:
      receivers: [otlp, tcplog/docker, filelog]
      processors: [batch]
      exporters: [clickhouselogsexporter]
        filelog:
          include: [/cloudadmins/logs/OTHER/D202306/M5WEB_PRESENTATION_COMMON_CSILOGON.ASPX_638224444396847090_0.txt]
          start_at: beginning
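The parse error is likely the indentation here: `filelog:` is nested under the pipeline's `exporters:` line, but receiver configuration belongs in the top-level `receivers:` section, with the pipeline only listing the receiver by name. A minimal sketch of that layout, reusing the same file path (and assuming the share is mounted at `/cloudadmins/logs` inside the collector container):

    receivers:
      filelog:
        include: [/cloudadmins/logs/OTHER/D202306/M5WEB_PRESENTATION_COMMON_CSILOGON.ASPX_638224444396847090_0.txt]
        start_at: beginning
      # ... other receivers (otlp, tcplog/docker) as already defined ...

    service:
      pipelines:
        logs:
          receivers: [otlp, tcplog/docker, filelog]
          processors: [batch]
          exporters: [clickhouselogsexporter]

Since the collector runs in Docker, the host directory holding the logs also has to be mapped into the container, e.g. in `docker-compose.yaml` (the service name and mount point below are illustrative; match whatever the compose file already uses):

    services:
      otel-collector:
        # ... existing image/command config unchanged ...
        volumes:
          - /cloudadmins/logs:/cloudadmins/logs:ro  # host path -> container path, read-only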
Tyler Wells
11 months ago
The `signoz-otel-collector` keeps restarting with OOMKilled (exit code 137). There's only ~175k spans and 17k metrics, but it's using a ton of memory and then crashing. I see this in the logs:
{"level":"info","timestamp":"2024-06-12T13:23:22.493Z","caller":"signozcol/collector.go:121","msg":"Collector service is running"}
{"level":"info","timestamp":"2024-06-12T13:23:22.493Z","logger":"agent-config-manager","caller":"opamp/config_manager.go:168","msg":"Config has not changed"}
{"level":"info","timestamp":"2024-06-12T13:23:23.279Z","caller":"service/service.go:73","msg":"Client started successfully"}
{"level":"info","timestamp":"2024-06-12T13:23:23.279Z","caller":"opamp/client.go:49","msg":"Ensuring collector is running","component":"opamp-server-client"}
2024-06-12T13:24:22.389Z	warn	clickhousemetricsexporter/exporter.go:272	Dropped cumulative histogram metric	{"kind": "exporter", "data_type": "metrics", "name": "clickhousemetricswrite", "name": "signoz_latency"}
2024-06-12T13:24:22.484Z	warn	clickhousemetricsexporter/exporter.go:279	Dropped exponential histogram metric with no data points	{"kind": "exporter", "data_type": "metrics", "name": "clickhousemetricswrite", "name": "signoz_latency"}
2024-06-12T13:25:18.135Z	info	exporterhelper/retry_sender.go:177	Exporting failed. Will retry the request after interval.	{"kind": "exporter", "data_type": "logs", "name": "clickhouselogsexporter", "error": "StatementSend:context deadline exceeded", "interval": "5.882953348s"}
2024-06-12T13:25:24.996Z	info	exporterhelper/retry_sender.go:177	Exporting failed. Will retry the request after interval.	{"kind": "exporter", "data_type": "logs", "name": "clickhouselogsexporter", "error": "StatementSend:context deadline exceeded", "interval": "7.161709269s"}
2024-06-12T13:25:26.504Z	info	exporterhelper/retry_sender.go:177	Exporting failed. Will retry the request after interval.	{"kind": "exporter", "data_type": "logs", "name": "clickhouselogsexporter", "error": "StatementSend:context deadline exceeded", "interval": "6.523426302s"}
2024-06-12T13:25:26.536Z	info	exporterhelper/retry_sender.go:177	Exporting failed. Will retry the request after interval.	{"kind": "exporter", "data_type": "logs", "name": "clickhouselogsexporter", "error": "StatementSend:context deadline exceeded", "interval": "4.419607822s"}
2024-06-12T13:25:26.753Z	info	exporterhelper/retry_sender.go:177	Exporting failed. Will retry the request after interval.	{"kind": "exporter", "data_type": "logs", "name": "clickhouselogsexporter", "error": "StatementSend:context deadline exceeded", "interval": "6.233919422s"}
2024-06-12T13:25:26.763Z	info	exporterhelper/retry_sender.go:177	Exporting failed. Will retry the request after interval.	{"kind": "exporter", "data_type": "logs", "name": "clickhouselogsexporter", "error": "StatementSend:context deadline exceeded", "interval": "2.67037973s"}
2024-06-12T13:25:26.769Z	info	exporterhelper/retry_sender.go:177	Exporting failed. Will retry the request after interval.	{"kind": "exporter", "data_type": "logs", "name": "clickhouselogsexporter", "error": "StatementSend:context deadline exceeded", "interval": "5.126252319s"}
2024-06-12T13:25:26.958Z	info	exporterhelper/retry_sender.go:177	Exporting failed. Will retry the request after interval.	{"kind": "exporter", "data_type": "logs", "name": "clickhouselogsexporter", "error": "StatementSend:context deadline exceeded", "interval": "4.857335267s"}
2024-06-12T13:25:28.494Z	info	exporterhelper/retry_sender.go:177	Exporting failed. Will retry the request after interval.	{"kind": "exporter", "data_type": "logs", "name": "clickhouselogsexporter", "error": "StatementSend:context deadline exceeded", "interval": "4.344819049s"}
Any help would be much appreciated.
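The repeated `StatementSend:context deadline exceeded` errors suggest the ClickHouse exporter can't flush, so data backs up in the collector's memory until the container hits its limit. One common mitigation (a sketch; the limits below are placeholders that would need tuning to the container's actual memory allowance) is to put the `memory_limiter` processor first in each pipeline so the collector starts refusing data before the kernel kills it:

    processors:
      memory_limiter:
        check_interval: 1s
        limit_mib: 1500       # hard limit in MiB; set below the container's memory cap
        spike_limit_mib: 300  # extra headroom reserved for short bursts
      batch: {}

    service:
      pipelines:
        logs:
          receivers: [otlp]
          processors: [memory_limiter, batch]
          exporters: [clickhouselogsexporter]

That only treats the symptom, though; the underlying ClickHouse timeouts (slow or under-resourced ClickHouse) are worth investigating separately.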