# support
s
Hi Team, I have a service which generates JSON logs, but sometimes it generates pretty huge message content, and SigNoz is not parsing that message into one single log but instead into multiple logs. How do I address this?
n
s
@Kashyap Rajendra Kathrani how big is our log line?
k
@Saad Ansari @nitya-signoz about ~14200 characters each
s
it's getting split into 6-7 log lines
@nitya-signoz I don't see any `max_log_line` in the link you shared above
n
Sorry, it's `max_log_size`
s
ok
What value do you recommend for this?
n
It will depend on the data you send; try setting it to a higher number. Also, if there are newlines in your JSON, it may split into multiple lines; in that case you will have to add a multiline config.
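For reference, a minimal sketch of what raising that limit looks like on the filelog receiver (standard OpenTelemetry filelog receiver schema; the value is illustrative, not a recommendation):

```yaml
# Sketch only: raising the per-entry size limit on the filelog receiver.
# 4MiB is an arbitrary illustrative value; the upstream default is 1MiB.
receivers:
  filelog/k8s:
    max_log_size: 4MiB
```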
s
ok
Can you help me with where exactly I need to add this config?
I am using this Helm chart to deploy SigNoz: https://github.com/SigNoz/charts/tree/main/charts/signoz
I am confused as to where the config should go.
n
@Prashant Shahi can you help with the doc for writing override-values.yaml?
Ahh, I don't think override will work here; you will have to modify this locally and add the max_log_size: https://github.com/SigNoz/charts/blob/cfc3248dbba022c7c0c32d4a2af4c6bf517239cd/charts/k8s-infra/templates/_config.tpl#L202
s
```yaml
receivers:
  filelog/k8s:
    max_log_size: 5MiB
```
Is this the correct way? @nitya-signoz
@nitya-signoz the default `max_log_size` seems to be 1MiB, and our entire log entry is only around 80 KB.
Still, the entire log line is getting split in SigNoz.
I cannot manually edit the _config.tpl, since that will break our deployment automation. Can you tell me if there is a way to do it via the helm upgrade command?
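For reference, the usual Helm pattern is to put changes in an override-values.yaml and apply it with `helm upgrade -f`; whether this chart actually merges a user-supplied filelog receiver section over the one generated by _config.tpl is the open question discussed below, so the key nesting here is an assumption to verify against the chart's values.yaml:

```yaml
# override-values.yaml (sketch; the nesting under otelAgent.config is an
# assumption about the k8s-infra chart, not a confirmed interface).
# Applied with: helm upgrade --install signoz signoz/signoz -n platform -f override-values.yaml
k8s-infra:
  otelAgent:
    config:
      receivers:
        filelog/k8s:
          max_log_size: 5MiB
```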
@Prashant Shahi can you help here ?
This is the error I see in the `k8s-infra-otel-agent` pod:
```
2023-11-29T05:46:30.585Z	info	fileconsumer/file.go:182	Started watching file	{"kind": "receiver", "name": "filelog/k8s", "data_type": "logs", "component": "fileconsumer", "path": "/var/log/pods/gg-sdo_gg-sdo-celery-5d9fd5cbdd-rmbkt_ecad1502-1752-4707-a057-36c74ce0a353/gg-sdo/13.log"}
2023-11-29T06:03:30.585Z	info	fileconsumer/file.go:182	Started watching file	{"kind": "receiver", "name": "filelog/k8s", "data_type": "logs", "component": "fileconsumer", "path": "/var/log/pods/gg-be-core_gg-be-core-8c9f8b856-zpnjq_221bf4ce-0555-4ad9-a7d2-c8206b2f7a69/gg-be-core/0.log"}
2023-11-29T06:35:57.384Z	info	fileconsumer/file.go:182	Started watching file	{"kind": "receiver", "name": "filelog/k8s", "data_type": "logs", "component": "fileconsumer", "path": "/var/log/pods/gg-sdo_gg-sdo-celery-5d9fd5cbdd-rmbkt_ecad1502-1752-4707-a057-36c74ce0a353/gg-sdo/14.log"}
2023-11-29T06:53:30.586Z	error	helper/transformer.go:98	Failed to process entry	{"kind": "receiver", "name": "filelog/k8s", "data_type": "logs", "operator_id": "parser-crio", "operator_type": "regex_parser", "error": "regex pattern does not match", "action": "send", "entry": {"observed_timestamp":"2023-11-29T06:53:30.586150827Z","timestamp":"0001-01-01T00:00:00Z","body":"ive\",\"suspendedReason\":\"From admin dashb","attributes":{"log.file.path":"/var/log/pods/gg-employer-api-bff_gg-employer-api-bff-67f5f85b69-g2rkr_0222b537-cf40-435d-b6b2-6e7af22878ab/gg-employer-api-bff/0.log"},"severity":0,"scope_name":""}}
github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza/operator/helper.(*TransformerOperator).HandleEntryError
	github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza@v0.88.0/operator/helper/transformer.go:98
github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza/operator/helper.(*ParserOperator).ParseWith
	github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza@v0.88.0/operator/helper/parser.go:140
github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza/operator/helper.(*ParserOperator).ProcessWithCallback
	github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza@v0.88.0/operator/helper/parser.go:112
github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza/operator/helper.(*ParserOperator).ProcessWith
	github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza@v0.88.0/operator/helper/parser.go:98
github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza/operator/parser/regex.(*Parser).Process
	github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza@v0.88.0/operator/parser/regex/regex.go:99
github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza/operator/transformer/router.(*Transformer).Process
	github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza@v0.88.0/operator/transformer/router/router.go:130
github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza/operator/helper.(*WriterOperator).Write
	github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza@v0.88.0/operator/helper/writer.go:53
github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza/operator/input/file.(*Input).emit
	github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza@v0.88.0/operator/input/file/file.go:52
github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza/fileconsumer/internal/reader.(*Reader).ReadToEnd
	github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza@v0.88.0/fileconsumer/internal/reader/reader.go:106
github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza/fileconsumer.(*Manager).consume.func1
	github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza@v0.88.0/fileconsumer/file.go:156
```
Because of this, the log message is getting split into smaller logs, even though the size of the log is less than 1MiB (which is the default `max_log_size` value as per the docs you shared).
@nitya-signoz
n
Are you using EKS or something else?
From the error log I can see the body is broken at `ive\",\"suspendedReason\":\"From admin dashb`
By default, lines are split based on newlines. You will have to change the multiline config if there are newline characters because of which the lines are getting split: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/receiver/filelogreceiver/README.md#multiline-configuration
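A sketch of the multiline block from that README; the regex is an assumption (it treats any line starting with `{` as the beginning of a new entry) and would need to match the actual shape of the JSON logs:

```yaml
# Sketch of the filelog receiver's multiline option (see the linked README).
# line_start_pattern marks where a new entry begins; '^\{' is only an
# illustration for JSON-object-per-line logs.
receivers:
  filelog/k8s:
    multiline:
      line_start_pattern: '^\{'
```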
I guess you will have to override the entire filelog receiver to test out those changes. To override something in the k8s-infra chart, it's done like this: https://signoz.io/docs/userguide/collect_kubernetes_pod_logs/#steps-to-filterexclude-logs-collection. But I'm not sure if we can override the entire receiver; @Prashant Shahi can help. For reference, this is the config that is finally generated by the chart for the collector: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/examples/kubernetes/otel-collector-config.yml
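The override shape shown in that doc looks roughly like this; it overrides the chart's logs-collection presets (a blacklist in the doc's example, with an illustrative namespace here) and does not by itself replace the whole filelog receiver:

```yaml
# Rough shape of an override-values.yaml for the k8s-infra presets, per the
# linked doc. The namespace name is illustrative.
k8s-infra:
  presets:
    logsCollection:
      blacklist:
        enabled: true
        namespaces:
          - kube-system
```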
s
I am using EKS.
There are no newlines; it's just one big JSON log line, but SigNoz is still breaking it into multiple lines.
n
So, as I mentioned, there may be two reasons your lines are broken: either it's newlines, or it's exceeding the max log size. Can you run SigNoz on Docker locally, print the same log lines, and try to collect them? Also, share a GitHub repo with us; it will be easier to help. Example: https://github.com/SigNoz/nginx-logs-parsing
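For that local reproduction, one way to isolate the receiver behaviour is a standalone collector config that reads a file containing a copy of the big JSON line and prints what it tokenizes; the file path and exporter choice below are illustrative:

```yaml
# Minimal standalone OpenTelemetry Collector config to test how one large
# JSON line is split, independent of the Helm chart.
# /tmp/sample.log is a hypothetical file holding a copy of the log line.
receivers:
  filelog/test:
    include: [ /tmp/sample.log ]
    start_at: beginning
    max_log_size: 1MiB        # the default; raise it to see whether splitting changes
exporters:
  logging:                    # prints received entries to stdout
    verbosity: detailed
service:
  pipelines:
    logs:
      receivers: [filelog/test]
      exporters: [logging]
```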