Mircea Colonescu

over 1 year ago
Hi! Running into an issue with pipelines. I'm trying to parse a JSON field `labels` from a log such as this one:
{
  "body": "{\"raw_log\":\"{\\\"level\\\":\\\"info\\\",\\\"module\\\":\\\"server\\\",\\\"module\\\":\\\"txindex\\\",\\\"height\\\":28557212,\\\"time\\\":\\\"2024-09-12T16:13:47-04:00\\\",\\\"message\\\":\\\"indexed block events\\\"}\"}",
  "id": "2lz9RKpucUEwudqQjp7LieQ9U4W",
  "timestamp": 1726172028356,
  "attributes": {
    "com.hashicorp.nomad.alloc_id": "71f80e7a-31d8-9a51-d5c5-9ad19783d6a5",
    "container_name": "/chain-binary-71f80e7a-31d8-9a51-d5c5-9ad19783d6a5",
    "labels": "{\"com.hashicorp.nomad.alloc_id\":\"71f80e7a-31d8-9a51-d5c5-9ad19783d6a5\"}",
    "level": "info",
    "message": "indexed block events",
    "module": "txindex",
    "nomad_job_name": "testnet-validator",
    "time": "2024-09-12T16:13:47-04:00"
  },
  "resources": {},
  "severity_text": "",
  "severity_number": 0,
  "trace_id": "",
  "span_id": "",
  "trace_flags": 0
}
The preview in the frontend works as expected. When I save the pipeline, however, it does not work, and I see these errors in the collector logs:
2024-09-12T20:16:29.396Z	error	helper/transformer.go:102	Failed to process entry	{"kind": "processor", "name": "logstransform/pipeline_Test", "pipeline": "logs", "operator_id": "4c9ebbab-d8b1-4ecb-9e07-c42459db68ab", "operator_type": "json_parser", "error": "running if expr: interface conversion: interface {} is map[string]interface {}, not string (1:48)\n | attributes?.labels != nil && attributes.labels matches \"^\\\\s*{.*}\\\\s*$\"\n | ...............................................^", "action": "send"}
github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza/operator/helper.(*TransformerOperator).HandleEntryError
	/home/runner/go/pkg/mod/github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza@v0.102.0/operator/helper/transformer.go:102
github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza/operator/helper.(*ParserOperator).ProcessWithCallback
	/home/runner/go/pkg/mod/github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza@v0.102.0/operator/helper/parser.go:105
github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza/operator/helper.(*ParserOperator).ProcessWith
	/home/runner/go/pkg/mod/github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza@v0.102.0/operator/helper/parser.go:98
github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza/operator/parser/json.(*Parser).Process
	/home/runner/go/pkg/mod/github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza@v0.102.0/operator/parser/json/parser.go:24
github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza/operator/transformer/router.(*Transformer).Process
	/home/runner/go/pkg/mod/github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza@v0.102.0/operator/transformer/router/transformer.go:57
github.com/open-telemetry/opentelemetry-collector-contrib/processor/logstransformprocessor.(*logsTransformProcessor).converterLoop
	/home/runner/go/pkg/mod/github.com/open-telemetry/opentelemetry-collector-contrib/processor/logstransformprocessor@v0.102.0/processor.go:213
Any idea why this might be happening? The pipeline does execute the next step after the failed JSON parsing.
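For reference, here is a minimal sketch of what I understand the saved pipeline to generate on the collector side, reconstructed from the error message above; the processor name, operator id, if expression, and on_error action are taken from the log, while the parse_from/parse_to targets are assumptions on my part:
processors:
  logstransform/pipeline_Test:
    operators:
      # The guard below is what fails: once attributes.labels has already been
      # parsed into a map, `matches` receives a map instead of a string and the
      # expression errors with the interface-conversion message.
      - type: json_parser
        id: 4c9ebbab-d8b1-4ecb-9e07-c42459db68ab
        if: attributes?.labels != nil && attributes.labels matches "^\\s*{.*}\\s*$"
        parse_from: attributes.labels
        parse_to: attributes
        on_error: send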
Abdulmalik Salawu

about 1 year ago
Please, has anyone exposed their otel-collector through an nginx ingress before? I need help; here are my current settings:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: signoz-otel-collector-grpc-ingress
  namespace: ops
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/grpc-backend: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "GRPCS"
    nginx.ingress.kubernetes.io/proxy-buffer-size: "128k"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "300"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "300"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "300"
    nginx.ingress.kubernetes.io/upstream-keepalive-timeout: "600"
    nginx.ingress.kubernetes.io/upstream-keepalive-requests: "100"
spec:
  rules:
  - host: otelcollector.domain.com
    http:
      paths:
      - path: /
        pathType: ImplementationSpecific
        backend:
          service:
            name: signoz-otel-collector
            port:
              number: 4317
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: signoz-otel-collector-http-ingress
  namespace: ops
  annotations:
    ingressClassName: nginx
    nginx.ingress.kubernetes.io/backend-protocol: HTTP
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "300"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "300"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "300"
spec:
  rules:
  - host: otelcollector-http.domain.com
    http:
      paths:
      - path: /
        pathType: ImplementationSpecific
        backend:
          service:
            name: signoz-otel-collector
            port:
              number: 4318
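For completeness, the client-side exporter config I have been testing against this looks roughly like the sketch below; the hostnames are just the example domains from the Ingress resources above, and the TLS settings assume the certificates are terminated at the nginx ingress:
exporters:
  otlp:
    # gRPC OTLP through the first Ingress (GRPCS backend, service port 4317)
    endpoint: otelcollector.domain.com:443
    tls:
      insecure: false
  otlphttp:
    # HTTP OTLP through the second Ingress (service port 4318)
    endpoint: https://otelcollector-http.domain.com
    tls:
      insecure: false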
Prathap ch

over 1 year ago
Hello team, could you please assist me with an issue I'm facing? I have installed and configured SigNoz on an AWS EKS cluster. It is working as expected with the OTEL collector endpoint pointing to the default service/pod endpoint like this:
otlp:
  endpoint: http://signoz-otel-collector.platform.svc.cluster.local:4317
Our team has requested that we externalize this endpoint using Ingress, so we can use the same endpoint across different clusters instead of configuring OTEL in each cluster. We have installed the AWS Load Balancer Controller to use as the ingress controller. I created an Ingress resource using the following specification in the Helm values file:
otelCollector:
  ingress:
    # -- Enable ingress for otelCollector
    enabled: true
    # -- Annotations to otelCollector Ingress
    annotations:
      alb.ingress.kubernetes.io/load-balancer-name: signoz-collector-lb
      kubernetes.io/ingress.class: alb
      alb.ingress.kubernetes.io/scheme: internet-facing
      alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":80}, {"HTTPS":443}]'
      alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-2:XXXXXXXXX:certificate/XXXXXXXXXXX
      alb.ingress.kubernetes.io/ip-address-type: ipv4
      alb.ingress.kubernetes.io/target-type: ip
    # -- Frontend Ingress Host names with their path details
    hosts:
      - host: <HOST_URL>
        paths:
          - path: /
            pathType: ImplementationSpecific
            port: 4318
I was able to create the Load Balancer, but it is failing the health check, which prevents access to the endpoint from outside. Can someone please check and help me resolve this issue? I am using EKS version 1.27 and the Helm chart version is "signoz-0.44.0".
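In case it is useful, these are the extra health-check annotations I have been experimenting with on the same Ingress; this is only a sketch, since it assumes the collector's health_check extension is enabled and its default port 13133 is exposed on the signoz-otel-collector service, which I have not yet confirmed:
      alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
      # 13133 is the default health_check extension port (assumption for my setup)
      alb.ingress.kubernetes.io/healthcheck-port: "13133"
      alb.ingress.kubernetes.io/healthcheck-path: /
      alb.ingress.kubernetes.io/success-codes: "200"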