# support
n
One way is to send everything to a single OTel collector agent, which can forward it to both SigNoz deployments
o
ok, what would that configuration look like?
I have my own local OTel collector running for log handling. I could send traces to it, and it could in turn forward them to both SigNoz OTel deployments. But what would the OTel YAML configuration look like for this? The config I have for logging:
```yaml
receivers:
  filelog/containers:
    include: [  "/var/lib/docker/volumes/*/_data/*.log" ]
    start_at: end
    include_file_path: false
    include_file_name: false
    operators:
      - type: json_parser
        timestamp:
          parse_from: attributes.timestamp
          layout: '%Y-%m-%d %H:%M:%S'
      - type: move
        from: attributes.message
        to: body
      - type: move
        from: attributes.label
        to: attributes.app
      - type: remove
        field: attributes.timestamp
processors:
  batch:
    send_batch_size: 10000
    send_batch_max_size: 11000
    timeout: 10s
exporters:
  otlp/us0:
    endpoint: http://123.123.123.123:4317
    tls:
      insecure: true
  otlp/de0:
    endpoint: http://123.123.123.123:4317
    tls:
      insecure: true
service:
  pipelines:
    logs:
      receivers: [filelog/containers]
      processors: [batch]
      exporters: [ otlp/us0, otlp/de0 ]
```
n
It will be similar to the above: you will have a traces pipeline where you receive traces and export them to the two different endpoints
o
Yes, I understand that, but I need guidance on how to do this configuration as I'm new to OpenTelemetry and SigNoz. Could you provide some example code for adapting the above config to listen for and forward traces to SigNoz?
n
It will be like this:
```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
  filelog/containers:
    include: [  "/var/lib/docker/volumes/*/_data/*.log" ]
    start_at: end
    include_file_path: false
    include_file_name: false
    operators:
      - type: json_parser
        timestamp:
          parse_from: attributes.timestamp
          layout: '%Y-%m-%d %H:%M:%S'
      - type: move
        from: attributes.message
        to: body
      - type: move
        from: attributes.label
        to: attributes.app
      - type: remove
        field: attributes.timestamp
processors:
  batch:
    send_batch_size: 10000
    send_batch_max_size: 11000
    timeout: 10s
exporters:
  otlp/us0:
    endpoint: http://123.123.123.123:4317
    tls:
      insecure: true
  otlp/de0:
    endpoint: http://123.123.123.123:4317
    tls:
      insecure: true
service:
  pipelines:
    logs:
      receivers: [filelog/containers]
      processors: [batch]
      exporters: [ otlp/us0, otlp/de0 ]
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [ otlp/us0, otlp/de0 ]
```
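Your applications then send traces to this local collector on port 4317 (gRPC) or 4318 (HTTP). If the apps run under Docker Compose, it could look roughly like this (the my-app service name, image, and collector address are placeholders, not part of your setup):
```yaml
# Sketch of a compose service pointing its OpenTelemetry SDK at the local
# collector above; OTEL_EXPORTER_OTLP_ENDPOINT is the standard SDK variable.
services:
  my-app:                         # hypothetical service name
    image: my-app:latest          # placeholder image
    environment:
      # address where the local collector's otlp receiver is reachable
      - OTEL_EXPORTER_OTLP_ENDPOINT=http://<collector-host>:4317
```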
o
ok thanks, but I would need different exporters, right? Traces shouldn't go to `http://123.123.123.123:4317` as well, should they?
n
The same exporters will work. OTLP handles all signal types, so the same endpoint accepts logs, traces, and metrics.
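If you ever did want traces to go to a different place than logs, you could define an extra exporter and reference it only in the traces pipeline. A rough sketch (the otlp/traces name and address are placeholders):
```yaml
exporters:
  otlp/traces:                              # hypothetical extra exporter
    endpoint: http://231.231.231.231:4317   # placeholder address
    tls:
      insecure: true
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/traces]              # only the traces pipeline uses it
```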
o
Woohoo 🕺 ... success, thanks @nitya-signoz
n
That's great 🎉