# support
j
Hello all 😁 I'm having some trouble setting up logspout on a different host, following https://signoz.io/docs/userguide/collect_docker_logs/. I have two servers: one is running SigNoz (Machine A) and the other (Machine B) is running my services in Docker containers and is where I want to run logspout. Settings: (Machine A) docker-compose.yaml (snippet of otel-collector)
otel-collector:
    ports:
      # - "1777:1777"     # pprof extension
      - "4317:4317" # OTLP gRPC receiver
      - "4318:4318" # OTLP HTTP receiver
      - "2255:2255" # <------ Port opened for logspout comm
(Machine B)
docker run --net=host --rm --name="logspout" \
     --volume=/var/run/docker.sock:/var/run/docker.sock \
     gliderlabs/logspout \
     syslog+tcp://<machine_a_ip>:2255
---------------------- When starting logspout on (Machine B), the container fails with the output:
2025/02/05 14:27:32 !! dial tcp <machine_a_ip>:2255: connect: connection refused
Running netcat inside (Machine B):
nc -zv <machine_a_ip> 2255
nc: connect to <machine_a_ip> port 2255 (tcp) failed: Connection refused
OK, expected, since logspout is already failing... Running netcat inside (Machine A):
nc -zv 0.0.0.0 2255
Connection to 0.0.0.0 2255 port [tcp/*] succeeded!
So from this I understand the port is listening on (Machine A), but something is still blocking connections to 2255 from outside. Running tcpdump on Machine A (the one running SigNoz) while starting logspout from Machine B:
sudo tcpdump -i any port 2255
I get:
14:47:52.799030 eth0  In  IP static.****.your-server.34632 > static.<machine_a_ip>.2255: Flags [S], seq 2710839612, win 64240, options [mss 1460,sackOK,TS val 533319106 ecr 0,nop,wscale 7], length 0
14:47:52.799194 br-e99af481de3a Out IP static.your-server.34632 > 172.18.0.4.2255: Flags [S], seq 2710839612, win 64240, options [mss 1460,sackOK,TS val 533319106 ecr 0,nop,wscale 7], length 0
14:47:52.799207 veth62369ab Out IP static.your-server.34632 > 172.18.0.4.2255: Flags [S], seq 2710839612, win 64240, options [mss 1460,sackOK,TS val 533319106 ecr 0,nop,wscale 7], length 0
14:47:52.799254 veth62369ab P   IP 172.18.0.4.2255 > static.****.your-server.34632: Flags [R.], seq 0, ack 2710839613, win 0, length 0
14:47:52.799254 br-e99af481de3a In  IP 172.18.0.4.2255 > static.****.your-server.34632: Flags [R.], seq 0, ack 1, win 0, length 0
14:47:52.799279 eth0  Out IP static.<machine_a_ip>.2255 > static.****.your-server.34632: Flags [R.], seq 0, ack 2710839613, win 0, length 0
So the TCP SYN is reaching Machine A, but the connection is being reset by the container. Can someone tell me what I am missing? Thanks for the help in advance 😁
n
what is the otel-collector config on Machine A?
j
Hello @nitya-signoz, I have tried messing with the otel config; currently I have this:
receivers:
  tcplog/docker: #Added this
    listen_address: "0.0.0.0:2255"
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
  prometheus:
    config:
      global:
        scrape_interval: 60s
      scrape_configs:
        - job_name: otel-collector
          static_configs:
          - targets:
              - localhost:8888
            labels:
              job_name: otel-collector
processors:
  batch:
    send_batch_size: 10000
    send_batch_max_size: 11000
    timeout: 10s
  resourcedetection:
    # Using OTEL_RESOURCE_ATTRIBUTES envvar, env detector adds custom labels.
    detectors: [env, system]
    timeout: 2s
  signozspanmetrics/delta:
    metrics_exporter: clickhousemetricswrite
    metrics_flush_interval: 60s
    latency_histogram_buckets: [100us, 1ms, 2ms, 6ms, 10ms, 50ms, 100ms, 250ms, 500ms, 1000ms, 1400ms, 2000ms, 5s, 10s, 20s, 40s, 60s ]
    dimensions_cache_size: 100000
    aggregation_temporality: AGGREGATION_TEMPORALITY_DELTA
    enable_exp_histogram: true
    dimensions:
      - name: service.namespace
        default: default
      - name: deployment.environment
        default: default
      # This is added to ensure the uniqueness of the timeseries
      # Otherwise, identical timeseries produced by multiple replicas of
      # collectors result in incorrect APM metrics
      - name: signoz.collector.id
      - name: service.version
      - name: browser.platform
      - name: browser.mobile
      - name: k8s.cluster.name
      - name: k8s.node.name
      - name: k8s.namespace.name
      - name: host.name
      - name: host.type
      - name: container.name
extensions:
  health_check:
    endpoint: 0.0.0.0:13133
  pprof:
    endpoint: 0.0.0.0:1777
exporters:
  clickhousetraces:
    datasource: tcp://clickhouse:9000/signoz_traces
    low_cardinal_exception_grouping: ${env:LOW_CARDINAL_EXCEPTION_GROUPING}
    use_new_schema: true
  clickhousemetricswrite:
    endpoint: tcp://clickhouse:9000/signoz_metrics
    resource_to_telemetry_conversion:
      enabled: true
  clickhousemetricswrite/prometheus:
    endpoint: tcp://clickhouse:9000/signoz_metrics
  clickhousemetricswritev2:
    dsn: tcp://clickhouse:9000/signoz_metrics
  clickhouselogsexporter:
    dsn: tcp://clickhouse:9000/signoz_logs
    timeout: 10s
    use_new_schema: true
  # debug: {}
service:
  telemetry:
    logs:
      encoding: json
    metrics:
      address: 0.0.0.0:8888
  extensions:
    - health_check
    - pprof
  pipelines:
    traces:
      receivers: [otlp]
      processors: [signozspanmetrics/delta, batch]
      exporters: [clickhousetraces]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [clickhousemetricswrite, clickhousemetricswritev2]
    metrics/prometheus:
      receivers: [prometheus]
      processors: [batch]
      exporters: [clickhousemetricswrite/prometheus, clickhousemetricswritev2]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [clickhouselogsexporter]
I added this:
tcplog/docker: #Added this
    listen_address: "0.0.0.0:2255"
With or without it, the result is the same.
n
add tcplog/docker to the logs pipeline:
logs:
      receivers: [otlp, tcplog/docker]
      processors: [batch]
      exporters: [clickhouselogsexporter]
j
Yup, that did it! So it was refusing the connection because the receiver wasn't specified in the otel config?
n
Yeah, since the receiver won't start unless you reference it in a pipeline.
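For anyone hitting the same connection refused error, here is a minimal sketch of the two pieces that have to line up; it uses only the names already shown in the config above, and everything else in the file stays as it was.
receivers:
  tcplog/docker:
    # Must match the port published on the otel-collector container ("2255:2255" in docker-compose)
    listen_address: "0.0.0.0:2255"

service:
  pipelines:
    logs:
      # Defining the receiver under receivers: is not enough on its own;
      # it only starts listening once a pipeline references it here.
      receivers: [otlp, tcplog/docker]
      processors: [batch]
      exporters: [clickhouselogsexporter]
Once the pipeline references the receiver, the collector actually binds 0.0.0.0:2255, and the logspout dial error on Machine B should stop.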