# support
k
I am using the debug exporter to see why SigNoz isn't logging anything from my signoz-collector Docker container on a remote host. I'm getting the following output; does this mean it's sending metrics or not? I am not sure... anyone know?
Copy code
collector  | 2024-12-06T19:08:32.119Z info  MetricsExporter {"kind": "exporter", "data_type": "metrics", "name": "logging", "resource metrics": 1, "metrics": 24, "data points": 6386}
collector  | 2024-12-06T19:08:32.136Z info  MetricsExporter {"kind": "exporter", "data_type": "metrics", "name": "logging", "resource metrics": 164, "metrics": 495, "data points": 1015}
collector  | 2024-12-06T19:08:42.093Z info  MetricsExporter {"kind": "exporter", "data_type": "metrics", "name": "logging", "resource metrics": 1, "metrics": 24, "data points": 6386}
collector  | 2024-12-06T19:08:52.098Z info  MetricsExporter {"kind": "exporter", "data_type": "metrics", "name": "logging", "resource metrics": 1, "metrics": 24, "data points": 6386}
collector  | 2024-12-06T19:09:02.103Z info  MetricsExporter {"kind": "exporter", "data_type": "metrics", "name": "logging", "resource metrics": 164, "metrics": 495, "data points": 1015}
collector  | 2024-12-06T19:09:02.105Z info  MetricsExporter {"kind": "exporter", "data_type": "metrics", "name": "logging", "resource metrics": 1, "metrics": 24, "data points": 6386}
collector  | 2024-12-06T19:09:12.093Z info  MetricsExporter {"kind": "exporter", "data_type": "metrics", "name": "logging", "resource metrics": 1, "metrics": 24, "data points": 6386}
s
It means the collector is receiving data, but it doesn't necessarily mean it is sending data. Did you configure it to send somewhere? When you ask any question related to the collector, please share the config and the collector version.
k
Copy code
receivers:
  hostmetrics:
    collection_interval: 30s
    root_path: /hostfs
    scrapers:
      cpu: {}
      disk: {}
      load: {}
      filesystem: {}
      memory: {}
      network: {}
      paging: {}
      process:
        mute_process_name_error: true
       # mute_process_user_error: true
        mute_process_exe_error: true
        mute_process_io_error: true
      processes: {}
  filelog/XX:
    include: [  "/hostfs/home/XX/run/*/log/terminal.txt" ]
    start_at: end
    include_file_path: true
  filelog/containers:
    include: [  "/hostfs/var/lib/docker/containers//.log" ]
    start_at: end
    include_file_path: true
    include_file_name: false
    operators:
      # Find out which format is used by docker
      - type: router
        id: get-format
        routes:
          - output: parser-docker
            expr: 'body matches "^\\{"'
      # Parse Docker format
      - type: json_parser
        id: parser-docker
        output: extract_metadata_from_filepath
        timestamp:
          parse_from: attributes.time
          layout: '%Y-%m-%dT%H:%M:%S.%LZ'

      # Extract metadata from file path
      - type: regex_parser
        id: extract_metadata_from_filepath
        regex: '^.*containers/(?P<container_id>[^_]+)/.*log$'
        parse_from: attributes["log.file.path"]
        output: parse_body
      - type: move
        id: parse_body
        from: attributes.log
        to: body
        output: add_source
      - type: add
        id: add_source
        field: resource["source"]
        value: "docker"
  filelog/syslog:
    include: [  "/hostfs/var/log/*log" ]
    start_at: end
    include_file_path: true
    include_file_name: false
processors:
  batch:
    send_batch_size: 1000
    timeout: 10s
  resourcedetection:
    detectors: [env, system]
    timeout: 2s
    system:
      hostname_sources: [os] # alternatively, use [dns,os] for setting FQDN as host.name and os as fallback
extensions:
  health_check: {}
  zpages: {}
  bearertokenauth:
    token: "XXXXX"
exporters:
  otlphttp:
    endpoint: "https://XXX.YYY.com:443"
    auth:
      authenticator: bearertokenauth
  otlp:
    endpoint: "X.X.X.X:4317"
    tls:
      insecure: true 
      insecure_skip_verify: true
    auth:
      authenticator: bearertokenauth
  logging:
    # verbosity of the logging export: detailed, normal, basic
    verbosity: normal
service:
  telemetry:
    metrics:
      address: 0.0.0.0:8888
  extensions: [health_check, zpages, bearertokenauth]
  pipelines:
    metrics/internal:
      receivers: [hostmetrics]
      processors: [resourcedetection, batch]
      exporters: [ otlphttp, logging]
    logs:
      receivers: [filelog/syslog, filelog/containers, filelog/XX]
      processors: [batch]
      exporters: [ otlphttp ]
@Srikanth Chekuri thank you for the info. Here is the config; the version is otelcol-contrib 0.88.0, and I have also tested it with the Docker image signoz/signoz-otel-collector:0.111.14
SigNoz host relevant config:
Copy code
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
        include_metadata: true
        auth:
          request_params:
          - token
          authenticator: bearertokenauth
        cors:
          allowed_origins:
            - "https://*"
            - "http://*"
          max_age: 7200
        compression_algorithms: ["", "gzip"]

extensions:
  health_check:
    endpoint: 0.0.0.0:13133
  zpages:
    endpoint: 0.0.0.0:55679
  pprof:
    endpoint: 0.0.0.0:1777
  bearertokenauth:
    token: "XXXXXX"

exporters:
  clickhousetraces:
    datasource: tcp://clickhouse:9000/signoz_traces
    low_cardinal_exception_grouping: ${env:LOW_CARDINAL_EXCEPTION_GROUPING}
  clickhousemetricswrite:
    endpoint: tcp://clickhouse:9000/signoz_metrics
    resource_to_telemetry_conversion:
      enabled: true
  clickhousemetricswrite/prometheus:
    endpoint: tcp://clickhouse:9000/signoz_metrics
  clickhouselogsexporter:
    dsn: tcp://clickhouse:9000/signoz_logs
    timeout: 10s
    use_new_schema: true

service:
  telemetry:
    logs:
      encoding: json
    metrics:
      address: 0.0.0.0:8888
  extensions:
    - health_check
    - zpages
    - pprof
    - bearertokenauth
  pipelines:
    traces:
      receivers: [jaeger, otlp]
      processors: [signozspanmetrics/cumulative, signozspanmetrics/delta, batch]
      exporters: [clickhousetraces]
    metrics:
      receivers: [otlp, httpcheck]
      processors: [resourcedetection/ec2, resourcedetection/docker, resourcedetection/env, resourcedetection/system, batch]
      exporters: [clickhousemetricswrite]
    metrics/generic:
      receivers: [hostmetrics]
      processors: [resourcedetection/ec2, resourcedetection/docker, resourcedetection/env, resourcedetection/system, batch]
      exporters: [clickhousemetricswrite]
    metrics/prometheus:
      receivers: [prometheus]
      processors: [batch]
      exporters: [clickhousemetricswrite/prometheus]
    metrics/hostmetrics:
      receivers: [hostmetrics]
      processors: [resourcedetection]
      exporters: [clickhousemetricswrite]
    logs:
      receivers: [otlp, syslog, tcplog/docker]
      processors: [batch]
      exporters: [clickhouselogsexporter]
s
This should be sending host metrics and filelogs to the main SigNoz installation.
k
it should be, but sadly there are no entries for this host on the SigNoz side
if there's anything I can use to debug this further, I would appreciate the info
s
but sadly there are no entries for this host on the signoz side
How are you confirming this? What do you expect and what is there?
k
I am sending logs to SigNoz three ways:
1. logspout: everything is OK, logs are in SigNoz (doing this for the host named test001)
2. rsyslog: everything is OK, logs are in SigNoz (also for test001)
3. otel-collector via OTLP: no logs in SigNoz and no host entry in SigNoz (doing this for the host named test005)
So when I search with hostname=test005 I get no log lines, but there are many log lines for test001.
s
You should have clarified that earlier. All along I was talking about metrics, because you said "does this mean its sending metrics or not? i am not sure.... anyone know?" There are no log lines getting collected; please re-check your config.
k
sorry, I was trying to send both metrics and logs from host test005 (with the otel collector), and I'm getting nothing from this specific host. In the infra monitoring this host isn't listed; in the logs this host isn't listed. At this point I'm just trying to get this host to show anything on the SigNoz side.
if I use rsyslog, I am positive I will get results on the SigNoz side.
s
Copy code
metrics:
      receivers: [otlp, httpcheck]
      processors: [resourcedetection/ec2, resourcedetection/docker, resourcedetection/env, resourcedetection/system, batch]
      exporters: [clickhousemetricswrite]
Why do you have resource detection for OTLP data that already had resource detection applied on the test host?
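(A sketch of what that SigNoz-side pipeline could look like with the redundant detectors dropped, since the test host's collector already runs resourcedetection; illustrative only, not verified against this deployment:)

```yaml
# SigNoz-side metrics pipeline, trusting the resource attributes
# already set by the sending collector on the test host
metrics:
  receivers: [otlp, httpcheck]
  processors: [batch]
  exporters: [clickhousemetricswrite]
```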
k
mainly because I don't know better.
s
Copy code
include: [  "/hostfs/var/lib/docker/containers//.log" ]
Is this correct?
k
most likely an issue with the copy-paste into Slack
there are asterisks in there in reality
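(For reference, the intended glob was probably something like the following; the exact pattern is an assumption based on the standard Docker container log layout, with only the asterisks Slack stripped restored:)

```yaml
filelog/containers:
  # one JSON-lines log file per container under its container-ID directory
  include: [ "/hostfs/var/lib/docker/containers/*/*.log" ]
```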
I have done my best to find a doc that explains how to set up a host with logs/metrics going to a SigNoz install over HTTP. I have not been able to find anything that shows a complete set of configs or instructions for server/client, so I have scoured the docs and put together what I could. So I know this issue is 99.9% due to my lack of knowledge.
s
ok, are you running the collector inside Docker? where is /hostfs coming from?
k
I am running the signoz install and collector inside docker yes
s
I meant, are you running the test host collector inside docker?
k
yes
I have tried with a direct binary also, but my main install will be via Docker if I can get it working
so that's how I am trying to make it work currently
s
ok, where is /hostfs coming from? are you mounting it?
k
Copy code
volumes:
      - ./collector-config.yaml:/etc/otel-collector-config.yaml
      - /:/hostfs:ro
s
I am not sure what might be the issue in this case. I see two possible issues: 1. using Docker for test-host data collection, and 2. overriding the resource detection on the SigNoz side when the test host already runs a resource detector.
k
1: ok, I will test with the binary only for now until I know it's working, then see if I can make Docker work as an extended goal. 2: I will clean that up and see if that makes a difference.
also, I found this page: https://signoz.io/docs/userguide/send-logs-http/. I am going to try sending a curl request and see if I can get it to go through. Will update this in just a bit once I test; hopefully it does what I think it does. Thank you for all your time, I sincerely appreciate it.
that link seems to be for a different receiver. I tested with the binary only and it made no difference versus running it in Docker. I will clean up resource detection at some point when I try again, but I think I will stop my efforts to make this work for now.
s
The metrics are definitely getting sent as seen in the debug exporter, so if you are not seeing host, then it is definitely resourcedetection issue
k
ok, I cleaned up my config completely to just the bare basics: one single receiver, one single exporter, and one processor (batch). That got my logs into the SigNoz instance, so I have made progress so far.
now I will start adding things back in one by one and see where I break things. I'll also clean up the resource detection and see if I can fix it.
the logs I'm sending are just missing the hostname, so I need to see how to add that to the logs.
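(One way to get the hostname onto the logs is to reuse the resourcedetection processor from the earlier client config in the logs pipeline as well. A sketch; the receiver and exporter names are taken from that earlier config and may differ in the stripped-down version:)

```yaml
processors:
  resourcedetection:
    detectors: [env, system]
    system:
      hostname_sources: [os]  # sets host.name from the OS hostname
service:
  pipelines:
    logs:
      receivers: [filelog/syslog]
      processors: [resourcedetection, batch]
      exporters: [otlphttp]
```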
thanks for sticking with me 🙂 appreciate it all
s
glad to hear that your logs are now available.
k
so I finally had time today to work further on this, and I was able to fix hostmetrics. The config on the SigNoz side was blocking it; simplifying it fixed the issue. So: logs = working, metrics = working. Now I get to actually expand and fix the logs so they are useful.