# support
j
Hello, I want to monitor Apache. For this, in otel-collector-config.yaml I have added these lines:

receivers:
  apache:
    endpoint: "http://localhost:80/server-status?auto"

Where can I see the metrics of my Apache server? In Alerts I can't see the metrics.
s
Did you configure the server to enable the status report?
j
Yes, when I open http://localhost:80/server-status?auto in the browser I see the status page, but I don't know where I can see the Apache metrics.
s
If the collector can scrape successfully, the metrics show up when you type the metric name in dashboards or alerts. You may want to look at the collector logs in case there are any issues while getting metrics from Apache.
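For reference, a couple of ways to view the collector logs, assuming a standard SigNoz Docker Compose or Kubernetes install (the container name, namespace, and label below are assumptions and may differ in your deployment):

```shell
# Docker Compose install (container name is an assumption; check `docker ps`)
docker logs -f signoz-otel-collector

# Kubernetes install (namespace and label are assumptions; check your release)
kubectl logs -n platform -l app.kubernetes.io/component=otel-collector -f
```

Scrape failures from the apache receiver show up as error entries in these logs.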
j
So in Apache I don't have to do anything, right? Where can the logs be seen?
s
so in Apache I don't have to do anything, right?
You need to configure
httpd.conf
so the collector can scrape the metrics.
Where can the logs be seen?
In the pod/container for the collector where you configured the apache receiver. Also remember that just adding the receiver doesn't create metrics; you need to add it to the pipeline.
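Enabling the status endpoint in httpd.conf typically looks like this (a sketch; the module path and access policy vary by distribution):

```apache
# Load mod_status (often already enabled; path varies by distro)
LoadModule status_module modules/mod_status.so

<Location "/server-status">
    SetHandler server-status
    # Restrict access; in this thread the collector scrapes from localhost
    Require local
</Location>

# ExtendedStatus On exposes the full set of counters (default On in 2.4)
ExtendedStatus On
```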
j
Apache is configured correctly. When I go to http://localhost/server-status?auto I see this:

localhost
ServerVersion: Apache/2.4.53 (Unix)
ServerMPM: event
Server Built: Mar 29 2022 10:03:39
CurrentTime: Monday, 27-Feb-2023 14:38:14 UTC
RestartTime: Monday, 27-Feb-2023 14:37:34 UTC
ParentServerConfigGeneration: 1
ParentServerMPMGeneration: 0
ServerUptimeSeconds: 40
ServerUptime: 40 seconds
Load1: 4.33
Load5: 3.14
Load15: 1.67
Total Accesses: 1
Total kBytes: 0
Total Duration: 68
CPUUser: .03
CPUSystem: .02
CPUChildrenUser: 0
CPUChildrenSystem: 0
CPULoad: .125
Uptime: 40
ReqPerSec: .025
BytesPerSec: 0
BytesPerReq: 0
DurationPerReq: 68
BusyWorkers: 1
IdleWorkers: 74
Processes: 3
Stopping: 0
BusyWorkers: 1
IdleWorkers: 74
ConnsTotal: 0
ConnsAsyncWriting: 0
ConnsAsyncKeepAlive: 0
ConnsAsyncClosing: 0
Scoreboard:
and on the other hand in otel-collector-config.yaml

I have configured this:

receivers:
  apache:
    endpoint: "http://localhost:80/server-status?auto"
but as I said, I can't see the metrics from the Dashboard or Alerts.
The Apache log looks fine; I see the calls to the status endpoint:

172.17.0.1 - - [26/Feb/2023:13:04:29 +0000] "GET /server-status?auto%22 HTTP/1.1" 200 1168
172.17.0.1 - - [26/Feb/2023:13:04:29 +0000] "GET /server-status?auto%22 HTTP/1.1" 200 1169
172.17.0.1 - - [26/Feb/2023:13:04:29 +0000] "GET /server-status?auto%22 HTTP/1.1" 200 1169
172.17.0.1 - - [26/Feb/2023:13:04:30 +0000] "GET /server-status?auto%22 HTTP/1.1" 200 1169
172.17.0.1 - - [26/Feb/2023:13:04:32 +0000] "GET /server-status?auto HTTP/1.1" 200 1169
172.17.0.1 - - [26/Feb/2023:13:04:34 +0000] "GET /server-status?auto HTTP/1.1" 200 1168
172.17.0.1 - - [26/Feb/2023:13:04:35 +0000] "GET /server-status?auto HTTP/1.1" 200 1169
172.17.0.1 - - [26/Feb/2023:13:04:35 +0000] "GET /server-status?auto HTTP/1.1" 200 1169
172.17.0.1 - - [26/Feb/2023:13:04:35 +0000] "GET /server-status?auto HTTP/1.1" 200 1168
s
Share your full collector configuration
j
receivers:
  filelog/dockercontainers:
    include: [ "/var/lib/docker/containers/*/*.log" ]
    start_at: end
    include_file_path: true
    include_file_name: false
    operators:
      - type: json_parser
        id: parser-docker
        output: extract_metadata_from_filepath
        timestamp:
          parse_from: attributes.time
          layout: '%Y-%m-%dT%H:%M:%S.%LZ'
      - type: regex_parser
        id: extract_metadata_from_filepath
        regex: '^.*containers/(?P<container_id>[^_]+)/.*log$'
        parse_from: attributes["log.file.path"]
        output: parse_body
      - type: move
        id: parse_body
        from: attributes.log
        to: body
        output: time
      - type: remove
        id: time
        field: attributes.time
  opencensus:
    endpoint: 0.0.0.0:55678
  otlp/spanmetrics:
    protocols:
      grpc:
        endpoint: localhost:12345
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
  jaeger:
    protocols:
      grpc:
        endpoint: 0.0.0.0:14250
      thrift_http:
        endpoint: 0.0.0.0:14268
      # thrift_compact:
      #   endpoint: 0.0.0.0:6831
      # thrift_binary:
      #   endpoint: 0.0.0.0:6832
  hostmetrics:
    collection_interval: 30s
    scrapers:
      cpu: {}
      load: {}
      memory: {}
      disk: {}
      filesystem: {}
      network: {}
  prometheus:
    config:
      global:
        scrape_interval: 60s
      scrape_configs:
        # otel-collector internal metrics
        - job_name: otel-collector
          static_configs:
            - targets:
                - localhost:8888
              labels:
                job_name: otel-collector
  apache:
    endpoint: "http://localhost:8080/server-status?auto"
processors:
  batch:
    send_batch_size: 10000
    send_batch_max_size: 11000
    timeout: 10s
  signozspanmetrics/prometheus:
    metrics_exporter: prometheus
    latency_histogram_buckets: [100us, 1ms, 2ms, 6ms, 10ms, 50ms, 100ms, 250ms, 500ms, 1000ms, 1400ms, 2000ms, 5s, 10s, 20s, 40s, 60s]
    dimensions_cache_size: 100000
    dimensions:
      - name: service.namespace
        default: default
      - name: deployment.environment
        default: default
      # This is added to ensure the uniqueness of the timeseries
      # Otherwise, identical timeseries produced by multiple replicas of
      # collectors result in incorrect APM metrics
      - name: 'signoz.collector.id'
  # memory_limiter:
  #   # 80% of maximum memory up to 2G
  #   limit_mib: 1500
  #   # 25% of limit up to 2G
  #   spike_limit_mib: 512
  #   check_interval: 5s
  #   # 50% of the maximum memory
  #   limit_percentage: 50
  #   # 20% of max memory usage spike expected
  #   spike_limit_percentage: 20
  # queued_retry:
  #   num_workers: 4
  #   queue_size: 100
  #   retry_on_failure: true
  resourcedetection:
    # Using OTEL_RESOURCE_ATTRIBUTES envvar, env detector adds custom labels.
    detectors: [env, system] # include ec2 for AWS, gce for GCP and azure for Azure.
    timeout: 2s
extensions:
  health_check:
    endpoint: 0.0.0.0:13133
  zpages:
    endpoint: 0.0.0.0:55679
  pprof:
    endpoint: 0.0.0.0:1777
exporters:
  clickhousetraces:
    datasource: tcp://clickhouse:9000/?database=signoz_traces
    docker_multi_node_cluster: ${DOCKER_MULTI_NODE_CLUSTER}
    low_cardinal_exception_grouping: ${LOW_CARDINAL_EXCEPTION_GROUPING}
  clickhousemetricswrite:
    endpoint: tcp://clickhouse:9000/?database=signoz_metrics
    resource_to_telemetry_conversion:
      enabled: true
  clickhousemetricswrite/prometheus:
    endpoint: tcp://clickhouse:9000/?database=signoz_metrics
  prometheus:
    endpoint: 0.0.0.0:8889
  # logging: {}
  clickhouselogsexporter:
    dsn: tcp://clickhouse:9000/
    docker_multi_node_cluster: ${DOCKER_MULTI_NODE_CLUSTER}
    timeout: 5s
    sending_queue:
      queue_size: 100
    retry_on_failure:
      enabled: true
      initial_interval: 5s
      max_interval: 30s
      max_elapsed_time: 300s
service:
  telemetry:
    metrics:
      address: 0.0.0.0:8888
  extensions:
    - health_check
    - zpages
    - pprof
  pipelines:
    traces:
      receivers: [jaeger, otlp]
      processors: [signozspanmetrics/prometheus, batch]
      exporters: [clickhousetraces]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [clickhousemetricswrite]
    metrics/generic:
      receivers: [hostmetrics]
      processors: [resourcedetection, batch]
      exporters: [clickhousemetricswrite]
    metrics/prometheus:
      receivers: [prometheus]
      processors: [batch]
      exporters: [clickhousemetricswrite/prometheus]
    metrics/spanmetrics:
      receivers: [otlp/spanmetrics]
      exporters: [prometheus]
    logs:
      receivers: [otlp, filelog/dockercontainers]
      processors: [batch]
      exporters: [clickhouselogsexporter]
in the receivers section you will see:

  apache:
    endpoint: "http://localhost:8080/server-status?auto"
s
You only added the receiver to the list but didn't add it to the pipeline. As I mentioned earlier, just adding the receiver doesn't enable it; you need to add it to the metrics pipeline.
j
Would it also be added in

otel-collector-metrics-config.yaml

and how would it be done? I don't see anything in the documentation.
s
Add apache to the metrics pipeline here. You currently have:

metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [clickhousemetricswrite]

Change it to:

metrics:
      receivers: [otlp, apache]
      processors: [batch]
      exporters: [clickhousemetricswrite]
j
Thank you very much for the help.

I see the apache metrics
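Putting the whole fix together, the minimal change relative to the stock collector config is sketched below (receiver name, endpoint, and pipeline names are taken from the thread; the collection_interval shown is an assumption — the receiver's default is used if it is omitted):

```yaml
receivers:
  apache:
    # Must point at mod_status with ?auto, as verified earlier in the thread
    endpoint: "http://localhost:8080/server-status?auto"
    # Assumption, not from the thread; optional, defaults apply if omitted
    collection_interval: 30s

service:
  pipelines:
    metrics:
      # Adding "apache" here is what actually enables the scrape
      receivers: [otlp, apache]
      processors: [batch]
      exporters: [clickhousemetricswrite]
```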