Nitish
05/28/2024, 7:04 PM
Nitish
05/28/2024, 7:05 PM
go.opentelemetry.io/collector/exporter/exporterhelper.(*retrySender).send
	go.opentelemetry.io/collector/exporter@v0.88.0/exporterhelper/retry_sender.go:145
go.opentelemetry.io/collector/exporter/exporterhelper.(*metricsSenderWithObservability).send
	go.opentelemetry.io/collector/exporter@v0.88.0/exporterhelper/metrics.go:176
go.opentelemetry.io/collector/exporter/exporterhelper.(*queueSender).start.func1
	go.opentelemetry.io/collector/exporter@v0.88.0/exporterhelper/queue_sender.go:126
go.opentelemetry.io/collector/exporter/exporterhelper/internal.(*boundedMemoryQueue).Start.func1
	go.opentelemetry.io/collector/exporter@v0.88.0/exporterhelper/internal/bounded_memory_queue.go:52
2024-05-28T18:34:50.513Z error exporterhelper/retry_sender.go:145 Exporting failed. The error is not retryable. Dropping data. {"kind": "exporter", "data_type": "metrics", "name": "otlp", "error": "Permanent error: rpc error: code = Unauthenticated desc = unexpected HTTP status code received from server: 401 (Unauthorized); transport: received unexpected content-type \"application/octet-stream\"", "dropped_items": 737}
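The 401 `Unauthenticated` error above typically means the ingestion key or the region endpoint in the `otlp` exporter does not match the SigNoz Cloud account. A sketch of the two fields worth double-checking, with a placeholder value for the key:

```yaml
exporters:
  otlp:
    # The region subdomain (in / us / eu) must match your SigNoz Cloud account.
    endpoint: "ingest.in.signoz.cloud:443"
    tls:
      insecure: false
    headers:
      # Placeholder: paste the full ingestion key from the SigNoz UI, not a truncated value.
      "signoz-access-token": "<your-ingestion-key>"
```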
Nitish
05/28/2024, 7:06 PM
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
  hostmetrics:
    collection_interval: 60s
    scrapers:
      cpu: {}
      disk: {}
      load: {}
      filesystem: {}
      memory: {}
      network: {}
      paging: {}
      process:
        mute_process_name_error: true
        mute_process_exe_error: true
        mute_process_io_error: true
      processes: {}
  prometheus:
    config:
      global:
        scrape_interval: 60s
      scrape_configs:
        - job_name: otel-collector-binary
          static_configs:
            - targets:
              # - localhost:8888
  filelog/app:
    include: [ /home/ubuntu/otelcol-contrib/app.log ] # include the full path to your log file
    start_at: beginning
processors:
  batch:
    send_batch_size: 1000
    timeout: 10s
  # Ref: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/processor/resourcedetectionprocessor/README.md
  resourcedetection:
    detectors: [env, system] # Before the system detector, include ec2 for AWS, gcp for GCP, and azure for Azure.
    # Using the OTEL_RESOURCE_ATTRIBUTES env var, the env detector adds custom labels.
    timeout: 2s
    system:
      hostname_sources: [os] # alternatively, use [dns, os] to set the FQDN as host.name with os as fallback
extensions:
  health_check: {}
  zpages: {}
exporters:
  otlp:
    endpoint: "ingest.in.signoz.cloud:443"
    tls:
      insecure: false
    headers:
      "signoz-access-token": "51....."
  logging:
    verbosity: normal
service:
  telemetry:
    metrics:
      address: 0.0.0.0:8888
  extensions: [health_check, zpages]
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
    metrics/internal:
      receivers: [prometheus, hostmetrics]
      processors: [resourcedetection, batch]
      exporters: [otlp]
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
    logs:
      receivers: [otlp, filelog/app]
      processors: [batch]
      exporters: [otlp]
Nitish
05/28/2024, 7:22 PM
Nitish
05/28/2024, 7:24 PM
Nitish
05/28/2024, 8:08 PM
Anurag Vishwakarma
06/03/2024, 8:32 AM
filelog/app:
  include: [ /home/anuragvishwakarma/signoz/docker-container-logs/animals.log ]
  start_at: beginning
---
pipelines:
  logs:
    receivers: [tcplog/docker, otlp, filelog/app, syslog]
    processors: [batch]
    exporters: [otlp, otlp/log]
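For this `logs` pipeline to load, every component it references must also be defined in its own top-level section — the snippet above shows only `filelog/app`. A minimal sketch of the missing definitions, with ports and the second backend endpoint as placeholder assumptions:

```yaml
receivers:
  tcplog/docker:
    listen_address: "0.0.0.0:2255"   # placeholder port
  syslog:
    tcp:
      listen_address: "0.0.0.0:54527" # placeholder port
    protocol: rfc5424
exporters:
  otlp/log:
    endpoint: "<second-backend>:4317" # placeholder endpoint
```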