Mathias Wegner
01/10/2022, 4:31 PM
~/src/apm-poc/example> OTEL_RESOURCE_ATTRIBUTES=service.name=example OTEL_EXPORTER_OTLP_ENDPOINT="http://127.0.0.1:4317" OTEL_LOG_LEVEL=debug opentelemetry-instrument python3 manage.py runserver --noreload
Performing system checks...
System check identified no issues (0 silenced).
January 10, 2022 - 11:22:20
Django version 4.0.1, using settings 'example.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
[10/Jan/2022 11:22:21] "GET /admin/ HTTP/1.1" 302 0
[10/Jan/2022 11:22:21] "GET /admin/login/?next=/admin/ HTTP/1.1" 200 2211
[10/Jan/2022 11:22:26] "POST /admin/login/?next=/admin/ HTTP/1.1" 302 0
[10/Jan/2022 11:22:26] "GET /admin/ HTTP/1.1" 200 3974
[10/Jan/2022 11:22:26] "GET /static/admin/css/dashboard.css HTTP/1.1" 304 0
[10/Jan/2022 11:22:26] "GET /static/admin/img/icon-addlink.svg HTTP/1.1" 304 0
[10/Jan/2022 11:22:26] "GET /static/admin/img/icon-changelink.svg HTTP/1.1" 304 0
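(For reference: a rough manual equivalent of what the environment variables above configure through auto-instrumentation. This is only a sketch; it assumes the opentelemetry-sdk, opentelemetry-exporter-otlp, and opentelemetry-instrumentation-django packages, and reuses the endpoint and service name from the command above.)

# Sketch of a manual setup roughly equivalent to running the dev server under
# opentelemetry-instrument with the OTEL_* environment variables shown above.
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.instrumentation.django import DjangoInstrumentor
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

provider = TracerProvider(resource=Resource.create({"service.name": "example"}))
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://127.0.0.1:4317"))
)
trace.set_tracer_provider(provider)

# Instrument Django; this needs to run before requests are served,
# e.g. from manage.py or the WSGI entry point.
DjangoInstrumentor().instrument()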
edr
01/13/2022, 7:22 AM
Vinícius Costa
01/13/2022, 2:01 PM
Sagar Dutta
01/17/2022, 10:13 AM
Sagar Dutta
01/17/2022, 10:14 AM
Sagar Dutta
01/17/2022, 10:54 AM
Ditty K.M
01/17/2022, 5:06 PM
Ranjit Barsa
01/24/2022, 7:31 AM
Ranjit Barsa
01/24/2022, 7:32 AM
Martin Pola
01/26/2022, 5:56 PM
Onur Yikilmazoglu
01/27/2022, 1:06 PM
Prabhu Chawandi
01/30/2022, 6:19 AM
Onur Yikilmazoglu
01/31/2022, 2:35 PM
Jingqi Huang
02/02/2022, 7:08 PM
I ran kubectl describe pvc for all the PVCs and got the same message:
FailedBinding 23s (x903 over 3h45m) persistentvolume-controller no persistent volumes available for this claim and no storage class is set
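(The event above states the claims cannot bind because no PersistentVolumes are available and no StorageClass is set. A small diagnostic sketch, assuming the kubernetes Python client and a reachable kubeconfig, to check both against the same cluster:)

# Diagnostic sketch: list PersistentVolumes and StorageClasses to see why the PVCs stay Pending.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when run inside a pod

pvs = client.CoreV1Api().list_persistent_volume().items
print("PersistentVolumes:", len(pvs))
for pv in pvs:
    print(" ", pv.metadata.name, "phase=" + pv.status.phase)

for sc in client.StorageV1Api().list_storage_class().items:
    annotations = sc.metadata.annotations or {}
    is_default = annotations.get("storageclass.kubernetes.io/is-default-class") == "true"
    print("StorageClass", sc.metadata.name, "default=" + str(is_default))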
Jingqi Huang
02/02/2022, 7:09 PM
Thiago Alexandria
02/02/2022, 8:44 PM
Poco::Exception. Code: 1000, e.code() = 0, Not found: user_files_path (version 21.10.5.3 (official build))
Processing configuration file '/etc/clickhouse-server/config.xml'.
Merging configuration file '/etc/clickhouse-server/config.d/docker_related_config.xml'.
Include not found: clickhouse_remote_servers
Include not found: clickhouse_compression
Logging trace to /var/log/clickhouse-server/clickhouse-server.log
Logging errors to /var/log/clickhouse-server/clickhouse-server.err.log
Processing configuration file '/etc/clickhouse-server/config.xml'.
Merging configuration file '/etc/clickhouse-server/config.d/docker_related_config.xml'.
Include not found: clickhouse_remote_servers
Include not found: clickhouse_compression
Saved preprocessed configuration to '/var/lib/clickhouse/preprocessed_configs/config.xml'.
Processing configuration file '/etc/clickhouse-server/users.xml'.
Saved preprocessed configuration to '/var/lib/clickhouse/preprocessed_configs/users.xml'.
ClickHouse init process failed.
I looked through the chat and couldn't find anything that helps me. Has anyone here gone through this?
Onur Yikilmazoglu
02/03/2022, 6:06 PM
Prabhu Chawandi
02/07/2022, 5:11 AM
Patrik Potocki
02/09/2022, 12:56 PM
Patrik Potocki
02/09/2022, 1:36 PM
http://signoz-alertmanager:9093/api/
? 🙂
Patrik Potocki
02/10/2022, 9:10 AM
kubeletstats collector
Daniel Lima
02/10/2022, 2:41 PM
Selva
02/12/2022, 3:34 PM
Selva
02/14/2022, 11:10 AM
praddy
02/15/2022, 5:35 AM
go get go.signoz.io/query-service/version
I am getting the following error:
go get: unrecognized import path "go.signoz.io/query-service/version": https fetch: Get "https://go.signoz.io/query-service/version?go-get=1": x509: certificate has expired or is not yet valid: current time 2022-02-15T11:01:04+05:30 is after 2021-06-12T15:18:58Z
Can someone please provide inputs here?
Nico Van Wyk
02/16/2022, 12:16 PM
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  annotations:
    kubernetes.io/ingress.class: traefik
  name: signoz
spec:
  entryPoints:
    - web
  routes:
    - kind: Rule
      match: Host(`tools.localhost`) && PathPrefix(`/signoz`)
      middlewares:
        - name: signoz
      services:
        - name: signoz-frontend
          port: 3301
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: signoz
spec:
  headers:
    customRequestHeaders:
      X-Script-Name: "signoz"
Axay Sagathiya
02/18/2022, 5:42 AM
Karam Ashqar
02/18/2022, 11:44 PM
{
  code: 12,
  message: "Not Implemented",
  details: [ ]
}
and when I put this link into my NestJS app on my local machine or cluster, it doesn't show any metrics/applications at all. Does anyone know why?
paz.lucky
02/21/2022, 1:10 AM
Rajneesh Mehta
02/22/2022, 9:30 PM
Overriding of current TracerProvider is not allowed
and the application is not sending any data to the SigNoz backend.
Here's the configuration I've done:
installed on: minikube
installed with: helm
release name: signoz-uat
backend framework: django
running server with: gunicorn
command: OTEL_METRICS_EXPORTER=none DJANGO_SETTINGS_MODULE=shopfloor.settings OTEL_RESOURCE_ATTRIBUTES=service.name=stitch OTEL_EXPORTER_OTLP_ENDPOINT="signoz-uat-otel-collector.platform.svc.cluster.local:4317" opentelemetry-instrument gunicorn shopfloor.wsgi -c gunicorn.config.py --bind 0.0.0.0:8000
content of gunicorn.config.py
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor


def post_fork(server, worker):
    server.log.info("Worker spawned (pid: %s)", worker.pid)

    resource = Resource.create(attributes={
        "service.name": "stitch"
    })

    trace.set_tracer_provider(TracerProvider(resource=resource))
    span_processor = BatchSpanProcessor(
        OTLPSpanExporter(endpoint="signoz-uat-otel-collector.platform.svc.cluster.local:4317")
    )
    trace.get_tracer_provider().add_span_processor(span_processor)
result from running the troubleshoot binary:
root@django-backend-deployment-bb8875d7c-h9v4l:/app# ./troubleshoot checkEndpoint --endpoint=signoz-uat-otel-collector.platform.svc.cluster.local:4317
2022-02-22T21:27:43.780Z INFO workspace/main.go:28 STARTING!
2022-02-22T21:27:43.780Z INFO checkEndpoint/checkEndpoint.go:41 checking reachability of SigNoz endpoint
2022-02-22T21:27:43.796Z INFO workspace/main.go:46 Successfully sent sample data to signoz ...
this is what http://localhost:8888/metrics contains:
# HELP otelcol_exporter_queue_size Current size of the retry queue (in batches)
# TYPE otelcol_exporter_queue_size gauge
otelcol_exporter_queue_size{exporter="clickhousemetricswrite",service_instance_id="0d36facb-49c4-4315-af49-10bea6e2c55c"} 0
# HELP otelcol_exporter_send_failed_metric_points Number of metric points in failed attempts to send to destination.
# TYPE otelcol_exporter_send_failed_metric_points counter
otelcol_exporter_send_failed_metric_points{exporter="prometheus",service_instance_id="0d36facb-49c4-4315-af49-10bea6e2c55c"} 0
# HELP otelcol_exporter_send_failed_spans Number of spans in failed attempts to send to destination.
# TYPE otelcol_exporter_send_failed_spans counter
otelcol_exporter_send_failed_spans{exporter="clickhouse",service_instance_id="0d36facb-49c4-4315-af49-10bea6e2c55c"} 0
# HELP otelcol_exporter_sent_metric_points Number of metric points successfully sent to destination.
# TYPE otelcol_exporter_sent_metric_points counter
otelcol_exporter_sent_metric_points{exporter="prometheus",service_instance_id="0d36facb-49c4-4315-af49-10bea6e2c55c"} 0
# HELP otelcol_exporter_sent_spans Number of spans successfully sent to destination.
# TYPE otelcol_exporter_sent_spans counter
otelcol_exporter_sent_spans{exporter="clickhouse",service_instance_id="0d36facb-49c4-4315-af49-10bea6e2c55c"} 3
# HELP otelcol_process_cpu_seconds Total CPU user and system time in seconds
# TYPE otelcol_process_cpu_seconds gauge
otelcol_process_cpu_seconds{service_instance_id="0d36facb-49c4-4315-af49-10bea6e2c55c"} 15.25
# HELP otelcol_process_memory_rss Total physical memory (resident set size)
# TYPE otelcol_process_memory_rss gauge
otelcol_process_memory_rss{service_instance_id="0d36facb-49c4-4315-af49-10bea6e2c55c"} 1.3193216e+08
# HELP otelcol_process_runtime_heap_alloc_bytes Bytes of allocated heap objects (see 'go doc runtime.MemStats.HeapAlloc')
# TYPE otelcol_process_runtime_heap_alloc_bytes gauge
otelcol_process_runtime_heap_alloc_bytes{service_instance_id="0d36facb-49c4-4315-af49-10bea6e2c55c"} 4.912016e+07
# HELP otelcol_process_runtime_total_alloc_bytes Cumulative bytes allocated for heap objects (see 'go doc runtime.MemStats.TotalAlloc')
# TYPE otelcol_process_runtime_total_alloc_bytes gauge
otelcol_process_runtime_total_alloc_bytes{service_instance_id="0d36facb-49c4-4315-af49-10bea6e2c55c"} 2.645922376e+09
# HELP otelcol_process_runtime_total_sys_memory_bytes Total bytes of memory obtained from the OS (see 'go doc runtime.MemStats.Sys')
# TYPE otelcol_process_runtime_total_sys_memory_bytes gauge
otelcol_process_runtime_total_sys_memory_bytes{service_instance_id="0d36facb-49c4-4315-af49-10bea6e2c55c"} 8.30064824e+08
# HELP otelcol_process_uptime Uptime of the process
# TYPE otelcol_process_uptime counter
otelcol_process_uptime{service_instance_id="0d36facb-49c4-4315-af49-10bea6e2c55c"} 7485.002079331997
# HELP otelcol_processor_batch_batch_send_size Number of units in the batch
# TYPE otelcol_processor_batch_batch_send_size histogram
otelcol_processor_batch_batch_send_size_bucket{processor="batch",service_instance_id="0d36facb-49c4-4315-af49-10bea6e2c55c",le="10"} 3
otelcol_processor_batch_batch_send_size_bucket{processor="batch",service_instance_id="0d36facb-49c4-4315-af49-10bea6e2c55c",le="25"} 3
otelcol_processor_batch_batch_send_size_bucket{processor="batch",service_instance_id="0d36facb-49c4-4315-af49-10bea6e2c55c",le="50"} 3
otelcol_processor_batch_batch_send_size_bucket{processor="batch",service_instance_id="0d36facb-49c4-4315-af49-10bea6e2c55c",le="75"} 3
otelcol_processor_batch_batch_send_size_bucket{processor="batch",service_instance_id="0d36facb-49c4-4315-af49-10bea6e2c55c",le="100"} 3
otelcol_processor_batch_batch_send_size_bucket{processor="batch",service_instance_id="0d36facb-49c4-4315-af49-10bea6e2c55c",le="250"} 3
otelcol_processor_batch_batch_send_size_bucket{processor="batch",service_instance_id="0d36facb-49c4-4315-af49-10bea6e2c55c",le="500"} 3
otelcol_processor_batch_batch_send_size_bucket{processor="batch",service_instance_id="0d36facb-49c4-4315-af49-10bea6e2c55c",le="750"} 3
otelcol_processor_batch_batch_send_size_bucket{processor="batch",service_instance_id="0d36facb-49c4-4315-af49-10bea6e2c55c",le="1000"} 3
otelcol_processor_batch_batch_send_size_bucket{processor="batch",service_instance_id="0d36facb-49c4-4315-af49-10bea6e2c55c",le="2000"} 3
otelcol_processor_batch_batch_send_size_bucket{processor="batch",service_instance_id="0d36facb-49c4-4315-af49-10bea6e2c55c",le="3000"} 3
otelcol_processor_batch_batch_send_size_bucket{processor="batch",service_instance_id="0d36facb-49c4-4315-af49-10bea6e2c55c",le="4000"} 3
otelcol_processor_batch_batch_send_size_bucket{processor="batch",service_instance_id="0d36facb-49c4-4315-af49-10bea6e2c55c",le="5000"} 3
otelcol_processor_batch_batch_send_size_bucket{processor="batch",service_instance_id="0d36facb-49c4-4315-af49-10bea6e2c55c",le="6000"} 3
otelcol_processor_batch_batch_send_size_bucket{processor="batch",service_instance_id="0d36facb-49c4-4315-af49-10bea6e2c55c",le="7000"} 3
otelcol_processor_batch_batch_send_size_bucket{processor="batch",service_instance_id="0d36facb-49c4-4315-af49-10bea6e2c55c",le="8000"} 3
otelcol_processor_batch_batch_send_size_bucket{processor="batch",service_instance_id="0d36facb-49c4-4315-af49-10bea6e2c55c",le="9000"} 3
otelcol_processor_batch_batch_send_size_bucket{processor="batch",service_instance_id="0d36facb-49c4-4315-af49-10bea6e2c55c",le="10000"} 3
otelcol_processor_batch_batch_send_size_bucket{processor="batch",service_instance_id="0d36facb-49c4-4315-af49-10bea6e2c55c",le="20000"} 3
otelcol_processor_batch_batch_send_size_bucket{processor="batch",service_instance_id="0d36facb-49c4-4315-af49-10bea6e2c55c",le="30000"} 3
otelcol_processor_batch_batch_send_size_bucket{processor="batch",service_instance_id="0d36facb-49c4-4315-af49-10bea6e2c55c",le="50000"} 3
otelcol_processor_batch_batch_send_size_bucket{processor="batch",service_instance_id="0d36facb-49c4-4315-af49-10bea6e2c55c",le="100000"} 3
otelcol_processor_batch_batch_send_size_bucket{processor="batch",service_instance_id="0d36facb-49c4-4315-af49-10bea6e2c55c",le="+Inf"} 3
otelcol_processor_batch_batch_send_size_sum{processor="batch",service_instance_id="0d36facb-49c4-4315-af49-10bea6e2c55c"} 3
otelcol_processor_batch_batch_send_size_count{processor="batch",service_instance_id="0d36facb-49c4-4315-af49-10bea6e2c55c"} 3
# HELP otelcol_processor_batch_timeout_trigger_send Number of times the batch was sent due to a timeout trigger
# TYPE otelcol_processor_batch_timeout_trigger_send counter
otelcol_processor_batch_timeout_trigger_send{processor="batch",service_instance_id="0d36facb-49c4-4315-af49-10bea6e2c55c"} 3
# HELP otelcol_receiver_accepted_spans Number of spans successfully pushed into the pipeline.
# TYPE otelcol_receiver_accepted_spans counter
otelcol_receiver_accepted_spans{receiver="otlp",service_instance_id="0d36facb-49c4-4315-af49-10bea6e2c55c",transport="grpc"} 3
# HELP otelcol_receiver_refused_spans Number of spans that could not be pushed into the pipeline.
# TYPE otelcol_receiver_refused_spans counter
otelcol_receiver_refused_spans{receiver="otlp",service_instance_id="0d36facb-49c4-4315-af49-10bea6e2c55c",transport="grpc"} 0
Can you please help? What should I do?
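(One hedged reading of the warning: opentelemetry-instrument installs its own TracerProvider before gunicorn forks, so the trace.set_tracer_provider() call in post_fork is rejected with "Overriding of current TracerProvider is not allowed". Below is a minimal sketch of a post_fork that attaches the exporter to the provider already in place instead of replacing it; it assumes the agent has set an SDK TracerProvider by the time workers fork.)

# Sketch: reuse the TracerProvider installed by opentelemetry-instrument instead of
# calling trace.set_tracer_provider() again, which logs the override warning.
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor


def post_fork(server, worker):
    server.log.info("Worker spawned (pid: %s)", worker.pid)

    provider = trace.get_tracer_provider()
    # add_span_processor exists on the SDK TracerProvider, not on the API proxy,
    # so only attach the exporter when an SDK provider is already installed.
    if isinstance(provider, TracerProvider):
        provider.add_span_processor(
            BatchSpanProcessor(
                OTLPSpanExporter(
                    endpoint="signoz-uat-otel-collector.platform.svc.cluster.local:4317"
                )
            )
        )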