support
  • a

    Anil Kumar Bandrapalli

    09/13/2022, 7:43 AM
    Hi @Ankit Nayan, I would like to know whether there is any way to download traces. One of my traces has more than 5,000 spans, which causes the UI to hang.
    a
    • 2
    • 14
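Note: one possible workaround (not suggested in this thread) is to pull the spans of a single trace straight out of ClickHouse into a file, bypassing the UI. The table and column names below (signoz_index_v2, traceID) are assumptions based on common SigNoz schemas and may differ by version:

        clickhouse-client --query "
            SELECT *
            FROM signoz_traces.signoz_index_v2
            WHERE traceID = '<your-trace-id>'
            FORMAT JSONEachRow" > trace_spans.json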
  • a

    Andrew Grishin

    09/13/2022, 2:03 PM
    Hello, We're using SigNoz on k8s (helm) to monitor some nodejs apps. signoz-otel-collector spams this in logs:
    2022-09-13T13:34:21.183Z	error	exporterhelper/queued_retry.go:183	Exporting failed. The error is not retryable. Dropping data.	{"kind": "exporter", "name": "clickhousemetricswrite", "error": "Permanent error: invalid temporality and type combination [same error repeated for every metric in the batch]", "dropped_items": 1024}
    go.opentelemetry.io/collector/exporter/exporterhelper.(*retrySender).send
        /go/pkg/mod/go.opentelemetry.io/collector@v0.43.0/exporter/exporterhelper/queued_retry.go:183
    go.opentelemetry.io/collector/exporter/exporterhelper.(*metricsSenderWithObservability).send
        /go/pkg/mod/go.opentelemetry.io/collector@v0.43.0/exporter/exporterhelper/metrics.go:134
    go.opentelemetry.io/collector/exporter/exporterhelper.(*queuedRetrySender).start.func1
        /go/pkg/mod/go.opentelemetry.io/collector@v0.43.0/exporter/exporterhelper/queued_retry_inmemory.go:105
    go.opentelemetry.io/collector/exporter/exporterhelper/internal.consumerFunc.consume
        /go/pkg/mod/go.opentelemetry.io/collector@v0.43.0/exporter/exporterhelper/internal/bounded_memory_queue.go:99
    go.opentelemetry.io/collector/exporter/exporterhelper/internal.(*boundedMemoryQueue).StartConsumers.func2
        /go/pkg/mod/go.opentelemetry.io/collector@v0.43.0/exporter/exporterhelper/internal/bounded_memory_queue.go:78
    A quick glance at the source code suggests there is a possibility to debug https://github.com/SigNoz/opentelemetry-collector-contrib/blob/develop/exporter/clickhousemetricsexporter/exporter.go#L164, but I'm not sure how to configure this zap.S() logger; it seems to be a no-op by default. Is there any way to enable this logging to get more detailed error messages?
    p
    s
    a
    • 4
    • 8
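Note: the collector's own log verbosity is normally raised through its service telemetry settings rather than by wiring zap directly. A minimal sketch of the otel-collector config change, assuming the collector version in use honors this section; the zap.S() global in that exporter may still remain a no-op unless it is explicitly replaced in code:

        service:
          telemetry:
            logs:
              level: debug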
  • c

    Contrebande Labs

    09/14/2022, 12:04 AM
    👋 Hello, team! I have an otelcol-contrib instance running on a local bare-metal OS, sending logs, traces, and metrics to the SigNoz ClickHouse "writer" otelcol (localhost:4317). The bare-metal instance is sending data (I see it in the logs), but nothing appears in the UI. What do you need to know to help me solve this? Thanks!
    p
    • 2
    • 1
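Note: a minimal sketch of the exporter section on the bare-metal otelcol-contrib for this setup, assuming plaintext (non-TLS) gRPC to the SigNoz collector; pipeline contents are placeholders:

        exporters:
          otlp:
            endpoint: "localhost:4317"
            tls:
              insecure: true   # disables TLS for the gRPC connection; assumption that SigNoz's receiver is plaintext

        service:
          pipelines:
            traces:
              receivers: [otlp]
              processors: [batch]
              exporters: [otlp]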
  • s

    sudhanshu dev

    09/14/2022, 6:52 AM
    @Srikanth Chekuri @Ankit Nayan @Prashant Shahi I configured infra metrics. I added this env variable in daemonset.yaml:
    - name: OTEL_RESOURCE_ATTRIBUTES value: host.name=$(HOST_NAME),cluster=uat-mum
    We use cluster=uat-mum to distinguish infra metrics coming from different infrastructures, since the setup is centralized. Earlier I was getting cluster=uat-mum in the search, but now I am not getting it.
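Note: a minimal sketch of how these env variables are commonly wired in a DaemonSet, assuming HOST_NAME is populated from the node name via the downward API:

        env:
          - name: HOST_NAME
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName
          - name: OTEL_RESOURCE_ATTRIBUTES
            value: host.name=$(HOST_NAME),cluster=uat-mum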
  • s

    sudhanshu dev

    09/14/2022, 6:58 AM
    Earlier this field was there. The cluster is AWS EKS, version 1.20.
  • p

    Pranay

    09/14/2022, 6:59 AM
    Which version of SigNoz are you on, @sudhanshu dev? Also, it would be good if you could ask your question in a thread rather than in multiple messages. It becomes difficult to follow for someone who visits in the future; threads help keep discussions in context.
    s
    • 2
    • 5
  • s

    sudhanshu dev

    09/14/2022, 6:59 AM
    and the image is otel/opentelemetry-collector-contrib:0.43.0
    p
    n
    +2
    • 5
    • 64
  • s

    sudhanshu dev

    09/14/2022, 7:00 AM
    SigNoz is n-1, i.e. 0.11.0. OK.
  • s

    sudhanshu dev

    09/14/2022, 7:26 AM
    @Pranay @Ankit Nayan In the metric below, I am getting an empty k8s_cluster_name: k8s_container_cpu_limit{container_id="0220e57f7ceff72f2395310eb064dcd0f07ada31c84bd78c92bf7b680f224ca0",container_image_name="otel/opentelemetry-collector-contrib",container_image_tag="0.43.0",k8s_cluster_name="",k8s_container_name="otel-collector-agent",k8s_namespace_name="signoz-infra-metrics",k8s_node_name="ip-10-134-69-59.ap-south-1.compute.internal",k8s_pod_name="otel-collector-agent-dbcn6",k8s_pod_uid="021a4159-2660-48c7-950b-f3f5b0a9aa79",opencensus_resourcetype="container"}
    a
    p
    • 3
    • 22
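Note: one way to check whether the cluster label is reaching storage at all is to inspect the stored label set directly. The table layout assumed below (signoz_metrics.time_series_v2 with a JSON labels column) may differ by SigNoz version:

        SELECT DISTINCT JSONExtractString(labels, 'cluster') AS cluster
        FROM signoz_metrics.time_series_v2
        WHERE metric_name = 'k8s_container_cpu_limit';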
  • o

    Oluwatomisin Lalude

    09/14/2022, 8:20 AM
    Hi, support team. Amazing work you're doing with SigNoz. My company is using SigNoz on k8s (Helm), and we need to create dashboards that monitor metrics like network traffic, (average) response time, and error rate. It looks like these are the metrics available for now. Can we get support for the metrics mentioned above?
    s
    • 2
    • 6
  • o

    Oluwatomisin Lalude

    09/14/2022, 8:50 AM
    Also, the query-service pod is currently failing, so I cannot access the SigNoz UI. This is the second time it has happened; the first time, I had to reinstall SigNoz. How do I manage this?
    s
    • 2
    • 7
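Note: the standard first steps for triaging a crashing pod apply here. The namespace below ('platform') is an assumption; substitute whatever the Helm release uses:

        kubectl -n platform get pods
        kubectl -n platform describe pod <query-service-pod>
        kubectl -n platform logs <query-service-pod> --previous   # logs from the crashed container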
  • v

    Valentin Baert

    09/14/2022, 12:01 PM
    Hello, I tried to enable the ingress when deploying SigNoz with the Helm chart. The ingress works correctly and serves the HTML landing page; I served it under the path mydomain.com/signoz. However, subsequent requests to load JS and CSS files are made relative to mydomain.com instead of mydomain.com/signoz. This is a classic issue when deploying frontend apps; usually we need to configure the HTML base tag. How do I do that with SigNoz? Or is there some other way to make these HTTP requests properly relative to my /signoz path?
    p
    • 2
    • 2
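Note: the generic fix being referred to is an HTML base element, which makes relative asset URLs resolve against a sub-path; whether the SigNoz frontend build honors an overridden base is not confirmed here:

        <base href="/signoz/">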
  • p

    Paulo Junior

    09/14/2022, 9:07 PM
    Hello. My name is Paulo. I use Windows + WSL (Ubuntu) in my local environment. Following https://signoz.io/docs/install/docker/, when running docker-compose the following error occurs: System has not been booted with systemd as init system (PID 1). Can anyone help?
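Note: that error typically means a systemctl/service command is being invoked inside WSL, where systemd is not PID 1 by default. Recent WSL versions can enable systemd via /etc/wsl.conf (assuming the installed WSL version supports it), followed by running wsl --shutdown from Windows and reopening the distro:

        [boot]
        systemd=true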
  • a

    Abhisek Datta

    09/15/2022, 8:26 AM
    Team, I want to deploy the SigNoz UI/backend behind an OAuth2 proxy, which handles authentication as per our needs. Is it possible to:
    1. Disable AuthN in SigNoz and let the OAuth2 proxy take care of it, or
    2. Have SigNoz authenticate users based on the OIDC token provided by the OAuth2 proxy through some HTTP header?
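Note: an abridged sketch of the oauth2-proxy side of such a setup (client credentials and other required flags omitted); the upstream address is a placeholder, and whether SigNoz can consume the forwarded identity headers is exactly the open question here:

        # --set-xauthrequest forwards X-Auth-Request-* identity headers upstream
        oauth2-proxy \
          --provider=oidc \
          --oidc-issuer-url=https://<your-idp>/ \
          --upstream=http://signoz-frontend:3301 \
          --set-xauthrequest=true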
  • m

    manohar mirle

    09/15/2022, 9:13 AM
    Hello SigNoz team, we tried to instrument a NodeJS service by following the steps in the link below, but it was not successful. The collector is not receiving the data. Debug level has been enabled in the collector, and we are not seeing any traces in the logs. Please guide us. https://signoz.io/blog/nodejs-opensource-application-monitoring/
    a
    v
    • 3
    • 13
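Note: a minimal standalone tracing bootstrap to compare against, based on the standard OpenTelemetry Node packages (the linked blog may use older package names); the collector URL is a placeholder, and the service name is set via the OTEL_SERVICE_NAME env variable:

        // tracing.js -- load before the app, e.g. node -r ./tracing.js app.js
        const { NodeSDK } = require('@opentelemetry/sdk-node');
        const { OTLPTraceExporter } = require('@opentelemetry/exporter-trace-otlp-grpc');
        const { getNodeAutoInstrumentations } = require('@opentelemetry/auto-instrumentations-node');

        const sdk = new NodeSDK({
          traceExporter: new OTLPTraceExporter({ url: 'http://<collector-host>:4317' }),
          instrumentations: [getNodeAutoInstrumentations()],
        });
        sdk.start();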
  • p

    Prakshal Shah

    09/15/2022, 2:30 PM
    I have a VM running SigNoz, and on another VM I'm executing a Java app using the command below (both VMs are on the same network): OTEL_EXPORTER_OTLP_ENDPOINT=http:<SigNoz__VM_IP>:4317 OTEL_RESOURCE_ATTRIBUTES=service.name=partner-service java -javaagent:/opt/opentelemetry-javaagent.jar -jar /opt/backend-partner-service.jar The error I'm facing is: [otel.javaagent 2022-09-15 02:17:46:993 +0000] [main] INFO io.opentelemetry.javaagent.tooling.VersionLogger - opentelemetry-javaagent - version: 1.18.0 OpenTelemetry Javaagent failed to start io.opentelemetry.sdk.autoconfigure.spi.ConfigurationException: OTLP endpoint must not have a path: <SigNoz_server_IP>:4317
    p
    • 2
    • 3
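Note: the javaagent error is consistent with the missing "//" in the endpoint; "http:<IP>:4317" parses as a scheme plus a path. With the scheme written out in full, the same command would be:

        OTEL_EXPORTER_OTLP_ENDPOINT=http://<SigNoz_VM_IP>:4317 \
        OTEL_RESOURCE_ATTRIBUTES=service.name=partner-service \
        java -javaagent:/opt/opentelemetry-javaagent.jar -jar /opt/backend-partner-service.jar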
  • v

    Valentin Baert

    09/16/2022, 8:17 AM
    Hi, I'm using the Helm charts to deploy SigNoz. Initially it deploys the ClickHouse instance with a 20GB disk. I only have a few apps with low traffic connected to the otel collector, and after 20k traces in a day or two the 20GB disk is already full. So it seems each trace occupies around 1MB of space in ClickHouse, which seems huge! Is this normal behavior?
    a
    • 2
    • 13
a

Ankit Nayan

09/16/2022, 8:23 AM
can you show the output of the below command when run over the clickhouse client?
SELECT
    database,
    table,
    formatReadableSize(sum(data_compressed_bytes) AS size) AS compressed,
    formatReadableSize(sum(data_uncompressed_bytes) AS usize) AS uncompressed,
    round(usize / size, 2) AS compr_rate,
    sum(rows) AS rows,
    count() AS part_count
FROM system.parts
WHERE (active = 1) AND (database = 'signoz_traces') AND (table LIKE '%')
GROUP BY database, table
ORDER BY size DESC;
a span is the smallest entity; a trace might consist of many spans
it should usually be around 150 bytes per span
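Note: if retention rather than per-span size turns out to be the problem, disk usage can also be bounded with a ClickHouse TTL. The table and timestamp column names below are assumptions that vary by SigNoz version, and SigNoz normally manages retention through its own settings:

    ALTER TABLE signoz_traces.signoz_index_v2
        MODIFY TTL toDateTime(timestamp) + INTERVAL 7 DAY;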
v

Valentin Baert

09/16/2022, 8:28 AM
Initially I ran select count() from signoz_metrics.time_series_v2; but it seems I was looking at a different table
not sure why I have 29574447 traces in two days though
a

Ankit Nayan

09/16/2022, 8:28 AM
so 29M spans till now. Can you run the same query for the logs and metrics DBs, by changing (database='signoz_traces') in the above query to signoz_logs and signoz_metrics?
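Note: concretely, only the database filter in the query above changes; for metrics it would read:

    WHERE (active = 1) AND (database = 'signoz_metrics') AND (table LIKE '%')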
v

Valentin Baert

09/16/2022, 8:30 AM
I only index traces, so the traces table is what I was looking for
a

Ankit Nayan

09/16/2022, 8:30 AM
ok..
v

Valentin Baert

09/16/2022, 8:30 AM
So the issue seems to be on my side; I must figure out where these 29M traces in a single day come from
thx
a

Ankit Nayan

09/16/2022, 8:31 AM
maybe...though we will be optimizing the storage in a couple of weeks...a couple of features around duration and timestamp sorting on the trace filter page are heavy on storage
we will make those optional, which should reduce the size by half