support
  • s

    Sindhu S

    03/17/2023, 6:35 PM
    hi, I have installed SigNoz locally via Docker Compose on an M1 Mac. All services are up and running and I am able to access the frontend; however, the XHR call to /signup stays pending, and I don't see it in the logs of the frontend container either. How can I fix this?
    s
    • 2
    • 13
  • v

    Vaibhavi

    03/18/2023, 1:15 AM
    #signoz #support #installation I'm setting up SigNoz on AKS. I added the Helm chart (helm repo add signoz https://charts.signoz.io), changed the namespace to signoz, and used our own standalone ClickHouse cluster setup along with ZooKeeper. Post-installation I'm observing the issue below in the otel-collector:
    PS D:\Signoz> kubectl logs po/signoz-otel-collector-65fbf66f4f-zdntv -n signoz
    Defaulted container "signoz-otel-collector" out of: signoz-otel-collector, signoz-otel-collector-init (init)
    2023-03-17T10:58:29.571Z info service/telemetry.go:111 Setting up own telemetry...
    2023-03-17T10:58:29.572Z info service/telemetry.go:141 Serving Prometheus metrics {"address": "0.0.0.0:8888", "level": "Basic"}
    2023-03-17T10:58:29.572Z info components/components.go:30 Stability level of component is undefined {"kind": "exporter", "data_type": "metrics", "name": "clickhousemetricswrite", "stability": "Undefined"}
    time="2023-03-17T10:58:29Z" level=info msg="Executing:\nCREATE DATABASE IF NOT EXISTS signoz_metrics ON CLUSTER cluster\n" component=clickhouse
    time="2023-03-17T10:58:29Z" level=info msg="Executing:\nCREATE TABLE IF NOT EXISTS signoz_metrics.samples_v2 ON CLUSTER cluster (\n\t\t\tmetric_name LowCardinality(String),\n\t\t\tfingerprint UInt64 Codec(DoubleDelta, LZ4),\n\t\t\ttimestamp_ms Int64 Codec(DoubleDelta, LZ4),\n\t\t\tvalue Float64 Codec(Gorilla, LZ4)\n\t\t)\n\t\tENGINE = MergeTree\n\t\t\tPARTITION BY toDate(timestamp_ms / 1000)\n\t\t\tORDER BY (metric_name, fingerprint, timestamp_ms)\n\t\t\tTTL toDateTime(timestamp_ms/1000) + INTERVAL 2592000 SECOND DELETE;\n" component=clickhouse
    time="2023-03-17T10:58:29Z" level=info msg="Executing:\nCREATE TABLE IF NOT EXISTS signoz_metrics.distributed_samples_v2 ON CLUSTER cluster AS signoz_metrics.samples_v2 ENGINE = Distributed(\"cluster\", \"signoz_metrics\", samples_v2, cityHash64(metric_name, fingerprint));\n" component=clickhouse
    time="2023-03-17T10:58:30Z" level=info msg="Executing:\nALTER TABLE signoz_metrics.samples_v2 ON CLUSTER cluster MODIFY SETTING ttl_only_drop_parts = 1;\n" component=clickhouse
    time="2023-03-17T10:58:30Z" level=info msg="Executing:\nSET allow_experimental_object_type = 1\n" component=clickhouse
    time="2023-03-17T10:58:30Z" level=info msg="Executing:\nCREATE TABLE IF NOT EXISTS signoz_metrics.time_series_v2 ON CLUSTER cluster(\n\t\t\tmetric_name LowCardinality(String),\n\t\t\tfingerprint UInt64 Codec(DoubleDelta, LZ4),\n\t\t\ttimestamp_ms Int64 Codec(DoubleDelta, LZ4),\n\t\t\tlabels String Codec(ZSTD(5))\n\t\t)\n\t\tENGINE = ReplacingMergeTree\n\t\t\tPARTITION BY toDate(timestamp_ms / 1000)\n\t\t\tORDER BY (metric_name, fingerprint)\n\t\t\tTTL toDateTime(timestamp_ms/1000) + INTERVAL 2592000 SECOND DELETE;\n" component=clickhouse
    time="2023-03-17T10:58:30Z" level=info msg="Executing:\nCREATE TABLE IF NOT EXISTS signoz_metrics.distributed_time_series_v2 ON CLUSTER cluster AS signoz_metrics.time_series_v2 ENGINE = Distributed(\"cluster\", signoz_metrics, time_series_v2, cityHash64(metric_name, fingerprint));\n" component=clickhouse
    time="2023-03-17T10:58:30Z" level=info msg="Executing:\nALTER TABLE signoz_metrics.time_series_v2 ON CLUSTER cluster DROP COLUMN IF EXISTS labels_object\n" component=clickhouse
    time="2023-03-17T10:58:30Z" level=info msg="Executing:\nALTER TABLE signoz_metrics.distributed_time_series_v2 ON CLUSTER cluster DROP COLUMN IF EXISTS labels_object\n" component=clickhouse
    time="2023-03-17T10:58:30Z" level=info msg="Executing:\nALTER TABLE signoz_metrics.time_series_v2 ON CLUSTER cluster MODIFY SETTING ttl_only_drop_parts = 1;\n" component=clickhouse
    2023-03-17T10:58:31.476Z info kube/client.go:101 k8s filtering {"kind": "processor", "name": "k8sattributes", "pipeline": "metrics", "labelSelector": "", "fieldSelector": "spec.nodeName=aks-nodepool1-18518278-vmss00000f"}
    2023-03-17T10:58:31.477Z info components/components.go:30 Stability level of component is undefined {"kind": "exporter", "data_type": "traces", "name": "clickhousetraces", "stability": "Undefined"}
    2023-03-17T10:58:31.676Z info clickhousetracesexporter/clickhouse_factory.go:146 Patching views {"kind": "exporter", "data_type": "traces", "name": "clickhousetraces"}
    2023-03-17T10:58:32.786Z info clickhousetracesexporter/clickhouse_factory.go:116 Running migrations from path: {"kind": "exporter", "data_type": "traces", "name": "clickhousetraces", "test": "/migrations"}
    2023-03-17T10:58:41.160Z info clickhousetracesexporter/clickhouse_factory.go:128 Clickhouse Migrate finished {"kind": "exporter", "data_type": "traces", "name": "clickhousetraces"}
    Error: cannot build pipelines: failed to create "clickhousetraces" exporter, in pipeline "traces": code: 62, message: Syntax error: failed at position 2290 (')') (line 43, col 3): ) ENGINE MergeTree() PARTITION BY toDate(timestamp) ORDER BY (durationNano, timestamp) TTL toDateTime(timestamp) + INTERVAL 604800 SECOND DELETE SETTING. Expected one of: table property (column, index, constraint) declaration, INDEX, CONSTRAINT, PROJECTION, PRIMARY KEY, column declaration, identifier
    2023/03/17 10:58:41 application run finished with error: cannot build pipelines: failed to create "clickhousetraces" exporter, in pipeline "traces": code: 62, message: Syntax error: failed at position 2290 (')') (line 43, col 3): ) ENGINE MergeTree() PARTITION BY toDate(timestamp) ORDER BY (durationNano, timestamp) TTL toDateTime(timestamp) + INTERVAL 604800 SECOND DELETE SETTING. Expected one of: table property (column, index, constraint) declaration, INDEX, CONSTRAINT, PROJECTION, PRIMARY KEY, column declaration, identifier
    I need help understanding this as soon as possible, please.
  • z

    Zeid ALSeryani

    03/18/2023, 6:30 AM
    Greetings. Could someone help me retrieve all logs and traces for a specific trace_id from the ClickHouse database? Thank you.
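    A sketch of the kind of ClickHouse queries involved; the table and column names below (signoz_traces.distributed_signoz_index_v2, signoz_logs.distributed_logs) match recent SigNoz schemas but vary by version, so treat them as assumptions:

    -- all spans belonging to one trace
    SELECT timestamp, serviceName, name, durationNano
    FROM signoz_traces.distributed_signoz_index_v2
    WHERE traceID = '<your-trace-id>';

    -- all logs carrying the same trace id
    SELECT timestamp, severity_text, body
    FROM signoz_logs.distributed_logs
    WHERE trace_id = '<your-trace-id>';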
  • s

    Shravan Kgl

    03/19/2023, 7:23 PM
    I have upgraded SigNoz from 0.14.0 to 0.17.0. In logs I see k8s_cluster_name as empty; k8s_cluster_name was not there in 0.14.0. (A values sketch follows this thread.)
    p
    n
    p
    • 4
    • 11
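    For what it's worth, recent SigNoz/k8s-infra chart versions let you set the cluster name through a Helm value; the exact key below is an assumption, so verify it against your chart's values.yaml:

    # helm values override (assumed key name)
    global:
      clusterName: my-cluster   # hypothetical name; fills k8s_cluster_name on logs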
  • d

    D Norul BSH

    03/20/2023, 4:33 AM
    Hi. I need help: how do I monitor a PHP application server using SigNoz and OpenTelemetry?
    p
    • 2
    • 3
  • d

    Divyanshu Negi

    03/20/2023, 10:39 AM
    Hi Team, can someone please direct me to the docs that explain how the OTel Collector agent fits into the picture with SigNoz? We have 8 microservices running on ECS with Docker containers (on 2 dedicated EC2 instances, scaling up to 8), and I have hosted SigNoz on one more dedicated EC2 instance running CentOS. Now I want to send all the hardware data for each service to the SigNoz dashboard. How do I do that? (An agent-config sketch follows this thread.)
    s
    • 2
    • 4
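    A common pattern for this (a sketch, not official guidance from this thread) is to run an OTel Collector agent on each EC2 instance with the hostmetrics receiver and ship to the SigNoz collector over OTLP. The SigNoz host below is a placeholder:

    receivers:
      hostmetrics:
        collection_interval: 30s
        scrapers:          # enable the host-level scrapers you care about
          cpu:
          memory:
          disk:
          filesystem:
          network:
    exporters:
      otlp:
        endpoint: "<signoz-host>:4317"   # placeholder: your SigNoz EC2 address
        tls:
          insecure: true
    service:
      pipelines:
        metrics:
          receivers: [hostmetrics]
          exporters: [otlp]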
  • j

    Juha Patrikainen

    03/20/2023, 1:48 PM
    Hello! I would be glad if someone could assist me. We have installed the SigNoz Helm chart (0.12.1) and the k8s-infra chart as well. The k8s-infra chart uses default values (Oracle OKE 1.23.4). The problem is that Kubernetes pod logs are not visible in SigNoz. In the otel-agent logs I can see that it detects the file to watch on the node:
    2023-03-20T11:57:39.619Z    info    fileconsumer/file.go:171    Started watching file   {"kind": "receiver", "name": "filelog/k8s", "pipeline": "logs", "component": "fileconsumer", "path": "/var/log/pods/*********/configuration/0.log"}
    I have verified that the file contains all the logs for the container. There are no errors visible in the agent logs, signoz-otel-collector logs, or ClickHouse logs, but the logs are not visible in SigNoz.
    n
    • 2
    • 6
  • k

    Kurt Crockett

    03/20/2023, 2:20 PM
    Team, I am trying to get SigNoz standalone running using an S3 bucket. Do you have detailed steps on how to do this? (A config sketch follows this thread.)
    1. I have edited clickhouse-storage.xml and changed the endpoint, access_key_id, and secret_access_key.
    2. I have edited clickhouse-config.xml and changed the default database to s3:
       <!-- Default database. -->
       <default_database>s3</default_database>
    3. I stopped and restarted the containers, but it's not writing to S3. I know I am missing something.
    a
    • 2
    • 6
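    For context, ClickHouse writes to S3 through a storage_configuration disk plus a storage policy (referenced by the table or a TTL ... TO VOLUME clause), not through the default database name. A minimal sketch of the shape such a config takes; the bucket URL and credentials are placeholders:

    <clickhouse>
      <storage_configuration>
        <disks>
          <s3>
            <type>s3</type>
            <endpoint>https://my-bucket.s3.amazonaws.com/clickhouse/</endpoint> <!-- placeholder -->
            <access_key_id>...</access_key_id>
            <secret_access_key>...</secret_access_key>
          </s3>
        </disks>
        <policies>
          <tiered>
            <volumes>
              <default>
                <disk>default</disk>
              </default>
              <s3>
                <disk>s3</disk>
              </s3>
            </volumes>
          </tiered>
        </policies>
      </storage_configuration>
    </clickhouse>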
  • n

    Nilanjan Roy

    03/20/2023, 5:20 PM
    Hi Team, I am facing an issue while running the sample Golang app following this document: https://signoz.io/blog/distributed-tracing-golang/. When running serve -l 5000 frontend I get the following error:
    file:///usr/local/lib/node_modules/serve/build/main.js:169
    const ipAddress = request.socket.remoteAddress?.replace("::ffff:", "") ?? "unknown";
                                                   ^
    SyntaxError: Unexpected token '.'
        at Loader.moduleStrategy (internal/modules/esm/translators.js:133:18)
        at async link (internal/modules/esm/module_job.js:42:21)
    Can you please help? Thanks
  • n

    Nilanjan Roy

    03/21/2023, 5:44 AM
    hi team, I need help with the following issue. Running serve -l 5000 frontend gives:
    file:///usr/local/lib/node_modules/serve/build/main.js:169
    const ipAddress = request.socket.remoteAddress?.replace("::ffff:", "") ?? "unknown";
                                                   ^
    SyntaxError: Unexpected token '.'
        at Loader.moduleStrategy (internal/modules/esm/translators.js:133:18)
        at async link (internal/modules/esm/module_job.js:42:21)
  • n

    Nilanjan Roy

    03/21/2023, 10:22 AM
    Thanks @Prashant Shahi. I updated the Node.js version and the issue got resolved (the optional-chaining ?. syntax requires Node.js 14 or newer). It's not an issue in SigNoz.
  • n

    Nilanjan Roy

    03/21/2023, 11:32 AM
    Following this tutorial (https://signoz.io/blog/distributed-tracing-golang/) I started all three microservices; however, the services are not appearing in the SigNoz dashboard. I am running SigNoz as a Docker container.
  • d

    David Glick

    03/21/2023, 6:27 PM
    On the “Services” page, how is the “Operations per Second” calculated? I see a number here which does not match what I see in the Rate chart when I click through to a specific service.
    s
    • 2
    • 2
  • s

    sudhanshu dev

    03/22/2023, 10:16 AM
    Hi all, we are working on parsing logs and are facing multiple issues. Below is one issue we hit when trying to parse access logs. Sample log line:
    "10.21.18.240 - - [22/Mar/2023:08:19:04 +0000] \"GET /status HTTP/1.1\" 200 1249 \"-\" \"kube-probe/1.22+\"\n"
    Below is the configuration of the custom processor:
    logstransform/keepattrs:
      operators:
        - type: remove
          id: remove-1
          field: attributes
    logstransform/parselog:
      operators:
        - type: router
          id: get-access
          routes:
            - output: parser-access
              expr: 'body matches "(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)(\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)){3}"'
        - type: grok_parser
          id: parser-access
          parse_from: body
          pattern: '%{IP:ip}%{SPACE}-%{SPACE}-%{SPACE}\[%{HTTPDATE:timestamp}\]%{SPACE}\\"%{WORD:httpmethod}%{SPACE}%{DATA:request}\"%{SPACE}%{NUMBER:status}'
          parse_to: attributes
    The pattern checks out in the Grok debugger, but I am getting the error below:
    panic: runtime error: invalid memory address or nil pointer dereference
    [signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x24701e1]

    goroutine 1 [running]:
    github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza/adapter.(*receiver).Shutdown(0xc00107ccf0, {0x52eba28, 0xc000076028})
    	/go/pkg/mod/github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza@v0.66.0/adapter/receiver.go:148 +0x81
    go.opentelemetry.io/collector/service/internal/pipelines.(*Pipelines).ShutdownAll(0xc000f781e0, {0x52eba28, 0xc000076028})
    	/go/pkg/mod/go.opentelemetry.io/collector@v0.66.0/service/internal/pipelines/pipelines.go:121 +0x499
    go.opentelemetry.io/collector/service.(*service).Shutdown(0xc000f5a800, {0x52eba28, 0xc000076028})
    	/go/pkg/mod/go.opentelemetry.io/collector@v0.66.0/service/service.go:121 +0xd4
    go.opentelemetry.io/collector/service.(*Collector).shutdownServiceAndTelemetry(0xc001717a88, {0x52eba28?, 0xc000076028?})
    	/go/pkg/mod/go.opentelemetry.io/collector@v0.66.0/service/collector.go:264 +0x36
    go.opentelemetry.io/collector/service.(*Collector).setupConfigurationComponents(0xc001717a88, {0x52eba28, 0xc000076028})
    	/go/pkg/mod/go.opentelemetry.io/collector@v0.66.0/service/collector.go:166 +0x27d
    go.opentelemetry.io/collector/service.(*Collector).Run(0xc001717a88, {0x52eba28, 0xc000076028})
    	/go/pkg/mod/go.opentelemetry.io/collector@v0.66.0/service/collector.go:190 +0x46
    go.opentelemetry.io/collector/service.NewCommand.func1(0xc0005cac00, {0x490a0cd?, 0x1?, 0x1?})
    	/go/pkg/mod/go.opentelemetry.io/collector@v0.66.0/service/command.go:53 +0x479
    github.com/spf13/cobra.(*Command).execute(0xc0005cac00, {0xc00006e070, 0x1, 0x1})
    	/go/pkg/mod/github.com/spf13/cobra@v1.6.1/command.go:916 +0x862
    github.com/spf13/cobra.(*Command).ExecuteC(0xc0005cac00)
    	/go/pkg/mod/github.com/spf13/cobra@v1.6.1/command.go:1044 +0x3bc
    github.com/spf13/cobra.(*Command).Execute(...)
    	/go/pkg/mod/github.com/spf13/cobra@v1.6.1/command.go:968
    main.runInteractive({{0xc0006bdb90, 0xc00082aea0, 0xc00082a120, 0xc0006bd7d0}, {{0x4943e48, 0x15}, {0x49417a5, 0x15}, {0x49065d8, 0x6}}, ...})
    	/src/cmd/signozcollector/main.go:37 +0x5e
    main.run(...)
    	/src/cmd/signozcollector/main_others.go:8
    main.main()
    	/src/cmd/signozcollector/main.go:30 +0x1d8
  • a

    Alok Singh

    03/23/2023, 8:06 AM
    Hello all, we have a requirement to monitor thread dumps. Does anyone have experience doing this with SigNoz? @Prashant Shahi, @nitya-signoz, @Ankit Nayan
    n
    s
    • 3
    • 2
  • s

    Sachin Kumar

    03/23/2023, 8:32 AM
    I hope you are doing well. I have a requirement to monitor my React application, and for that I am following this blog (https://signoz.io/blog/opentelemetry-react/) and video (https://youtu.be/IsOQxc3wqyc). I have done the same thing for my React application but am not able to see the application on the SigNoz dashboard. My otel-collector-config.yaml file (see the client-side sketch after this message):
    receivers:
      filelog/dockercontainers:
        include: [  "/var/lib/docker/containers/*/*.log" ]
        start_at: end
        include_file_path: true
        include_file_name: false
        operators:
        - type: json_parser
          id: parser-docker
          output: extract_metadata_from_filepath
          timestamp:
            parse_from: attributes.time
            layout: '%Y-%m-%dT%H:%M:%S.%LZ'
        - type: regex_parser
          id: extract_metadata_from_filepath
          regex: '^.*containers/(?P<container_id>[^_]+)/.*log$'
          parse_from: attributes["log.file.path"]
          output: parse_body
        - type: move
          id: parse_body
          from: attributes.log
          to: body
          output: time
        - type: remove
          id: time
          field: attributes.time
      opencensus:
        endpoint: 0.0.0.0:55678
      otlp/spanmetrics:
        protocols:
          grpc:
            endpoint: localhost:12345
      otlp:
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317
          http:
            cors:
              allowed_origins:
                - http://65.0.130.56:3000
      jaeger:
        protocols:
          grpc:
            endpoint: 0.0.0.0:14250
          thrift_http:
            endpoint: 0.0.0.0:14268
          # thrift_compact:
          #   endpoint: 0.0.0.0:6831
          # thrift_binary:
          #   endpoint: 0.0.0.0:6832
    Please help me figure out the error
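    One thing worth double-checking on the client side is that the browser exporter points at the collector's OTLP HTTP port (4318) with the /v1/traces path, and that cors.allowed_origins above matches the origin the app is served from. A minimal sketch, not taken from the thread; the packages are the standard OpenTelemetry web packages, and the host 65.0.130.56 is reused from the config above:

    // minimal browser tracing setup (sketch; package versions/APIs may differ)
    import { WebTracerProvider } from '@opentelemetry/sdk-trace-web';
    import { BatchSpanProcessor } from '@opentelemetry/sdk-trace-base';
    import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';

    const exporter = new OTLPTraceExporter({
      // OTLP over HTTP: collector port 4318 plus the /v1/traces path
      url: 'http://65.0.130.56:4318/v1/traces',
    });

    const provider = new WebTracerProvider();
    provider.addSpanProcessor(new BatchSpanProcessor(exporter));
    provider.register();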
  • s

    sudhanshu dev

    03/23/2023, 9:55 AM
    @Prashant Shahi @Ankit Nayan If we start passing a TraceID in our logs, will SigNoz/otel-collector automatically parse it? (A parser sketch follows this thread.)
    n
    • 2
    • 1
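    In general the collector does not infer trace context from log bodies on its own; an operator has to extract it. A minimal stanza-style sketch — the log format, regex, and attribute name here are assumptions:

    operators:
      # pull a 32-hex-char trace id out of the body,
      # e.g. "... trace_id=4bf92f3577b34da6a3ce929d0e0e4736 ..."
      - type: regex_parser
        regex: 'trace_id=(?P<trace_id>[0-9a-f]{32})'
        parse_from: body
      # promote the extracted attribute to the record's trace context
      - type: trace_parser
        trace_id:
          parse_from: attributes.trace_id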
  • a

    Ashna

    03/23/2023, 12:52 PM
    Hi
  • a

    Ashna

    03/23/2023, 12:53 PM
    While connecting to gRPC through nginx we are getting the error below. Please take a look:
    [error] 30#30: *1 upstream rejected request with error 1 while reading response header from upstream, client: , server: , request: "GET /favicon.ico HTTP/1.1", u
    a
    d
    • 3
    • 3
  • s

    Sindhu S

    03/23/2023, 4:20 PM
    Hi, I have an app that exposes metrics in Prometheus format at
    localhost:3334/metrics
    and I do not want to run a Prometheus instance to consume these. What configuration should I add to SigNoz to scrape the metrics from my app's endpoint? (A receiver sketch follows this thread.) Also, I do not see a "Metrics" menu item in the SigNoz dashboard. Does that need some sort of config to enable?
    a
    • 2
    • 7
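    A minimal sketch of the usual approach — adding a prometheus receiver to the otel-collector config so the collector scrapes the app directly. The job name is hypothetical, and localhost only works if the collector runs on the same host as the app:

    receivers:
      prometheus:
        config:
          scrape_configs:
            - job_name: my-app            # hypothetical job name
              scrape_interval: 30s
              static_configs:
                - targets: ["localhost:3334"]
    service:
      pipelines:
        metrics:
          receivers: [prometheus]
          exporters: [clickhousemetricswrite]   # SigNoz's metrics exporter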
  • a

    Ashna

    03/24/2023, 5:02 AM
    Hi
  • a

    Ashna

    03/24/2023, 5:02 AM
    We have an issue accessing the gRPC port via Azure Application Gateway when SigNoz is deployed in Azure.
  • a

    Ashna

    03/24/2023, 5:05 AM
    Hi
  • s

    sudhanshu dev

    03/24/2023, 6:47 AM
    @Prashant Shahi @nitya-signoz Is there any relationship between Kubernetes container metrics and logging? After I stopped pushing logs, a few Kubernetes container metrics such as k8s_container_cpu_limit also stopped.
  • a

    Alex Grönholm

    03/24/2023, 9:20 AM
    I'm a bit confused about the logs view. When I choose "Last 1 hour" or "Last 5 minutes" in the logs view, what does that actually give me? On my installation, choosing either one gives me 3-day-old logs. What does it actually filter on?
    a
    • 2
    • 23
  • a

    Alex Grönholm

    03/24/2023, 9:47 AM
    It's not even consistent. One moment I get a bunch of 3-day-old logs; then I click refresh with the SAME settings and get no logs at all!
    a
    • 2
    • 1
  • s

    Stewart Thomson

    03/24/2023, 8:40 PM
    I'm trying to move logs older than 1 hour to S3, and I have set the retention policy accordingly. I've configured ClickHouse with S3 storage, and ClickHouse is not showing any errors. However, my logs are not appearing in S3. Where should I look for logs to begin debugging this?
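    One way to check whether ClickHouse has actually moved any parts to the S3 disk is to query system.parts; the database/table names below are assumptions, so adjust them to your schema and storage policy:

    -- count active data parts of the logs table per disk
    SELECT disk_name,
           count() AS parts,
           formatReadableSize(sum(bytes_on_disk)) AS size
    FROM system.parts
    WHERE database = 'signoz_logs' AND table = 'logs' AND active
    GROUP BY disk_name;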
  • b

    Bhuwan Kaushik

    03/24/2023, 8:45 PM
    Hi, when I try to create a dashboard and add a time-series panel, I choose the ClickHouse query option, and when I write a query I have to hard-code the time range. I tried to use $timeFilter like we do in Grafana, but I guess that will not work. Is there support for providing a time interval in the query so that it changes as the user changes the interval at the top-right?
    p
    s
    • 3
    • 3
  • b

    Bhuwan Kaushik

    03/24/2023, 11:23 PM
    Hi, can anyone tell me what causes missing spans? I don't find any detail at this link either: https://signoz.io/docs/userguide/traces/#missing-spans
    a
    • 2
    • 5
  • l

    Leong

    03/27/2023, 7:13 AM
    Hi, can anyone help here? I am trying to do manual instrumentation, as I find auto-instrumentation pumps in too much data. I am using Python, and I have created an HTTP OTLPSpanExporter with the endpoint pointing to the ClickHouse URL, port 4318. But it always returns the error "Failed to export batch code: 404". Does anyone know how to fix this?
    • 1
    • 1
  • l

    Leong

    I have found the fix. When you create your own OTLPMetricExporter or OTLPSpanExporter, the endpoint has to be suffixed with /v1/metrics and /v1/traces, respectively.
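    A minimal Python sketch of that fix; the collector host is a placeholder, and the key detail is pointing at the collector's OTLP HTTP port 4318 with the /v1/traces path rather than at ClickHouse:

    # pip install opentelemetry-sdk opentelemetry-exporter-otlp-proto-http
    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor
    from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

    # endpoint must include the /v1/traces suffix, otherwise the collector returns 404
    exporter = OTLPSpanExporter(endpoint="http://<collector-host>:4318/v1/traces")

    provider = TracerProvider()
    provider.add_span_processor(BatchSpanProcessor(exporter))
    trace.set_tracer_provider(provider)

    tracer = trace.get_tracer(__name__)
    with tracer.start_as_current_span("example-span"):
        pass  # application work goes here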