# support
Hey all, I am having trouble setting up the SigNoz OTel collector (receiver side) with TLS. I'll add the config and the error messages I'm getting as replies to this message.
The configuration I am using is:
```yaml
otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:14317
        tls:
          cert_file: /etc/signoz-collector/ssl/server-cert.pem
          key_file: /etc/signoz-collector/ssl/server-key.pem
          ca_file: /etc/signoz-collector/ssl/ca-cert.pem
          client_ca_file: /etc/signoz-collector/ssl/client-cert.pem
```
The Docker Compose file correctly bind-mounts the needed folder:
```yaml
otel-collector:
    image: signoz/signoz-otel-collector:0.66.1
    command: ["--config=/etc/otel-collector-config.yaml"]
    user: root # required for reading docker container logs
    volumes:
      - ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
      - /:/hostfs
      - /signoz-files/config/ssl:/etc/signoz-collector/ssl
    environment:
      - OTEL_RESOURCE_ATTRIBUTES=host.name=signoz-host,os.type=linux
      - DOCKER_MULTI_NODE_CLUSTER=false
```
The files should be readable from the container.
```
total 24
drwxr-xr-x 2 root root 4096 Jan 17 13:53 ./
drwxr-xr-x 6 root root 4096 Jan 17 14:13 ../
-rw-r--r-- 1 root root 1821 Jan 17 13:53 ca-cert.pem
-rw-r--r-- 1 root root 1683 Jan 17 13:53 client-cert.pem
-rw-r--r-- 1 root root 1817 Jan 17 13:53 server-cert.pem
-rw-r--r-- 1 root root 3482 Jan 17 13:06 server-key.pem
```
Does `server-key.pem` need to be in mode `600`, similar to SSH keys? I don't think so, but just checking.
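(To double-check that nothing in the TLS stack enforces key file modes the way ssh does, I tried a throwaway key locally — this is a demo key only, not my real one:)

```shell
# Generate a throwaway RSA key and make it world-readable
openssl genrsa -out demo-key.pem 2048
chmod 644 demo-key.pem
# OpenSSL happily loads and validates it regardless of mode,
# unlike ssh, which refuses keys with permissive permissions
openssl rsa -in demo-key.pem -check -noout
```

So mode `644` on `server-key.pem` shouldn't be a problem by itself.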
The problem is that the error message I am getting when starting up the collector is not very helpful:
```
2023-01-17T14:15:17.557282997Z panic: runtime error: invalid memory address or nil pointer dereference
2023-01-17T14:15:17.557303103Z [signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x31e251d]
2023-01-17T14:15:17.557309327Z 
2023-01-17T14:15:17.557314963Z goroutine 1 [running]:
2023-01-17T14:15:17.558779099Z github.com/open-telemetry/opentelemetry-collector-contrib/receiver/prometheusreceiver.(*pReceiver).Shutdown(0xc000ee6f00, {0x0, 0x0})
2023-01-17T14:15:17.559383013Z 	/go/pkg/mod/github.com/open-telemetry/opentelemetry-collector-contrib/receiver/prometheusreceiver@v0.66.0/metrics_receiver.go:315 +0x1d
2023-01-17T14:15:17.559514440Z go.opentelemetry.io/collector/service/internal/pipelines.(*Pipelines).ShutdownAll(0xc000e8db30, {0x52dc448, 0xc00012a000})
2023-01-17T14:15:17.559725248Z 	/go/pkg/mod/go.opentelemetry.io/collector@v0.66.0/service/internal/pipelines/pipelines.go:121 +0x499
2023-01-17T14:15:17.559739170Z go.opentelemetry.io/collector/service.(*service).Shutdown(0xc000598800, {0x52dc448, 0xc00012a000})
2023-01-17T14:15:17.559949422Z 	/go/pkg/mod/go.opentelemetry.io/collector@v0.66.0/service/service.go:121 +0xd4
2023-01-17T14:15:17.560634762Z go.opentelemetry.io/collector/service.(*Collector).shutdownServiceAndTelemetry(0xc000d1fa88, {0x52dc448?, 0xc00012a000?})
2023-01-17T14:15:17.560651689Z 	/go/pkg/mod/go.opentelemetry.io/collector@v0.66.0/service/collector.go:264 +0x36
2023-01-17T14:15:17.560657027Z go.opentelemetry.io/collector/service.(*Collector).setupConfigurationComponents(0xc000d1fa88, {0x52dc448, 0xc00012a000})
2023-01-17T14:15:17.560661955Z 	/go/pkg/mod/go.opentelemetry.io/collector@v0.66.0/service/collector.go:166 +0x27d
2023-01-17T14:15:17.560666519Z go.opentelemetry.io/collector/service.(*Collector).Run(0xc000d1fa88, {0x52dc448, 0xc00012a000})
2023-01-17T14:15:17.560671124Z 	/go/pkg/mod/go.opentelemetry.io/collector@v0.66.0/service/collector.go:190 +0x46
2023-01-17T14:15:17.560675490Z go.opentelemetry.io/collector/service.NewCommand.func1(0xc000760900, {0x48fdbae?, 0x1?, 0x1?})
2023-01-17T14:15:17.560680089Z 	/go/pkg/mod/go.opentelemetry.io/collector@v0.66.0/service/command.go:53 +0x479
2023-01-17T14:15:17.560684564Z github.com/spf13/cobra.(*Command).execute(0xc000760900, {0xc000128010, 0x1, 0x1})
2023-01-17T14:15:17.560700070Z 	/go/pkg/mod/github.com/spf13/cobra@v1.6.1/command.go:916 +0x862
2023-01-17T14:15:17.560704816Z github.com/spf13/cobra.(*Command).ExecuteC(0xc000760900)
2023-01-17T14:15:17.560709382Z 	/go/pkg/mod/github.com/spf13/cobra@v1.6.1/command.go:1044 +0x3bc
2023-01-17T14:15:17.560713972Z github.com/spf13/cobra.(*Command).Execute(...)
2023-01-17T14:15:17.560718296Z 	/go/pkg/mod/github.com/spf13/cobra@v1.6.1/command.go:968
2023-01-17T14:15:17.560723305Z main.runInteractive({{0xc0006f6f00, 0xc00090a0c0, 0xc0006f7320, 0xc0006f6b40}, {{0x493758c, 0x15}, {0x4934ee9, 0x15}, {0x48fa0e3, 0x6}}, ...})
2023-01-17T14:15:17.560728163Z 	/src/cmd/signozcollector/main.go:37 +0x5e
2023-01-17T14:15:17.560732468Z main.run(...)
2023-01-17T14:15:17.560736648Z 	/src/cmd/signozcollector/main_others.go:8
2023-01-17T14:15:17.560740978Z main.main()
2023-01-17T14:15:17.560745384Z 	/src/cmd/signozcollector/main.go:30 +0x1d8
```
In the past, whenever I had a runtime error in an OTel collector, it was usually due to wrong syntax in the config. But the syntax above looks correct, right? At least according to https://github.com/open-telemetry/opentelemetry-collector/blob/main/config/configtls/README.md
Btw, removing the whole `tls:` block fixes the problem and the collector starts up correctly, so I'm fairly certain the TLS block is what's causing the issue.
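To narrow it down further, the next thing I'll try is a minimal TLS block with just the server cert and key, then re-add `ca_file` / `client_ca_file` one at a time to see which setting triggers the crash:

```yaml
otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:14317
        tls:
          cert_file: /etc/signoz-collector/ssl/server-cert.pem
          key_file: /etc/signoz-collector/ssl/server-key.pem
```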
At this point I am uncertain whether it's a configuration issue or a content issue (i.e. the certs themselves being the problem).
The certs should be fine, though: I used the same process to create them as I did with Uptrace, and the client collector -> Uptrace communication works fine with TLS.
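(For the cert-content angle, these are the openssl sanity checks I'd run. The first three commands just generate a throwaway CA and server cert so the snippet is self-contained — against the real files, only the last two commands matter:)

```shell
# Throwaway CA + server cert, just so this snippet runs standalone
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca-key.pem -out ca-cert.pem -days 1 -subj "/CN=demo-ca"
openssl req -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr -subj "/CN=localhost"
openssl x509 -req -in server.csr -CA ca-cert.pem -CAkey ca-key.pem -CAcreateserial -out server-cert.pem -days 1

# The actual checks: does the CA verify the server cert, and is the cert parseable?
openssl verify -CAfile ca-cert.pem server-cert.pem   # should print: server-cert.pem: OK
openssl x509 -in server-cert.pem -noout -subject -dates
```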