Hey! How's it going? I'm having some issues self-hosting SigNoz. I'm running the recommended ClickHo...

Carlos Eduardo Medim

5 months ago
Hey! How's it going? I'm having some issues self-hosting SigNoz. I'm running the recommended ClickHouse server (the one specified in the
docker-compose
from the GitHub repo) with ZooKeeper. Now I'm trying to deploy the schema migrators, starting with the sync one first. I deployed the image from Docker Hub and added an env var pointing to my ClickHouse DSN's TCP port. This is my start command (the only place I use the env var):
/bin/sh -c "/signoz-schema-migrator sync --dsn=http://${CLICKHOUSE_DSN} --up= --replication=false"
And I get a weird error: first it says it can't connect to the host, yet it still seems to run the migrations? The ClickHouse server instance logs the connection.
Error: failed to bootstrap migrations: failed to create schema_migrations_v2 table
failed to get conn
dial tcp: address fd12:f72a:6185:0:a000:12:d517:ae32:9000: too many colons in address
code: 60, message: There was an error on [signoz-clickhouse-server.railway.internal:9000]: Code: 60. DB::Exception: Could not find table: schema_migrations_v2. (UNKNOWN_TABLE) (version 24.1.2.5 (official build))
Usage:
  signoz-schema-migrator sync [flags]
ClickHouse logs:
Code: 60. DB::Exception: Could not find table: schema_migrations_v2. (UNKNOWN_TABLE) (version 24.1.2.5 (official build)) (from 0.0.0.0:0) (in query: /* ddl_entry=query-0000001328 */ ALTER TABLE signoz_logs.schema_migrations_v2 UPDATE status = 'failed', error = 'failed to get conn\ndial tcp: address fd12:f72a:6185:0:a000:12:d517:ae32:9000: too many colons in address', updated_at = '2025-04-07 16:59:52' WHERE migration_id = 1), Stack trace (when copying this message, always include the lines below):

0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000c800f1b in /usr/bin/clickhouse
1. DB::Exception::Exception<String&>(int, FormatStringHelperImpl<std::type_identity<String&>::type>, String&) @ 0x0000000007232663 in /usr/bin/clickhouse
2. DB::InterpreterAlterQuery::executeToTable(DB::ASTAlterQuery const&) @ 0x00000000113926b9 in /usr/bin/clickhouse
3. DB::InterpreterAlterQuery::execute() @ 0x000000001138dd71 in /usr/bin/clickhouse
4. DB::executeQueryImpl(char const*, char const*, std::shared_ptr<DB::Context>, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x0000000011904974 in /usr/bin/clickhouse
5. DB::executeQuery(DB::ReadBuffer&, DB::WriteBuffer&, bool, std::shared_ptr<DB::Context>, std::function<void (DB::QueryResultDetails const&)>, DB::QueryFlags, std::optional<DB::FormatSettings> const&, std::function<void (DB::IOutputFormat&)>) @ 0x000000001190824a in /usr/bin/clickhouse
6. DB::DDLWorker::tryExecuteQuery(DB::DDLTaskBase&, std::shared_ptr<zkutil::ZooKeeper> const&) @ 0x0000000010d02188 in /usr/bin/clickhouse
7. DB::DDLWorker::processTask(DB::DDLTaskBase&, std::shared_ptr<zkutil::ZooKeeper> const&) @ 0x0000000010d00755 in /usr/bin/clickhouse
8. DB::DDLWorker::scheduleTasks(bool) @ 0x0000000010cfdd33 in /usr/bin/clickhouse
9. DB::DDLWorker::runMainThread() @ 0x0000000010cf708e in /usr/bin/clickhouse
10. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<void (DB::DDLWorker::*)(), DB::DDLWorker*>(void (DB::DDLWorker::*&&)(), DB::DDLWorker*&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) @ 0x0000000010d0ebd4 in /usr/bin/clickhouse
11. void* std::__thread_proxy[abi:v15000]<std::tuple<std::unique_ptr<std::__thread_struct, std::default_delete<std::__thread_struct>>, void ThreadPoolImpl<std::thread>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>>(void*) @ 0x000000000c8ed6fe in /usr/bin/clickhouse
12. ? @ 0x00007ff5fa1fa609
13. ? @ 0x00007ff5fa11f353

Code: 60. DB::Exception: There was an error on [signoz-clickhouse-server.railway.internal:9000]: Code: 60. DB::Exception: Could not find table: schema_migrations_v2. (UNKNOWN_TABLE) (version 24.1.2.5 (official build)). (UNKNOWN_TABLE) (version 24.1.2.5 (official build)) (from 100.64.0.4:15668) (in query: ALTER TABLE signoz_logs.schema_migrations_v2 ON CLUSTER cluster UPDATE status = 'failed', error = 'failed to get conn
dial tcp: address fd12:f72a:6185:0:a000:12:d517:ae32:9000: too many colons in address', updated_at = '2025-04-07 16:59:52' WHERE migration_id = 1), Stack trace (when copying this message, always include the lines below):

0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000c800f1b in /usr/bin/clickhouse
1. DB::DDLQueryStatusSource::generate() @ 0x00000000118efe71 in /usr/bin/clickhouse
2. DB::ISource::tryGenerate() @ 0x000000001297acf5 in /usr/bin/clickhouse
3. DB::ISource::work() @ 0x000000001297a743 in /usr/bin/clickhouse
4. DB::ExecutionThreadContext::executeTask() @ 0x000000001299371a in /usr/bin/clickhouse
5. DB::PipelineExecutor::executeStepImpl(unsigned long, std::atomic<bool>*) @ 0x000000001298a170 in /usr/bin/clickhouse
6. DB::PipelineExecutor::execute(unsigned long, bool) @ 0x0000000012989380 in /usr/bin/clickhouse
7. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) @ 0x00000000129970a3 in /usr/bin/clickhouse
8. void* std::__thread_proxy[abi:v15000]<std::tuple<std::unique_ptr<std::__thread_struct, std::default_delete<std::__thread_struct>>, void ThreadPoolImpl<std::thread>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>>(void*) @ 0x000000000c8ed6fe in /usr/bin/clickhouse
9. ? @ 0x00007ff5fa1fa609
10. ? @ 0x00007ff5fa11f353
Super weird. What exactly is failing to connect, given that it apparently does run migrations on the ClickHouse server? Why isn't it creating the
schema_migrations_v2
table? The ClickHouse instance itself works fine; I've tested it. Any insight on this is greatly appreciated.
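For what it's worth, my current theory: Go rejects raw IPv6 literals in a host:port pair unless they're wrapped in square brackets, which would explain the "too many colons in address" message if Railway hands over a bare IPv6 address somewhere in the chain. A minimal sketch of what I mean, with the bracketed form assumed to be the fix (and assuming the migrator accepts a tcp:// DSN; neither verified):
# hypothetical env var values -- hostname form, letting the dialer resolve IPv6:
CLICKHOUSE_DSN="signoz-clickhouse-server.railway.internal:9000"
# ...or bracketed-literal form, which Go's net.SplitHostPort can parse:
CLICKHOUSE_DSN="[fd12:f72a:6185:0:a000:12:d517:ae32]:9000"
/bin/sh -c "/signoz-schema-migrator sync --dsn=tcp://${CLICKHOUSE_DSN} --up= --replication=false"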
Hello, can anyone help with this? I'm deploying SigNoz onto EKS for the first time and I've hit a bug. These logs are from the signoz...

nguyen chuong

6 months ago
Hello, can anyone help with this? I'm deploying SigNoz onto EKS for the first time and I've hit a bug. These logs are from the signoz-query-service-0 pod:
{"level":"INFO","timestamp":"2025-03-12T08:58:07.562Z","caller":"license/manager.go:125","msg":"No active license found, defaulting to basic plan"}
{"level":"INFO","timestamp":"2025-03-12T08:58:07.564Z","caller":"app/server.go:150","msg":"Using ClickHouse as datastore ..."}
2025-03-12T08:58:07.564101488Z {"level":"INFO","timestamp":"2025-03-12T08:58:07.564Z","caller":"clickhouseReader/options.go:120","msg":"Connecting to Clickhouse","at":"signoz-clickhouse:9000","MaxIdleConns":50,"MaxOpenConns":100,"DialTimeout":5000}
2025-03-12T08:58:07.565320164Z {"level":"FATAL","timestamp":"2025-03-12T08:58:07.565Z","caller":"clickhouseReader/reader.go:179","msg":"failed to initialize ClickHouse","error":"error connecting to primary db: code: 516, message: admin: Authentication failed: password is incorrect, or there is no user with such name.","stacktrace":"go.signoz.io/signoz/pkg/query-service/app/clickhouseReader.NewReader\n\t/home/runner/work/signoz/signoz/pkg/query-service/app/clickhouseReader/reader.go:179\ngo.signoz.io/signoz/ee/query-service/app/db.NewDataConnector\n\t/home/runner/work/signoz/signoz/ee/query-service/app/db/reader.go:31\ngo.signoz.io/signoz/ee/query-service/app.NewServer\n\t/home/runner/work/signoz/signoz/ee/query-service/app/server.go:151\nmain.main\n\t/home/runner/work/signoz/signoz/ee/query-service/main.go:189\nruntime.main\n\t/home/runner/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.7.linux-amd64/src/runtime/proc.go:271"}
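Is this just a credentials mismatch between what the chart provisions for ClickHouse and what query-service dials with? A minimal sketch of the Helm values override I'd try, assuming the signoz chart exposes these keys (the password is a placeholder, not a real credential):
# values.yaml override -- key names assumed from the signoz helm chart
clickhouse:
  user: admin
  password: <password-the-query-service-should-use>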
Hi all, I need to scrape data from Prometheus. I made the changes in my-release-signoz-otel-collect...

s mohamed

over 1 year ago
Hi all, I need to scrape data from Prometheus. I made the changes in the
my-release-signoz-otel-collector.yaml
config file by adding the snippet below:
prometheus:
        config:
          scrape_configs:
            - job_name: "otel-collector"
              scrape_interval: 30s
              static_configs:
                - targets: ["otel-collector:8889", "prometheus-operated:9090"]
My Prometheus is running at
prometheus-operated:9090
I have port-forwarded it locally and confirmed the service is fine, and I restarted the otel-deployment and otel-collector deployment pods, but I'm getting the error below in the collector pods:
{"level":"error","timestamp":"2024-04-04T11:36:23.737Z","caller":"opamp/server_client.go:268","msg":"Collector failed for restart during rollback","component":"opamp-server-client","error":"failed to get config: cannot unmarshal the configuration: 1 error(s) decoding:\n\n* error decoding 'exporters': error reading configuration for \"prometheus\": 1 error(s) decoding:\n\n* '' has invalid keys: config","stacktrace":"<http://github.com/SigNoz/signoz-otel-collector/opamp.(*serverClient).reload|github.com/SigNoz/signoz-otel-collector/opamp.(*serverClient).reload>\n\t/home/runner/work/signoz-otel-collector/signoz-otel-collector/opamp/server_client.go:268\ngithub.com/SigNoz/signoz-otel-collector/opamp.(*agentConfigManager).applyRemoteConfig\n\t/home/runner/work/signoz-otel-collector/signoz-otel-collector/opamp/config_manager.go:173\ngithub.com/SigNoz/signoz-otel-collector/opamp.(*agentConfigManager).Apply\n\t/home/runner/work/signoz-otel-collector/signoz-otel-collector/opamp/config_manager.go:159\ngithub.com/SigNoz/signoz-otel-collector/opamp.(*serverClient).onRemoteConfigHandler\n\t/home/runner/work/signoz-otel-collector/signoz-otel-collector/opamp/server_client.go:209\ngithub.com/SigNoz/signoz-otel-collector/opamp.(*serverClient).onMessageFuncHandler\n\t/home/runner/work/signoz-otel-collector/signoz-otel-collector/opamp/server_client.go:199\ngithub.com/open-telemetry/opamp-go/client/types.CallbacksStruct.OnMessage\n\t/home/runner/go/pkg/mod/github.com/open-telemetry/opamp-go@v0.5.0/client/types/callbacks.go:162\ngithub.com/open-telemetry/opamp-go/client/internal.(*receivedProcessor).ProcessReceivedMessage\n\t/home/runner/go/pkg/mod/github.com/open-telemetry/opamp-go@v0.5.0/client/internal/receivedprocessor.go:131\ngithub.com/open-telemetry/opamp-go/client/internal.(*wsReceiver).ReceiverLoop\n\t/home/runner/go/pkg/mod/github.com/open-telemetry/opamp-go@v0.5.0/client/internal/wsreceiver.go:57\ngithub.com/open-telemetry/opamp-go/client.(*wsClient).runOneCycle\n\t/home/runner/go/pkg/mod/github.com/open-telemetry/opamp-go@v0.5.0/client/wsclient.go:243\ngithub.com/open-telemetry/opamp-go/client.(*wsClient).runUntilStopped\n\t/home/runner/go/pkg/mod/github.com/open-telemetry/opamp-go@v0.5.0/client/wsclient.go:265\ngithub.com/open-telemetry/opamp-go/client/internal.(*ClientCommon).StartConnectAndRun.func1\n\t/home/runner/go/pkg/mod/github.com/open-telemetry/opamp-go@v0.5.0/client/internal/clientcommon.go:197"}
{"level":"error","timestamp":"2024-04-04T11:36:23.737Z","caller":"opamp/server_client.go:216","msg":"failed to apply config","component":"opamp-server-client","error":"failed to reload config: /var/tmp/collector-config.yaml: collector failed to restart: failed to get config: cannot unmarshal the configuration: 1 error(s) decoding:\n\n* error decoding 'exporters': error reading configuration for \"prometheus\": 1 error(s) decoding:\n\n* '' has invalid keys: config","stacktrace":"<http://github.com/SigNoz/signoz-otel-collector/opamp.(*serverClient).onRemoteConfigHandler|github.com/SigNoz/signoz-otel-collector/opamp.(*serverClient).onRemoteConfigHandler>\n\t/home/runner/work/signoz-otel-collector/signoz-otel-collector/opamp/server_client.go:216\ngithub.com/SigNoz/signoz-otel-collector/opamp.(*serverClient).onMessageFuncHandler\n\t/home/runner/work/signoz-otel-collector/signoz-otel-collector/opamp/server_client.go:199\ngithub.com/open-telemetry/opamp-go/client/types.CallbacksStruct.OnMessage\n\t/home/runner/go/pkg/mod/github.com/open-telemetry/opamp-go@v0.5.0/client/types/callbacks.go:162\ngithub.com/open-telemetry/opamp-go/client/internal.(*receivedProcessor).ProcessReceivedMessage\n\t/home/runner/go/pkg/mod/github.com/open-telemetry/opamp-go@v0.5.0/client/internal/receivedprocessor.go:131\ngithub.com/open-telemetry/opamp-go/client/internal.(*wsReceiver).ReceiverLoop\n\t/home/runner/go/pkg/mod/github.com/open-telemetry/opamp-go@v0.5.0/client/internal/wsreceiver.go:57\ngithub.com/open-telemetry/opamp-go/client.(*wsClient).runOneCycle\n\t/home/runner/go/pkg/mod/github.com/open-telemetry/opamp-go@v0.5.0/client/wsclient.go:243\ngithub.com/open-telemetry/opamp-go/client.(*wsClient).runUntilStopped\n\t/home/runner/go/pkg/mod/github.com/open-telemetry/opamp-go@v0.5.0/client/wsclient.go:265\ngithub.com/open-telemetry/opamp-go/client/internal.(*ClientCommon).StartConnectAndRun.func1\n\t/home/runner/go/pkg/mod/github.com/open-telemetry/opamp-go@v0.5.0/client/internal/clientcommon.go:197"}