# support
a
👋 Hello, team! I'm using SigNoz locally and trying to trace DB call metrics (MongoDB), but I'm unable to do so. Can anyone help me figure it out?
Something more that might be helpful: https://signoz.io/blog/opentelemetry-mongodb/
a
Thanks for your support, but I was not able to see anything in the DB Call Metrics tab when I interact with the DB. Can you guide me through figuring it out?
n
Can you share your OTel configuration file?
a
```typescript
import { BatchLogRecordProcessor } from '@opentelemetry/sdk-logs';
import { NodeSDK } from '@opentelemetry/sdk-node';
import { HttpInstrumentation } from '@opentelemetry/instrumentation-http';
import { ExpressInstrumentation } from '@opentelemetry/instrumentation-express';
import resource from 'src/otel/resource';
import traceExporter from 'src/otel/traces/trace-exporter';
import loggerProvider from 'src/otel/logs/logger-provider';
import logExporter from 'src/otel/logs/log-exporter';
import { MongoDBInstrumentation } from '@opentelemetry/instrumentation-mongodb';

const instrumentations = [
  new HttpInstrumentation(),
  new ExpressInstrumentation(),
  new MongoDBInstrumentation({
    enhancedDatabaseReporting: true,
  }),
];

const logRecordProcessor = new BatchLogRecordProcessor(logExporter);
loggerProvider.addLogRecordProcessor(logRecordProcessor);

// Initialize the NodeSDK
const otelSDK = new NodeSDK({
  traceExporter,
  resource,
  instrumentations,
  logRecordProcessor,
});

process.on('SIGTERM', () => {
  otelSDK
    .shutdown()
    .then(() => console.log('OpenTelemetry terminated'))
    .catch(error => console.error('Error terminating OpenTelemetry', error))
    .finally(() => process.exit(0));
});

export default otelSDK;
```
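One thing this snippet doesn't show is where the SDK is started. The MongoDB instrumentation can only produce spans if `otelSDK.start()` runs before the `mongodb` driver is first imported. A minimal sketch of an entrypoint, assuming the file above lives at `src/otel/tracing.ts` (the paths and `startApp` wiring here are illustrative assumptions, not from the thread):

```typescript
// main.ts — hypothetical entrypoint. The tracing module must be imported and
// started before any module it instruments (http, express, mongodb) is loaded.
import otelSDK from './otel/tracing'; // assumed path to the file above

async function bootstrap() {
  // start() registers the instrumentations so they can patch modules on import.
  otelSDK.start();

  // Load the application only after start(), so the mongodb driver is
  // required *after* the instrumentation hooks are in place.
  const { startApp } = await import('./app'); // hypothetical app module
  await startApp();
}

bootstrap();
```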
n
This seems like the tracing file. Can you share the `otel-collector-config.yaml` file? https://github.com/SigNoz/signoz/blob/develop/deploy/docker/clickhouse-setup/otel-collector-config.yaml
a
```yaml
receivers:
  tcplog/docker:
    listen_address: "0.0.0.0:2255"
    operators:
      - type: regex_parser
        regex: '^<([0-9]+)>[0-9]+ (?P<timestamp>[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}(\.[0-9]+)?([zZ]|([\+-])([01]\d|2[0-3]):?([0-5]\d)?)?) (?P<container_id>\S+) (?P<container_name>\S+) [0-9]+ - -( (?P<body>.*))?'
        timestamp:
          parse_from: attributes.timestamp
          layout: '%Y-%m-%dT%H:%M:%S.%LZ'
      - type: move
        from: attributes["body"]
        to: body
      - type: remove
        field: attributes.timestamp
      # please remove names from below if you want to collect logs from them
      - type: filter
        id: signoz_logs_filter
        expr: 'attributes.container_name matches "^signoz-(logspout|frontend|alertmanager|query-service|otel-collector|clickhouse|zookeeper)"'
  opencensus:
    endpoint: 0.0.0.0:55678
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
  jaeger:
    protocols:
      grpc:
        endpoint: 0.0.0.0:14250
      thrift_http:
        endpoint: 0.0.0.0:14268
      # thrift_compact:
      #   endpoint: 0.0.0.0:6831
      # thrift_binary:
      #   endpoint: 0.0.0.0:6832
  hostmetrics:
    collection_interval: 30s
    scrapers:
      cpu: {}
      load: {}
      memory: {}
      disk: {}
      filesystem: {}
      network: {}
  prometheus:
    config:
      global:
        scrape_interval: 60s
      scrape_configs:
        # otel-collector internal metrics
        - job_name: otel-collector
          static_configs:
            - targets:
                - localhost:8888
              labels:
                job_name: otel-collector
        - job_name: mongo-collector
          scrape_interval: 1s
          static_configs:
            - targets: ["172.17.0.1:9216"]

processors:
  batch:
    send_batch_size: 10000
    send_batch_max_size: 11000
    timeout: 10s
  signozspanmetrics/cumulative:
    metrics_exporter: clickhousemetricswrite
    metrics_flush_interval: 60s
    latency_histogram_buckets: [100us, 1ms, 2ms, 6ms, 10ms, 50ms, 100ms, 250ms, 500ms, 1000ms, 1400ms, 2000ms, 5s, 10s, 20s, 40s, 60s]
    dimensions_cache_size: 100000
    dimensions:
      - name: service.namespace
        default: default
      - name: deployment.environment
        default: default
      # This is added to ensure the uniqueness of the timeseries
      # Otherwise, identical timeseries produced by multiple replicas of
      # collectors result in incorrect APM metrics
      - name: 'signoz.collector.id'
  # memory_limiter:
  #   # 80% of maximum memory up to 2G
  #   limit_mib: 1500
  #   # 25% of limit up to 2G
  #   spike_limit_mib: 512
  #   check_interval: 5s
  #
  #   # 50% of the maximum memory
  #   limit_percentage: 50
  #   # 20% of max memory usage spike expected
  #   spike_limit_percentage: 20
  # queued_retry:
  #   num_workers: 4
  #   queue_size: 100
  #   retry_on_failure: true
  resourcedetection:
    # Using OTEL_RESOURCE_ATTRIBUTES envvar, env detector adds custom labels.
    detectors: [env, system] # include ec2 for AWS, gcp for GCP and azure for Azure.
    timeout: 2s
  signozspanmetrics/delta:
    metrics_exporter: clickhousemetricswrite
    metrics_flush_interval: 60s
    latency_histogram_buckets: [100us, 1ms, 2ms, 6ms, 10ms, 50ms, 100ms, 250ms, 500ms, 1000ms, 1400ms, 2000ms, 5s, 10s, 20s, 40s, 60s]
    dimensions_cache_size: 100000
    aggregation_temporality: AGGREGATION_TEMPORALITY_DELTA
    enable_exp_histogram: true
    dimensions:
      - name: service.namespace
        default: default
      - name: deployment.environment
        default: default
      # This is added to ensure the uniqueness of the timeseries
      # Otherwise, identical timeseries produced by multiple replicas of
      # collectors result in incorrect APM metrics
      - name: signoz.collector.id

extensions:
  health_check:
    endpoint: 0.0.0.0:13133
  zpages:
    endpoint: 0.0.0.0:55679
  pprof:
    endpoint: 0.0.0.0:1777

exporters:
  clickhousetraces:
    datasource: tcp://clickhouse:9000/signoz_traces
    docker_multi_node_cluster: ${DOCKER_MULTI_NODE_CLUSTER}
    low_cardinal_exception_grouping: ${LOW_CARDINAL_EXCEPTION_GROUPING}
  clickhousemetricswrite:
    endpoint: tcp://clickhouse:9000/signoz_metrics
    resource_to_telemetry_conversion:
      enabled: true
  clickhousemetricswrite/prometheus:
    endpoint: tcp://clickhouse:9000/signoz_metrics
  clickhouselogsexporter:
    dsn: tcp://clickhouse:9000/signoz_logs
    docker_multi_node_cluster: ${DOCKER_MULTI_NODE_CLUSTER}
    timeout: 10s
  # logging: {}

service:
  telemetry:
    metrics:
      address: 0.0.0.0:8888
  extensions:
    - health_check
    - zpages
    - pprof
  pipelines:
    traces:
      receivers: [jaeger, otlp]
      processors: [signozspanmetrics/cumulative, signozspanmetrics/delta, batch]
      exporters: [clickhousetraces]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [clickhousemetricswrite]
    metrics/generic:
      receivers: [hostmetrics]
      processors: [resourcedetection, batch]
      exporters: [clickhousemetricswrite]
    metrics/prometheus:
      receivers: [prometheus]
      processors: [batch]
      exporters: [clickhousemetricswrite/prometheus]
    logs:
      receivers: [otlp, tcplog/docker]
      processors: [batch]
      exporters: [clickhouselogsexporter]
```
I have added this configuration to the file. Does anything need to change?
n
Thanks for sharing! I'd actually start by troubleshooting a bunch of things, for example:
• Verify that your MongoDB exporter is running and accessible at the address specified (`172.17.0.1:9216`) — see the sketch below.
• Check whether the metrics are being scraped by looking at the logs of your OpenTelemetry Collector.
• Check the SigNoz UI to see if you're receiving any metrics or traces at all. If you are, but just not seeing DB calls, the issue might be in your application instrumentation.
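For the first bullet, a quick way to verify the exporter endpoint — a sketch assuming Node 18+ with the global `fetch`; the URL mirrors the `mongo-collector` scrape target from the collector config above:

```typescript
// check-exporter.ts — sanity-check that the MongoDB exporter answers and
// actually exposes mongodb_* series.
const EXPORTER_URL = 'http://172.17.0.1:9216/metrics';

async function main() {
  const res = await fetch(EXPORTER_URL);
  console.log('HTTP status:', res.status);

  const body = await res.text();
  const mongoSeries = body
    .split('\n')
    .filter(line => line.startsWith('mongodb_'));
  console.log(`found ${mongoSeries.length} mongodb_* lines, e.g.:`);
  console.log(mongoSeries.slice(0, 5).join('\n'));
}

main().catch(err => console.error('exporter not reachable:', err));
```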
a
I have instrumented these in my tracing file. Does anything need to be added?
n
Are you using SigNoz Cloud or self-hosted?
a
SigNoz Cloud
a
I am able to see the logs.
But when I switch over to the DB Calls tab, it just shows like this:
I'm also able to see these with a PromQL query and in a dashboard.
Any comments, @Nitish?
n
I'm figuring out why this is happening. cc: @Srikanth Chekuri
a
Thanks @Nitish, I have also checked the endpoint for metrics.
s
Hi, the Metrics tab is based on the traces ingested. The first thing to troubleshoot is whether your MongoDB instrumentation is working. Do you have any traces with MongoDB calls in them?
a
I don't quite follow, @Srikanth Chekuri. Could you explain in more detail so that I can troubleshoot further?
s
I am not sure how much more detail I can give. Do you have any traces from the MongoDB instrumentation?
a
Hi @Srikanth Chekuri, let me know if we can get on a huddle to discuss this.
s
Can you please try to answer the question? In the Traces Explorer, search by the endpoint where you make the MongoDB call. After applying the filter and running the query, see if any of the traces have MongoDB calls.
a
Yes, I can.
s
Please click on any trace, go to the detailed view, and check if there are any Mongo calls.
a
Let me check.
No calls found, @Srikanth Chekuri.
Is any additional configuration needed apart from this?
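A common cause of exactly this symptom, worth ruling out: if the tracing file is loaded after the `mongodb` driver, the instrumentation never patches the driver and no DB spans are produced. Enabling OTel debug diagnostics in the tracing file shows whether the patch was applied — a troubleshooting sketch; the exact log wording varies by instrumentation version:

```typescript
// Add to the very top of the tracing file, before the SDK is constructed.
import { diag, DiagConsoleLogger, DiagLogLevel } from '@opentelemetry/api';

// With DEBUG level, instrumentation packages log when they patch a module
// (e.g. a line mentioning mongodb when the driver gets hooked). If no such
// line appears at startup, the driver was loaded before the SDK started.
diag.setLogger(new DiagConsoleLogger(), DiagLogLevel.DEBUG);
```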