# support
a
Please, how do I fix these errors in otel-collector?
{"level":"info","ts":1749784833.8750758,"caller":"internal/retry_sender.go:118","msg":"Exporting failed. Will retry the request after interval.","kind":"exporter","data_type":"logs","name":"clickhouselogsexporter","error":"StatementSend:context deadline exceeded","interval":"33.43241017s"}
{"level":"info","ts":1749784834.854519,"caller":"internal/retry_sender.go:118","msg":"Exporting failed. Will retry the request after interval.","kind":"exporter","data_type":"logs","name":"clickhouselogsexporter","error":"StatementSend:context deadline exceeded","interval":"10.298972847s"}
{"level":"info","ts":1749784836.09816,"caller":"internal/retry_sender.go:118","msg":"Exporting failed. Will retry the request after interval.","kind":"exporter","data_type":"logs","name":"clickhouselogsexporter","error":"StatementSend:context deadline exceeded","interval":"4.058984498s"}
{"level":"info","ts":1749784838.6349702,"caller":"internal/retry_sender.go:118","msg":"Exporting failed. Will retry the request after interval.","kind":"exporter","data_type":"logs","name":"clickhouselogsexporter","error":"StatementSend:context deadline exceeded","interval":"2.577215766s"}
{"level":"info","ts":1749784850.1593175,"caller":"internal/retry_sender.go:118","msg":"Exporting failed. Will retry the request after interval.","kind":"exporter","data_type":"logs","name":"clickhouselogsexporter","error":"StatementSend:context deadline exceeded","interval":"7.84091135s"}
{"level":"info","ts":1749784851.2132058,"caller":"internal/retry_sender.go:118","msg":"Exporting failed. Will retry the request after interval.","kind":"exporter","data_type":"logs","name":"clickhouselogsexporter","error":"StatementSend:context deadline exceeded","interval":"10.360665119s"}
{"level":"info","ts":1749784852.4822192,"caller":"internal/retry_sender.go:118","msg":"Exporting failed. Will retry the request after interval.","kind":"exporter","data_type":"logs","name":"clickhouselogsexporter","error":"StatementSend:context deadline exceeded","interval":"26.716444271s"}
{"level":"info","ts":1749784852.7835574,"caller":"internal/retry_sender.go:118","msg":"Exporting failed. Will retry the request after interval.","kind":"exporter","data_type":"metrics","name":"clickhousemetricswrite","error":"context deadline exceeded","interval":"31.866769043s"}
{"level":"info","ts":1749784855.1549704,"caller":"internal/retry_sender.go:118","msg":"Exporting failed. Will retry the request after interval.","kind":"exporter","data_type":"logs","name":"clickhouselogsexporter","error":"StatementSend:context deadline exceeded","interval":"11.269614762s"}
{"level":"info","ts":1749784857.9503,"caller":"internal/retry_sender.go:118","msg":"Exporting failed. Will retry the request after interval.","kind":"exporter","data_type":"logs","name":"clickhouselogsexporter","error":"StatementSend:context deadline exceeded","interval":"37.636314979s"}
{"level":"info","ts":1749784863.0528462,"caller":"internal/retry_sender.go:118","msg":"Exporting failed. Will retry the request after interval.","kind":"exporter","data_type":"logs","name":"clickhouselogsexporter","error":"StatementSend:context deadline exceeded","interval":"27.286178514s"}
{"level":"info","ts":1749784865.6311586,"caller":"internal/retry_sender.go:118","msg":"Exporting failed. Will retry the request after interval.","kind":"exporter","data_type":"logs","name":"clickhouselogsexporter","error":"StatementSend:context deadline exceeded","interval":"38.264130446s"}
{"level":"info","ts":1749784868.0015748,"caller":"internal/retry_sender.go:118","msg":"Exporting failed. Will retry the request after interval.","kind":"exporter","data_type":"logs","name":"clickhouselogsexporter","error":"StatementSend:context deadline exceeded","interval":"11.122094322s"}
{"level":"info","ts":1749784871.5744464,"caller":"internal/retry_sender.go:118","msg":"Exporting failed. Will retry the request after interval.","kind":"exporter","data_type":"logs","name":"clickhouselogsexporter","error":"StatementSend:context deadline exceeded","interval":"10.521283723s"}
{"level":"info","ts":1749784871.6017382,"caller":"internal/retry_sender.go:118","msg":"Exporting failed. Will retry the request after interval.","kind":"exporter","data_type":"logs","name":"clickhouselogsexporter","error":"StatementSend:context deadline exceeded","interval":"43.461381439s"}
{"level":"info","ts":1749784876.4260118,"caller":"internal/retry_sender.go:118","msg":"Exporting failed. Will retry the request after interval.","kind":"exporter","data_type":"logs","name":"clickhouselogsexporter","error":"StatementSend:context deadline exceeded","interval":"21.042483971s"}{"level":"error","ts":1749785466.8774161,"caller":"internal/base_exporter.go:153","msg":"Exporting failed. Rejecting data.","kind":"exporter","data_type":"logs","name":"clickhouselogsexporter","error":"sending queue is full","rejected_items":36,"stacktrace":"<http://go.opentelemetry.io/collector/exporter/exporterhelper/internal|go.opentelemetry.io/collector/exporter/exporterhelper/internal>.
@Nagesh Bansal @Vishal Sharma please, I need help 🙏
g
Hey... Your collector is having trouble sending its data. Without more info, like your config, it's hard to tell. Also add a debug exporter (exporters: debug: verbosity: detailed) to get more info in the journalctl of your collector.
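Roughly like this in your Helm values (a minimal sketch, assuming the standard SigNoz chart's otelCollector.config override; the debug exporter also has to be listed in a pipeline's exporters, otherwise it won't output anything):
otelCollector:
  config:
    exporters:
      debug:
        verbosity: detailed
    service:
      pipelines:
        logs:
          # append debug to whatever exporters this pipeline already has
          exporters: [clickhouselogsexporter, debug]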
a
Thanks for the response, here is my config:
global:
  storageClass: signoz-efs-sc
  cloud: aws

schemaMigrator:
  enabled: true
  name: "schema-migrator"
  nodeSelector:
    role: observability
    kubernetes.io/arch: arm64
    node.kubernetes.io/instance-type: r7g.large
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: role
            operator: In
            values: ["observability"]
          - key: kubernetes.io/arch
            operator: In
            values: ["arm64"]
          - key: node.kubernetes.io/instance-type
            operator: In
            values: ["r7g.large"]
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: role
              operator: In
              values: ["observability"]
          topologyKey: kubernetes.io/hostname
  tolerations:
  - key: "node.kubernetes.io/instance-type"
    operator: "Equal"
    value: "r7g.large"
    effect: "NoSchedule"

alertmanager:
  enabled: true
  name: "alertmanager"
  nodeSelector:
    role: observability
    kubernetes.io/arch: arm64
    node.kubernetes.io/instance-type: r7g.large
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: role
            operator: In
            values: ["observability"]
          - key: kubernetes.io/arch
            operator: In
            values: ["arm64"]
          - key: node.kubernetes.io/instance-type
            operator: In
            values: ["r7g.large"]
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: role
              operator: In
              values: ["observability"]
          topologyKey: kubernetes.io/hostname
  tolerations:
  - key: "node.kubernetes.io/instance-type"
    operator: "Equal"
    value: "r7g.large"
    effect: "NoSchedule"
  replicaCount: 1
  persistence:
    enabled: true
    accessModes:
      - ReadWriteOnce
    size: 1Gi
    annotations:
      "<http://helm.sh/resource-policy|helm.sh/resource-policy>": keep
      "<http://volume.kubernetes.io/storage-provisioner|volume.kubernetes.io/storage-provisioner>": "<http://efs.csi.aws.com|efs.csi.aws.com>"
    storageClass: signoz-efs-sc

queryService:
  name: "query-service"
  nodeSelector:
    role: observability
    kubernetes.io/arch: arm64
    node.kubernetes.io/instance-type: r7g.large
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: role
            operator: In
            values: ["observability"]
          - key: kubernetes.io/arch
            operator: In
            values: ["arm64"]
          - key: node.kubernetes.io/instance-type
            operator: In
            values: ["r7g.large"]
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: role
              operator: In
              values: ["observability"]
          topologyKey: kubernetes.io/hostname
  tolerations:
  - key: "node.kubernetes.io/instance-type"
    operator: "Equal"
    value: "r7g.large"
    effect: "NoSchedule"
  replicaCount: 1
  persistence:
    enabled: true
    accessModes:
      - ReadWriteMany
    size: 1Gi
    annotations:
      "<http://helm.sh/resource-policy|helm.sh/resource-policy>": keep
      "<http://volume.kubernetes.io/storage-provisioner|volume.kubernetes.io/storage-provisioner>": "<http://efs.csi.aws.com|efs.csi.aws.com>"
    storageClass: signoz-efs-sc

otelCollector:
  name: "otel-collector"
  nodeSelector:
    role: observability
    kubernetes.io/arch: arm64
    node.kubernetes.io/instance-type: r7g.large
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: role
            operator: In
            values: ["observability"]
          - key: kubernetes.io/arch
            operator: In
            values: ["arm64"]
          - key: node.kubernetes.io/instance-type
            operator: In
            values: ["r7g.large"]
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: role
              operator: In
              values: ["observability"]
          topologyKey: kubernetes.io/hostname
  tolerations:
  - key: "node.kubernetes.io/instance-type"
    operator: "Equal"
    value: "r7g.large"
    effect: "NoSchedule"
  annotations:
    "<http://helm.sh/hook-weight|helm.sh/hook-weight>": "3"
  podAnnotations:
    <http://signoz.io/scrape|signoz.io/scrape>: 'true'
    <http://signoz.io/port|signoz.io/port>: '8888'
  config:
    receivers:
      otlp:
        protocols:
          http:
            endpoint: 0.0.0.0:4318
            cors:
              allowed_origins:
                - "*"
              allowed_headers:
                - "*"
      filelog:
        exclude: []
        include:
          - /var/log/pods/**/*.log
        include_file_name: false
        include_file_path: true
        start_at: beginning
        preserve_trailing_whitespaces: true
        preserve_leading_whitespaces: false
        operators:
          - id: parser-containerd
            regex: ^(?P<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3}) \| (?P<log>.*)
            type: regex_parser
            output: containerd-recombine
          - id: containerd-recombine
            type: recombine
            combine_field: attributes.log
            source_identifier: attributes["log.file.path"]
            is_first_entry: body matches '^[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2}.[0-9]{3} [|] '
            max_log_size: 102400
            output: extract_metadata_from_filepath
          - id: extract_metadata_from_filepath
            parse_from: attributes["log.file.path"]
            regex: ^.*\/(?P<namespace>[^_]+)_(?P<pod_name>[^_]+)_(?P<uid>[a-f0-9\-]+)\/(?P<container_name>[^\._]+)\/(?P<restart_count>\d+)\.log$
            type: regex_parser
          - from: attributes.container_name
            to: resource["k8s.container.name"]
            type: move
          - from: attributes.namespace
            to: resource["k8s.namespace.name"]
            type: move
          - from: attributes.pod_name
            to: resource["k8s.pod.name"]
            type: move
          - from: attributes.restart_count
            to: resource["k8s.container.restart_count"]
            type: move
          - from: attributes.uid
            to: resource["k8s.pod.uid"]
            type: move
          - from: attributes.log
            to: body
            type: move

    processors:
      tail_sampling:
        policies:
          - name: error_traces
            type: status_code
            status_code:
              status_codes: [ERROR]
          - name: drop_noisy_traces_url
            type: string_attribute
            string_attribute:
              key: http.target
              values:
                - \/metrics
                - \/actuator*
                - opentelemetry\.proto
                - favicon\.ico
                - \/api\/[^/]+\/(?:svc|live)
              enabled_regex_matching: true
              invert_match: true
    exporters:
      debug:
        verbosity: detailed
    service:
      telemetry:
        logs:
          encoding: json
        metrics:
          address: 0.0.0.0:8888
      pipelines:
        traces:
          receivers: [otlp, jaeger]
          processors: [tail_sampling, signozspanmetrics/delta, batch]
          exporters: [clickhousetraces]
        metrics:
          receivers: [otlp]
          processors: [batch]
          exporters: [clickhousemetricswrite, metadataexporter, signozclickhousemetrics]
        logs:
          receivers: [otlp, httplogreceiver/heroku, httplogreceiver/json, filelog]
          processors: [batch]
          exporters: [clickhouselogsexporter, metadataexporter]

signoz:
  name: "signoz"
  replicaCount: 1
  nodeSelector:
    role: observability
    kubernetes.io/arch: arm64
    node.kubernetes.io/instance-type: r7g.large
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: role
            operator: In
            values: ["observability"]
          - key: kubernetes.io/arch
            operator: In
            values: ["arm64"]
          - key: node.kubernetes.io/instance-type
            operator: In
            values: ["r7g.large"]
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: role
              operator: In
              values: ["observability"]
          topologyKey: kubernetes.io/hostname
  tolerations:
  - key: "node.kubernetes.io/instance-type"
    operator: "Equal"
    value: "r7g.large"
    effect: "NoSchedule"

clickhouse:
  enabled: true
  nodeSelector:
    role: observability
    kubernetes.io/arch: arm64
    node.kubernetes.io/instance-type: r7g.large
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: role
            operator: In
            values: ["observability"]
          - key: kubernetes.io/arch
            operator: In
            values: ["arm64"]
          - key: node.kubernetes.io/instance-type
            operator: In
            values: ["r7g.large"]
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: role
              operator: In
              values: ["observability"]
          topologyKey: kubernetes.io/hostname
  tolerations:
  - key: "node.kubernetes.io/instance-type"
    operator: "Equal"
    value: "r7g.large"
    effect: "NoSchedule"
  installCustomStorageClass: false
  annotations:
  coldStorage:
    enabled: true
    defaultKeepFreeSpaceBytes: "2147483648"
    type: s3
    endpoint: https://infra.s3.us-west-2.amazonaws.com/data/
    role:
      enabled: true
      annotations:
        eks.amazonaws.com/role-arn: ${cold_storage_arn}
  persistence:
    enabled: true
    accessModes:
      - ReadWriteMany
    size: 30Gi
    annotations:
      "<http://helm.sh/resource-policy|helm.sh/resource-policy>": keep
      "<http://volume.kubernetes.io/storage-provisioner|volume.kubernetes.io/storage-provisioner>": "<http://efs.csi.aws.com|efs.csi.aws.com>"
    storageClass: signoz-efs-sc
  allowedNetworkIps:
    - "${vpc_cidr_block}"
    - "${secondary_cidr_block}"

clickhouseOperator:
  name: operator
  nodeSelector:
    role: observability
    kubernetes.io/arch: arm64
    node.kubernetes.io/instance-type: r7g.large
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: role
            operator: In
            values: ["observability"]
          - key: kubernetes.io/arch
            operator: In
            values: ["arm64"]
          - key: node.kubernetes.io/instance-type
            operator: In
            values: ["r7g.large"]
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: role
              operator: In
              values: ["observability"]
          topologyKey: kubernetes.io/hostname
  tolerations:
  - key: "node.kubernetes.io/instance-type"
    operator: "Equal"
    value: "r7g.large"
    effect: "NoSchedule"
The errors:
{"level":"info","ts":1749958398.5876358,"caller":"service@v0.111.0/service.go:234","msg":"Everything is ready. Begin running and processing data."}
{"level":"info","ts":1749958398.592236,"caller":"clickhousemetricsexporter/clickhouse.go:200","msg":"Shard count changed. Resetting time series map.","kind":"exporter","data_type":"metrics","name":"clickhousemetricswrite","prev":0,"current":4}
{"level":"info","timestamp":"2025-06-15T03:33:18.759Z","caller":"signozcol/collector.go:121","msg":"Collector service is running"}
{"level":"info","timestamp":"2025-06-15T03:33:18.759Z","logger":"agent-config-manager","caller":"opamp/config_manager.go:168","msg":"Config has not changed"}
{"level":"info","timestamp":"2025-06-15T03:33:19.361Z","caller":"service/service.go:73","msg":"Client started successfully"}
{"level":"info","timestamp":"2025-06-15T03:33:19.361Z","caller":"opamp/client.go:49","msg":"Ensuring collector is running","component":"opamp-server-client"}
{"level":"error","ts":1749958411.61286,"caller":"signozclickhousemetrics/exporter.go:1095","msg":"error writing metadata","kind":"exporter","data_type":"metrics","name":"signozclickhousemetrics","error":"code: 60, message: Table signoz_metrics.distributed_metadata does not exist. Maybe you meant signoz_metrics.distributed_usage?","stacktrace":"<http://github.com/SigNoz/signoz-otel-collector/exporter/signozclickhousemetrics.(*clickhouseMetricsExporter).writeBatch.func8|github.com/SigNoz/signoz-otel-collector/exporter/signozclickhousemetrics.(*clickhouseMetricsExporter).writeBatch.func8>\n\t/home/runner/work/signoz-otel-collector/signoz-otel-collector/exporter/signozclickhousemetrics/exporter.go:1095"}
{"level":"error","ts":1749958416.614171,"caller":"signozclickhousemetrics/exporter.go:1095","msg":"error writing metadata","kind":"exporter","data_type":"metrics","name":"signozclickhousemetrics","error":"code: 60, message: Table signoz_metrics.distributed_metadata does not exist. Maybe you meant signoz_metrics.distributed_usage?","stacktrace":"<http://github.com/SigNoz/signoz-otel-collector/exporter/signozclickhousemetrics.(*clickhouseMetricsExporter).writeBatch.func8|github.com/SigNoz/signoz-otel-collector/exporter/signozclickhousemetrics.(*clickhouseMetricsExporter).writeBatch.func8>\n\t/home/runner/work/signoz-otel-collector/signoz-otel-collector/exporter/signozclickhousemetrics/exporter.go:1095"}
{"level":"error","ts":1749958417.7051444,"caller":"signozclickhousemetrics/exporter.go:1095","msg":"error writing metadata","kind":"exporter","data_type":"metrics","name":"signozclickhousemetrics","error":"code: 60, message: Table signoz_metrics.distributed_metadata does not exist. Maybe you meant signoz_metrics.distributed_usage?","stacktrace":"<http://github.com/SigNoz/signoz-otel-collector/exporter/signozclickhousemetrics.(*clickhouseMetricsExporter).writeBatch.func8|github.com/SigNoz/signoz-otel-collector/exporter/signozclickhousemetrics.(*clickhouseMetricsExporter).writeBatch.func8>\n\t/home/runner/work/signoz-otel-collector/signoz-otel-collector/exporter/signozclickhousemetrics/exporter.go:1095"}
{"level":"error","ts":1749958418.6471806,"caller":"signozclickhousemetrics/exporter.go:1095","msg":"error writing metadata","kind":"exporter","data_type":"metrics","name":"signozclickhousemetrics","error":"code: 60, message: Table signoz_metrics.distributed_metadata does not exist. Maybe you meant signoz_metrics.distributed_usage?","stacktrace":"<http://github.com/SigNoz/signoz-otel-collector/exporter/signozclickhousemetrics.(*clickhouseMetricsExporter).writeBatch.func8|github.com/SigNoz/signoz-otel-collector/exporter/signozclickhousemetrics.(*clickhouseMetricsExporter).writeBatch.func8>\n\t/home/runner/work/signoz-otel-collector/signoz-otel-collector/exporter/signozclickhousemetrics/exporter.go:1095"}
{"level":"info","ts":1749958420.5934873,"caller":"internal/retry_sender.go:118","msg":"Exporting failed. Will retry the request after interval.","kind":"exporter","data_type":"logs","name":"clickhouselogsexporter","error":"StatementSend:context deadline exceeded","interval":"6.499943835s"}
{"level":"error","ts":1749958420.6267393,"caller":"signozclickhousemetrics/exporter.go:1095","msg":"error writing metadata","kind":"exporter","data_type":"metrics","name":"signozclickhousemetrics","error":"code: 60, message: Table signoz_metrics.distributed_metadata does not exist. Maybe you meant signoz_metrics.distributed_usage?","stacktrace":"<http://github.com/SigNoz/signoz-otel-collector/exporter/signozclickhousemetrics.(*clickhouseMetricsExporter).writeBatch.func8|github.com/SigNoz/signoz-otel-collector/exporter/signozclickhousemetrics.(*clickhouseMetricsExporter).writeBatch.func8>\n\t/home/runner/work/signoz-otel-collector/signoz-otel-collector/exporter/signozclickhousemetrics/exporter.go:1095"}
{"level":"info","ts":1749958421.5960984,"caller":"internal/retry_sender.go:118","msg":"Exporting failed. Will retry the request after interval.","kind":"exporter","data_type":"logs","name":"clickhouselogsexporter","error":"StatementSend:context deadline exceeded","interval":"5.952350102s"}
{"level":"info","ts":1749958422.594202,"caller":"internal/retry_sender.go:118","msg":"Exporting failed. Will retry the request after interval.","kind":"exporter","data_type":"logs","name":"clickhouselogsexporter","error":"StatementSend:context deadline exceeded","interval":"3.60806893s"}
{"level":"info","ts":1749958423.5970109,"caller":"internal/retry_sender.go:118","msg":"Exporting failed. Will retry the request after interval.","kind":"exporter","data_type":"logs","name":"clickhouselogsexporter","error":"StatementSend:context deadline exceeded","interval":"3.92054924s"}
{"level":"info","ts":1749958424.595732,"caller":"internal/retry_sender.go:118","msg":"Exporting failed. Will retry the request after interval.","kind":"exporter","data_type":"logs","name":"clickhouselogsexporter","error":"StatementSend:context deadline exceeded","interval":"2.780249743s"}
{"level":"info","ts":1749958425.597292,"caller":"internal/retry_sender.go:118","msg":"Exporting failed. Will retry the request after interval.","kind":"exporter","data_type":"logs","name":"clickhouselogsexporter","error":"StatementSend:context deadline exceeded","interval":"2.531997932s"}
{"level":"info","ts":1749958426.645978,"caller":"internal/retry_sender.go:118","msg":"Exporting failed. Will retry the request after interval.","kind":"exporter","data_type":"metrics","name":"clickhousemetricswrite","error":"context deadline exceeded","interval":"7.492980876s"}
{"level":"info","ts":1749958426.656024,"caller":"internal/retry_sender.go:118","msg":"Exporting failed. Will retry the request after interval.","kind":"exporter","data_type":"logs","name":"clickhouselogsexporter","error":"StatementSend:context deadline exceeded","interval":"7.48332567s"}