# support
n
Hello everyone, I installed SigNoz on October 2nd on our AKS cluster and set up cold storage on S3. However, the S3 bucket is receiving too many GET requests from SigNoz, which has resulted in a bill of around $600 just from these requests. The data collected on S3 is only 41GB so far. Does anyone know why it's making so many GET requests to AWS S3?
s
@Nagesh Rathod what is your TTL config? You should at least keep the data on disk for a reasonable amount of time so that small parts get merged and don't cause too many S3 requests.
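For context, the cold-storage side of this usually has two pieces: the `clickhouse.coldStorage` block in the Helm values that registers the S3-backed disk, and the per-signal "move to S3 after N days" TTL, which (if I remember right) is set in the SigNoz UI retention settings rather than in these values. A rough sketch of the chart values, with key names from memory that may differ between chart versions:

```yaml
clickhouse:
  coldStorage:
    # Enable the S3-backed cold storage disk for ClickHouse.
    enabled: true
    # Free space to reserve on the default (hot) disk; parts move out to
    # S3 based on the TTL and disk pressure, not immediately on ingest.
    defaultKeepFreeSpaceBytes: "10485760"
    type: s3
    # <bucket-name> is a placeholder for your bucket.
    endpoint: https://<bucket-name>.s3.amazonaws.com/data/
    accessKey: <aws-access-key-id>
    secretAccess: <aws-secret-access-key>
```

The longer the move-to-S3 TTL, the more time ClickHouse has to merge small parts locally before they land in the bucket, which means fewer objects, and therefore fewer GET requests, when queries touch cold data.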
n
```yaml
    # Query Log table configuration
    queryLog:
      # -- The number of days to keep the data in the query_log table.
      ttl: 30
      # -- Time interval in milliseconds between flushes of the query_log table.
      flushInterval: 7500
    # Part Log table configuration
    partLog:
      # -- The number of days to keep the data in the part_log table.
      ttl: 30
      # -- Time interval in milliseconds between flushes of the part_log table.
      flushInterval: 7500
    # Trace Log table configuration
    traceLog:
      # -- The number of days to keep the data in the trace_log table.
      ttl: 7
      # -- Time interval in milliseconds between flushes of the trace_log table.
      flushInterval: 7500

    asynchronousInsertLog:
      # -- The number of days to keep the data in the asynchronous_insert_log table.
      ttl: 7
      # -- Time interval in milliseconds between flushes of the asynchronous_insert_log table.
      flushInterval: 7500
    asynchronousMetricLog:
      # -- The number of days to keep the data in the asynchronous_metric_log table.
      ttl: 30
      # -- Time interval in milliseconds between flushes of the asynchronous_metric_log table.
      flushInterval: 7500
    backupLog:
      # -- The number of days to keep the data in the backup_log table.
      ttl: 7
      # -- Time interval in milliseconds between flushes of the backup_log table.
      flushInterval: 7500
    blobStorageLog:
      # -- The number of days to keep the data in the blob_storage_log table.
      ttl: 30
      # -- Time interval in milliseconds between flushes of the blob_storage_log table.
      flushInterval: 7500
    crashLog:
      # -- The number of days to keep the data in the crash_log table.
      ttl: 30
      # -- Time interval in milliseconds between flushes of the crash_log table.
      flushInterval: 7500
    metricLog:
      # -- The number of days to keep the data in the metric_log table.
      ttl: 30
      # -- Time interval in milliseconds between flushes of the metric_log table.
      flushInterval: 7500
    queryThreadLog:
      # -- The number of days to keep the data in the query_thread_log table.
      ttl: 7
      # -- Time interval in milliseconds between flushes of the query_thread_log table.
      flushInterval: 7500
    queryViewsLog:
      # -- The number of days to keep the data in the query_views_log table.
      ttl: 15
      # -- Time interval in milliseconds between flushes of the query_views_log table.
      flushInterval: 7500
    sessionLog:
      # -- The number of days to keep the data in the session_log table.
      ttl: 30
      # -- Time interval in milliseconds between flushes of the session_log table.
      flushInterval: 7500
    zookeeperLog:
      # -- The number of days to keep the data in the zookeeper_log table.
      ttl: 30
      # -- Time interval in milliseconds between flushes of the zookeeper_log table.
      flushInterval: 7500
    processorsProfileLog:
      # -- The number of days to keep the data in the processors_profile_log table.
      ttl: 7
      # -- Time interval in milliseconds between flushes of the processors_profile_log table.
      flushInterval: 7500
```
Metrics, traces, and logs are configured to move to the S3 bucket after a set number of days.
k
I would not recommend using S3 cold storage, since ClickHouse will access S3 constantly for querying. We use standard S3 storage and make sure the bucket is in the same region as the EC2 instance running SigNoz to prevent bandwidth costs. We have about 600GB of data and get charged only $15 on S3.
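For anyone who does keep cold storage enabled, the same-region point maps to the bucket endpoint in the Helm values: the regional, virtual-hosted-style URL should reference a bucket in the same region as the machines running SigNoz. A small sketch with placeholder names (key names may vary by chart version):

```yaml
clickhouse:
  coldStorage:
    type: s3
    # Hypothetical bucket in the same AWS region as the nodes running SigNoz,
    # so ClickHouse reads don't incur cross-region data transfer charges.
    endpoint: https://<bucket-name>.s3.us-east-1.amazonaws.com/data/
```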