Gil
06/05/2025, 8:56 AM
SELECT *
FROM signoz_metrics.time_series_v4
LIMIT 2
Query id: fc206a85-735e-45be-aef0-abb2a8fb61d8
Row 1:
──────
env: DEV08
temporality: Cumulative
metric_name: httpcheck_status
description: 1 if the check resulted in status_code matching the status_class, otherwise 0.
unit: 1
type: Sum
is_monotonic: false
fingerprint: 2673516990800643293
unix_milli: 1749110400000 -- 1.75 trillion
labels: {"__name__":"httpcheck_status","__temporality__":"Cumulative","deployment_environment":"DEV08","health_score":"3","host_name":"FRPAR3DXDEV08","http_method":"GET","http_status_class":"4xx","http_status_code":"200","http_url":"<https://xxx/healthcheck>","os_type":"linux","service_name":"PASSPORT"}
attrs: {}
scope_attrs: {}
resource_attrs: {}
__normalized: true
Row 2:
──────
env: DEV08
temporality: Cumulative
metric_name: httpcheck_status
description: 1 if the check resulted in status_code matching the status_class, otherwise 0.
unit: 1
type: Sum
is_monotonic: false
fingerprint: 5068959940230499294
unix_milli: 1749110400000 -- 1.75 trillion
labels: {"__name__":"httpcheck_status","__temporality__":"Cumulative","deployment_environment":"DEV08","health_score":"3","host_name":"FRPAR3DXDEV08","http_method":"GET","http_status_class":"1xx","http_status_code":"200","http_url":"<https://xxx/healthcheck>","os_type":"linux","service_name":"PASSPORT"}
attrs: {}
scope_attrs: {}
resource_attrs: {}
__normalized: true
I can see the column unix_milli is rounded down to the start of the hour,
when I do
SELECT *
FROM signoz_metrics.time_series_v2
LIMIT 2
Query id: 906ee324-9dee-4475-90d2-ee8d9ceedca6
Row 1:
──────
metric_name: httpcheck_duration
fingerprint: 15655094791583317282
timestamp_ms: 1749111867755 -- 1.75 trillion
labels: {"__name__":"httpcheck_duration","__temporality__":"Unspecified","deployment_environment":"DEV08","host_name":"FRPAR3DXDEV08","http_url":"<https://passport.3dx-dev08.plma.capgemini.com/3dpassport/healthcheck>","os_type":"linux","service_name":"PASSPORT"}
temporality: Unspecified
description: Measures the duration of the HTTP check.
unit: ms
type: Gauge
is_monotonic: false
Row 2:
──────
metric_name: httpcheck_status
fingerprint: 2673516990800643293
timestamp_ms: 1749111867755 -- 1.75 trillion
labels: {"__name__":"httpcheck_status","__temporality__":"Cumulative","deployment_environment":"DEV08","health_score":"3","host_name":"FRPAR3DXDEV08","http_method":"GET","http_status_class":"4xx","http_status_code":"200","http_url":"<https://passport.3dx-dev08.plma.capgemini.com/3dpassport/healthcheck>","os_type":"linux","service_name":"PASSPORT"}
temporality: Cumulative
description: 1 if the check resulted in status_code matching the status_class, otherwise 0.
unit: 1
type: Sum
is_monotonic: false
The column timestamp_ms has the right timestamp.
Is that normal?
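As a quick illustrative check (decoded separately, not part of the original query output; the toInt64 casts and the 'UTC' argument are defensive assumptions), the two epoch values above can be converted directly in ClickHouse:

SELECT
    fromUnixTimestamp64Milli(toInt64(1749110400000), 'UTC') AS unix_milli_decoded,  -- 2025-06-05 08:00:00.000, the start of the hour
    fromUnixTimestamp64Milli(toInt64(1749111867755), 'UTC') AS timestamp_ms_decoded -- 2025-06-05 08:24:27.755, the exact sample time

So unix_milli in time_series_v4 is truncated to the hour, while timestamp_ms in time_series_v2 keeps the exact sample time.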
Gil
06/05/2025, 9:18 AM

Nagesh Bansal
06/05/2025, 9:24 AM
It depends on the granularity of the time series table. The time_series_v4 table has a granularity of 1 hour.
Example: if the granularity is 1 hour, then the value will be the start of the hour in milliseconds.

Why do we do this? If we were to store the exact timestamp of the metric, we would have to store the timestamp for each sample, which would increase the size of the table and slow down the queries. By storing the start of the time interval, we try to balance the ability to know the most recent appearance of the metric against the speed of the queries. Even then, scans on the time series table can be slow if we are looking for a week's worth of data, so we have additional tables with different granularities to speed up the queries.

Ref: https://signoz.io/docs/userguide/write-a-metrics-clickhouse-query/#schema-for-time-series-tables
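A minimal sketch of how to verify the bucketing (a modulo check on the raw milliseconds; the toInt64 cast and the 'UTC' argument are defensive assumptions about the column type and display time zone):

SELECT
    unix_milli,
    fromUnixTimestamp64Milli(toInt64(unix_milli), 'UTC') AS bucket_start, -- start of the hourly bucket
    unix_milli % 3600000 AS ms_into_hour                                  -- 0 when the value is hour-aligned
FROM signoz_metrics.time_series_v4
LIMIT 5

Every row should show ms_into_hour = 0, matching the behaviour described above: the table stores the start of the time interval, not the exact arrival time of each sample.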
Gil
06/05/2025, 9:27 AM

Gil
06/05/2025, 9:28 AM

Nagesh Bansal
06/05/2025, 9:37 AM