# support
n
I have a weird question. I am noticing that the memory utilization displayed in SigNoz for alerts and such is way higher than what I see live when I am watching the pods. I know SigNoz is showing container memory limit utilization, but it's the same for pod memory limit utilization. I just don't get why it's so far off. And it's more frustrating because it's setting off warning alerts when the pods go over 90% usage, only for me to discover when I go into Kubernetes that they are at 40% or less. I even sit and watch both to make sure I didn't miss a spike or something, but nope, if anything the real values in k8s go down while the metrics in SigNoz go up. Part of me thinks I am missing something obvious. I also tried doing memory usage divided by the limit as a function and see the same results. So yeah, kinda just super confused and would love some insight. Thanks!
Had to remove some info from the k8s screenshot lol
So I have been learning a lot, and found out that to correctly track the clickhouse pods' memory/limit as it relates to what I see with `kubectl top pod`, I have to do the following:
Query A: `container_memory_working_set`, Average by `k8s_pod_name, k8s_container_name`
Query B: `k8s_container_memory_limit`, same Average by as above
Function F1: `A/B`
Y-axis unit: Percent (0.0-1.0)
Apparently `k8s_container_memory_limit_utilization` includes all the memory the container has claimed (page cache and the like), not just the working set that is actively being used, which is what `kubectl top pod` reports.
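For anyone who prefers PromQL panels, here is a minimal sketch of the same ratio. The metric and label names are assumptions carried over from the recipe above; they may differ depending on your collector/receiver configuration, so adjust them to whatever your setup actually emits.

```promql
# Working set divided by the configured memory limit, per container.
# Metric and label names are assumptions; verify against your own data.
avg by (k8s_pod_name, k8s_container_name) (container_memory_working_set)
/
avg by (k8s_pod_name, k8s_container_name) (k8s_container_memory_limit)
```

The result is a 0-1 ratio, so set the panel unit to Percent (0.0-1.0) just like in the query-builder version.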
Hopefully that helps someone else who was as confused as I was lol
s
I can see the confusion. These metrics need better explanation and more out-of-the-box dashboards and alerts.