# support
v
Hi, I'm using Micrometer to expose my JVM metrics and have been scraping them with the otel-metrics-collector, but I'm seeing an issue with the data conversion. Micrometer exposes the jvm_memory_used_bytes metric in scientific notation, e.g. 4.194304E9, which is ~3.9 GB, but on SigNoz dashboards it shows as ~78.1 GB (the dashboard unit is bytes(IEC)). Any suggestions on this?
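(A quick sketch of the conversion, not from the thread: 4.194304E9 is just ordinary scientific notation for ~4.19 billion bytes, which in IEC units is ~3.9 GiB, so the raw value itself is fine.)

```python
# The exported value, as written in scientific notation
used_bytes = 4.194304e9

# bytes(IEC): 1 GiB = 1024**3 bytes
gib = used_bytes / 2**30

print(gib)  # 3.90625 -> the ~3.9 GB the exporter actually reports
```

Since 78.1 is roughly 20x 3.9, the dashboard value looks like many samples being added together rather than a unit-conversion bug.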
s
How are you looking at the data? The value shown depends on the aggregation.
v
Yeah, actually I was using the sum aggregator. But here's the issue: if I use noop I can see 3 values for max_heap_bytes (eden, survivor, g1) with values 3.91 GB, -1 bytes, -1 bytes. When I sum them, the total goes up to 7.91 GB. -1 bytes isn't valid, but even so, how does the total reach 7.91 GB with the sum aggregate? Any idea?
s
The aggregation sums the values within the aggregation interval. For instance, if you scrape once every 30s and use sum over a 60s interval, it adds 3.91 and 3.91, which is why you could be seeing this. Wouldn't the average be more appropriate here?
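(A minimal sketch of that behavior, with made-up sample values: summing repeated scrapes of the same gauge inside one aggregation window double-counts it, while averaging does not.)

```python
# Hypothetical: two 30s scrapes of the same heap gauge land in a 60s window
samples_gib = [3.91, 3.91]

summed = sum(samples_gib)                   # sum aggregation: 7.82, double-counted
averaged = sum(samples_gib) / len(samples_gib)  # avg aggregation: 3.91, the real value

print(summed, averaged)  # 7.82 3.91
```

The same effect explains why a longer interval inflates the sum further: more scrapes fall into the window, so more copies of the gauge get added.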
v
True, that explains why I was seeing 27 GB for long intervals and 7.9 GB for short intervals. But if I use average, the -1 values will be included and the result will be way too low. 🥲
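(A rough sketch of why the -1 sentinels drag the average down, and one possible workaround; the filtering step is my own suggestion, not something confirmed in the thread, and whether SigNoz lets you filter the series before averaging depends on your query setup.)

```python
# Hypothetical samples: one real value plus two -1 "undefined max" sentinels
samples_bytes = [4.194304e9, -1.0, -1.0]

# Naive average includes the sentinels: ~1.4e9 bytes (~1.3 GiB), way too low
naive_avg = sum(samples_bytes) / len(samples_bytes)

# One option: drop the invalid series (value < 0) before averaging
valid = [v for v in samples_bytes if v >= 0]
filtered_avg = sum(valid) / len(valid)

print(naive_avg, filtered_avg)
```

Another angle worth checking is whether the -1 values come from memory pools with no configured max; excluding those pools (or those series) at query time would leave only meaningful data in the average.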