# support
I solved my disk space issue after turning off the default application and shortening the log and trace retention period! I've set up SigNoz on an EC2 instance (t2.medium, 40-50 GB EBS, Docker install) and have 44 GB of free space. The problem I'm facing now is that overnight SigNoz constantly locks up the EC2 instance. I can't SSH into the box and am forced to restart the instance via the AWS console. Once I restart, SigNoz runs great for about 20 hours before locking up again. Any thoughts or tips to help me resolve this?
My instinct says you're burning through your t-series CPU credits after 20 hours and there's no CPU left to service your SSH connection. Have you checked the instance's CPU credit balance?
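If it is credit exhaustion, CloudWatch exposes a `CPUCreditBalance` metric for burstable instances. A quick sketch of checking it from the AWS CLI; the instance ID is a placeholder, and the `aws` call is guarded so the script degrades gracefully when the CLI isn't installed:

```shell
#!/bin/sh
# Query the CPU credit balance of a burstable (t2/t3) instance over the
# last 24 hours. INSTANCE_ID is a placeholder -- substitute your own.
INSTANCE_ID="i-0123456789abcdef0"

if command -v aws >/dev/null 2>&1; then
  aws cloudwatch get-metric-statistics \
    --namespace AWS/EC2 \
    --metric-name CPUCreditBalance \
    --dimensions Name=InstanceId,Value="$INSTANCE_ID" \
    --start-time "$(date -u -d '24 hours ago' +%Y-%m-%dT%H:%M:%SZ)" \
    --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
    --period 3600 \
    --statistics Average
else
  echo "aws CLI not installed" >&2
fi
```

Note that `date -d '24 hours ago'` is GNU date syntax; on macOS/BSD use `date -v-24H` instead. A balance that flatlines at zero right before the lockups would confirm the credit theory.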
@James Nurmi Thx for the input - I think I found the issue
```
[ 3343.944480] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=init.scope,mems_allowed=0,global_oom,task_memcg=/system.slice/containerd.service,task=signoz-collecto,pid=7486,uid=0
[ 3343.966806] Out of memory: Killed process 7486 (signoz-collecto) total-vm:3537956kB, anon-rss:1981204kB, file-rss:0kB, shmem-rss:0kB, UID:0 pgtables:5400kB oom_score_adj:0
[ 3347.391516] oom_reaper: reaped process 7486 (signoz-collecto), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
[ 3354.008485] br-05df5b30603c: port 1(veth60da415) entered disabled state
```
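Since the OOM killer is targeting the collector, one stopgap before resizing is to cap the collector container's memory in Docker so Docker restarts just that container instead of the kernel taking the whole host down. A sketch of a compose override, assuming the service is named `otel-collector` as in typical SigNoz compose files (check `docker compose ps` for the actual name) and that a 1 GB cap suits a t2.medium:

```yaml
# docker-compose.override.yaml -- the service name and limit are assumptions;
# adjust them to match your SigNoz compose file and instance size.
services:
  otel-collector:
    deploy:
      resources:
        limits:
          memory: 1g
    restart: unless-stopped
```

The OpenTelemetry Collector also ships a `memory_limiter` processor that can apply back-pressure before memory runs out, which is gentler than a hard container kill.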
To validate this, I've set up a cron job that restarts SigNoz every night. If that stops the lockups, I've narrowed it down to a memory issue and will move to a larger instance.
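For the nightly-restart experiment, a minimal crontab entry might look like this; the install path and the 04:00 schedule are placeholders, and `docker compose restart` assumes the Docker Compose v2 plugin:

```shell
# Restart the SigNoz stack every night at 04:00 (path is a placeholder).
0 4 * * * cd /path/to/signoz/deploy/docker && docker compose restart >> /var/log/signoz-restart.log 2>&1
```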
Thanks for the input to Praveen, @James Nurmi. You make our community stronger 💪