# support
v
Hi team, one quick question: is there a way to implement retention of logs data via the helm chart and not from the UI? Or do we have a way to clear that via a ClickHouse command?
n
As of now, it’s not possible via the helm chart. What do you mean by
or do we have a way to clear that via a ClickHouse command
? Do you want to remove the TTL?
v
Actually, my PVC is full and it cannot be increased since its limit is reached. The whole SigNoz system is stuck because the DB is not responding due to the disk issue. So if I clear the logs, I think I can log in from the UI.
n
If you are able to exec into clickhouse then try deleting the data
```shell
kubectl exec -n platform -it chi-my-release-clickhouse-cluster-0-0-0 -- sh

clickhouse client

use signoz_logs;

truncate table signoz_logs.logs;
```
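Before truncating, it can be worth checking which tables actually hold the most data. A hypothetical sketch, assuming the same pod and namespace names as above (the query itself uses the standard `system.parts` table):

```shell
# Hypothetical: list active tables in signoz_logs by on-disk size, largest first.
kubectl exec -n platform -it chi-my-release-clickhouse-cluster-0-0-0 -- \
  clickhouse client --query "
    SELECT table, formatReadableSize(sum(bytes_on_disk)) AS size
    FROM system.parts
    WHERE active AND database = 'signoz_logs'
    GROUP BY table
    ORDER BY sum(bytes_on_disk) DESC"
```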
v
Okay, thanks. The logs table will be created automatically in the next run, right?
n
yeah, once you are done cleaning up, do set up retention period from the UI.
v
@nitya-signoz Actually, it didn't help. I'm not able to log in to SigNoz with the admin account; it says the user does not exist. Can you help?
I enabled the S3 cold storage, and in the middle of everything, I believe because my data is huge, it got stuck again. Now I'm not able to log in from the admin account. Do you know in which table we are storing these user details?
And also, is there any way to refresh all the data, like logs, traces and metrics?
When I try with the admin user it says the account doesn't exist, but I can see all the users in the SQLite DB. When enabling S3 cold storage, is there any migration script that we can run manually, just to make sure everything moved correctly? I believe that because the data is huge, when I enabled S3 cold storage the process somehow stopped, which stopped the migration script. Any help, guys?
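A hypothetical way to look at the SQLite user store from inside the query-service pod. The pod name, DB path, and table name here are assumptions, not confirmed by this thread:

```shell
# Hypothetical: inspect users in the query-service SQLite DB.
# Pod name, DB path, and table name are assumed and may differ per install.
kubectl exec -n platform -it my-release-signoz-query-service-0 -- \
  sqlite3 /var/lib/signoz/signoz.db "SELECT * FROM users;"
```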
n
@Vishal Sharma any idea about the admin user issue?
You can’t move things to S3 manually; it’s done by ClickHouse internally. You can check the currently running processes in ClickHouse with:
SELECT query_id, query FROM system.processes;
It will show you whether the TTL is being applied or not.
How much storage have you allocated to ClickHouse, and how much data are you ingesting?
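Run from outside the pod, the check above might look like the sketch below. The second command shows the table definition, where any configured TTL / storage policy would appear (table name assumed from earlier in the thread):

```shell
# Hypothetical: list in-flight ClickHouse queries and how long they have run.
kubectl exec -n platform -it chi-my-release-clickhouse-cluster-0-0-0 -- \
  clickhouse client --query "SELECT query_id, elapsed, query FROM system.processes"

# Hypothetical: show the logs table DDL, including any TTL / storage policy.
kubectl exec -n platform -it chi-my-release-clickhouse-cluster-0-0-0 -- \
  clickhouse client --query "SHOW CREATE TABLE signoz_logs.logs"
```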
v
initially 200 GB
and the ingestion is around 300 GB, I guess
n
Is it 300 GB per day?
v
No, initially we are dumping some data to test; daily 10-15 GB, I guess.
n
Okay, just to clear things up, you are facing two issues:
• Not able to log in
• ClickHouse disk full
Right?
v
Yes, not able to log in. For ClickHouse, I've increased the PVC to 500 GB with AWS support.
n
We can do a huddle for the login issue.
v
I think it's working now; I'm able to log in and see a few items after increasing the PVC.
n
Cool that’s great.
v
Also, I truncated a table in signoz_traces; will it be recreated in the next run?
n
To stop ingestion for some time, you can stop the otel-collector and the otel-collector-metrics.
Then run truncate; you can also change the TTL at this point and configure S3, as it will be fast.
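The pause-and-resume described above could be sketched with `kubectl scale`; the deployment names here are assumptions based on a default SigNoz helm release name and may differ:

```shell
# Hypothetical: pause ingestion by scaling the collectors to zero.
kubectl -n platform scale deployment my-release-signoz-otel-collector --replicas=0
kubectl -n platform scale deployment my-release-signoz-otel-collector-metrics --replicas=0

# ...run truncate, change TTL, configure S3 here, then resume ingestion:
kubectl -n platform scale deployment my-release-signoz-otel-collector --replicas=1
kubectl -n platform scale deployment my-release-signoz-otel-collector-metrics --replicas=1
```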
v
yes that is what i did yesterday. It is stopped for now.
As of now, it looks like something happened when I enabled S3 cold storage. Sometimes I'm able to see things, and sometimes I'm getting an authorization error while accessing the APIs from the frontend.
n
Can you check if any of the pods are restarting ?
v
No, it's not restarting, but I deployed the new setup, and sometimes I'm able to see things and sometimes it logs me out.
Any idea?
It looks like when we run the setup with replica count 2, some calls give a 403.
n
Ahh, as of now you will have to run one replica of the query service.
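Scaling the query service down to one replica could look like the sketch below; the workload name is assumed from a default helm release, and whether it is a StatefulSet or a Deployment may depend on the chart version:

```shell
# Hypothetical: force a single query-service replica so session state
# (backed by its local SQLite DB) is served consistently.
kubectl -n platform scale statefulset my-release-signoz-query-service --replicas=1
```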
v
yes did the same and it works, thanks