#support

Vinayak Singh

03/01/2023, 4:30 AM
Hi team, one quick question: is there a way to configure retention of logs data via the Helm chart rather than from the UI? Or do we have a way to clear that via a ClickHouse command?

nitya-signoz

03/01/2023, 4:40 AM
As of now, it’s not possible via the Helm chart. What do you mean by
do we have a way to clear that via a ClickHouse command
? Do you want to remove the TTL?

Vinayak Singh

03/01/2023, 4:59 AM
Actually, my PVC is full and cannot be increased because its limit has been reached. The whole SigNoz system is stuck because the DB is not responding due to the disk issue. So if I clear the logs, I think I can log in from the UI.

nitya-signoz

03/01/2023, 5:01 AM
If you are able to exec into ClickHouse, then try deleting the data:

kubectl exec -n platform -it chi-my-release-clickhouse-cluster-0-0-0 -- sh

clickhouse client

use signoz_logs;

truncate table signoz_logs.logs;
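If it is unclear which tables are actually filling the disk, one way to check before truncating (a sketch; run inside `clickhouse client`) is to sum part sizes from ClickHouse's `system.parts`:

```sql
-- Show on-disk size per table, largest first.
SELECT
    database,
    table,
    formatReadableSize(sum(bytes_on_disk)) AS size
FROM system.parts
WHERE active
GROUP BY database, table
ORDER BY sum(bytes_on_disk) DESC;
```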

Vinayak Singh

03/01/2023, 5:02 AM
Okay, thanks. The logs will be created automatically on the next run.

nitya-signoz

03/01/2023, 5:02 AM
Yeah. Once you are done cleaning up, do set the retention period from the UI.
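For context, setting retention from the UI ultimately applies a table TTL in ClickHouse. A hedged sketch of what an equivalent manual statement might look like — the exact TTL expression depends on your SigNoz schema version, and the nanosecond `timestamp` column here is an assumption, so verify with `SHOW CREATE TABLE` first:

```sql
-- Assumption: the logs timestamp is a UInt64 in nanoseconds; 7 days is an example value.
ALTER TABLE signoz_logs.logs
    MODIFY TTL toDateTime(timestamp / 1000000000) + INTERVAL 7 DAY;
```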

Vinayak Singh

03/01/2023, 6:21 AM
@nitya-signoz Actually it didn’t help. I’m not able to log in to SigNoz with the admin account; it says the user does not exist. Can you help?
I enabled S3 cold storage and, in the middle of everything, I believe it got stuck again because my data is huge. Now I’m not able to log in with the admin account. Do you know in which table we are storing these user details?
And also, is there any way to refresh all the data, like logs, traces, and metrics?
When I try with the admin user it says the account doesn’t exist, but I can see all the users in the SQLite DB. When enabling S3 cold storage, is there any migration script that we can run manually, just to make sure everything moved correctly? I believe that because the data is huge, the process somehow stopped when I enabled S3 cold storage, which stopped the migration script. Any help, guys?

nitya-signoz

03/01/2023, 2:37 PM
@Vishal Sharma any idea about the admin user issue?
You can’t move things manually to S3; it’s done by ClickHouse internally. You can check the currently running processes in ClickHouse with
SELECT query_id, query FROM system.processes;
It will show you whether the TTL is being applied or not.
How much storage have you allocated to ClickHouse, and how much data are you ingesting?
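To confirm whether the TTL (including any S3 move rule) is actually set, and whether parts have started moving, one way (a sketch; run inside `clickhouse client`) is to inspect the table definition and where its active parts live:

```sql
-- Show the table definition, including any TTL ... TO VOLUME/DISK clause.
SHOW CREATE TABLE signoz_logs.logs;

-- Check which disk each active part is on (e.g. 'default' vs an S3-backed disk).
SELECT disk_name, count() AS parts, formatReadableSize(sum(bytes_on_disk)) AS size
FROM system.parts
WHERE database = 'signoz_logs' AND table = 'logs' AND active
GROUP BY disk_name;
```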

Vinayak Singh

03/01/2023, 2:40 PM
Initially 200 GB, and the ingestion is around 300 GB, I guess.

nitya-signoz

03/01/2023, 2:41 PM
Is it 300 GB per day?

Vinayak Singh

03/01/2023, 2:42 PM
No, initially we are dumping some data to test; daily 10–15 GB, I guess.

nitya-signoz

03/01/2023, 2:43 PM
Okay, just to be clear, you are facing two issues:
• Not able to log in
• ClickHouse disk full
Right?

Vinayak Singh

03/01/2023, 2:45 PM
Yes, not able to log in. For ClickHouse, I’ve increased the PVC to 500 GB with AWS support.

nitya-signoz

03/01/2023, 2:53 PM
We can do a huddle for the login issue.

Vinayak Singh

03/01/2023, 2:55 PM
I think it’s working now. I’m able to log in and see a few items after increasing the PVC.

nitya-signoz

03/01/2023, 2:55 PM
Cool that’s great.

Vinayak Singh

03/01/2023, 2:56 PM
Also, I truncated a table in signoz_traces. Will it be recreated on the next run?

nitya-signoz

03/01/2023, 2:56 PM
To stop ingestion for some time, you can stop the otel-collector and the otel-collector-metrics.
Then run truncate. You can also change the TTL at this point and configure S3, as it will be fast.
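The pause-then-truncate sequence above can be sketched with kubectl. The deployment names here are assumptions — they vary with the Helm release name, so list the deployments first:

```shell
# Find the exact collector deployment names for your release.
kubectl get deploy -n platform

# Scale the collectors to zero to pause ingestion
# (names assumed for a release called "my-release").
kubectl scale deploy -n platform my-release-signoz-otel-collector --replicas=0
kubectl scale deploy -n platform my-release-signoz-otel-collector-metrics --replicas=0

# ...truncate tables, change TTL, configure S3 here...

# Scale back up to resume ingestion.
kubectl scale deploy -n platform my-release-signoz-otel-collector --replicas=1
kubectl scale deploy -n platform my-release-signoz-otel-collector-metrics --replicas=1
```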

Vinayak Singh

03/01/2023, 2:57 PM
Yes, that is what I did yesterday. It is stopped for now.
As of now, it looks like something happened when I enabled S3 cold storage. Sometimes I’m able to see things, and sometimes I get an authorization error while accessing the APIs from the frontend.

nitya-signoz

03/02/2023, 3:35 AM
Can you check if any of the pods are restarting?

Vinayak Singh

03/02/2023, 6:05 AM
No, it’s not restarting, but I deployed the new setup, and sometimes I’m able to see things and sometimes it logs me out.
Any idea?
It looks like when we run the setup with replica count 2, some calls return a 403.

nitya-signoz

03/02/2023, 6:34 AM
Ahh, as of now you will have to run a single replica of the query service.
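A sketch of pinning the query service to one replica via a Helm values override — the key names here are assumptions, so verify them against the chart's values.yaml for your version:

```yaml
# values.yaml override (assumed key names; check the signoz chart's values.yaml).
queryService:
  replicaCount: 1
```

Applied with something like `helm upgrade my-release signoz/signoz -n platform -f values.yaml` (release and chart names assumed).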

Vinayak Singh

03/02/2023, 6:34 AM
Yes, I did the same and it works, thanks.