# support
l
Hi. This is Lakshyajit. We have been getting a particular error in our Kafka + Druid setup of SigNoz today. The error is as follows:

```
2021-10-18T07:44:56.354Z  ERROR  druidQuery/mysql-query.go:427  &{2021-10-18T07:29:43.095Z 2021-10-18T07:34:43.095Z 300 2021-10-18 07:29:43.095 +0000 UTC 2021-10-18 07:34:43.095 +0000 UTC} 400 Bad Request: {"error":"Plan validation failed","errorMessage":"org.apache.calcite.runtime.CalciteContextException: From line 1, column 164 to line 1, column 178: Object 'flattened_spans' not found","errorClass":"org.apache.calcite.tools.ValidationException","host":null}
```

P.S. The SigNoz server is running on a GKE cluster using Helm. Any help would be appreciated. Thanks.
p
@User Can you check this?
a
The above error means the Druid datasource (`flattened_spans`) is not available anymore
@User Have you set up the S3 config in `values.yaml`? If not, check this section of the docs
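For reference, the S3 deep-storage flags map onto Druid's standard `druid.storage.*` / `druid.s3.*` runtime properties. A minimal sketch of what the `values.yaml` overrides might look like, assuming the chart exposes them as underscore-style flags (the exact key names and nesting depend on the chart version; the bucket name here is hypothetical):

```yaml
# Hypothetical values.yaml excerpt -- key names/nesting depend on the chart version.
# The underscore flags correspond to Druid's druid.storage.* / druid.s3.* properties.
configVars:
  druid_storage_type: s3
  druid_storage_bucket: my-signoz-segments   # hypothetical bucket name
  druid_storage_baseKey: druid/segments      # prefix for published segment objects
  druid_s3_accessKey: <AWS_ACCESS_KEY_ID>    # some charts take AWS_* env vars instead
  druid_s3_secretKey: <AWS_SECRET_ACCESS_KEY>
```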
l
Thanks for your reply @User! Will check this and get back to you.
👍 1
a
you can also explore the Druid UI and look under the `Datasources` tab by port-forwarding `svc/signoz-druid-router` (please recheck the service name with `kubectl -n platform get svc`) using the below command:

```
kubectl -n platform port-forward svc/signoz-druid-router 8888:8888
```
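Once the port-forward is running, the Druid console should be reachable at `http://localhost:8888`. A quick non-UI check, assuming the router proxies the standard Druid datasources endpoint:

```
# List queryable datasources via the router (standard Druid API endpoint);
# flattened_spans should appear here if the datasource still exists.
curl http://localhost:8888/druid/v2/datasources
```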
l
Okay... actually we have set this up on GCP. Is there any similar setup for GCP, equivalent to the S3 settings you mentioned?
a
GKE or GCP? running on K8s right?
l
Yeah
It's on GKE
a
ok.. then the above way of setting flags in `values.yaml` of the Helm chart still works
l
Ok, so do we need to explicitly configure anything on AWS S3?
l
Okay..
a
In short, no specific permissions apart from GetObject and PutObject
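For illustration, a minimal IAM policy sketch under that assumption (bucket name hypothetical; depending on how Druid is operated, further actions such as `s3:DeleteObject` or `s3:ListBucket` may also be needed, so treat this as a starting point):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::my-signoz-segments/*"
    }
  ]
}
```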
Did it work?
l
Hi @User. Apologies for such a delayed response. I understood the S3 config that is to be done as per the doc links you shared above. I have one more query though: there is a certain field called `druid_storage_baseKey` - could you please explain what that is exactly? I suppose the rest of the keys (AWS access/secret key) we'll get while setting up the S3 bucket itself; this is the only thing I am unable to figure out.
a
Hi @User It's a prefix string that will be prepended to the object names for the segments published to S3 deep storage
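Concretely (hypothetical names): with `druid_storage_bucket: my-signoz-segments` and `druid_storage_baseKey: druid/segments`, published segments would end up under object keys like:

```
s3://my-signoz-segments/druid/segments/<datasource>/<interval>/<version>/...
```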
l
Oh okay... so it's the initial string that prefixes the object keys inside the S3 bucket, right?
a
yes
l
got it
👍 1
hi @User I have been trying to set this up, but it looks like in order to get the prefix string, I would have to make the S3 bucket public. Is that desirable? I mean, we wouldn't want to make all the log metrics public
a
I don't think you need to make anything public for the prefix to work. A client was able to set up a private S3 bucket and link it with Druid. Please send me any link/discussion which says so, and I will look deeper
l
I tried to implement it, but it looks like there is still some issue with the configuration. I tried looking for docs and GitHub issues related to the setup, but they weren't of much help. Could you point me to some sources on how to set this up with Helm?
a
ohkk... seems like there is little we can help with on the Druid setup. We are deprecating the Druid setup and building more on the ClickHouse setup. We have released new features in ClickHouse, including metrics ingestion, building custom dashboards, and alerting. I would highly recommend moving to the ClickHouse setup. Though it does not have installation instructions for K8s yet, those will be available around 7-10th Dec.
Would it be possible to set up SigNoz using docker-compose on an independent VM and send telemetry from the K8s applications to that instance? You can test out SigNoz there, and for prod config we shall have Helm charts released by then
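A minimal sketch of how the K8s apps could point at that VM, assuming the SigNoz docker-compose setup exposes the standard OTLP gRPC port 4317 and using the standard OpenTelemetry SDK environment variables (VM address and service name hypothetical):

```yaml
# Hypothetical Deployment excerpt: point an instrumented app at the
# OTel collector running on the SigNoz VM.
env:
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: "http://<signoz-vm-ip>:4317"   # replace with the VM's reachable address
  - name: OTEL_RESOURCE_ATTRIBUTES
    value: "service.name=my-k8s-app"      # hypothetical service name
```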
l
Will talk to my lead and let you know about this
🆗 1