# support
v
1. What is the scale you are running at? ServiceMap does have some known perf issues. 2. Can you please record a loom video? @User please help him out. 3. External calls are identified using span type, not sure about db calls. @User please help here.
a
@User external calls and db calls are part of the automatically generated data. If you use an http library to make requests, and similarly db client libs to query a db server, the data will be automatically available if the libs are supported by opentelemetry. If that is not working you can always add the data yourself using the sdks. For the semantics, refer to the official otel semantic conventions docs, which we use to extract the data.
BTW, which language and libs do you use to make requests and db queries? I can do a quick lookup in the otel docs for support.
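If a lib isn't covered, here is a minimal sketch of adding the db attributes by hand with the otel Go SDK (assuming a TracerProvider/exporter is already set up; the tracer name, db name and query below are just placeholders):

```go
package main

import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
)

// queryUsers wraps a db query in a manually created span and attaches the
// otel db semantic-convention attributes so it is recognised as a db call.
func queryUsers(ctx context.Context) error {
	tracer := otel.Tracer("example/db") // placeholder instrumentation name
	ctx, span := tracer.Start(ctx, "SELECT users")
	defer span.End()

	span.SetAttributes(
		attribute.String("db.system", "postgresql"),
		attribute.String("db.name", "appdb"),                    // placeholder
		attribute.String("db.statement", "SELECT * FROM users"), // placeholder
	)

	// ... run the actual query with your postgres client here, passing ctx ...
	_ = ctx
	return nil
}

func main() {
	_ = queryUsers(context.Background())
}
```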
h
I used golang and made grpc calls and postgres db calls. I am seeing the db.system as postgres and also the queries being executed, but they are not coming up as database calls.
1. What is the scale you are running at? ServiceMap does have some known perf issues.
Right now we are running in non-prod, which is low scale. Roughly 1L (~100K) traces generated in a few hours. In production we will have at least 500X of this.
a
I am seeing the db.system as postgres and also the queries being executed, but they are not coming up as database calls
If you are seeing that, then it should show up in the charts also. Possible to DM me the API response where you see db.system as postgres and the queries being executed?
@User Are you using the service map as it is? We can optimise it in our next release. I think it can be made many times faster using simple DB optimizations. @User this might be a good time to improve the perf of the service map.
In production we will have at least 500X of this.
Should not be an issue unless you query spans over a very large duration (> 6 hrs). If I got you correctly, your production data will be around 15K spans/s, right? Right now we are benchmarking with 50K spans/s ingestion. We shall publish the performance results in 2 weeks. Anyhow, query speed scales linearly with CPU added to clickhouse, which can scale both horizontally and vertically.
🙌 1
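Rough back-of-envelope behind that number, assuming the 1L in non-prod was over roughly 3 hours and a handful of spans per trace: 100K traces × 500 ≈ 50M traces over the same window, i.e. ~4.6K traces/s, which at ~3 spans per trace comes out around 14K spans/s.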
For even higher scale we recommend introducing sampling to save disk space.
we can help you with prod setup later when the PoC is done 🙂
h
Got it, thanks! Do you have any example on sampling? Should we do it at the collector level or the application level?
@User Are you using the service map as it is? We can optimise it in our next release. I think it can be made many times faster using simple DB optimizations. @User this might be a good time to improve the perf of the service map.
This is a major drawback for us since we have a lot of microservices. It would really help if we can get this, since in non-prod it does not load even with less data.
a
Got it, thanks! Do you have any example on sampling? Should we do it at the collector level or the application level?
Collector level. Will share the guidelines soon. If you need to kickstart now, you can use the config from https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/probabilisticsamplerprocessor (rough sketch below).
🙌 1
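A minimal sketch of that processor in the collector config (the 25% rate is just an example to tune for your volume, and the receivers/exporters should stay whatever your pipeline already uses):

```yaml
processors:
  # keep roughly 25% of traces; tune sampling_percentage to your volume
  probabilistic_sampler:
    sampling_percentage: 25

service:
  pipelines:
    traces:
      receivers: [otlp]                            # keep your existing receivers
      processors: [probabilistic_sampler, batch]   # add it ahead of your existing processors
      exporters: [clickhousetraces]                # keep your existing exporters
```

This drops a fixed fraction of traces at ingestion, so disk usage goes down by roughly the same factor.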
This is a major drawback for us since we have a lot of microservices. It would really help if we can get this, since in non-prod it does not load even with less data.
Got it. Let me try to get to this early next week.
🙏 1
p
Hi @User, for 2, we just released v0.7.5 which will solve your issue.
🙌 1