# general
d
Hey SigNoz team, we have been using SigNoz at Dukaan as a replacement for Jaeger for quite some time now and find it amazing (especially the UI, so much richer than Jaeger's). I noticed that Jaeger has this concept of a sidecar agent: it expects clients to send traces over UDP, and it is the agent's job to forward those traces to the collector. But I couldn't find an equivalent in SigNoz. While trying to trace the nginx ingress controller, the official docs say it expects either `jaeger-collector-host` or `jaeger-endpoint`. I could successfully trace nginx with SigNoz by using `jaeger-endpoint` and simply pointing it at the SigNoz collector's endpoint, but how would I go about tracing nginx using `jaeger-collector-host`?
The follow-up question: is there any difference between tracing through the agent versus sending directly to the collector? My assumption is there might be some performance impact on the application when sending trace data directly to the collector as compared to the agent, since the agent works over UDP. Attaching the nginx ingress doc for reference: https://kubernetes.github.io/ingress-nginx/user-guide/third-party-addons/opentracing/
s
> but how would i go on tracing nginx using `jaeger-collector-host`?

Depending on the protocol it would be `host:14250` (gRPC) or `host:14268` (Thrift/HTTP).
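For context, the ingress-nginx OpenTracing settings live in the controller's ConfigMap. A minimal sketch of the two configuration options being discussed, assuming the SigNoz collector is reachable as a service named `signoz-otel-collector` (the name, namespace, and ports here are assumptions, not taken from the thread):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # placeholder name/namespace
  namespace: ingress-nginx
data:
  enable-opentracing: "true"

  # Option A: send Thrift-over-HTTP straight to the collector endpoint
  jaeger-endpoint: "http://signoz-otel-collector:14268/api/traces"

  # Option B: agent-style UDP send (use instead of Option A)
  # jaeger-collector-host: "signoz-otel-collector"
  # jaeger-collector-port: "6831"
```

Which option works depends on which receiver protocols the collector has enabled, as discussed below in the thread.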
> My assumption is there might be some performance impact on the application if directly sending trace data to the collector as compared to the agent, since the agent works on UDP.

Are you talking about the jaeger-client instrumentation on your application side?
d
This is in the context of instrumenting nginx! Also, lemme try the protocol thing you said.
s
Slightly confused by the mixed use of "application" and "nginx". In my experience maintaining the Jaeger components in OTel Python, the agent exporters are used for operational advantages rather than any performance reasons. Also, these exporters have limitations with the UDP message size and often drop spans.
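The UDP size limitation mentioned here can be demonstrated with a small stand-alone sketch (plain sockets, not any Jaeger library): a single UDP datagram can never exceed 65,507 bytes of payload (64 KB minus IP/UDP headers), and OS defaults are often much lower, which is why an oversized span batch sent agent-style simply cannot go out:

```python
import socket

# Illustrate the hard ceiling on a single UDP datagram that agent-style
# (UDP) exporters run into: oversized span batches cannot be sent at all.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 0))        # throwaway local endpoint
dest = sock.getsockname()

# A small "span batch" fits in one datagram and sends fine.
sent = sock.sendto(b"x" * 1024, dest)
print("sent", sent, "bytes")

# A batch above the UDP limit is rejected by the OS (EMSGSIZE),
# which is the failure mode behind "dropped spans" over UDP.
try:
    sock.sendto(b"x" * 70_000, dest)
except OSError as exc:
    print("oversized datagram rejected:", exc.__class__.__name__)

sock.close()
```

Some client libraries work around this by splitting oversized batches into multiple datagrams; whether the nginx Jaeger plugin does is not covered in this thread.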
d
So the use case goes like this: we use nginx ingress in k8s as a reverse proxy, and originally we had our entire backend architecture traced with Jaeger. We also had a logging system that stored nginx logs in Elastic, enriched with the upstream response time. Many times on our frontend we could see APIs sometimes taking a lot of time, APIs which were doing trivial tasks. We had included the trace id in the response header, and whenever we checked those traces in Jaeger the trace duration was very short compared to the API response time (even the nginx logs were showing high upstream response time). Analysing this, we were sure the backend was working perfectly fine, so the only thing that remained a blackbox was nginx. Hence the intent: we wanna trace the request the moment it hits our k8s cluster, i.e. trace from the nginx level, to figure out where the time is spent on such requests.
s
Ok. I don't have any information on the performance gains of the agent vs the direct collector. One thing to remember is that instrumentation configured with UDP may drop spans because of the message size restrictions. The SigNoz collector can receive the data in different modes, i.e. UDP/Thrift for agent mode and gRPC or Thrift/HTTP for collector mode; just the endpoint configuration changes. I hope this helps.
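The "different modes" above map to the Jaeger receiver protocols of an OpenTelemetry Collector (which the SigNoz collector is based on). A sketch of the receiver section, using the standard Jaeger default ports (the exact SigNoz deployment config may differ):

```yaml
receivers:
  jaeger:
    protocols:
      grpc:
        endpoint: 0.0.0.0:14250    # collector mode, gRPC
      thrift_http:
        endpoint: 0.0.0.0:14268    # collector mode, Thrift over HTTP
      thrift_compact:
        endpoint: 0.0.0.0:6831     # agent mode, compact Thrift over UDP
```

So pointing `jaeger-endpoint` at port 14268, or `jaeger-collector-host` at the UDP port, only changes which receiver the same collector ingests through.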
d
Thanks. Will try over UDP; I believe we can increase the UDP packet size limit. I did this with Jaeger on OS X, will check how to do it in k8s as well. But can you point me to someone who might have more information on this one?
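For reference, the OS X tweak mentioned here is the kernel's maximum UDP datagram size, which defaults to ~9 KB on macOS; raising it toward the 64 KB UDP ceiling is a host-level `sysctl` change (these are real macOS sysctl keys, but whether they apply to a given k8s node depends on its OS):

```
# Inspect the current per-datagram UDP send limit (macOS)
sysctl net.inet.udp.maxdgram

# Raise it toward the UDP maximum (does not persist across reboots)
sudo sysctl -w net.inet.udp.maxdgram=65536
```

On Linux nodes there is no direct equivalent; the 65,507-byte UDP payload maximum applies, and the client library's configured max packet size is usually the knob that matters.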
s
> But can you point me to someone who might have more information on this one?

Did you mean someone who knows more about the perf part of agent vs collector? Or something else?
d
The perf part right now. I don't wanna set nginx to send traces directly to the collector over HTTP if it will have any kind of performance issues, since nginx is one of our most important services.
s
You may want to ask this in the Jaeger community GitHub repo https://github.com/jaegertracing or their Slack (#jaeger in the CNCF Slack). Some relevant info here, but no mention of perf gains anywhere: https://github.com/jaegertracing/jaeger/discussions/1488
d
alright thanks