# support
a
Hi team, one of my services is not appearing on the Services tab at all, and the ClickHouse logs are peppered with `Max query size exceeded` / `failed at position 262082` errors:
```
0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000c800f1b in /usr/bin/clickhouse
1. DB::Exception::createDeprecated(String const&, int, bool) @ 0x000000000c857c6d in /usr/bin/clickhouse
2. DB::parseQueryAndMovePosition(DB::IParser&, char const*&, char const*, String const&, bool, unsigned long, unsigned long) @ 0x000000001316c11c in /usr/bin/clickhouse
3. DB::executeQueryImpl(char const*, char const*, std::shared_ptr<DB::Context>, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x00000000119004f3 in /usr/bin/clickhouse
4. DB::executeQuery(String const&, std::shared_ptr<DB::Context>, DB::QueryFlags, DB::QueryProcessingStage::Enum) @ 0x00000000118ff77a in /usr/bin/clickhouse
5. DB::TCPHandler::runImpl() @ 0x000000001291be29 in /usr/bin/clickhouse
6. DB::TCPHandler::run() @ 0x0000000012933eb9 in /usr/bin/clickhouse
7. Poco::Net::TCPServerConnection::start() @ 0x00000000153a5a72 in /usr/bin/clickhouse
8. Poco::Net::TCPServerDispatcher::run() @ 0x00000000153a6871 in /usr/bin/clickhouse
9. Poco::PooledThread::run() @ 0x000000001549f047 in /usr/bin/clickhouse
10. Poco::ThreadImpl::runnableEntry(void*) @ 0x000000001549d67d in /usr/bin/clickhouse
11. ? @ 0x00007fb5e52f9609
12. ? @ 0x00007fb5e521e353
```
The attempted query is about 101,000 characters long. The service appears on the Service Map tab and I can query `service.namespace` successfully, but it seems the above error is blocking it from appearing on the Services tab. Thanks.
s
Please follow the span name recommendations:
> The span name SHOULD be the most general string that identifies a (statistically) interesting class of Spans, rather than individual Span instances while still being human-readable. That is, "get_user" is a reasonable name, while "get_user/314159", where "314159" is a user ID, is not a good name due to its high cardinality. Generality SHOULD be prioritized over human-readability.
https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/trace/api.md#span
a
Hi @Srikanth Chekuri, I had a look and those offending span names are generated by the Elasticsearch client, the MS .NET client, and the MS .NET SQL client libraries. Any additional thoughts or guidance, since these span names are not under our control? Perhaps we could increase `max_query_size`? Thanks for the response.
s
You could increase the query size, but I would suggest addressing the span names issue instead. You can normalize them by adding a processor: https://opentelemetry.io/docs/collector/configuration/#processors
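For reference, a minimal sketch of a collector config that normalizes span names with the transform processor. The pattern and replacement shown are illustrative, and the pipeline fragment assumes receivers/exporters are defined elsewhere in the config:

```yaml
# Sketch: normalize high-cardinality span names with the transform processor.
# The glob pattern and replacement string below are examples, not the actual fix.
processors:
  transform:
    trace_statements:
      - context: span
        statements:
          - replace_match(name, "Elasticsearch DELETE domain_objects_*", "Elasticsearch DELETE")

service:
  pipelines:
    traces:
      processors: [transform]  # plus your existing receivers/exporters
```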
a
I have reduced the cardinality and shortened the long span names, but I am still seeing an error when attempting to load the Services tab.
```
62. DB::Exception: Syntax error: failed at position 262103 (''Elasticsearch DELETE domain_objects_971gaf95-5a65-4eef-bcd5-cec23fd391d6_23fd9d6c-cdec-435c-a8d5-390169141a32_temp'') (line 6, col 261938): 'Elasticsearch DELETE domain_objects_971gaf95-5a65-4eef-bcd5-cec23fd391d6_23fd9d6c-cdec-435c-a8d5-390169141a32_temp', 'Elasticsearch DELETE domain_ob. Max query size exceeded: ''Elasticsearch DELETE domain_objects_971gaf95-5a65-4eef-bcd5-cec23fd391d6_23fd9d6c-cdec-435c-a8d5-390169141a32_temp''. (SYNTAX_ERROR):

0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000c800f1b in /usr/bin/clickhouse
1. DB::Exception::createDeprecated(String const&, int, bool) @ 0x000000000c857c6d in /usr/bin/clickhouse
2. DB::parseQueryAndMovePosition(DB::IParser&, char const*&, char const*, String const&, bool, unsigned long, unsigned long) @ 0x000000001316c11c in /usr/bin/clickhouse
```
I am confused why that span name still exists, given the following transform:
```
- replace_match(name, "Elasticsearch DELETE domain_objects_*", "Elasticsearch DELETE")
```
Also, if I query for that span name, it does not exist:
```sql
SELECT name
FROM signoz_traces.distributed_signoz_index_v2
WHERE startsWith(name, 'Elasticsearch DELETE domain')
ORDER BY `timestamp` DESC
LIMIT 10
```

```
Ok.

0 rows in set. Elapsed: 0.014 sec. Processed 930.10 thousand rows, 931.02 KB (67.50 million rows/s., 67.56 MB/s.)
```
Is any manual purging of the no-longer-relevant, previously long, high-cardinality span names required? Thanks @Srikanth Chekuri
s
> Is there any manual purging required of the no longer relevant, previously long, high cardinality span names
Yes, can you truncate the `signoz_traces.top_level_operations` table and check? It should fill back up fairly quickly.
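The suggested truncate can be run from clickhouse-client; a minimal sketch (on a clustered deployment you may additionally need an `ON CLUSTER` clause, depending on your setup):

```sql
-- Clear the cached list of top-level operations; SigNoz repopulates it
-- from incoming spans, so the Services tab should recover quickly.
TRUNCATE TABLE signoz_traces.top_level_operations;
```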
a
That cleared the error; the missing service now appears on the Services tab! Let's see if it stays that way. Thank you for the guidance.