Thai Son Tran
10/19/2022, 3:10 AM

Mallesh Kannan
10/19/2022, 5:56 AM

Mallesh Kannan
10/19/2022, 6:21 AM

Kasim Ali
10/19/2022, 7:04 AM

Bhavesh Patel
10/19/2022, 7:07 AM

Prajyod Arudra
10/19/2022, 9:57 AM

Siddhartha
10/19/2022, 12:54 PM
GraphQL application.
I believe this is happening because of the way my host is set up. It's a VM with docker + nginx proxy manager. What should be the value for OTEL_EXPORTER_OTLP_ENDPOINT in this case?

Abhinav Ramana
10/19/2022, 5:58 PM
TypeError: an integer is required (got type NoneType)
Transient error StatusCode.UNAVAILABLE encountered while exporting metrics, retrying in Nones.
Exception while exporting metrics an integer is required (got type NoneType)
Traceback (most recent call last):
File "/opt/venv/lib/python3.9/site-packages/opentelemetry/exporter/otlp/proto/grpc/exporter.py", line 305, in _export
self._client.Export(
File "/opt/venv/lib/python3.9/site-packages/grpc/_channel.py", line 946, in __call__
return _end_unary_response_blocking(state, call, False, None)
File "/opt/venv/lib/python3.9/site-packages/grpc/_channel.py", line 849, in _end_unary_response_blocking
raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "failed to connect to all addresses; last error: UNKNOWN: Failed to connect to remote host: Connection refused"
debug_error_string = "UNKNOWN:Failed to pick subchannel {created_time:"2022-10-19T17:51:34.004255785+00:00", children:[UNKNOWN:failed to connect to all addresses; last error: UNKNOWN: Failed to connect to remote host: Connection refused {created_time:"2022-10-19T17:51:34.004247397+00:00", grpc_status:14}]}"
>
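For context, StatusCode.UNAVAILABLE with "Connection refused" in the traceback above means no process was accepting connections at the exporter's target address. A stdlib-only sketch (a hypothetical helper, not part of the OpenTelemetry SDK) for checking whether a collector port is reachable before blaming the exporter:

```python
import socket

def otlp_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a plain TCP connection to host:port succeeds.

    This only proves that something is listening (e.g. an OTel collector
    on 4317); it does not verify the listener actually speaks OTLP/gRPC.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# "Connection refused" from grpc corresponds to this returning False.
print(otlp_port_open("127.0.0.1", 4317))
```

If this returns False from inside the application's container, the endpoint host/port is wrong or the collector is not running there.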
It sends the request successfully, but it never reaches SigNoz because of this exception.

Zoop
10/19/2022, 8:33 PM
{
  "timestamp": 1666209977113678300,
  "id": "2GMrb6aQr7vufL4DoehXo21U7Dl",
  "trace_id": "",
  "span_id": "",
  "trace_flags": 0,
  "severity_text": "",
  "severity_number": 0,
  "body": "[longhorn-instance-manager] time=\"2022-10-19T20:06:17Z\" level=debug msg=\"Getting snapshot clone status\" serviceURL=\"10.42.0.236:10007\"",
  "resources_string": {},
  "attributes_string": {
    "k8s_container_name": "engine-manager",
    "k8s_container_restart_count": "0",
    "k8s_namespace_name": "longhorn-system",
    "k8s_pod_name": "instance-manager-e-0df031cc",
    "k8s_pod_uid": "9f5aeea7-52ac-4fa6-83de-ad957792842c",
    "log_file_path": "/var/log/pods/longhorn-system_instance-manager-e-0df031cc_9f5aeea7-52ac-4fa6-83de-ad957792842c/engine-manager/0.log",
    "log_iostream": "stderr",
    "logtag": "F",
    "time": "2022-10-19T20:06:17.113678307Z"
  },
  "attributes_int": {},
  "attributes_float": {}
}
error:
* 'logs.exclude' has invalid keys: bodies
2022/10/19 20:12:25 application run finished with error: failed to get config: cannot unmarshal the configuration: error reading processors configuration for "filter/1": 1 error(s) decoding:
* 'logs.exclude' has invalid keys: bodies
My configuration:
processors:
  filter/1:
    logs/bodies:
      exclude:
        match_type: regexp
        bodies:
          - "longhorn.*level=debug"
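For comparison, the filter processor's documented shape nests the match list directly under `logs.exclude` rather than under a `logs/bodies` key; a sketch of that shape (note the unmarshal error above suggests this collector build may also predate support for `bodies` matching at all):

```yaml
processors:
  filter/1:
    logs:
      exclude:
        match_type: regexp
        bodies:
          - "longhorn.*level=debug"
```

If the running collector still rejects `bodies`, filtering on a record attribute such as the namespace via `record_attributes` under `logs.exclude` is the usual fallback.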
Thank you for your help. I also tried filtering the namespace (longhorn-system), but it didn't work either.

Kasim Ali
10/20/2022, 5:11 AM

SAMEEL .N
10/21/2022, 3:20 AM

Jericho Siahaya
10/21/2022, 4:25 AM

Jericho Siahaya
10/21/2022, 4:26 AM

Mallesh Kannan
10/21/2022, 6:59 AM

Mallesh Kannan
10/21/2022, 7:00 AM

Jason Loong
10/22/2022, 3:01 AM
{"level":"info","message":"PassportModule dependencies initialized","service":"logger-service","timestamp":"2022-10-21T10:51:31.050Z"}
Following this doc: https://signoz.io/docs/userguide/collect_logs_from_file/
Docker run command:
docker run -d --name signoz-otel-collector --user root -v $(pwd)/backend.log:/tmp/backend.log:ro -v $(pwd)/signoz-winston-collector.yaml:/etc/otel/config.yaml signoz/signoz-otel-collector:0.55.3
signoz-winston-collector.yaml
receivers:
  filelog:
    include: [/tmp/backend.log]
    start_at: beginning
    operators:
      - type: json_parser
        id: nest-winston-json
        timestamp:
          parse_from: attributes.timestamp
          layout: '%Y-%m-%dT%H:%M:%S.%LZ'
      - type: move
        from: attributes.message
        to: body
      - type: remove
        field: attributes.timestamp
processors:
  batch:
    send_batch_size: 10 # low number for testing
    send_batch_max_size: 10 # low number for testing
    timeout: 10s
exporters:
  otlp/log:
    endpoint: http://192.168.200.20:4317
    tls:
      insecure: true
service:
  telemetry:
    logs:
      level: 'debug'
  pipelines:
    logs:
      receivers: [filelog]
      processors: [batch]
      exporters: [otlp/log]
Docker collector container logs
2022-10-22T02:31:52.529Z info service/telemetry.go:103 Setting up own telemetry...
2022-10-22T02:31:52.530Z info service/telemetry.go:138 Serving Prometheus metrics {"address": ":8888", "level": "basic"}
2022-10-22T02:31:52.530Z debug pipelines/pipelines.go:343 Stability level {"kind": "exporter", "data_type": "logs", "name": "otlp/log", "stability": "beta"}
2022-10-22T02:31:52.530Z debug pipelines/pipelines.go:343 Stability level {"kind": "processor", "name": "batch", "pipeline": "logs", "stability": "stable"}
2022-10-22T02:31:52.530Z debug pipelines/pipelines.go:345 Stability level of component undefined {"kind": "receiver", "name": "filelog", "pipeline": "logs", "stability": "undefined"}
2022-10-22T02:31:52.532Z info extensions/extensions.go:42 Starting extensions...
2022-10-22T02:31:52.532Z info pipelines/pipelines.go:74 Starting exporters...
2022-10-22T02:31:52.532Z info pipelines/pipelines.go:78 Exporter is starting... {"kind": "exporter", "data_type": "logs", "name": "otlp/log"}
2022-10-22T02:31:52.532Z info zapgrpc/zapgrpc.go:174 [core] [Channel #1] Channel created {"grpc_log": true}
2022-10-22T02:31:52.532Z info zapgrpc/zapgrpc.go:174 [core] [Channel #1] original dial target is: "192.168.200.20:4317" {"grpc_log": true}
2022-10-22T02:31:52.532Z info zapgrpc/zapgrpc.go:174 [core] [Channel #1] dial target "192.168.200.20:4317" parse failed: parse "192.168.200.20:4317": first path segment in URL cannot contain colon {"grpc_log": true}
2022-10-22T02:31:52.532Z info zapgrpc/zapgrpc.go:174 [core] [Channel #1] fallback to scheme "passthrough" {"grpc_log": true}
2022-10-22T02:31:52.532Z info zapgrpc/zapgrpc.go:174 [core] [Channel #1] parsed dial target is: {Scheme:passthrough Authority: Endpoint:192.168.200.20:4317 URL:{Scheme:passthrough Opaque: User: Host: Path:/192.168.200.20:4317 RawPath: ForceQuery:false RawQuery: Fragment: RawFragment:}} {"grpc_log": true}
2022-10-22T02:31:52.532Z info zapgrpc/zapgrpc.go:174 [core] [Channel #1] Channel authority set to "192.168.200.20:4317" {"grpc_log": true}
2022-10-22T02:31:52.532Z info zapgrpc/zapgrpc.go:174 [core] [Channel #1] Resolver state updated: {
"Addresses": [
{
"Addr": "192.168.200.20:4317",
"ServerName": "",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}
],
"ServiceConfig": null,
"Attributes": null
} (resolver returned new addresses) {"grpc_log": true}
2022-10-22T02:31:52.533Z info zapgrpc/zapgrpc.go:174 [core] [Channel #1] Channel switches to new LB policy "pick_first" {"grpc_log": true}
2022-10-22T02:31:52.533Z info zapgrpc/zapgrpc.go:174 [core] [Channel #1 SubChannel #2] Subchannel created {"grpc_log": true}
2022-10-22T02:31:52.533Z info zapgrpc/zapgrpc.go:174 [core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING{"grpc_log": true}
2022-10-22T02:31:52.533Z info zapgrpc/zapgrpc.go:174 [core] [Channel #1 SubChannel #2] Subchannel picks a new address "192.168.200.20:4317" to connect {"grpc_log": true}
2022-10-22T02:31:52.533Z info pipelines/pipelines.go:82 Exporter started. {"kind": "exporter", "data_type": "logs", "name": "otlp/log"}
2022-10-22T02:31:52.533Z info pipelines/pipelines.go:86 Starting processors...
2022-10-22T02:31:52.533Z info pipelines/pipelines.go:90 Processor is starting... {"kind": "processor", "name": "batch", "pipeline": "logs"}
2022-10-22T02:31:52.534Z info zapgrpc/zapgrpc.go:174 [core] pickfirstBalancer: UpdateSubConnState: 0xc0008d10d0, {CONNECTING <nil>}{"grpc_log": true}
2022-10-22T02:31:52.534Z info zapgrpc/zapgrpc.go:174 [core] [Channel #1] Channel Connectivity change to CONNECTING {"grpc_log": true}
2022-10-22T02:31:52.534Z info pipelines/pipelines.go:94 Processor started. {"kind": "processor", "name": "batch", "pipeline": "logs"}
2022-10-22T02:31:52.534Z info pipelines/pipelines.go:98 Starting receivers...
2022-10-22T02:31:52.534Z info pipelines/pipelines.go:102 Receiver is starting... {"kind": "receiver", "name": "filelog", "pipeline": "logs"}
2022-10-22T02:31:52.534Z info adapter/receiver.go:54 Starting stanza receiver {"kind": "receiver", "name": "filelog", "pipeline": "logs"}
2022-10-22T02:31:52.534Z debug pipeline/directed.go:70 Starting operator {"kind": "receiver", "name": "filelog", "pipeline": "logs"}
2022-10-22T02:31:52.534Z debug pipeline/directed.go:74 Started operator {"kind": "receiver", "name": "filelog", "pipeline": "logs"}
2022-10-22T02:31:52.534Z debug pipeline/directed.go:70 Starting operator {"kind": "receiver", "name": "filelog", "pipeline": "logs", "operator_id": "remove", "operator_type": "remove"}
2022-10-22T02:31:52.534Z debug pipeline/directed.go:74 Started operator {"kind": "receiver", "name": "filelog", "pipeline": "logs", "operator_id": "remove", "operator_type": "remove"}
2022-10-22T02:31:52.534Z debug adapter/converter.go:143 Starting log converter {"kind": "receiver", "name": "filelog", "pipeline": "logs", "worker_count": 1}
2022-10-22T02:31:52.534Z info pipelines/pipelines.go:106 Receiver started. {"kind": "receiver", "name": "filelog", "pipeline": "logs"}
2022-10-22T02:31:52.535Z info service/collector.go:215 Starting signoz-otel-collector... {"Version": "latest", "NumCPU": 6}
2022-10-22T02:31:52.535Z info service/collector.go:128 Everything is ready. Begin running and processing data.
2022-10-22T02:31:52.738Z info fileconsumer/file.go:178 Started watching file {"kind": "receiver", "name": "filelog", "pipeline": "logs", "component": "fileconsumer", "path": "/tmp/backend.log"}
2022-10-22T02:31:52.744Z info zapgrpc/zapgrpc.go:174 [core] [Channel #1 SubChannel #2] Subchannel Connectivity change to READY {"grpc_log": true}
2022-10-22T02:31:52.744Z info zapgrpc/zapgrpc.go:174 [core] pickfirstBalancer: UpdateSubConnState: 0xc0008d10d0, {READY <nil>} {"grpc_log": true}
2022-10-22T02:31:52.744Z info zapgrpc/zapgrpc.go:174 [core] [Channel #1] Channel Connectivity change to READY {"grpc_log": true}
Tried the following troubleshooting:
docker run -it --rm signoz/troubleshoot checkEndpoint --endpoint=192.168.200.20:4317
2022-10-22T01:40:57.763Z INFO troubleshoot/main.go:28 STARTING!
2022-10-22T01:40:57.764Z INFO checkEndpoint/checkEndpoint.go:41 checking reachability of SigNoz endpoint
2022-10-22T01:40:57.776Z INFO troubleshoot/main.go:46 Successfully sent sample data to signoz ...
Apoorva
10/24/2022, 10:03 AM
2022-10-24T09:49:29.995Z error helper/transformer.go:110 Failed to process entry {"kind": "receiver", "name": "filelog/k8s", "pipeline": "logs", "operator_id": "parser-nucash-regex", "operator_type": "regex_parser", "error": "regex pattern does not match", "action": "send", "entry": {"observed_timestamp":"2022-10-24T09:49:29.995336396Z","timestamp":"2022-10-24T09:49:29.908076398Z","body":"{\"component\":\"aaa\",\"env\":\"dev\",\"file\":\"/go/src/bitbucket.org/nucashin/sample/main.go:73\",\"func\":\"main.main\",\"level\":\"info\",\"msg\":\"Something\",\"service\":\"aaa\",\"time\":\"2022-10-24T09:49:29Z\"}\n","attributes":{"k8s.container.name":"sample","k8s.container.restart_count":"0","k8s.namespace.name":"services","k8s.pod.name":"sample-6c879cc484-6vmbr","k8s.pod.uid":"94b40e31-b854-4af4-9a83-f4daadadd951","log.file.path":"/var/log/pods/services_sample-6c879cc484-6vmbr_94b40e31-b854-4af4-9a83-f4daadadd951/sample/0.log","log.iostream":"stderr","time":"2022-10-24T09:49:29.908076398Z"},"severity":0,"scope_name":""}}
Getting this error in the k8s-agent. Below is the updated configmap for the k8s-agent; any idea why?
- from: attributes.restart_count
  to: attributes["k8s.container.restart_count"]
  type: move
- from: attributes.uid
  to: attributes["k8s.pod.uid"]
  type: move
- from: attributes.log
  to: body
  type: move
- id: parser-nucash-regex
  parse_from: body
  regex: ^"(?P<nucash_json>{.*})\\n"
  type: regex_parser
- id: parser-nucash-json
  parse_from: attributes.nucash_json
  type: json_parser
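The "regex pattern does not match" error is consistent with the pattern expecting the body to be wrapped in literal double quotes, while the entry's body actually begins with `{`. stanza's regex_parser uses Go's RE2, but the mismatch can be reproduced with an equivalent Python check (the sample body below is abbreviated from the failing entry; the fixed pattern is a suggested adjustment, not a confirmed fix):

```python
import re

# Abbreviated form of the "body" field from the failing log entry.
body = '{"component":"aaa","env":"dev","level":"info","msg":"Something"}\n'

# Original pattern: requires a literal leading double quote, which the
# body does not have, so it never matches.
original = re.compile(r'^"(?P<nucash_json>{.*})\\n"')
print(original.match(body))  # no match

# Adjusted pattern: anchor on the JSON object itself and tolerate the
# trailing newline instead of requiring surrounding quotes.
fixed = re.compile(r'^(?P<nucash_json>\{.*\})\s*$')
match = fixed.match(body)
print(match.group("nucash_json")[:15])
```

Since the body already appears to be plain JSON, an alternative is to drop the regex stage entirely and point the json_parser's `parse_from` at `body` directly.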
Wejdan
10/24/2022, 10:07 AM

Wejdan
10/24/2022, 10:15 AM

Henrik
10/25/2022, 6:58 AM

Abhinav Ramana
10/25/2022, 6:47 PM
WARNING: Published ports are discarded when using host network mode
ERROR:opentelemetry.launcher.configuration:Invalid configuration: token missing. Must be set to send data to https://ingest.lightstep.com:443. Set environment variable LS_ACCESS_TOKEN
ERROR:opentelemetry.launcher.configuration:application instrumented via opentelemetry-instrument. all required configuration must be set via environment variables
Traceback (most recent call last):
File "/opt/venv/lib/python3.9/site-packages/opentelemetry/launcher/configuration.py", line 383, in _configure
configure_opentelemetry(_auto_instrumented=True)
File "/opt/venv/lib/python3.9/site-packages/opentelemetry/launcher/configuration.py", line 237, in configure_opentelemetry
raise InvalidConfigurationError(message)
opentelemetry.launcher.configuration.InvalidConfigurationError: Invalid configuration: token missing. Must be set to send data to https://ingest.lightstep.com:443. Set environment variable LS_ACCESS_TOKEN
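A likely explanation for the InvalidConfigurationError above (an assumption based on how opentelemetry-instrument works, not something confirmed in this thread): the opentelemetry-launcher package registers its own configurator entry point, and opentelemetry-instrument auto-loads every installed configurator at startup. That would run Lightstep's configuration code even though the application never calls configure_opentelemetry directly. The installed configurators can be inspected with stdlib tooling:

```python
from importlib.metadata import entry_points

def list_otel_configurators():
    """List installed opentelemetry_configurator entry points.

    opentelemetry-instrument auto-loads each of these at startup, which
    is how launcher code can run even when the application itself never
    calls configure_opentelemetry().
    """
    eps = entry_points()
    if hasattr(eps, "select"):  # Python 3.10+
        group = eps.select(group="opentelemetry_configurator")
    else:  # Python 3.9: entry_points() returns a dict of groups
        group = eps.get("opentelemetry_configurator", [])
    return sorted(ep.name for ep in group)

print(list_otel_configurators())
```

If Lightstep is not the intended backend, uninstalling opentelemetry-launcher so that only the OTLP/distro configuration remains should make the error disappear.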
This is a harmless log, but I wanted to know why it is happening. I have not used configure_opentelemetry anywhere in my code.

Abhinav Ramana
10/26/2022, 1:51 AM
opentelemetry-distro==0.34b0 \
opentelemetry-exporter-otlp==1.13.0 \
opentelemetry-launcher==1.9.0 \
opentelemetry-instrumentation-celery==0.34b0 \
RUN opentelemetry-bootstrap --action=install
CMD opentelemetry-instrument --traces_exporter otlp_proto_grpc celery -A wombo.celery_paint.celeryapp worker --loglevel=info --pool=threads
code:
from celery import Celery
from celery.signals import worker_process_init
from opentelemetry.instrumentation.celery import CeleryInstrumentor

@worker_process_init.connect(weak=False)
def init_celery_tracing(*args, **kwargs):
    """
    When tracing a celery worker process, tracing and instrumentation both must be
    initialized after the celery worker process is initialized. This is required for
    any tracing components that might use threading to work correctly, such as the
    BatchSpanProcessor. Celery provides a signal called worker_process_init that can
    be used to accomplish this.
    """
    CeleryInstrumentor().instrument()

celeryapp = Celery('paints')
celeryapp.conf.task_default_queue = sqsurl
Not sure what else to do?

nathan
10/26/2022, 12:17 PM
Creating network "clickhouse-setup_default" with the default driver
Creating hotrod ... done
Creating clickhouse-setup_clickhouse_1 ... done
Creating load-hotrod ... done
ERROR: for otel-collector-metrics Container "f5fa3ef5a7a3" is unhealthy.
ERROR: for otel-collector Container "f5fa3ef5a7a3" is unhealthy.
ERROR: for query-service Container "f5fa3ef5a7a3" is unhealthy.
ERROR: Encountered errors while bringing up the project.
Waiting for all containers to start. This check will timeout in 1 seconds ....
+++++++++++ ERROR ++++++++++++++++++++++
🔴 The containers didn't seem to start correctly. Please run the following command to check containers that may have errored out:
sudo docker-compose -f ./docker/clickhouse-setup/docker-compose.yaml ps -a
Please read our troubleshooting guide <https://signoz.io/docs/deployment/docker/#troubleshooting-of-common-issues>
or reach us on SigNoz for support <https://signoz.io/slack>
++++++++++++++++++++++++++++++++++++++++
🔴 The containers didn't seem to start correctly. Please run the following command to check containers that may have errored out:
sudo docker-compose -f ./docker/clickhouse-setup/docker-compose.yaml ps -a
or reach us for support in #help channel in our Slack Community <https://signoz.io/slack>
++++++++++++++++++++++++++++++++++++++++
I figure this must be something really basic. I tried a Debian VM to see if it made a difference, but I get the same result. Any help much appreciated.

Nestor René Juárez Montes de Oca
10/26/2022, 3:34 PM

Luke Hsiao
10/26/2022, 6:07 PM

Abhinav Ramana
10/26/2022, 6:45 PM

Allan Li
10/26/2022, 7:47 PM
OTEL_EXPORTER_OTLP_ENDPOINT=127.0.0.1:4317 \
OTEL_RESOURCE_ATTRIBUTES=service.name=graphql-service
Can we set these inside the code somehow instead of in the terminal? For example, inside the config object of OTLPTraceExporter?

Allan Li
10/26/2022, 8:24 PM
OTEL_EXPORTER_OTLP_ENDPOINT=127.0.0.1:4317
variable worked; the guide says we are supposed to set it locally in the terminal.
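On setting these values from code rather than the terminal: it is possible as long as it happens before any OpenTelemetry SDK module that reads the environment is loaded, since the SDK picks up process.env at initialization. A minimal Node sketch (the require shown in the comment is illustrative, not taken from this thread):

```javascript
// Set OTLP configuration programmatically; this must run *before* any
// @opentelemetry/* module that reads the environment is loaded.
process.env.OTEL_EXPORTER_OTLP_ENDPOINT = "http://127.0.0.1:4317";
process.env.OTEL_RESOURCE_ATTRIBUTES = "service.name=graphql-service";

// Only after the variables exist should the SDK be required, e.g.:
// const { NodeSDK } = require("@opentelemetry/sdk-node");

console.log(process.env.OTEL_EXPORTER_OTLP_ENDPOINT);
```

The gRPC trace exporter also accepts an explicit `url` option in its config object, which takes precedence over the environment for that exporter instance.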
I tried setting it to a different port to see if SigNoz could still detect data coming from my GraphQL server. I was expecting it to break (not detect the service), but it was still detected. Why is this? Is it overriding the incorrect port I set in the terminal and using some kind of default value? Thanks.

Shifna
10/27/2022, 4:39 AM

SAMEEL .N
10/27/2022, 6:05 AM