# support

Abhinav Ramana

10/19/2022, 5:58 PM
Hi, I have another issue with sending metrics from a FastAPI Python application:
```
TypeError: an integer is required (got type NoneType)
Transient error StatusCode.UNAVAILABLE encountered while exporting metrics, retrying in Nones.
Exception while exporting metrics an integer is required (got type NoneType)
Traceback (most recent call last):
  File "/opt/venv/lib/python3.9/site-packages/opentelemetry/exporter/otlp/proto/grpc/exporter.py", line 305, in _export
    self._client.Export(
  File "/opt/venv/lib/python3.9/site-packages/grpc/_channel.py", line 946, in __call__
    return _end_unary_response_blocking(state, call, False, None)
  File "/opt/venv/lib/python3.9/site-packages/grpc/_channel.py", line 849, in _end_unary_response_blocking
    raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
	status = StatusCode.UNAVAILABLE
	details = "failed to connect to all addresses; last error: UNKNOWN: Failed to connect to remote host: Connection refused"
	debug_error_string = "UNKNOWN:Failed to pick subchannel {created_time:"2022-10-19T17:51:34.004255785+00:00", children:[UNKNOWN:failed to connect to all addresses; last error: UNKNOWN: Failed to connect to remote host: Connection refused {created_time:"2022-10-19T17:51:34.004247397+00:00", grpc_status:14}]}"
>
```
It sends the request successfully; however, it doesn't send to SigNoz due to this exception.
This is my docker command:
```
CMD OTEL_RESOURCE_ATTRIBUTES=service.name=python_app OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4318" opentelemetry-instrument --traces_exporter otlp_proto_http uvicorn --workers 2 --host 0.0.0.0 --port 8000 --ws websockets --loop asyncio --ws-max-size 100000 --ws-ping-interval 20.00 --ws-ping-timeout 20.00 wombo.fastapi:app
```
And this is where my local Docker SigNoz is running.
It doesn't help even if I use gRPC, with or without quotes:
```
CMD OTEL_RESOURCE_ATTRIBUTES=service.name=paint-service-backend OTEL_EXPORTER_OTLP_ENDPOINT=http://0.0.0.0:4317 opentelemetry-instrument --traces_exporter otlp_proto_grpc,console uvicorn --workers 2 --host 0.0.0.0 --port 8000 --ws websockets --loop asyncio --ws-max-size 100000 --ws-ping-interval 20.00 --ws-ping-timeout 20.00 wombo.fastapi:app
```

Srikanth Chekuri

10/19/2022, 11:04 PM
It sends the request successfully; however, it doesn't send to SigNoz due to this exception.
I don't get it. What do you mean by "it sends the request successfully"? It says it is unable to connect to the remote host in the error message.

Abhinav Ramana

10/20/2022, 3:51 PM
Oh, I mean the request is sent to the main DB etc. Hitting my endpoint works successfully, but the SigNoz part doesn't work and gives an error:
```
:~/WORKSPACE/paint-service-backend$ docker ps
CONTAINER ID   IMAGE                                        COMMAND                  CREATED        STATUS                  PORTS                                                           NAMES
bcb623908a62   signoz/frontend:0.11.2                       "nginx -g 'daemon of…"   21 hours ago   Up 21 hours             80/tcp, 0.0.0.0:3301->3301/tcp, :::3301->3301/tcp               frontend
2118c1f7ec97   signoz/alertmanager:0.23.0-0.2               "/bin/alertmanager -…"   21 hours ago   Up 21 hours             9093/tcp                                                        clickhouse-setup_alertmanager_1
65a0dfdcd029   signoz/query-service:0.11.2                  "./query-service -co…"   21 hours ago   Up 21 hours (healthy)   8080/tcp                                                        query-service
ae4cc0d5a36d   signoz/signoz-otel-collector:0.55.3          "/signoz-collector -…"   21 hours ago   Up 10 seconds           4317-4318/tcp                                                   clickhouse-setup_otel-collector-metrics_1
58cb2715bb33   signoz/signoz-otel-collector:0.55.3          "/signoz-collector -…"   21 hours ago   Up 10 seconds           0.0.0.0:4317-4318->4317-4318/tcp, :::4317-4318->4317-4318/tcp   clickhouse-setup_otel-collector_1
555e9b8e3c1b   clickhouse/clickhouse-server:22.4.5-alpine   "/entrypoint.sh"         21 hours ago   Up 21 hours (healthy)   8123/tcp, 9000/tcp, 9009/tcp                                    clickhouse-setup_clickhouse_1
```
```
ubuntu@ip-172-31-10-250:~/WORKSPACE$ ./troubleshoot checkEndpoint --endpoint=44.193.198.87:4317
2022-10-20T16:39:37.657Z	INFO	workspace/main.go:28	STARTING!
2022-10-20T16:39:37.657Z	INFO	checkEndpoint/checkEndpoint.go:41	checking reachability of SigNoz endpoint
2022-10-20T16:39:37.659Z	INFO	workspace/main.go:46	Successfully sent sample data to signoz ...
ubuntu@ip-172-31-10-250:~/WORKSPACE$ ./troubleshoot checkEndpoint --endpoint=44.193.198.87:4318
2022-10-20T16:39:39.583Z	INFO	workspace/main.go:28	STARTING!
2022-10-20T16:39:39.583Z	INFO	checkEndpoint/checkEndpoint.go:41	checking reachability of SigNoz endpoint
Error: not able to send data to SigNoz endpoint ...
rpc error: code = Unavailable desc = connection closed before server preface received
Usage:
  signoz checkEndpoint [flags]

Examples:
checkEndpoint -e localhost:4317

Flags:
  -e, --endpoint string   URL to SigNoz with port
  -h, --help              help for checkEndpoint
```
but it doesn't work with either 4317 or 4318

Srikanth Chekuri

10/21/2022, 4:16 AM
It says the collector endpoint is not accessible, both in the Python SDK and in the troubleshoot script. Please make sure the network settings are correct and the host is accessible.
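(Editor's note: a quick way to verify this kind of reachability problem is a plain TCP connect from wherever the app actually runs. This is a minimal stdlib sketch, not part of the thread; the host/port values are the ones discussed above, and `tcp_reachable` is a name I made up.)

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers connection refused, timeouts, DNS failures
        return False

# Note: inside the app's container, "localhost" resolves to the container
# itself, not to the host running the SigNoz collector, so test the exact
# address the app is configured with (host IP or Docker network alias).
print(tcp_reachable("127.0.0.1", 4317))
```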

Abhinav Ramana

10/21/2022, 5:48 PM
```
github.com/SigNoz/signoz-otel-collector/exporter/clickhousetracesexporter.(*SpanWriter).backgroundWriter
	/src/exporter/clickhousetracesexporter/writer.go:101
2022-10-20T16:42:42.244Z	info	fileconsumer/file.go:180	Started watching file from end. To read preexisting logs, configure the argument 'start_at' to 'beginning'	{"kind": "receiver", "name": "filelog/dockercontainers", "pipeline": "logs", "component": "fileconsumer", "path": "/var/lib/docker/containers/1fba324946fb0354cf3b84a5013c849502c7c1590b8f6078d81ee5073be5aea4/1fba324946fb0354cf3b84a5013c849502c7c1590b8f6078d81ee5073be5aea4-json.log"}
2022-10-20T21:04:50.718Z	error	clickhousetracesexporter/writer.go:101	Could not write a batch of spans	{"kind": "exporter", "data_type": "traces", "name": "clickhousetraces", "error": "clickhouse: dateTime overflow. timestamp must be between 1925-01-01 00:00:00 and 2283-11-11 00:00:00"}
github.com/SigNoz/signoz-otel-collector/exporter/clickhousetracesexporter.(*SpanWriter).backgroundWriter
	/src/exporter/clickhousetracesexporter/writer.go:101
2022-10-20T21:08:20.445Z	info	fileconsumer/file.go:180	Started watching file from end. To read preexisting logs, configure the argument 'start_at' to 'beginning'	{"kind": "receiver", "name": "filelog/dockercontainers", "pipeline": "logs", "component": "fileconsumer", "path": "/var/lib/docker/containers/ed3182c934f5b4754f7565ec001a19985e9d5ab6d2fd88726f525361a1c3af91/ed3182c934f5b4754f7565ec001a19985e9d5ab6d2fd88726f525361a1c3af91-json.log"}
2022-10-20T21:08:42.245Z	info	fileconsumer/file.go:180	Started watching file from end. To read preexisting logs, configure the argument 'start_at' to 'beginning'	{"kind": "receiver", "name": "filelog/dockercontainers", "pipeline": "logs", "component": "fileconsumer", "path": "/var/lib/docker/containers/aaa07b12684d86298d75a832ff960587c364364f3d7b9be1bc2f5fc7b7ddb459/aaa07b12684d86298d75a832ff960587c364364f3d7b9be1bc2f5fc7b7ddb459-json.log"}
2022-10-20T21:50:56.445Z	info	fileconsumer/file.go:180	Started watching file from end. To read preexisting logs, configure the argument 'start_at' to 'beginning'	{"kind": "receiver", "name": "filelog/dockercontainers", "pipeline": "logs", "component": "fileconsumer", "path": "/var/lib/docker/containers/7bd060221986d99a60ce42ace533f7187a6f64e5fcaf40fec6f0f1ccfa422bc5/7bd060221986d99a60ce42ace533f7187a6f64e5fcaf40fec6f0f1ccfa422bc5-json.log"}
2022-10-20T21:57:51.045Z	info	fileconsumer/file.go:180	Started watching file from end. To read preexisting logs, configure the argument 'start_at' to 'beginning'	{"kind": "receiver", "name": "filelog/dockercontainers", "pipeline": "logs", "component": "fileconsumer", "path": "/var/lib/docker/containers/787b28fb3e0577a859df5ac87f4b4e6a1022576175321ef936214c36ebb3042f/787b28fb3e0577a859df5ac87f4b4e6a1022576175321ef936214c36ebb3042f-json.log"}
2022-10-21T16:24:33.845Z	info	fileconsumer/file.go:180	Started watching file from end. To read preexisting logs, configure the argument 'start_at' to 'beginning'	{"kind": "receiver", "name": "filelog/dockercontainers", "pipeline": "logs", "component": "fileconsumer", "path": "/var/lib/docker/containers/f07ad577996378e1ede2f5a7345555b8af10c129cb616ae3e1ba429a9e8972f8/f07ad577996378e1ede2f5a7345555b8af10c129cb616ae3e1ba429a9e8972f8-json.log"}
```
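(Editor's note: the `dateTime overflow` error above means at least one span reached ClickHouse with a start/end time outside its supported range; together with the earlier `an integer is required (got type NoneType)` exception, it suggests some spans carry missing or corrupted timestamps. A minimal stdlib sketch of that bound check, with the bounds copied from the error message; `span_time_in_range` is a hypothetical helper name, not collector code:)

```python
from datetime import datetime, timezone

# Bounds quoted in the collector's "dateTime overflow" error message.
CH_MIN = datetime(1925, 1, 1, tzinfo=timezone.utc)
CH_MAX = datetime(2283, 11, 11, tzinfo=timezone.utc)

def span_time_in_range(epoch_nanos: int) -> bool:
    """Check whether a span timestamp (ns since Unix epoch) fits ClickHouse's range."""
    ts = datetime.fromtimestamp(epoch_nanos / 1e9, tz=timezone.utc)
    return CH_MIN <= ts <= CH_MAX

print(span_time_in_range(1_666_300_000 * 10**9))  # an Oct 2022 span -> True
print(span_time_in_range(0))                      # zeroed timestamp is still 1970, in range -> True
```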
Getting this on
```
58cb2715bb33   signoz/signoz-otel-collector:0.55.3          "/signoz-collector -…"   45 hours ago    Up 24 hours             0.0.0.0:4317-4318->4317-4318/tcp, :::4317-4318->4317-4318/tcp   clickhouse-setup_otel-collector_1
```
@Srikanth Chekuri
and on my main Python FastAPI app I get:
```
Transient error StatusCode.UNAVAILABLE encountered while exporting metrics, retrying in 1s.
Transient error StatusCode.UNAVAILABLE encountered while exporting metrics, retrying in 2s.
Transient error StatusCode.UNAVAILABLE encountered while exporting metrics, retrying in 4s.
Transient error StatusCode.UNAVAILABLE encountered while exporting metrics, retrying in 8s.
Transient error StatusCode.UNAVAILABLE encountered while exporting metrics, retrying in 16s.
Transient error StatusCode.UNAVAILABLE encountered while exporting metrics, retrying in 32s.
```
Keeping it on this thread for convenience.
My AWS CloudWatch timestamp logs look like this:
```
2022-10-21T12:30:42.504-04:00	INFO:wombo.fastapi:Graphdb healthcheck with num_edges: 3784
```
@Srikanth Chekuri The collector endpoint is definitely accessible; we synced with Pranay and he confirmed that the collector is getting the data.
Maybe we can get on a call to debug this?

Srikanth Chekuri

10/21/2022, 6:35 PM
Let’s huddle?

Abhinav Ramana

10/21/2022, 9:55 PM
OK, what time works for you?

Srikanth Chekuri

10/22/2022, 1:02 AM
Anytime IST 8 am to 11 pm. Let me know whatever works for you @Abhinav Ramana.
@Prashant Shahi can you help @Abhinav Ramana with the ip:port setup, where the application is running on ECS and SigNoz is running separately.

Prashant Shahi

10/25/2022, 4:09 AM
@Srikanth Chekuri Sure. @Abhinav Ramana, are you using a valid publicly (or privately) accessible address of the SigNoz OTel collector when sending the telemetry data?

Abhinav Ramana

10/25/2022, 6:45 PM
@Prashant Shahi It's public.
To close this thread: the issue was that uvicorn doesn't support multiple workers for FastAPI with OpenTelemetry (https://github.com/open-telemetry/opentelemetry-python-contrib/issues/385#issuecomment-1199088668). The solution is to use gunicorn instead.
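(Editor's note: the pattern from the linked issue is to initialize the tracer provider per worker in a gunicorn `post_fork` hook, since BatchSpanProcessor threads don't survive forking. A sketch of a `gunicorn.conf.py` under that assumption; the service name and `<collector-host>` placeholder are illustrative, not verbatim from this thread:)

```python
# gunicorn.conf.py -- per-worker OpenTelemetry setup (config fragment, not run standalone)
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

def post_fork(server, worker):
    """gunicorn calls this in each worker process after forking."""
    server.log.info("Worker spawned (pid: %s)", worker.pid)
    resource = Resource.create(attributes={"service.name": "python_app"})
    provider = TracerProvider(resource=resource)
    # "<collector-host>" is a placeholder for the reachable SigNoz collector address.
    provider.add_span_processor(
        BatchSpanProcessor(OTLPSpanExporter(endpoint="http://<collector-host>:4317"))
    )
    trace.set_tracer_provider(provider)
```

Run with e.g. `gunicorn -c gunicorn.conf.py -w 2 -k uvicorn.workers.UvicornWorker wombo.fastapi:app`.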