# support
Ankit Nayan
@User it's the alertmanager that ultimately integrates with Slack, webhooks, etc. The query-service calls the alertmanager internally.
Can you share the logs from alertmanager?
Paulo Henrique de Morais Santiago
Here is the log of the frontend, and a screenshot of the Chrome F12 debug output.
```
2023/01/11 13:32:44 [error] 8#8: *171 upstream timed out (110: Operation timed out) while reading response header from upstream, client: 10.88.7.10, server: _, request: "POST /api/v1/testChannel HTTP/1.1", upstream: "http://100.126.65.203:8080/api/v1/testChannel", host: "mypublicip", referrer: "http://mypublicip/setting/channels/edit/1"
```
Nice! I found an error log on the query-service pod, I think from its call to the alertmanager. Here is the log:
```
2023-01-11T13:31:40.406Z	ERROR	alertManager/manager.go:177	Received Server Error response for API call to alertmanager(POST http://signoz-apm-alertmanager:9093/api/v1/testReceiver)
%!(EXTRA string=500 Internal Server Error)
go.signoz.io/signoz/pkg/query-service/integrations/alertManager.(*manager).TestReceiver
	/go/src/github.com/signoz/signoz/pkg/query-service/integrations/alertManager/manager.go:177
go.signoz.io/signoz/pkg/query-service/app.(*APIHandler).testChannel
	/go/src/github.com/signoz/signoz/pkg/query-service/app/http_handler.go:1031
go.signoz.io/signoz/pkg/query-service/app.EditAccess.func1
	/go/src/github.com/signoz/signoz/pkg/query-service/app/http_handler.go:271
net/http.HandlerFunc.ServeHTTP
	/usr/local/go/src/net/http/server.go:2047
go.signoz.io/signoz/ee/query-service/app.loggingMiddleware.func1
	/go/src/github.com/signoz/signoz/ee/query-service/app/server.go:234
net/http.HandlerFunc.ServeHTTP
	/usr/local/go/src/net/http/server.go:2047
go.signoz.io/signoz/ee/query-service/app.(*Server).analyticsMiddleware.func1
	/go/src/github.com/signoz/signoz/ee/query-service/app/server.go:340
net/http.HandlerFunc.ServeHTTP
	/usr/local/go/src/net/http/server.go:2047
go.signoz.io/signoz/ee/query-service/app.setTimeoutMiddleware.func1
	/go/src/github.com/signoz/signoz/ee/query-service/app/server.go:368
net/http.HandlerFunc.ServeHTTP
	/usr/local/go/src/net/http/server.go:2047
github.com/gorilla/mux.(*Router).ServeHTTP
	/go/pkg/mod/github.com/gorilla/mux@v1.8.0/mux.go:210
github.com/rs/cors.(*Cors).Handler.func1
	/go/pkg/mod/github.com/rs/cors@v1.7.0/cors.go:219
net/http.HandlerFunc.ServeHTTP
	/usr/local/go/src/net/http/server.go:2047
github.com/gorilla/handlers.CompressHandlerLevel.func1
	/go/pkg/mod/github.com/gorilla/handlers@v1.5.1/compress.go:141
net/http.HandlerFunc.ServeHTTP
	/usr/local/go/src/net/http/server.go:2047
net/http.serverHandler.ServeHTTP
	/usr/local/go/src/net/http/server.go:2879
net/http.(*conn).serve
	/usr/local/go/src/net/http/server.go:1930
2023-01-11T13:31:40.407Z	INFO	app/server.go:235	/api/v1/testChannel	timeTaken: 19m40.911784749s
```
@Ankit Nayan
Ankit Nayan
@Amol Umbark can you please drill deeper into the issue?
Paulo Henrique de Morais Santiago
I'm open to a call if needed. Thanks.
Amol Umbark
@Paulo Henrique de Morais Santiago At the moment we don't natively support an HTTP proxy through the channel config (please feel free to create an issue requesting proxy support), but you are right that environment variables can achieve this. Can you try setting the env vars on the alertmanager pod?
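As a rough sketch of what I mean, assuming you can patch the alertmanager container spec directly; the proxy URL, NO_PROXY list, and container name below are placeholders, not values from your setup:

```yaml
# Hypothetical env-var patch for the alertmanager container, e.g. applied by
# editing the Deployment/StatefulSet. Replace the proxy URL with your own.
spec:
  containers:
    - name: alertmanager                                   # name may differ in your chart
      env:
        - name: HTTP_PROXY
          value: "http://my-proxy.example.com:3128"        # placeholder proxy
        - name: HTTPS_PROXY
          value: "http://my-proxy.example.com:3128"        # placeholder proxy
        - name: NO_PROXY
          value: "localhost,127.0.0.1,.svc,.cluster.local" # keep in-cluster calls direct
```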
Paulo Henrique de Morais Santiago
@Amol Umbark Thanks for the response, bro. I opened the issue: https://github.com/SigNoz/signoz/issues/2025#issuecomment-1379424121
I'm making some new attempts with the NO_PROXY config, without success yet. If I get it working I'll report back here.
Do you think the alertmanager is the only pod that needs proxy configuration?
Amol Umbark
Yes, only the alertmanager connects to third parties like Slack. Did you try the HTTP proxy env var? Does it work?
Paulo Henrique de Morais Santiago
@Amol Umbark I tried that, without success.
I also tried proxy_url in the alertmanager.yml config file, with no success either: https://prometheus.io/docs/alerting/latest/configuration/#http_config
Like this example: https://github.com/prometheus/alertmanager/blob/main/config/testdata/conf.good.yml I think the alertmanager doesn't pick up the proxy environment variables from the Linux host.
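What I tried was roughly the following receiver config, following the upstream http_config docs; the webhook URL, channel, and proxy address here are placeholders, not my real values:

```yaml
# Sketch of the alertmanager.yml receiver I tried, per the upstream http_config
# docs. All URLs below are placeholders.
route:
  receiver: slack-notifications

receivers:
  - name: slack-notifications
    slack_configs:
      - api_url: "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder webhook
        channel: "#alerts"
        send_resolved: true
        http_config:
          proxy_url: "http://my-proxy.example.com:3128"          # placeholder proxy
```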
I created a VM and tried Docker instead of Kubernetes, and I hit the same problem.
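On the Docker side, the attempt looked roughly like this compose override; the service name and proxy address are placeholders and may not match the actual compose file:

```yaml
# Hypothetical docker-compose override passing the proxy env vars to the
# alertmanager container; adjust the service name to match your compose file.
services:
  alertmanager:
    environment:
      HTTP_PROXY: "http://my-proxy.example.com:3128"   # placeholder proxy
      HTTPS_PROXY: "http://my-proxy.example.com:3128"  # placeholder proxy
      NO_PROXY: "localhost,127.0.0.1,query-service,clickhouse"
```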
Amol Umbark
Let me check and get back to you.
@Paulo Henrique de Morais Santiago I looked into this. We will have to change the alertmanager to handle a proxy server. We will resolve the issue soon. Please follow the issue for further updates: https://github.com/SigNoz/signoz/issues/2025#issuecomment-1379424121