# general
v
Hello, I am trying to deploy SigNoz in an AWS EKS cluster. How do I pass SMTP HOST, PORT, and AUTH information to the Helm chart?
p
v
Hi Prashant, I managed to configure SMTP details for Query Service, but in Alertmanager there is no attribute like "additionalEnvs" in the latest version of the helm chart.
p
`additionalEnvs` is supported; however, the latest chart's README is not up-to-date.
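For example, you can set the SMTP details under `alertmanager.additionalEnvs` in your override values. A sketch with placeholder values (replace the host, port, and credentials with your own):

```yaml
alertmanager:
  additionalEnvs:
    ALERTMANAGER_SMTP_HOST: smtp.example.com    # placeholder
    ALERTMANAGER_SMTP_PORT: "587"               # placeholder
    ALERTMANAGER_SMTP_FROM: alerts@example.com  # placeholder
    ALERTMANAGER_SMTP_AUTH_USERNAME: alerts@example.com  # placeholder
    ALERTMANAGER_SMTP_AUTH_PASSWORD: <app-password>      # placeholder
```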
v
Pasted below is the latest from the values.yaml file:
```yaml
alertmanager:
  enabled: true
  name: "alertmanager"
  replicaCount: 1
  image:
    registry: docker.io
    repository: signoz/alertmanager
    pullPolicy: IfNotPresent
    # Overrides the image tag whose default is the chart appVersion.
    tag: 0.23.4
  # -- Image Registry Secret Names for Alertmanager.
  # If set, this has higher precedence than the root level or global value of imagePullSecrets.
  imagePullSecrets: []
  # -- Alertmanager custom command override
  command: []
  # -- Alertmanager extra arguments
  extraArgs: {}
  # Alertmanager service account
  serviceAccount:
    # Specifies whether a service account should be created
    create: true
    # Annotations to add to the service account
    annotations: {}
    # The name of the service account to use.
    # If not set and create is true, a name is generated using the fullname template.
    name:
  # Alertmanager service
  service:
    # -- Annotations to use by service associated to Alertmanager
    annotations: {}
    # -- Labels to use by service associated to Alertmanager
    labels: {}
    # -- Service type: LoadBalancer (allows external access) or NodePort (more secure, no extra cost)
    type: ClusterIP
    # -- Alertmanager HTTP port
    port: 9093
    # -- Alertmanager cluster port
    clusterPort: 9094
    # -- Set this if you want to force a specific nodePort.
    # Must be used with service.type=NodePort.
    nodePort: null
  initContainers:
    init:
      enabled: true
      image:
        registry: docker.io
        repository: busybox
        tag: 1.35
        pullPolicy: IfNotPresent
      command:
        delay: 5
        endpoint: /api/v1/health?live=1
        waitMessage: "waiting for query-service"
        doneMessage: "query-service ready, starting alertmanager now"
      resources: {}
        # requests:
        #   cpu: 100m
        #   memory: 100Mi
        # limits:
        #   cpu: 100m
        #   memory: 100Mi
  podSecurityContext:
    fsGroup: 65534
  dnsConfig: {}
    # nameservers:
    #   - 1.2.3.4
    # searches:
    #   - ns1.svc.cluster-domain.example
    #   - my.dns.search.suffix
    # options:
    #   - name: ndots
    #     value: "2"
    #   - name: edns0
  securityContext:
    # capabilities:
    #   drop:
    #     - ALL
    # readOnlyRootFilesystem: true
    runAsUser: 65534
    runAsNonRoot: true
    runAsGroup: 65534
  additionalPeers: []
  livenessProbe:
    httpGet:
      path: /
      port: http
  readinessProbe:
    httpGet:
      path: /
      port: http
  ingress:
    # -- Enable ingress for Alertmanager
    enabled: false
    # -- Ingress class name to be used to identify ingress controllers
    className: ""
    # -- Annotations for Alertmanager ingress
    annotations: {}
      # kubernetes.io/ingress.class: nginx
      # kubernetes.io/tls-acme: "true"
      # cert-manager.io/cluster-issuer: letsencrypt-prod
    # -- Alertmanager ingress host names with their path details
    hosts:
      - host: alertmanager.domain.com
        paths:
          - path: /
            pathType: ImplementationSpecific
            port: 9093
    # -- Alertmanager ingress TLS
    tls: []
    #  - secretName: chart-example-tls
    #    hosts:
    #      - alertmanager.domain.com
  # -- Configure resource requests and limits. Update according to your own use
  # case as these values might not be suitable for your workload.
  # Ref: http://kubernetes.io/docs/user-guide/compute-resources/
  # @default -- See `values.yaml` for defaults
  resources:
    requests:
      cpu: 100m
      memory: 100Mi
    # limits:
    #   cpu: 200m
    #   memory: 200Mi
  # -- Alertmanager priority class name
  priorityClassName: ""
  # -- Node selector settings for Alertmanager pod
  nodeSelector: {}
  # -- Toleration labels for Alertmanager pod assignment
  tolerations: []
  # -- Affinity settings for Alertmanager pod
  affinity: {}
  # -- TopologySpreadConstraints describes how Alertmanager pods ought to spread
  topologySpreadConstraints: []
  statefulSet:
    annotations:
      "helm.sh/hook-weight": "4"
  podAnnotations: {}
  podLabels: {}
  # Ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/
  podDisruptionBudget: {}
    # maxUnavailable: 1
    # minAvailable: 1
  persistence:
    # -- Enable data persistence using PVC for Alertmanager data.
    enabled: true
    # -- Name of an existing PVC to use (only when deploying a single replica)
    existingClaim: ""
    # -- Persistent Volume Storage Class to use.
    # If defined, `storageClassName: <storageClass>`.
    # If set to "-", `storageClassName: ""`, which disables dynamic provisioning.
    # If undefined (the default) or set to null, no storageClassName spec is
    # set, choosing the default provisioner.
    # storageClass: null
    # -- Access modes for persistent volume
    accessModes:
      - ReadWriteOnce
    # -- Persistent volume size
    size: 100Mi
  ## Using the config, the alertmanager.yml file is created.
  ## We no longer need the config file as query-service
  ## delivers the required config.
  # config:
  #   global:
  #     resolve_timeout: 1m
  #     slack_api_url: 'https://hooks.slack.com/services/xxx'
  #   templates:
  #     - '/etc/alertmanager/*.tmpl'
  #   receivers:
  #     - name: 'slack-notifications'
  #       slack_configs:
  #         - channel: '#alerts'
  #           send_resolved: true
  #           icon_url: https://avatars3.githubusercontent.com/u/3380462
  #           title: '{{ template "slack.title" . }}'
  #           text: '{{ template "slack.text" . }}'
  #   route:
  #     receiver: 'slack-notifications'
  ## Templates are no longer needed as they are included
  ## from the frontend placeholder while creating alert channels.
  # templates:
  #   title.tmpl: |-
  #     {{ define "slack.title" }}
  #     [{{ .Status | toUpper }}{{ if eq .Status "firing" }}:{{ .Alerts.Firing | len }}{{ end }}] {{ .CommonLabels.alertname }} for {{ .CommonLabels.job }}
  #     {{- if gt (len .CommonLabels) (len .GroupLabels) -}}
  #       {{" "}}(
  #       {{- with .CommonLabels.Remove .GroupLabels.Names }}
  #         {{- range $index, $label := .SortedPairs -}}
  #           {{ if $index }}, {{ end }}
  #           {{- $label.Name }}="{{ $label.Value -}}"
  #         {{- end }}
  #       {{- end -}}
  #       )
  #     {{- end }}
  #     {{ end }}
  #   text.tmpl: |-
  #     {{ define "slack.text" }}
  #     {{ range .Alerts -}}
  #     Alert: {{ .Labels.alertname }}{{ if .Labels.severity }} - `{{ .Labels.severity }}`{{ end }}
  #     Summary: {{ .Annotations.summary }}
  #     Description: {{ .Annotations.description }}
  #     Details:
  #     {{ range .Labels.SortedPairs }} • {{ .Name }}: `{{ .Value }}`
  #     {{ end }}
  #     {{ end }}
  #     {{ end }}
  ## Monitors ConfigMap changes and POSTs to a URL
  ## Ref: https://github.com/jimmidyson/configmap-reload
  configmapReload:
    ## If false, the configmap-reload container will not be deployed
    enabled: false
    ## configmap-reload container name
    name: configmap-reload
    ## configmap-reload container image
    image:
      repository: jimmidyson/configmap-reload
      tag: v0.5.0
      pullPolicy: IfNotPresent
    # containerPort: 9533
    # -- Configure resource requests and limits. Update as per your need.
    # Ref: http://kubernetes.io/docs/user-guide/compute-resources/
    # @default -- See `values.yaml` for defaults
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
      # limits:
      #   cpu: 200m
      #   memory: 200Mi
```
There is no support for `additionalEnvs` in alertmanager.
p
Oh, sorry, my bad there. It seems to be missing from OSS charts.
v
Then how do I resolve this issue? Can you guide me on this?
p
The support changes were shipped a while back.
You can update the helm repository and run `helm upgrade`:
```shell
# Refresh the signoz chart repository index, then upgrade the release
helm repo update signoz

helm upgrade ...
```
v
Thanks Prashant.. I will try that
p
Do let us know if you face any issues.
v
Hi Prashant, I am now able to pass the SMTP details as part of additionalEnvs, but when I try to send a test mail I get the below error in the alertmanager console:
```
level=error ts=2024-05-22T15:10:25.611Z caller=api.go:808 component=api version=v1 msg="API error" err="server_error: invalid receiver type"
```
p
I believe that happens when there is no alert channel configured. Can you try saving the alert channel and then try testing? @Srikanth Chekuri any more suggestions here?
v
Yes, I did try that. I created an Email notification channel and saved it. After saving, when I click the "Test" button, I see the above reported error.
Hi @Prashant Shahi @Srikanth Chekuri, please find below my alertmanager values configuration. I am still getting "Failed to send a test message to this channel, please confirm that the parameters are set correctly" when I try to send a test mail. Not sure what I am missing? Kindly help.
```yaml
alertmanager:
  config:
    global:
      resolve_timeout: 5m
      smtp_smarthost: *
      smtp_from: *
      smtp_auth_username: *
      smtp_auth_password: *
    route:
      receiver: email-notifications
    receivers:
      - name: email-notifications
        email_configs:
          - send_resolved: true
            to: *
  additionalEnvs:
    ALERTMANAGER_SMTP_FROM: *
    ALERTMANAGER_SMTP_HOST: *
    ALERTMANAGER_SMTP_PORT: *
    ALERTMANAGER_SMTP_AUTH_USERNAME: *
    ALERTMANAGER_SMTP_AUTH_PASSWORD: *
```
alertmanager logs:
```
level=info ts=2024-05-23T07:36:47.193Z caller=main.go:241 msg="Starting Alertmanager" version="(version=0.23.0, branch=HEAD, revision=a3d2884f2732d67aef35eecea75d6a4db67b5802)"
level=info ts=2024-05-23T07:36:47.194Z caller=main.go:242 build_context="(go=go1.17.13, user=root@d42a45e2526d, date=20230913-21:14:41)"
level=info ts=2024-05-23T07:36:47.195Z caller=cluster.go:679 component=cluster msg="Waiting for gossip to settle..." interval=2s
level=info ts=2024-05-23T07:36:47.227Z caller=coordinator.go:141 component=configuration msg="Loading a new configuration"
level=info ts=2024-05-23T07:36:47.246Z caller=coordinator.go:156 component=configuration msg="Completed loading of configuration file"
RouteOpts: {default-receiver map[alertname:{}] false 30s 5m0s 4h0m0s []}
RouteOpts: {Email Notification map[alertname:{}] false 30s 5m0s 4h0m0s []}
RouteOpts: {default-receiver map[alertname:{}] false 30s 5m0s 4h0m0s []}
RouteOpts: {Email Notification map[alertname:{}] false 30s 5m0s 4h0m0s []}
RouteOpts: {default-receiver map[alertname:{}] false 30s 5m0s 4h0m0s []}
RouteOpts: {Email Notification map[alertname:{}] false 30s 5m0s 4h0m0s []}
level=info ts=2024-05-23T07:36:47.249Z caller=main.go:574 msg=Listening address=:9093
level=info ts=2024-05-23T07:36:47.249Z caller=tls_config.go:195 msg="TLS is disabled." http2=false
level=info ts=2024-05-23T07:36:49.195Z caller=cluster.go:704 component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000182876s
level=info ts=2024-05-23T07:36:57.199Z caller=cluster.go:696 component=cluster msg="gossip settled; proceeding" elapsed=10.00372338s
level=error ts=2024-05-23T07:37:16.231Z caller=api.go:808 component=api version=v1 msg="API error" err="server_error: invalid receiver type"
level=error ts=2024-05-23T07:39:35.357Z caller=api.go:808 component=api version=v1 msg="API error" err="server_error: invalid receiver type"
```
The Helm chart had an older version of alertmanager. Pulling the latest version solved the issue:
```yaml
alertmanager:
  image:
    registry: docker.io
    repository: signoz/alertmanager
    pullPolicy: IfNotPresent
    tag: 0.23.5
```
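To confirm the pod actually picked up the new tag after the upgrade, something like this can help (a sketch; the `platform` namespace and the label selector are assumptions, adjust them to your install):

```shell
# Print the image used by the running alertmanager pod
# (namespace and label selector are assumptions; adjust as needed)
kubectl get pods -n platform \
  -l app.kubernetes.io/component=alertmanager \
  -o jsonpath='{.items[*].spec.containers[*].image}'
```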