# contributing
Hello, I'm trying to work on the Alertmanager for this issue: https://github.com/SigNoz/signoz/issues/5095. I installed Go version 1.22.5 (linux/amd64), but when I run "make", I get an error related to the Go version.
@Srikanth Chekuri might have some idea on this.
@Ileo I am not sure what the issue might be here.
@Srikanth Chekuri I want to add a new channel for Google Chat, similar to this commit: https://github.com/SigNoz/alertmanager/commit/409a76c68ff6b356a523482e94ab3d8fbfbb3b07. I'm trying to set up the development environment to test my changes. Here's what I've done:
• Cloned the signoz/alertmanager repository.
• Ran make build.
• Changed directory to .build/linux-amd64.
• Created a configuration file named alertmanager.yml with some sample content:
```yaml
global:
  # The smarthost and SMTP sender used for mail notifications.
  smtp_smarthost: 'localhost:25'
  smtp_from: 'alertmanager@example.org'

# The root route on which each incoming alert enters.
route:
  # The root route must not have any matchers as it is the entry point for
  # all alerts. It needs to have a receiver configured so alerts that do not
  # match any of the sub-routes are sent to someone.
  receiver: 'team-X-mails'

  # The labels by which incoming alerts are grouped together. For example,
  # multiple alerts coming in for cluster=A and alertname=LatencyHigh would
  # be batched into a single group.
  #
  # To aggregate by all possible labels use '...' as the sole label name.
  # This effectively disables aggregation entirely, passing through all
  # alerts as-is. This is unlikely to be what you want, unless you have
  # a very low alert volume or your upstream notification system performs
  # its own grouping. Example: group_by: [...]
  group_by: ['alertname', 'cluster']

  # When a new group of alerts is created by an incoming alert, wait at
  # least 'group_wait' to send the initial notification.
  # This ensures that multiple alerts for the same group that start firing
  # shortly after one another are batched together in the first
  # notification.
  group_wait: 30s

  # After the first notification is sent, wait 'group_interval' before
  # sending a batch of new alerts that started firing for that group.
  group_interval: 5m

  # If an alert has been sent successfully, wait 'repeat_interval' before
  # resending it.
  repeat_interval: 3h

  # All the above attributes are inherited by all child routes and can be
  # overwritten on each.

  # The child route trees.
  routes:
  # This route performs a regular expression match on alert labels to
  # catch alerts that are related to a list of services.
  - match_re:
      service: ^(foo1|foo2|baz)$
    receiver: team-X-mails

    # The service has a sub-route for critical alerts; any alerts
    # that do not match, i.e. severity != critical, fall back to the
    # parent node and are sent to 'team-X-mails'.
    routes:
    - match:
        severity: critical
      receiver: team-X-pager

  - match:
      service: files
    receiver: team-Y-mails

    routes:
    - match:
        severity: critical
      receiver: team-Y-pager

  # This route handles all alerts coming from a database service. If there's
  # no team to handle it, it defaults to the DB team.
  - match:
      service: database

    receiver: team-DB-pager
    # Also group alerts by affected database.
    group_by: [alertname, cluster, database]

    routes:
    - match:
        owner: team-X
      receiver: team-X-pager

    - match:
        owner: team-Y
      receiver: team-Y-pager


# Inhibition rules allow muting a set of alerts while another alert is
# firing.
# We use this to mute any warning-level notifications if the same alert is
# already critical.
inhibit_rules:
- source_matchers:
    - severity="critical"
  target_matchers:
    - severity="warning"
  # Apply inhibition if the alertname is the same.
  # CAUTION:
  #   If all label names listed in `equal` are missing
  #   from both the source and target alerts,
  #   the inhibition rule will apply!
  equal: ['alertname']


receivers:
- name: 'team-X-mails'
  email_configs:
  - to: 'team-X+alerts@example.org, team-Y+alerts@example.org'

- name: 'team-X-pager'
  email_configs:
  - to: 'team-X+alerts-critical@example.org'
  pagerduty_configs:
  - routing_key: <team-X-key>

- name: 'team-Y-mails'
  email_configs:
  - to: 'team-Y+alerts@example.org'

- name: 'team-Y-pager'
  pagerduty_configs:
  - routing_key: <team-Y-key>

- name: 'team-DB-pager'
  pagerduty_configs:
  - routing_key: <team-DB-key>
```
However, when I try to start the application, I get the following error:
```
$ ./alertmanager --config.file=alertmanager.yml
level=info ts=2024-08-02T07:15:41.419Z caller=main.go:245 msg="Starting Alertmanager" version="(version=0.23.0, branch=issue/5095, revision=409a76c68ff6b356a523482e94ab3d8fbfbb3b07)"
level=info ts=2024-08-02T07:15:41.419Z caller=main.go:246 build_context="(go=go1.22.5, user=jaufret@Dev, date=20240802-06:44:12)"
level=info ts=2024-08-02T07:15:41.425Z caller=cluster.go:184 component=cluster msg="setting advertise address explicitly" addr=192.168.1.67 port=9094
level=info ts=2024-08-02T07:15:41.426Z caller=cluster.go:679 component=cluster msg="Waiting for gossip to settle..." interval=2s
level=info ts=2024-08-02T07:15:41.460Z caller=coordinator.go:139 component=configuration msg="Loading a new configuration"
level=error ts=2024-08-02T07:15:41.460Z caller=coordinator.go:146 component=configuration msg="configuration update failed" config="global:\n  resolve_timeout: 5m\n  http_config:\n    follow_redirects: true\n  smtp_from: alertmanager@signoz.io\n  smtp_hello: localhost\n  smtp_smarthost: localhost:25\n  smtp_require_tls: true\n  pagerduty_url: https://events.pagerduty.com/v2/enqueue\n  opsgenie_api_url: https://api.opsgenie.com/\n  telegram_api_url: https://api.telegram.org\n  wechat_api_url: https://qyapi.weixin.qq.com/cgi-bin/\n  victorops_api_url: https://alert.victorops.com/integrations/generic/20131114/alert/\nroute:\n  receiver: default-receiver\n  group_by:\n  - alertname\n  continue: false\n  group_wait: 30s\n  group_interval: 5m\n  repeat_interval: 4h\nreceivers:\n- name: default-receiver\ntemplates: []\n" err="received an error from query service while fetching config: error in http get: Get \"localhost:8080/api/v1/channels\": unsupported protocol scheme \"localhost\""
level=info ts=2024-08-02T07:15:41.460Z caller=cluster.go:688 component=cluster msg="gossip not settled but continuing anyway" polls=0 elapsed=34.118628ms