I'm trying to configure Alertmanager so that it sends alerts to the right channels based on the value of a specific label. I have 3 Slack channels - dev/staging/prod - and I want alerts coming from instances that have the "env" label set to dev to be sent to the dev Slack channel. Staging and prod would work in the same manner. Here is part of my config:
global:
  resolve_timeout: 1m
  slack_api_url: 'https://slack-url'

route:
  group_by: [...]
  receiver: 'default'
  routes:
  - match:
      env: 'prod'
    receiver: 'slack-notifications-prod'
  - match:
      env: 'staging'
    receiver: 'slack-notifications-staging'
  - match:
      env: 'dev'
    receiver: 'slack-notifications-dev'

receivers:
- name: 'default'
- name: 'slack-notifications-prod'
  ...
- name: 'slack-notifications-staging'
  ...
- name: 'slack-notifications-dev'
  ...
The slack-notifications receivers are all identical except for the channel name.
Current behaviour: all alerts are sent to the prod Slack channel.
Expected behaviour: alerts from the "dev" env are sent to the dev channel, "staging" to the staging channel, and "prod" to the prod channel.
Alertmanager sees these labels just fine (judging by the Alertmanager web UI).
It turns out my config was fine; I was using a webhook URL that was tied to only one Slack channel, which I wasn't aware of.
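For reference, a minimal sketch of what the receivers could look like once each one uses a webhook created for its own channel (the channel names and webhook URLs below are placeholders, not values from my setup):
receivers:
- name: 'slack-notifications-prod'
  slack_configs:
  # hypothetical webhook created for the prod channel
  - api_url: 'https://hooks.slack.com/services/XXX/YYY/prod-webhook'
    channel: '#alerts-prod'
    send_resolved: true
- name: 'slack-notifications-staging'
  slack_configs:
  - api_url: 'https://hooks.slack.com/services/XXX/YYY/staging-webhook'
    channel: '#alerts-staging'
    send_resolved: true
- name: 'slack-notifications-dev'
  slack_configs:
  - api_url: 'https://hooks.slack.com/services/XXX/YYY/dev-webhook'
    channel: '#alerts-dev'
    send_resolved: true
A per-receiver api_url overrides the global slack_api_url, which matters when each incoming webhook is bound to a single channel.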
You have to add the continue: true attribute to the first match:
global:
  resolve_timeout: 1m
  slack_api_url: 'https://slack-url'

route:
  group_by: [...]
  receiver: 'default'
  routes:
  - match:
      env: 'prod'
    receiver: 'slack-notifications-prod'
    continue: true
  - match:
      env: 'staging'
    receiver: 'slack-notifications-staging'
  - match:
      env: 'dev'
    receiver: 'slack-notifications-dev'

receivers:
- name: 'default'
- name: 'slack-notifications-prod'
  ...
- name: 'slack-notifications-staging'
  ...
- name: 'slack-notifications-dev'
  ...
Alertmanager evaluates child routes until there are no routes left or no route at the current level matches the alert. In that case, it takes the configuration of the last node that matched.
The continue attribute defines whether to keep evaluating a route's siblings (routes at the same level) after one of them has already matched.
https://devconnected.com/alertmanager-and-prometheus-complete-setup-on-linux/
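To illustrate the semantics, here is a hedged sketch with made-up labels and receivers (not part of the original config): with continue: true on the first child route, an alert that satisfies both matchers is delivered to both receivers; without it, evaluation stops at the first match.
route:
  receiver: 'default'
  routes:
  - match:
      team: 'backend'
    receiver: 'backend-notifications'
    continue: true   # keep evaluating the remaining sibling routes
  - match:
      severity: 'critical'
    receiver: 'oncall-pager'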
I'm creating an event listener for my repo on Bitbucket Cloud, and I saw in the current example in the Tekton documentation that the Bitbucket interceptor only supports Bitbucket Server.
I've created the EventListener and it looks like this:
---
apiVersion: triggers.tekton.dev/v1alpha1
kind: EventListener
metadata:
  name: bitbucket-el
spec:
  serviceAccountName: tekton-triggers-admin
  triggers:
    - name: bitbucket-triggers
      interceptors:
        - bitbucket:
            secretRef:
              secretName: bitbucket-secret
              secretKey: secretToken
            eventTypes:
        - cel:
            filter: "header.match('X-Event-Key', 'repo:push')"
            overlays:
              - key: extensions.tag_name
                expression: "split(body.ref, '/')[2]"
              - key: extensions.mangledtag
                expression: "split(split(body.ref, '/')[2], '.')[0]+'-'+split(split(body.ref, '/')[2], '.')[1]+'-'+split(split(body.ref, '/')[2], '.')[2]"
      bindings:
        - ref: bitbucket-binding
      template:
        ref: bitbucket-template
and I pass it the token (bitbucket-secret) generated from a Bitbucket Cloud consumer secret, following this doc: https://support.atlassian.com/bitbucket-cloud/docs/use-oauth-on-bitbucket-cloud/
With basic auth on the Ingress, the webhook returned 401 Unauthorized; after removing the basic auth and triggering the webhook with a push, I'm seeing 403 Forbidden.
(A screenshot of the 403 Forbidden response was attached here for illustration.)
Thank you in advance
I spent a lot of time on this issue and finally fixed it by using the CEL expression interceptor, as follows.
In this Trigger, we use an overlay to add an "X-Hub-Signature" field to the body of the payload. The expression value (1234567 here) doesn't matter and can be anything; we are just adding the HMAC field to the body so that we don't get an error.
Note: by default, there is no interceptor for Bitbucket Cloud.
apiVersion: triggers.tekton.dev/v1beta1
kind: Trigger
metadata:
  name: energy
spec:
  serviceAccountName: pipeline
  interceptors:
    - ref:
        name: "cel"
      params:
        - name: "filter"
          value: "header.match('X-Event-Key', 'repo:push')"
        - name: "overlays"
          value:
            - key: X-Hub-Signature
              expression: "1234567"
  bindings:
    - ref: energy
  template:
    ref: energy
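The energy binding and template referenced above are not shown in the answer. As an illustrative sketch only (the parameter names and body paths are assumptions based on the Bitbucket Cloud repo:push payload, not something posted in the original answer), a matching TriggerBinding could look like:
apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerBinding
metadata:
  name: energy
spec:
  params:
    # full repository name, e.g. "workspace/repo" (assumed payload path)
    - name: repo-full-name
      value: $(body.repository.full_name)
    # branch that was pushed (assumed payload path for repo:push events)
    - name: branch
      value: $(body.push.changes[0].new.name)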
I am trying to achieve the same thing: starting a build when a PR has been merged in Bitbucket Cloud.
I was able to create the EventListener resource, but my pipeline is not triggered after merging a PR.
Looking at your example, I still have some questions:
How are the Git repository and the secret configured?
How can you specify a specific branch?
I was looking for a complete example but it seems like Tekton is just ignoring Bitbucket Cloud as a VCS ...
Kind regards,
Bregt
I deployed Envoy as a sidecar to manage OAuth2. Everything works fine for all the resources, and the client is redirected to the OIDC provider in order to authenticate.
Here is a part of my conf (managed in a Helm chart):
- name: envoy.filters.network.http_connection_manager
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
    access_log:
    - name: envoy.access_loggers.file
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
        path: /dev/stdout
    codec_type: auto
    stat_prefix: ingress_http
    route_config:
      name: local_route
      virtual_hosts:
      - name: my-service
        domains:
        - "*"
        routes:
        - match:
            prefix: "/"
          route:
            cluster: my-service
    http_filters:
    - name: envoy.filters.http.oauth2
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.filters.http.oauth2.v3.OAuth2
        config:
          token_endpoint:
            cluster: {{ .Values.back.envoy.oidc.name }}
            uri: https://{{ .Values.back.envoy.oidc.address }}/oidc/token
            timeout: 5s
          authorization_endpoint: https://{{ .Values.back.envoy.oidc.address }}/oidc/authorize
          redirect_uri: "%REQ(x-forwarded-proto)%://%REQ(:authority)%/oidc/callback"
          redirect_path_matcher:
            path:
              exact: /oidc/callback
          signout_path:
            path:
              exact: /oidc/signout
          credentials:
            client_id: {{ required "back.envoy.oidc.client_id is required" .Values.back.envoy.oidc.client_id }}
            token_secret:
              name: token
              sds_config:
                resource_api_version: V3
                path: "/etc/envoy/token-secret.yaml"
            hmac_secret:
              name: hmac
              sds_config:
                resource_api_version: V3
                path: "/etc/envoy/hmac-secret.yaml"
          forward_bearer_token: true
          # (Optional): defaults to 'user' scope if not provided
          auth_scopes:
          - user
          - openid
          - email
          - homelan_devices_read
          - homelan_topology_read
          - homelan_devices_write
          # (Optional): set resource parameter for Authorization request
          #resources:
          #- oauth2-resource
          #- http://example.com
    - name: envoy.filters.http.router
      typed_config: {}
Now I'd like some of the exposed resources to not require authentication.
The OAuth filter doc says "Leave this empty to disable OAuth2 for a specific route, using per filter config." (see https://www.envoyproxy.io/docs/envoy/latest/api-v3/extensions/filters/http/oauth2/v3/oauth.proto#envoy-v3-api-msg-extensions-filters-http-oauth2-v3-oauth2config)
This phrase makes me think that it may be possible.
I tried to handle it by changing my conf through virtual_hosts this way:
virtual_hosts:
- name: no-oauth
  domains: ["*"]
  typed_per_filter_config:
    envoy.filters.http.oauth2:
      "@type": type.googleapis.com/envoy.extensions.filters.http.oauth2.v3.OAuth2
  routes:
  - match:
      prefix: "/api/v1/myResource1"
    route:
      cluster: my-service
- name: my-service
  domains: ["*"]
  routes:
  - match:
      prefix: "/api/v1/myResource2"
    route:
      cluster: my-service
I get the error: [critical][main] [source/server/server.cc:117] error initializing configuration '/etc/envoy/envoy.yaml': The filter envoy.filters.http.oauth2 doesn't support virtual host-specific configurations
Any ideas? Has anyone implemented the Envoy OAuth2 filter with some routes excluded?
After looking at my Envoy logs, I realized that the path is exposed as the ":path" header.
pass_through_matcher matches on headers, so just adding:
pass_through_matcher:
- name: ":path"
  prefix_match: "/healthz"
- name: ":path"
  prefix_match: "/api/v1/myResource1"
to my conf, without the Lua filter (see my previous answer), makes it work.
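For context, the matcher sits directly under the OAuth2 filter's config block, next to the endpoints and credentials (a trimmed sketch reusing the snippet above; the elided settings are the ones from my earlier config):
- name: envoy.filters.http.oauth2
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.oauth2.v3.OAuth2
    config:
      # token_endpoint, authorization_endpoint, credentials, etc. as in the original config
      pass_through_matcher:
      - name: ":path"
        prefix_match: "/healthz"
      - name: ":path"
        prefix_match: "/api/v1/myResource1"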
For reference, I found a workaround:
I added a Lua filter before my OAuth2 one:
- name: envoy.filters.http.lua
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua
    inline_code: |
      function envoy_on_request(request_handle)
        request_handle:headers():add("X-Path", request_handle:headers():get(":path"))
      end
This copies the path into a regular header.
Then I can use this element of the OAuth2 config:
pass_through_matcher
(repeated config.route.v3.HeaderMatcher) Any request that matches any of the provided matchers will be passed through without OAuth validation.
So I add this to my OAuth2 filter:
pass_through_matcher:
- name: "X-path"
  prefix_match: "/healthz"
- name: "X-path"
  prefix_match: "/api/v1/myResource1"
Then my /api/v1/myResource1 requests (and /healthz as well) don't need authentication (they bypass OAuth2), while my /api/v1/myResource2 requests still do.
I still have one unanswered question:
What does the OAuth filter doc mean by "Leave this empty to disable OAuth2 for a specific route, using per filter config."?
We have two different teams working on different applications. I would like to send alert notifications to different Slack channels using the same alert expressions. I found some examples, but I don't understand the main reason for using receiver: 'default' when adding a new route. What is its role, and what does it affect if I change it?
I would also appreciate help with sending the notifications to multiple Slack channels. The new config below is what I tried.
Current alertmanager.yml
receivers:
- name: 'team-1'
  slack_configs:
  - api_url: 'https://hooks.slack.com/services/1'
    channel: '#hub-alerts'

route:
  group_wait: 10s
  group_interval: 5m
  receiver: 'team-1'
  repeat_interval: 1h
  group_by: [datacenter]
New alertmanager.yml
receivers:
- name: 'team-1'
  slack_configs:
  - api_url: 'https://hooks.slack.com/services/1'
    channel: '#channel-1'
    send_resolved: true
- name: 'team-2'
  slack_configs:
  - api_url: 'https://hooks.slack.com/services/2'
    channel: '#channel-2'
    send_resolved: true

route:
  group_wait: 10s
  group_interval: 5m
  repeat_interval: 1h
  group_by: [datacenter]
  receiver: 'default'
  routes:
  - receiver: 'team-1'
  - receiver: 'team-2'
You need to set the continue property on your route to true. By default it is false.
The default behaviour of Alertmanager is to traverse your routes looking for a match and to stop at the first node where it finds one.
What you want to do is fire an alert at the match and continue to search for other matches and fire those too.
Relevant documentation section: https://prometheus.io/docs/alerting/latest/configuration/#route
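Applied to your new config above, that looks roughly like this (only continue is added; everything else stays the same):
route:
  group_wait: 10s
  group_interval: 5m
  repeat_interval: 1h
  group_by: [datacenter]
  receiver: 'default'
  routes:
  - receiver: 'team-1'
    continue: true   # keep matching so team-2 also receives the alert
  - receiver: 'team-2'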
An example using this:
https://awesome-prometheus-alerts.grep.to/alertmanager.html
In-lined the example above in case it ever breaks.
# alertmanager.yml
route:
  # When a new group of alerts is created by an incoming alert, wait at
  # least 'group_wait' to send the initial notification.
  # This ensures that multiple alerts for the same group that start
  # firing shortly after one another are batched together in the first
  # notification.
  group_wait: 10s
  # When the first notification has been sent, wait 'group_interval' to send
  # a batch of new alerts that started firing for that group.
  group_interval: 5m
  # If an alert has successfully been sent, wait 'repeat_interval' to
  # resend it.
  repeat_interval: 30m
  # A default receiver
  receiver: "slack"
  # All the above attributes are inherited by all child routes and can be
  # overwritten on each.
  routes:
  - receiver: "slack"
    group_wait: 10s
    match_re:
      severity: critical|warning
    continue: true
  - receiver: "pager"
    group_wait: 10s
    match_re:
      severity: critical
    continue: true

receivers:
- name: "slack"
  slack_configs:
  - api_url: 'https://hooks.slack.com/services/XXXXXXXXX/XXXXXXXXX/xxxxxxxxxxxxxxxxxxxxxxxxxxx'
    send_resolved: true
    channel: 'monitoring'
    text: "{{ range .Alerts }}<!channel> {{ .Annotations.summary }}\n{{ .Annotations.description }}\n{{ end }}"
- name: "pager"
  webhook_configs:
  - url: http://a.b.c.d:8080/send/sms
    send_resolved: true
I have an AlertmanagerConfig with the configuration below, and now I need to route info alerts to a null receiver. Can I have multiple routes and multiple receivers?
kind: AlertmanagerConfig
metadata:
  name: Prometheus-alertmanager-config
  namespace: Prometheus
spec:
  route:
    receiver: alert-email-pagerduty-config
    groupBy: ['alertname', 'priority', 'severity']
    groupWait: 30s
    groupInterval: 5m
    repeatInterval: 15m
    continue: true
  receivers:
  - name: alert-email-pagerduty-config
    emailConfigs:
    - to: {{.to_email}}
      sendResolved: true
      from: {{.from_email}}
      smarthost: {{.smarthost}}
      authUsername: {{.mail_username}}
      authPassword:
        name: 'alert-smtp-password'
        key: 'password'
      requireTLS: true
---
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: alert-smtp-password
  namespace: prometheus
stringData:
  password: {{.mail_password}}
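For illustration: an AlertmanagerConfig route can carry child routes, and the receivers list can hold several entries, including an empty one that simply drops what it receives. A hedged sketch of what routing info alerts to an empty "null" receiver might look like (the matcher value and the 'null' receiver name are assumptions, not taken from the config above):
spec:
  route:
    receiver: alert-email-pagerduty-config
    routes:
      # assumed child route sending info alerts to an empty receiver
      - receiver: 'null'
        matchers:
          - name: severity
            value: info
  receivers:
    - name: 'null'   # no configs, so notifications are dropped
    - name: alert-email-pagerduty-config
      # emailConfigs as in the original config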
I would like to add an AWS resource conditionally, based on the presence of an env var. I tried serverless-cloudformation-parameter-setter, but I get a generic error on deployment and I don't see what I need to do to fix it.
I'm trying to deploy a simple Lambda + SQS stack; if an env var is defined, the queue should also be subscribed to the topic denoted by that env var, and if the var is not defined, that part should be skipped entirely, leaving just the Lambda and the queue.
This is what I tried:
plugins:
  - serverless-cloudformation-parameter-setter

provider:
  name: aws
  runtime: nodejs8.10
  region: eu-west-1

functions:
  update:
    handler: index.update
    events:
      - sqs:
          arn:
            Fn::GetAtt:
              - Queue
              - Arn

custom:
  cf-parameters:
    SourceTopicArn: "${env:UPDATE_SNS_ARN}"

resources:
  Parameters:
    SourceTopicArn:
      Type: string
  Resources:
    Queue:
      Type: "AWS::SQS::Queue"
    Subscription:
      Type: "AWS::SNS::Subscription"
      Condition: SourceTopicArn
      Properties:
        TopicArn:
          Ref: SourceTopicArn
        Endpoint:
          Ref: Queue
The error I receive is: The CloudFormation template is invalid: Template format error: Unrecognized parameter type: string
If I remove all the parameter stuff, it works fine.
The Type has to be String, not string. See the supported parameter data types section in the docs.
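In the serverless.yml above, the fix is just:
resources:
  Parameters:
    SourceTopicArn:
      Type: String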
Within JJB, you can define project-level variables like this:
- defaults:
    name: global
    git_url: "git@....."

- project:
    name: some-test
    jobs:
      - test-{name}

- job-template:
    name: test-{name}
    scm:
      - git:
          url: "{git_url}"
          branches:
            - master
My question: must I hardcode the value of git_url at the defaults level, or is there some JJB mechanism to bring it in at job load/execution time?
The reason I ask is that the YAML that contains these JJB jobs can be used to define TEST, QA and PROD. It would be nice to just point at a properties file that contains the value for git_url and any other global variable values. I took a look at http://docs.openstack.org/infra/jenkins-job-builder/definition.html?highlight=default#defaults and did not see any such mechanism.
If I understand your question correctly, there are two other approaches available within the context of a single YAML file.
Approach 1: Set git_url at the project level
- project:
    name: some-test
    git_url: "git@dogs.net:woof/bark.git"
    jobs:
      - test-{name}:

- job-template:
    name: test-{name}
    scm:
      - git:
          url: "{git_url}"
          branches:
            - master
Here git_url is set at the project level. This approach allows you to define a second project with a different value for git_url, i.e.
- project:
    name: some-other-test
    git_url: "git@cats.net:meow/meow.git"
    jobs:
      - test-{name}:
Approach 2: Set git_url at the job-template instance level
- project:
    name: some-test
    jobs:
      - test-{name}:
          git_url: "git@....."

- job-template:
    name: test-{name}
    scm:
      - git:
          url: "{git_url}"
          branches:
            - master
Here git_url is set on the actual instance of the job-template where it is specified. If your job-template had more than just {name} in its name, this would allow you to create multiple instances of it in the list of jobs at the project level, i.e.
- project:
    name: some-test
    git_url: "git@....."
    jobs:
      - test-{name}-{type}:
          type: 'cat'
      - test-{name}-{type}:
          type: 'dog'

- job-template:
    name: test-{name}-{type}
    display-name: 'Test for {type} projects'
    scm:
      - git:
          url: "{git_url}"
          branches:
            - master
Thoughts on TEST vs QA vs PROD
You also mentioned that you would like some kind of external properties file to differentiate between TEST, QA, and PROD environments. To address this, let's consider four different files: project.yaml, defaults/TEST.yaml, defaults/QA.yaml and defaults/PROD.yaml, whose contents are enumerated below.
project.yaml
- project:
    name: some-test
    jobs:
      - test-{name}:
defaults/TEST.yaml
- defaults:
    name: global
    git_url: "git@dogs.net:woof/test.git"
defaults/QA.yaml
- defaults:
    name: global
    git_url: "git@dogs.net:woof/qa.git"
defaults/PROD.yaml
- defaults:
    name: global
    git_url: "git@dogs.net:woof/prod.git"
Okay so these aren't great examples because you probably wouldn't have a different git repository for each environment, but I don't want to complicate things by straying too far from your original example.
With JJB you can specify more than one YAML file on the command line (I don't want to complicate the example or its explanation, but you can also specify directories full of JJB yaml). To differentiate between TEST, QA, and PROD deployments of your Jenkins job you can then do something like:
jenkins-jobs project.yaml:defaults/TEST.yaml
For your test environment.
jenkins-jobs project.yaml:defaults/QA.yaml
For your qa environment.
jenkins-jobs project.yaml:defaults/PROD.yaml
For your prod environment.
Hope that helps.