alertmanager 0.25 send resolved messages as thread of original alert message - slack-api

I am using Alertmanager 0.25 with Slack webhooks, trying to send my resolve messages as thread replies to the original alert posted to a Slack channel.
According to the documentation and to ChatGPT, the template below should work, but instead my Alertmanager container fails to come up (restarting constantly) with the following error:
{"caller":"coordinator.go:118","component":"configuration","err":"yaml: unmarshal errors:\n line 119: field thread_ts not found in type config.plain\n line 131: field thread_ts not found in type config.plain\n line 160: field thread_ts not found in type config.plain\n line 171: field thread_ts not found in type config.plain\n line 182: field thread_ts not found in type config.plain\n line 194: field thread_ts not found in type config.plain\n line 245: field thread_ts not found in type config.plain","file":"/etc/alertmanager/alertmanager.yml","level":"error","msg":"Loading configuration file failed","ts":"2023-02-12T10:06:21.586Z"}
my alertmanager configuration looks like so:
- name: "default-alerts-test"
  slack_configs:
  - api_url: 'https://hooks.slack.com/services/XXXXXXXXX/XXXXXXXXXXX/XXXXXXXXXXXXX'
    send_resolved: true
    thread_ts: "{{ .Annotations.thread_ts }}"
    http_config: {}
    icon_emoji: ':this-is-fine-fire:'
    channel: '#monitoring-alerts'
    title: "{{ range .Alerts }}{{ .Annotations.summary }}\n{{ end }}"
    text: "{{ .Annotations.summary }}\n{{ .Annotations.description }}\nGroup Key: {{ .Labels.group_key }}"
    callback_id: '{{ template "slack.default.callbackid" . }}'
Without thread_ts my configuration works fine and I am getting resolve messages in the same channel I am alerting to, so it's not an authentication or authorization issue.
As I understand it, the thread_ts parameter was implemented in version 0.22, so what am I missing here?

Related

cert-manager: Failed to register ACME account: invalid character '<' looking for beginning of value

I installed the cert-manager using the Helm Chart. I created a ClusterIssuer but I see that it's on a failed state:
kubectl describe clusterissuer letsencrypt-staging
ErrRegisterACMEAccount Failed to register ACME account: invalid character '<' looking for beginning of value
What could be causing this invalid character '<'?
This error is most likely the result of an incorrect server URL: the URL you specified is returning HTML (hence the complaint about <).
Make sure that your server URL is https://acme-staging-v02.api.letsencrypt.org/directory and NOT just https://acme-staging-v02.api.letsencrypt.org/; the /directory path must be included in the URL.
So the ClusterIssuer should look like this (emphasis on the .spec.acme.server)
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    email: name.surname@mycompany.com
    privateKeySecretRef:
      name: letsencrypt-staging
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    solvers:
    - dns01:
        route53:
          hostedZoneID: XXXXXXXXXXXXXX
          region: eu-north-1
      selector:
        dnsZones:
        - xxx.yyy.mycompany.com
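To confirm the diagnosis, you can fetch the configured URL yourself and check what comes back (a sketch, assuming curl is available): a valid ACME directory endpoint returns a JSON object, while a wrong URL typically returns an HTML page.

```shell
# Fetch the directory URL; a JSON directory starts with '{',
# while the offending HTML response starts with '<'
body=$(curl -s https://acme-staging-v02.api.letsencrypt.org/directory)
case "$body" in
  "{"*) echo "JSON directory - URL is correct" ;;
  "<"*) echo "HTML response - this is what produces the invalid character '<' error" ;;
esac
```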

coalesce.go:163: warning: skipped value for initContainers: Not a table

helm template .
coalesce.go:163: warning: skipped value for initContainers: Not a table.
Error: YAML parse error on xray/templates/xray-statefulset.yaml: error converting YAML to JSON: yaml: line 48: did not find expected key
Use --debug flag to render out invalid YAML
I am using the below helm chart
https://artifacthub.io/packages/helm/jfrog/xray/103.51.0
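The "Not a table" warning generally means a values override sets a key to a scalar where the chart's default values define it as a map or list, so Helm skips the override during value merging. A minimal sketch of the mismatch (hypothetical values, not the actual xray chart defaults):

```yaml
# chart's default values.yaml defines a structured value
initContainers:
  resources: {}

# a user override like this triggers
# "coalesce.go:163: warning: skipped value for initContainers: Not a table"
initContainers: ""
```

Checking your override files for a scalar (or empty-string) initContainers value is usually the quickest way to track this down.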

Using ansible to grep then format the output [duplicate]

I am using ansible to gather information from remote nodes and will then use this information to update relevant RPMs.
The issue I am having is collecting the version numbers of various applications and writing them to a file.
Playbook:
---
- name: Check Application Versions
  hosts: kubernetes
  tasks:
  - name: Check K8S version.
    shell: kubectl --version
    register: k8s_version
  - debug: msg="{{ k8s_version.stdout }}"
Inventory file:
[kubernetes]
172.29.219.102
172.29.219.105
172.29.219.104
172.29.219.103
Output:
TASK [debug] *******************************************************************
ok: [172.29.219.102] => {
"msg": "Kubernetes v1.4.0"
}
ok: [172.29.219.103] => {
"msg": "Kubernetes v1.4.0"
}
ok: [172.29.219.105] => {
"msg": "Kubernetes v1.4.0"
}
ok: [172.29.219.104] => {
"msg": "Kubernetes v1.4.0"
}
The above portion is simple and works. Now I want to write this output to a file. I want something like:
Kubernetes v1.4.0
Kubernetes v1.4.0
Kubernetes v1.4.0
Kubernetes v1.4.0
So I added the below line:
- local_action: copy content={{ k8s_version.stdout_lines }} dest=/tmp/test
My /tmp/test looks like :
# cat /tmp/test
["Kubernetes v1.4.0"]
There is only one value here.
I then tried something different.
- local_action: lineinfile dest=/tmp/foo line="{{ k8s_version.stdout }}" insertafter=EOF
This resulted in:
# cat /tmp/foo
Kubernetes v1.4.0
I'm trying to figure out why I only see one value, whereas I should see the versions from every node in my inventory file. What am I doing wrong?
The lineinfile module does not perform the action "add a line to a file"; instead, it ensures a given line is present in the file. If all your target nodes have the same version, it won't add the same line multiple times.
The copy module, on the other hand, was overwriting the file on each run.
If you need to register values for all hosts, you can for example create a template which will loop over hosts in the kubernetes group:
- copy:
    content: "{% for host in groups.kubernetes %}{{ hostvars[host].k8s_version.stdout }}\n{% endfor %}"
    dest: /tmp/test
  delegate_to: localhost
  run_once: true
Another way would be to extract the values with map from hostvars, but given you want the values from kubernetes host group only, I'm not sure it would be prettier. And having a for in the template allows you to easily add host names.
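For completeness, the map/extract variant mentioned above could look like this (a sketch, assuming the same k8s_version register as in the playbook; not tested):

```yaml
- copy:
    content: "{{ groups.kubernetes | map('extract', hostvars, ['k8s_version', 'stdout']) | join('\n') }}\n"
    dest: /tmp/test
  delegate_to: localhost
  run_once: true
```

The extract filter looks up each host's k8s_version.stdout in hostvars, so the result is one version string per host in the kubernetes group.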
According to this post
Ansible register result of multiple commands
your desired variable is in k8s_version.results. To access it, you need to work with a template where you iterate over it:
- local_action: template src=my_nodes.j2 dest=/tmp/test
And the template templates/my_nodes.j2 :
{% for res in k8s_version.results %}
{{ res.stdout }}
{% endfor %}
The complete playbook would then be:
---
- name: Check Application Versions
  hosts: kubernetes
  tasks:
  - name: Check K8S version.
    shell: kubectl --version
    register: k8s_version
  - local_action: template src=my_nodes.j2 dest=/tmp/test

Don't use config for "links" in "docker_container" if it's not set

I am using Ansible with "docker_container" to deploy a web app to various environments. When the target host is a non-production server I set the "links" option with a variable, e.g.:
links: "{{ var_db_link }}"
.. and this works great ... when var_db_link is actually set.
Problem: I need to be able to leave this unset when deploying to a production host, because in that case the DB is never going to be a linked container. I was expecting that if it was not set, Ansible would ignore the "links" option and not try to use it. Instead it uses the unset value, which produces the error: FAILED! => {"changed": false, "failed": true, "msg": "Error creating container: 500 Server Error: Internal Server Error (\"{\"message\":\"Could not get container for \"}\")"}
Question: Is this even possible? (Can Ansible be told not to use an option when it has no value set?) Or should I (unfortunately) create separate roles/playbooks for each situation (i.e. with the "links" option set and without it)?
This line in the referenced commit gives a clue as to the solution:
Instead of trying to set it to null, just set it to an empty list:
links: "{{ [] if var_db_link == '' else var_db_link }}"
The behavior you want looks like what is described in Omitting Parameters.
You should do :
links: "{{ var_db_link | default(omit) }}"
You will need Ansible 1.8 or above.
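Put together, the task could look like this (a sketch; the container name and image are placeholders, not from the question):

```yaml
- docker_container:
    name: webapp          # placeholder name
    image: myorg/webapp   # placeholder image
    links: "{{ var_db_link | default(omit) }}"
```

When var_db_link is undefined, the omit placeholder removes the links parameter entirely, so the module behaves exactly as if the option had never been written.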

Docker how to use boolean value on spec.container.env.value

Is there a way to pass a boolean value for spec.container.env.value?
I want to override, with Helm, a boolean env variable in a Docker parent image (https://github.com/APSL/docker-thumbor): UPLOAD_ENABLED
I made a simpler test.
If you try the following yaml :
apiVersion: v1
kind: Pod
metadata:
  name: envar-demo
  labels:
    purpose: demonstrate-envars
spec:
  containers:
  - name: envar-demo-container
    image: gcr.io/google-samples/node-hello:1.0
    env:
    - name: DEMO_GREETING
      value: true
If you try to create it with Kubernetes, you get the following error:
kubectl create -f envars.yaml
the error :
error: error validating "envars.yaml": error validating data: expected type string, for field spec.containers[0].env[0].value, got bool; if you choose to ignore these errors, turn validation off with --validate=false
And with --validate=false:
Error from server (BadRequest): error when creating "envars.yaml": Pod in version "v1" cannot be handled as a Pod: [pos 192]: json: expect char '"' but got char 't'
It doesn't work with integer values either.
spec.container.env.value is defined as a string; see here:
https://kubernetes.io/docs/api-reference/v1.6/#envvar-v1-core
You'd have to cast/convert/coerce the value to a boolean in your container when using it.
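For instance, a sketch of coercing it in a shell entrypoint (assuming the variable arrives as the literal string "true" or "false"):

```shell
# UPLOAD_ENABLED is always a string inside the container;
# compare it as text to recover the boolean intent
UPLOAD_ENABLED="${UPLOAD_ENABLED:-false}"
if [ "$UPLOAD_ENABLED" = "true" ]; then
  echo "uploads enabled"
else
  echo "uploads disabled"
fi
```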
Try escaping the value. The below worked for me:
- name: DEMO_GREETING
  value: "'true'"
This works for me.
In my example, one is hardcoded, and the other comes from an env var.
env:
- name: MY_BOOLEAN
  value: 'true'
- name: MY_BOOLEAN2
  value: '${MY_BOOLEAN2_ENV_VAR}'
So basically, I wrap single quotes around everything, just in case.
WARNING: Don't use hyphens in your env var names; that will not work.
If you are the helm chart implementer, just quote it:
data:
  # VNC_ONLY: {{ .Values.vncOnly }} <-- Wrong
  VNC_ONLY: "{{ .Values.vncOnly }}" # <-- Correct
From the command line you can also use --set-string instead of --set, and you will be able to pass the value without escaping. For instance:
--set-string "env.my-setting=False"
