Using ansible to grep then format the output [duplicate]

I am using ansible to gather information from remote nodes and will then use this information to update the relevant RPMs.
The issue I am having is collecting the version numbers of various applications and writing them to a file.
Playbook:
---
- name: Check Application Versions
  hosts: kubernetes
  tasks:
    - name: Check K8S version.
      shell: kubectl --version
      register: k8s_version
    - debug: msg="{{ k8s_version.stdout }}"
Inventory file:
[kubernetes]
172.29.219.102
172.29.219.105
172.29.219.104
172.29.219.103
Output:
TASK [debug] *******************************************************************
ok: [172.29.219.102] => {
    "msg": "Kubernetes v1.4.0"
}
ok: [172.29.219.103] => {
    "msg": "Kubernetes v1.4.0"
}
ok: [172.29.219.105] => {
    "msg": "Kubernetes v1.4.0"
}
ok: [172.29.219.104] => {
    "msg": "Kubernetes v1.4.0"
}
The above portion is simple and works. Now I want to write this information to a file, with one line per node, like:
Kubernetes v1.4.0
Kubernetes v1.4.0
Kubernetes v1.4.0
Kubernetes v1.4.0
So I added the below line:
- local_action: copy content={{ k8s_version.stdout_lines }} dest=/tmp/test
My /tmp/test looks like :
# cat /tmp/test
["Kubernetes v1.4.0"]
There is only one value here, so I tried something different:
- local_action: lineinfile dest=/tmp/foo line="{{ k8s_version.stdout }}" insertafter=EOF
This resulted in:
# cat /tmp/foo
Kubernetes v1.4.0
I'm trying to figure out why I only see one value when I should see the versions from every node in my inventory file. What am I doing wrong?

What am I doing wrong?
The lineinfile module does not perform the action "add a line to a file"; instead, it ensures a given line is present in the file. Since all your target nodes report the same version, it won't add the same line multiple times.
The copy module, on the other hand, ran once for every host and overwrote the file each time, which is why only a single value was left.
If you need to register values for all hosts, you can for example create a template which will loop over hosts in the kubernetes group:
- copy:
    content: "{% for host in groups.kubernetes %}{{ hostvars[host].k8s_version.stdout }}\n{% endfor %}"
    dest: /tmp/test
  delegate_to: localhost
  run_once: true
Another way would be to extract the values from hostvars with the map filter, but given you want the values from the kubernetes host group only, I'm not sure it would be prettier. And having a for loop in the template allows you to easily add host names.
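For completeness, a minimal sketch of that map-based alternative, using the extract filter (available since Ansible 2.1) with the same k8s_version variable as above:
- copy:
    content: "{{ groups.kubernetes | map('extract', hostvars, ['k8s_version', 'stdout']) | join('\n') }}\n"
    dest: /tmp/test
  delegate_to: localhost
  run_once: true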

According to this post:
Ansible register result of multiple commands
your desired variable is in k8s_version.results. To access it you need to work with a template that simply iterates over it:
- local_action: template src=my_nodes.j2 dest=/tmp/test
And the template templates/my_nodes.j2 :
{% for res in k8s_version.results %}
{{ res.stdout }}
{% endfor %}
The complete playbook would then be:
---
- name: Check Application Versions
  hosts: kubernetes
  tasks:
    - name: Check K8S version.
      shell: kubectl --version
      register: k8s_version
    - local_action: template src=my_nodes.j2 dest=/tmp/test

Related

Ansible 'get_url' error - URL can't contain control characters

Getting an error in the Ansible get_url module:
URL can't contain control characters.
Sample code:
- get_url:
    url: "{{ jenkins_url_repo }}" --no-check-certificate
I've also tried the following, but the error still persist.
url: "{{ jenkins_url_repo }} {{ no-check-cert }}"
url: https://pkg.jenkins.io/redhat-stable/jenkins.repo --no-check-certificate
url: "https://pkg.jenkins.io/redhat-stable/jenkins.repo --no-check-certificate"
url: 'https://pkg.jenkins.io/redhat-stable/jenkins.repo --no-check-certificate'
--no-check-certificate looks like it's supposed to be part of some other command, not part of the URL; it is actually a wget flag.
Probably you've copied this from somewhere that fetches the file with wget and disables verification of the remote SSL certificate, because of incorrectly configured local trusted roots.
You probably want to just leave the flag off and let the certificate be verified, so that you don't install malicious software!
If you're really sure you want to ignore the security check, the Ansible equivalent is the validate_certs option.
- get_url:
    url: "{{ jenkins_url_repo }}"
    dest: /etc/yum.repos.d/jenkins.repo  # get_url requires a dest; this path is illustrative
    # WARNING! DISABLING SECURITY! THIS IS DANGEROUS!
    validate_certs: no
    # WARNING! DISABLING SECURITY! THIS IS DANGEROUS!

jkube resource failed: Unknown type CRD

I am using JKube to deploy a Spring Boot hello-world application on my Kubernetes installation. I wanted to add a resource fragment defining a Traefik ingress route, but k8s:resource fails with "Unknown type 'ingressroute'".
IngressRoute has already been defined on the cluster using a custom resource definition.
How do I write my fragment?
The following works when I deploy it with kubectl.
# IngressRoute
---
kind: IngressRoute
apiVersion: traefik.containo.us/v1alpha1
metadata:
  name: demo
  namespace: default
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`demo.domainname.com`)
      kind: Rule
      services:
        - name: demo
          port: 80
@Rohan Kumar
Thank you for your answer. I can build and deploy it, but as soon as I add a file to use my IngressRoute, the k8s:resource goal fails.
I added files, one for each CRD, with filenames ending in -cr.yml, and added the following to the pom file:
<resources>
  <customResourceDefinitions>
    <customResourceDefinition>traefikservices.traefik.containo.us</customResourceDefinition>
    <customResourceDefinition>tlsstores.traefik.containo.us</customResourceDefinition>
    <customResourceDefinition>tlsoptions.traefik.containo.us</customResourceDefinition>
    <customResourceDefinition>middlewares.traefik.containo.us</customResourceDefinition>
    <customResourceDefinition>ingressrouteudps.traefik.containo.us</customResourceDefinition>
    <customResourceDefinition>ingressroutetcps.traefik.containo.us</customResourceDefinition>
    <customResourceDefinition>ingressroutes.traefik.containo.us</customResourceDefinition>
  </customResourceDefinitions>
</resources>
Example CustomResourceDefinition for IngressRoute:
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ingressroutes.traefik.containo.us
spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: IngressRoute
    plural: ingressroutes
    singular: ingressroute
  scope: Namespaced
But when running k8s:resource I get the error:
Failed to execute goal org.eclipse.jkube:kubernetes-maven-plugin:1.0.2:resource (default-cli) on project demo:
Execution default-cli of goal org.eclipse.jkube:kubernetes-maven-plugin:1.0.2:resource failed: Unknown type
'ingressroute' for file 005-ingressroute.yml. Must be one of : pr, lr, pv, project, replicaset, cronjob, ds,
statefulset, clusterrolebinding, pvc, limitrange, imagestreamtag, replicationcontroller, is, rb, rc, ingress, route,
projectrequest, job, rolebinding, rq, template, serviceaccount, bc, rs, rbr, role, pod, oauthclient, ns,
resourcequota, secret, persistemtvolumeclaim, istag, customerresourcedefinition, sa, persistentvolume, crb,
clusterrb, crd, deploymentconfig, configmap, deployment, imagestream, svc, rolebindingrestriction, cj, cm,
buildconfig, daemonset, cr, crole, pb, clusterrole, pd, policybinding, service, namespace, dc
I'm from the Eclipse JKube team. We have improved CustomResource support a lot in our recent v1.2.0 release: now you only need to worry about how you name your CustomResource fragment, and Eclipse JKube will detect the CustomResourceDefinition for the specified IngressRoute.
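Since the error message shows org.eclipse.jkube:kubernetes-maven-plugin:1.0.2, the first step would be upgrading the plugin; a minimal pom.xml sketch (coordinates taken from the error, version from this answer):
<plugin>
  <groupId>org.eclipse.jkube</groupId>
  <artifactId>kubernetes-maven-plugin</artifactId>
  <version>1.2.0</version>
</plugin>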
I think you need to name your CustomResource fragments with a *-cr.yml suffix; this distinguishes them from standard Kubernetes resources. For example, I added your IngressRoute fragment to my src/main/jkube like this:
jkube-custom-resource-fragments : $ ls src/main/jkube/
ats-crd.yml crontab-crd.yml dummy-cr.yml podset-crd.yaml traefic-crd.yaml
ats-cr.yml crontab-cr.yml ingressroute-cr.yml second-dummy-cr.yml traefic-ingressroute2-cr.yml
crd.yaml dummy-crd.yml istio-crd.yaml test2-cr.yml virtualservice-cr.yml
jkube-custom-resource-fragments : $ ls src/main/jkube/traefic-ingressroute2-cr.yml
src/main/jkube/traefic-ingressroute2-cr.yml
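The fragment body itself is presumably just the plain custom resource, exactly as you would pass it to kubectl; e.g. src/main/jkube/traefic-ingressroute2-cr.yml could contain the IngressRoute from the question:
kind: IngressRoute
apiVersion: traefik.containo.us/v1alpha1
metadata:
  name: demo
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`demo.domainname.com`)
      kind: Rule
      services:
        - name: demo
          port: 80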
Then you should be able to see your IngressRoute generated after k8s:resource phase:
$ mvn k8s:resource
...
$ cat target/classes/META-INF/jkube/kubernetes.yml
You can then go ahead and apply these generated manifests to your Kubernetes Cluster with apply goal:
$ mvn k8s:apply
...
$ kubectl get ingressroute
NAME   AGE
demo   17s
foo    16s
I tried all this on this reproducer project and it seemed to be working okay for me: https://github.com/r0haaaan/jkube-custom-resource-fragments

How to create a file while creating(claiming) persistent volume?

I want to create a file at a certain path. The Docker image being used is defined in the following config:
singleuser:
  image:
    # Get the latest image tag at:
    # https://hub.docker.com/r/jupyter/datascience-notebook/tags/
    # Inspect the Dockerfile at:
    # https://github.com/jupyter/docker-stacks/tree/master/datascience-notebook/Dockerfile
    name: jupyter/datascience-notebook
    tag: 177037d09156
The above yaml is taken from here: https://zero-to-jupyterhub.readthedocs.io/en/latest/user-environment.html
JupyterHub creates a new pod for each user in the singleuser scheme. I want to create a file as soon as a new volume is created.
I tried reading docs and other relevant questions but none of them addressed this issue.
Below is the snippet where the storage logic is defined. Each user gets a new PVC, and I want to create a new file in this PVC whenever it is created. I already have homeMountPath and the username in the snippet below; what I don't know is how to write a file, something along the lines of: echo "run_id = 'sample' " > /home/jovyan/username/.ipython/profile_default/startup/aviral.py
storage:
  type: dynamic
  extraLabels: {}
  extraVolumes: []
  extraVolumeMounts: []
  static:
    pvcName:
    subPath: '{username}'
  capacity: 10Gi
  homeMountPath: /home/jovyan
  dynamic:
    storageClass:
    pvcNameTemplate: claim-{username}{servername}
    volumeNameTemplate: volume-{username}{servername}
    storageAccessModes: [ReadWriteOnce]
The full helm chart is available here, the official one: https://jupyterhub.github.io/helm-chart/jupyterhub-0.8.2.tgz
I expect the pods created in the jhub namespace to already have this file.
You can look at the ConfigMap object. Files from a ConfigMap can be mounted as a volume inside the container.
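A minimal sketch of that idea, using the chart's extraVolumes/extraVolumeMounts hooks shown above (the ConfigMap name is illustrative):
apiVersion: v1
kind: ConfigMap
metadata:
  name: ipython-startup   # hypothetical name
  namespace: jhub
data:
  aviral.py: |
    run_id = 'sample'
and in the Helm values:
singleuser:
  storage:
    extraVolumes:
      - name: ipython-startup
        configMap:
          name: ipython-startup
    extraVolumeMounts:
      - name: ipython-startup
        mountPath: /home/jovyan/.ipython/profile_default/startup/aviral.py
        subPath: aviral.py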

SaltStack: getting No top file or master_tops data matches found

I am new to SaltStack, following some tutorials, and trying to execute state.apply, but I get the error below:
# salt "host2" state.apply
host2:
----------
          ID: states
    Function: no.None
      Result: False
     Comment: No Top file or external nodes data matches found
     Started:
    Duration:
     Changes:

Summary for host2
------------
Succeeded: 0
Failed:    1
------------
Total states run: 1
I am able to test.ping the host successfully.
Here is the directory structure:
/etc/salt/srv/salt/states
|-- top.sls
`-- installations
    `-- init.sls
The file_roots entry in the master config:
file_roots:
  base:
    - /srv/salt/states
top.sls ->
base:
  '*':
    - installations
init.sls ->
install_apache:
  pkg.installed:
    - name: apache2
You need to change the path to your states, or move them to the path set in file_roots.
The file_roots option is where you should place your files, you should have the following tree:
# tree /srv/salt/
/srv/salt/
|-- installations
|   `-- init.sls
`-- top.sls
Or you could change your file_roots, but I wouldn't do it, since /srv/salt/ seems to be a sort of "standard".
Have a look at the tutorials, if you haven't already: https://docs.saltstack.com/en/getstarted/fundamentals/
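In this case the states live under /etc/salt/srv/salt/states, so a minimal sketch of the suggested fix would be (paths taken from the question):
mkdir -p /srv/salt
cp -r /etc/salt/srv/salt/states/. /srv/salt/
salt "host2" state.apply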
I changed the
file_roots:
  base:
    - /etc/salt/srv/salt/states
and it works for me. Looks like it wasn't picking up the path correctly.

Don't use config for "links" in "docker_container" if it's not set

I am using Ansible with "docker_container" to deploy a web app to various environments. When the target host is a non-production server, I set the "links" option with a variable, e.g.:
links: "{{ var_db_link }}"
.. and this works great ... when var_db_link is actually set.
Problem: I need to be able to leave this unset when deploying to a production host, because in that case the DB is never going to be a linked container. I was expecting that if it was not set, Ansible would ignore the "links" option and not try to use it. Instead it uses the unset value, which produced the error: FAILED! => {"changed": false, "failed": true, "msg": "Error creating container: 500 Server Error: Internal Server Error (\"{\"message\":\"Could not get container for \"}\")"}
Question: Is this even possible? (Can Ansible be told not to use an option when it has no value set?) .. or .. should I (unfortunately) create separate roles/playbooks for each situation (i.e. with the "links" option set and without it)?
This line in the referenced commit gives a clue as to the solution:
Instead of trying to set it to null, just set it to an empty list:
links: "{{ [] if var_db_link == '' else var_db_link }}"
The behavior you want looks like what is described in Omitting Parameters.
You should do:
links: "{{ var_db_link | default(omit) }}"
You will need Ansible 1.8 or above.
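A minimal sketch of the resulting task (container name and image are illustrative):
- docker_container:
    name: webapp                                  # hypothetical container name
    image: mycompany/webapp:latest                # hypothetical image
    links: "{{ var_db_link | default(omit) }}"    # the option is omitted entirely when the var is unset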
