Deleting multiple files and folders using Ansible - Jenkins

I need to delete files and folders using an Ansible playbook. I pass the file/folder paths as a variable to the playbook from a Groovy script.
The variables are in a properties file named delete.properties. I store the file/folder paths separately in a variable so I can change the paths as needed in the future.
delete.properties:
delete_files=/home/new-user/myfolder/dltfolder1 /home/new-user/myfolder/dltfolder2 /home/new-user/myfolder/dltfolder3
Groovy script:
stage("Read variable"){
steps{
script{
def propertifile = readFile(properti file path)
deleteParams = new Properties()
deleteParams.load(new StringReader(propertifile))
}
}
}
stage("Delete files/folders"){
steps{
script{
sh script: """cd ansible code path && \
export ANSIBLE_HOST_KEY_CHECKING=False && \
ansible-playbook delete.yml \
--extra-vars"dete_files=${deleteParams.delete_files}" --user user"""
}
}
}
Ansible playbook:
---
- name: delete files
  hosts: localhost
  tasks:
    - name: delete files
      file:
        path: "{{ delete_files }}"
        state: absent
As a result of the above code, only the first file path in the delete_files variable in delete.properties (/home/new-user/myfolder/dltfolder1) gets deleted.
I need the other file/folder paths included in the delete_files variable to be deleted too.

Put the path of the properties file into the extra vars. For example,
sh script: """cd ansible code path && \
    export ANSIBLE_HOST_KEY_CHECKING=False && \
    ansible-playbook delete.yml \
    --extra-vars "dete_files=/tmp/delete.properties" --user user"""
Then, given the tree
shell> tree /tmp/test
/tmp/test
├── f1
├── f2
└── f3
, the file
shell> cat /tmp/delete.properties
delete_files=/tmp/test/f1 /tmp/test/f2 /tmp/test/f3
, and the playbook
shell> cat delete.yml
- hosts: localhost

  vars:
    delete_files: "{{ lookup('ini',
                             'delete_files',
                             file=dete_files,
                             type='properties') }}"

  tasks:
    - debug:
        var: delete_files

    - name: delete files
      file:
        path: "{{ item }}"
        state: absent
      loop: "{{ delete_files.split() }}"
gives, running in --check --diff mode
shell> ansible-playbook delete.yml --extra-vars "dete_files=/tmp/delete.properties" -CD

PLAY [localhost] *****************************************************************************

TASK [debug] *********************************************************************************
ok: [localhost] =>
  delete_files: /tmp/test/f1 /tmp/test/f2 /tmp/test/f3

TASK [delete files] **************************************************************************
--- before
+++ after
@@ -1,5 +1,2 @@
 path: /tmp/test/f1
-path_content:
-  directories: []
-  files: []
-state: directory
+state: absent

changed: [localhost] => (item=/tmp/test/f1)
--- before
+++ after
@@ -1,5 +1,2 @@
 path: /tmp/test/f2
-path_content:
-  directories: []
-  files: []
-state: directory
+state: absent

changed: [localhost] => (item=/tmp/test/f2)
--- before
+++ after
@@ -1,5 +1,2 @@
 path: /tmp/test/f3
-path_content:
-  directories: []
-  files: []
-state: directory
+state: absent

changed: [localhost] => (item=/tmp/test/f3)

PLAY RECAP ***********************************************************************************
localhost: ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

One solution would be to parse the properties file inside the Ansible playbook with an ini lookup, if you are indeed acting on localhost, as shown in your playbook:
- hosts: localhost
  gather_facts: no

  tasks:
    - file:
        path: "{{ item }}"
        state: absent
      loop: >-
        {{
          lookup(
            'ini',
            'delete_files type=properties file=delete.properties'
          ).split()
        }}

Related

Ansible validate docker-compose with env_file and send to host

I created a role that has two files in its templates folder: docker-compose.yml.j2 and env.j2.
env.j2 is used by the docker-compose file:
version: "2"
services:
service_name:
image: {{ IMAGE | mandatory }}
container_name: service_name
mem_limit: 256m
user: "2001"
env_file: ".env"
Now my question: is there an Ansible module that sends the docker-compose file to the host and validates it there, since the .env file and the docker-compose file end up in the same folder on the host machine?
The following Ansible task returns an error because the .env file is not in the template folder but on the host.
- name: "Copy env file"
ansible.builtin.template:
src: "env.j2"
dest: "/opt/db_backup/.env"
mode: '770'
owner: deployment
group: deployment
- name: "Validate and copy docker compose file"
ansible.builtin.template:
src: "docker-compose.yml.j2"
dest: "/opt/db_backup/docker-compose.yml"
mode: '770'
owner: deployment
group: deployment
validate: docker-compose -f %s config
This probably falls into the Complex configuration validation cases linked in the documentation for the template module's validate parameter.
In any case, unless you completely refactor your current files and pass more variables in your environment (e.g. to allow .env to live outside the current directory), you cannot validate docker-compose.yml until both files are in the same location.
An easy scenario is to copy both files in place, validate them before doing anything further with them, and roll back to the previous version in case of error. The example below is far from bulletproof but will give you an idea:
---
- hosts: localhost
  gather_facts: false

  vars:
    IMAGE: alpine:latest
    deploy_dir: /tmp/validate_compose

  tasks:
    - name: "make sure {{ deploy_dir }} directory exists"
      file:
        path: "{{ deploy_dir }}"
        state: directory

    - name: copy project file templates
      template:
        src: "{{ item }}"
        dest: "{{ deploy_dir }}/{{ item | regex_replace('^(.*)\\.j2', '\\g<1>') }}"
        mode: 0640
        backup: true
      loop:
        - .env.j2
        - docker-compose.yml.j2
      register: copy_files

    - block:
        - name: check docker-compose file validity
          command:
            cmd: docker-compose config
            chdir: "{{ deploy_dir }}"

      rescue:
        - name: rollback configuration to previous version for changed files
          copy:
            src: "{{ item.backup_file }}"
            dest: "{{ item.dest }}"
            remote_src: true
          loop: "{{ copy_files.results | selectattr('backup_file', 'defined') }}"

        - name: Give some info about error.
          debug:
            msg:
              - The compose file did not validate.
              - Please see previous error above for details
              - Files have been rolled back to the latest known version.

        - name: Fail
          fail:

    - name: Rest of the playbook using the above validated files
      debug:
        msg: Next tasks...

How to create a Kafka Connect connector at worker creation time?

Suppose I have a Kafka Connect worker that I built using Docker from the confluentinc/cp-kafka-connect image deploying to a server and spinning up a worker. Now most of the time, the connector will already exist since I have created it using a REST API call to POST on port 8083. But how would I create my connector (if it doesn't already exist) via a script at worker start time? Can I somehow give my worker steps to run after it spins up?
It requires an overloaded command
Original issue: https://github.com/confluentinc/cp-docker-images/issues/467
Solution
volumes:
  - $PWD/scripts:/scripts # TODO: Create this folder ahead of time, on your host

command:
  - bash
  - -c
  - |
    /etc/confluent/docker/run &
    echo "Waiting for Kafka Connect to start listening on kafka-connect ⏳"
    while [ $$(curl -s -o /dev/null -w %{http_code} http://kafka-connect:8083/connectors) -eq 000 ] ; do
      echo -e $$(date) " Kafka Connect listener HTTP state: " $$(curl -s -o /dev/null -w %{http_code} http://kafka-connect:8083/connectors) " (waiting for 200)"
      sleep 5
    done
    nc -vz kafka-connect 8083
    echo -e "\n--\n+> Creating Kafka Connector(s)"
    /scripts/create-connectors.sh # Note: This script is stored externally from container
    sleep infinity
As cricket_007 says, you can embed it in the command with a call out to a mounted script, or you can just put it all inline, as in this example. If you do this, note that in the command section $ is replaced with $$ to avoid the error Invalid interpolation format for "command" option.
kafka-connect-01:
  image: confluentinc/cp-kafka-connect:5.4.0
  […]
  command:
    - bash
    - -c
    - |
      […]
      echo "Launching Kafka Connect worker"
      /etc/confluent/docker/run &
      #
      echo "Waiting for Kafka Connect to start listening on localhost ⏳"
      while : ; do
        curl_status=$$(curl -s -o /dev/null -w %{http_code} http://localhost:8083/connectors)
        echo -e $$(date) " Kafka Connect listener HTTP state: " $$curl_status " (waiting for 200)"
        if [ $$curl_status -eq 200 ] ; then
          break
        fi
        sleep 5
      done
      echo -e "\n--\n+> Creating Data Generator source"
      curl -s -X PUT -H "Content-Type:application/json" http://localhost:8083/connectors/source-datagen-01/config \
          -d '{
          "connector.class": "io.confluent.kafka.connect.datagen.DatagenConnector",
          "key.converter": "org.apache.kafka.connect.storage.StringConverter",
          "kafka.topic": "ratings",
          "max.interval":750,
          "quickstart": "ratings",
          "tasks.max": 1
          }'
      sleep infinity
If you use a tool like Ansible for automation, this config may be useful:
- hosts: kafka-connect-docker
  name: deploy kafka connect cluster
  become: yes
  gather_facts: yes
  serial: '{{ serial|default(1) }}'
  tasks:
    # it's not fully working example
    ...
    - name: run container
      notify: wait ports
      docker_container:
        name: kafka-connect
        image: "{{ docker_registry }}/kafka-connect:2.4.0-1.3.0"
        entrypoint: ["sh", "-c", "'exec /opt/kafka/bin/connect-distributed.sh /etc/kafka-connect/connect-distributed.properties >> /var/log/kafka-connect/stderrout.log 2>&1'"]
        restart_policy: always
        network_mode: host
        state: started

    - name: call wait ports
      command: /bin/true
      notify: wait ports

  handlers:
    - name: restart container
      shell: docker restart kafka-connect
      notify: wait ports

    - name: wait ports
      wait_for: port=10900 timeout=300 host=127.0.0.1
      changed_when: True
      notify: check cluster status

    - name: check cluster status
      uri:
        url: "http://127.0.0.1:10900/connectors"
        status_code: 200
      register: cluster_status_json_response
      until: cluster_status_json_response.status == 200
      retries: 60
      delay: 5

- hosts: kafka-connect-docker[0]
  name: deploy connectors configs
  become: yes
  tasks:
    - name: restore connectors configs
      uri:
        url: "http://127.0.0.1:10900/connectors/{{ item }}/config"
        method: PUT
        return_content: yes
        body_format: json
        headers:
          Accept: "application/json"
          Content-Type: "application/json"
        body: "{{ lookup('template', 'roles/kafka-connect/templates/etc/kafka-connect/tasks/' + item + '.json') }}"
        status_code: 200, 201
        timeout: 60
      with_items: "{{ connector_configs }}"

Reading multiple values from an env file with ansible and storing them as facts

I have the following code which reads values from an environment (.env) file and stores them as facts:
- name: Read values from environment
  shell: "source {{ env_path }}; echo $DB_PASSWORD"
  register: output
  args:
    executable: /bin/bash
  changed_when: false

- name: Store read password
  set_fact:
    db_password: "{{ output.stdout }}"
  when:
    - db_password is undefined
  changed_when: false

- name: Read values from environment
  shell: "source {{ env_path }}; echo $DB_USER"
  register: output
  args:
    executable: /bin/bash
  changed_when: false

- name: Store read user
  set_fact:
    db_user: "{{ output.stdout }}"
  when:
    - db_user is undefined
  changed_when: false

- name: Read values from environment
  shell: "source {{ env_path }}; echo $DB_NAME"
  register: output
  args:
    executable: /bin/bash
  changed_when: false

- name: Store read db_name
  set_fact:
    db_name: "{{ output.stdout }}"
  when:
    - db_name is undefined
  changed_when: false

- name: Container environment loaded; the following facts are now available for use by ansible
  debug: "var={{ item }}"
  with_items:
    - db_name
    - db_user
    - db_password
It's quite bulky and unwieldy. I would like to write it something like this, but I can't figure out how:
vars:
  values:
    - db_name
    - db_password
    - db_user

tasks:
  - name: Read values from environment
    shell: "source {{ env_path }}; echo {{ item|upper }}"
    register: output
    with_items: values
    args:
      executable: /bin/bash
    changed_when: false

  - name: Store read value
    set_fact:
      "{{ item.0 }}": "{{ item.1.stdout }}"
    when:
      - item.0 is undefined
    with_together:
      - values
      - output.results
    changed_when: false
Instead, I get this output:
ok: [default] => (item=values) => {"changed": false, "cmd": "source /var/www/mydomain.org/.env; echo VALUES", "delta": "0:00:00.002240", "end": "2017-02-15 15:25:15.338673", "item": "values", "rc": 0, "start": "2017-02-15 15:25:15.336433", "stderr": "", "stdout": "VALUES", "stdout_lines": ["VALUES"], "warnings": []}
TASK [sql-base : Store read password] ******************************************
skipping: [default] => (item=[u'values', u'output.results']) => {"changed": false, "item": ["values", "output.results"], "skip_reason": "Conditional check failed", "skipped": true}
Even more ideal of course would be if there is an ansible module I have overlooked that allows me to load values from an environment file.
Generally I would either put my variables into the inventory files themselves, or convert them to YAML format and use the include_vars module (you might be able to run a sed script to convert your environment file to YAML on the fly). Before using the code below, make sure you really are required to use those environment files and cannot easily use some other mechanism, like:
a YAML file with the include_vars module (sketched right after this list)
putting the config inside the inventory (no modules required)
putting the config into an Ansible vault file (no modules required, though you need to store the decryption key somewhere)
some other secure storage mechanism, like Hashicorp's Vault (with a plugin like https://github.com/jhaals/ansible-vault)
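For example, a minimal sketch of the include_vars route; the file name db_vars.yml and its contents are hypothetical, converted from the .env file shown further down:
db_vars.yml
db_name: abcd
db_password: defg
db_user: fghi
play (sketch):
- hosts: all
  tasks:
    - name: Load DB settings from a YAML vars file
      include_vars:
        file: db_vars.yml

    - name: Show the loaded values
      debug:
        msg: "NAME: {{ db_name }} PASS: {{ db_password }} USER: {{ db_user }}"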
Back to your code: it is actually almost correct; you were missing a dollar sign in the read task and some curly braces in the condition:
test.env
DB_NAME=abcd
DB_PASSWORD=defg
DB_USER=fghi
Note: make sure that this file adheres to sh standards, meaning you don't put spaces around the = sign. Something like DB_NAME = abcd will fail.
play.yml
- hosts: all
  connection: local
  vars:
    env_path: 'test.env'
    values:
      - db_name
      - db_password
      - db_user
  tasks:
    - name: Read values from environment
      shell: "source {{ env_path }}; echo ${{ item|upper }}"
      register: output
      with_items: "{{ values }}"
      args:
        executable: /bin/bash
      changed_when: false

    - name: Store read value
      set_fact:
        "{{ item.0 }}": "{{ item.1.stdout }}"
      when: '{{ item.0 }} is undefined'
      with_together:
        - "{{ values }}"
        - "{{ output.results }}"
      changed_when: false

    - name: Debug
      debug:
        msg: "NAME: {{db_name}} PASS: {{db_password}} USER: {{db_user}}"
Running with ansible-playbook -i 127.0.0.1, play.yml:
TASK [Debug] *******************************************************************
ok: [127.0.0.1] => {
"msg": "NAME: abcd PASS: defg USER: fghi"
}

Ansible - Include environment variables from external YML

I'm attempting to store all my environment variables in a file called variables.yml that looks like so:
---
doo: "external"
Then I have a playbook like so:
---
- hosts: localhost
  tasks:
    - name: "i can totally echo"
      environment:
        include: variables.yml
        ugh: 'internal'
      shell: echo "$doo vs $ugh"
      register: results

    - debug: msg="{{ results.stdout }}"
The result of the echo is ' vs internal'.
How can I change this so that the result is 'external vs internal'. Many thanks!
Assuming the external variable file, called variables.ext, is structured as follows
---
EXTERNAL:
  DOO: "external"
then, according to Setting the remote environment and Load variables from files, dynamically within a task, a small test could look like
---
- hosts: localhost
  become: false
  gather_facts: false

  tasks:
    - name: Load environment variables
      include_vars:
        file: variables.ext

    - name: Echo variables
      shell:
        cmd: 'echo "${DOO} vs ${UGH}"'
      environment:
        DOO: "{{ EXTERNAL.DOO }}"
        UGH: "internal"
      register: result

    - name: Show result
      debug:
        msg: "{{ result.stdout }}"
resulting in an output of
TASK [Show result] ********
ok: [localhost] =>
msg: external vs internal
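As a variation (a sketch, not part of the original answer): since environment accepts any mapping, the whole EXTERNAL dict loaded from the vars file could be passed to the task and merged with task-local values via the combine filter:
- name: Echo variables, exporting the whole EXTERNAL dict
  shell:
    cmd: 'echo "${DOO} vs ${UGH}"'
  environment: "{{ EXTERNAL | combine({'UGH': 'internal'}) }}"
  register: result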

Add swap memory with ansible

I'm working on a project where swap memory on my servers is needed to keep some long-running Python processes from going out of memory, and I realized for the first time that my Ubuntu Vagrant boxes and AWS Ubuntu instances don't already have one set up.
In https://github.com/ansible/ansible/issues/5241 a possible built-in solution was discussed but never implemented, so I'm guessing this should be a pretty common task to automate.
How would you set up file-based swap memory with Ansible in an idempotent way? What modules or variables does Ansible provide to help with this setup (like the ansible_swaptotal_mb variable)?
This is my current solution:
- name: Create swap file
  command: dd if=/dev/zero of={{ swap_file_path }} bs=1024 count={{ swap_file_size_mb }}k
           creates="{{ swap_file_path }}"
  tags:
    - swap.file.create

- name: Change swap file permissions
  file: path="{{ swap_file_path }}"
        owner=root
        group=root
        mode=0600
  tags:
    - swap.file.permissions

- name: "Check swap file type"
  command: file {{ swap_file_path }}
  register: swapfile
  tags:
    - swap.file.mkswap

- name: Make swap file
  command: "sudo mkswap {{ swap_file_path }}"
  when: swapfile.stdout.find('swap file') == -1
  tags:
    - swap.file.mkswap

- name: Write swap entry in fstab
  mount: name=none
         src={{ swap_file_path }}
         fstype=swap
         opts=sw
         passno=0
         dump=0
         state=present
  tags:
    - swap.fstab

- name: Mount swap
  command: "swapon {{ swap_file_path }}"
  when: ansible_swaptotal_mb < 1
  tags:
    - swap.file.swapon
I tried the answer above, but the "Check swap file type" task always came back as changed, so it isn't idempotent, which is encouraged as a best practice when writing Ansible tasks.
The role below has been tested on Ubuntu 14.04 Trusty and doesn't require gather_facts to be enabled.
- name: Set swap_file variable
  set_fact:
    swap_file: "{{swap_file_path}}"
  tags:
    - swap.set.file.path

- name: Check if swap file exists
  stat:
    path: "{{swap_file}}"
  register: swap_file_check
  tags:
    - swap.file.check

- name: Create swap file
  command: fallocate -l {{swap_file_size}} {{swap_file}}
  when: not swap_file_check.stat.exists
  tags:
    - swap.file.create

- name: Change swap file permissions
  file: path="{{swap_file}}"
        owner=root
        group=root
        mode=0600
  tags:
    - swap.file.permissions

- name: Format swap file
  sudo: yes
  command: "mkswap {{swap_file}}"
  when: not swap_file_check.stat.exists
  tags:
    - swap.file.mkswap

- name: Write swap entry in fstab
  mount: name=none
         src={{swap_file}}
         fstype=swap
         opts=sw
         passno=0
         dump=0
         state=present
  tags:
    - swap.fstab

- name: Turn on swap
  sudo: yes
  command: swapon -a
  when: not swap_file_check.stat.exists
  tags:
    - swap.turn.on

- name: Set swappiness
  sudo: yes
  sysctl:
    name: vm.swappiness
    value: "{{swappiness}}"
  tags:
    - swap.set.swappiness
As of Ansible 2.9, the sudo declarations should be replaced with become:
- name: Set swap_file variable
  set_fact:
    swap_file: "{{swap_file_path}}"
  tags:
    - swap.set.file.path

- name: Check if swap file exists
  stat:
    path: "{{swap_file}}"
  register: swap_file_check
  tags:
    - swap.file.check

- name: Create swap file
  command: fallocate -l {{swap_file_size}} {{swap_file}}
  when: not swap_file_check.stat.exists
  tags:
    - swap.file.create

- name: Change swap file permissions
  file: path="{{swap_file}}"
        owner=root
        group=root
        mode=0600
  tags:
    - swap.file.permissions

- name: Format swap file
  become: yes
  command: "mkswap {{swap_file}}"
  when: not swap_file_check.stat.exists
  tags:
    - swap.file.mkswap

- name: Write swap entry in fstab
  mount: name=none
         src={{swap_file}}
         fstype=swap
         opts=sw
         passno=0
         dump=0
         state=present
  tags:
    - swap.fstab

- name: Turn on swap
  become: yes
  command: swapon -a
  when: not swap_file_check.stat.exists
  tags:
    - swap.turn.on

- name: Set swappiness
  become: yes
  sysctl:
    name: vm.swappiness
    value: "{{swappiness}}"
  tags:
    - swap.set.swappiness
Vars required:
swap_file_path: /swapfile
# Use any of the following suffixes
# c=1
# w=2
# b=512
# kB=1000
# K=1024
# MB=1000*1000
# M=1024*1024
# xM=M
# GB=1000*1000*1000
# G=1024*1024*1024
swap_file_size: 4G
swappiness: 1
I based my take on Ashley's great answer (please upvote it!) and added the following extra features:
global flag to manage the swap or not,
allow changing the swap size,
allow disabling swap,
...plus these technical improvements:
full idempotency (only ok & skipping when nothing is changed, otherwise some changed),
use dd instead of fallocate for compatibility with XFS fs (see this answer for more info),
Limitations:
changing an existing swap file works correctly only if you provide its path as swap_file_path.
Tested on CentOS 7.7 with Ansible 2.9 and Rocky 8 with Ansible 4.8.0.
Requires privilege escalation, as many of the commands need to run as root. (The easiest way is to add --become to the ansible-playbook arguments or to set the appropriate value in your ansible.cfg.)
Parameters:
swap_configure: true # or false
swap_enable: true # or false
swap_file_path: /swapfile
swap_file_size_mb: 4096
swappiness: 1
Code:
- name: Configure swap
when: swap_configure | bool
block:
- name: Check if swap file exists
stat:
path: "{{swap_file_path}}"
get_checksum: false
get_md5: false
register: swap_file_check
changed_when: false
- name: Set variable for existing swap file size
set_fact:
swap_file_existing_size_mb: "{{ (swap_file_check.stat.size / 1024 / 1024) | int }}"
when: swap_file_check.stat.exists
- name: Check if swap is on
shell: swapon --show | grep {{swap_file_path}}
register: swap_is_enabled
changed_when: false
failed_when: false
- name: Disable swap
command: swapoff {{swap_file_path}}
register: swap_disabled
when: >
swap_file_check.stat.exists
and 'rc' in swap_is_enabled and swap_is_enabled.rc == 0
and (not swap_enable or (swap_enable and swap_file_existing_size_mb != swap_file_size_mb))
- name: Delete the swap file
file:
path: "{{swap_file_path}}"
state: absent
when: not swap_enable
- name: Remove swap entry from fstab
mount:
name: none
src: "{{swap_file_path}}"
fstype: swap
opts: sw
passno: '0'
dump: '0'
state: present
when: not swap_enable
- name: Configure swap
when: swap_enable | bool
block:
- name: Create or change the size of swap file
command: dd if=/dev/zero of={{swap_file_path}} count={{swap_file_size_mb}} bs=1MiB
register: swap_file_created
when: >
not swap_file_check.stat.exists
or swap_file_existing_size_mb != swap_file_size_mb
- name: Change swap file permissions
file:
path: "{{swap_file_path}}"
mode: 0600
- name: Check if swap is formatted
shell: file {{swap_file_path}} | grep 'swap file'
register: swap_file_is_formatted
changed_when: false
failed_when: false
- name: Format swap file if it's not formatted
command: mkswap {{swap_file_path}}
when: >
('rc' in swap_file_is_formatted and swap_file_is_formatted.rc > 0)
or swap_file_created.changed
- name: Add swap entry to fstab
mount:
name: none
src: "{{swap_file_path}}"
fstype: swap
opts: sw
passno: '0'
dump: '0'
state: present
- name: Turn on swap
shell: swapon -a
# if swap was disabled from the start
# or has been disabled to change its params
when: >
('rc' in swap_is_enabled and swap_is_enabled.rc != 0)
or swap_disabled.changed
- name: Configure swappiness
sysctl:
name: vm.swappiness
value: "{{ swappiness|string }}"
state: present
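For reference, a minimal wrapper play showing how the parameters and privilege escalation fit together (a sketch; swap_tasks.yml is a hypothetical file holding the tasks above, and the values are only examples):
- hosts: all
  become: true                 # equivalent to passing --become on the command line
  vars:
    swap_configure: true
    swap_enable: true
    swap_file_path: /swapfile
    swap_file_size_mb: 4096
    swappiness: 1
  tasks:
    - name: Run the swap configuration tasks
      import_tasks: swap_tasks.yml   # hypothetical file containing the block above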
I'm not able to reply to Greg Dubicki's answer, so I'll add my 2 cents here:
I think casting to int when doing calculations on swap_file_size_mb, i.e. turning swap_file_size_mb * 1024 * 1024 into swap_file_size_mb | int * 1024 * 1024, makes the code a bit smarter in case you want to use a fact to derive the final swap size from the amount of RAM installed, like:
swap_file_size_mb: "{{ ansible_memory_mb['real']['total'] * 2 }}"
Otherwise it will resize the swap file on every run, even if the size is the same.
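Applied to the condition of the "Create or change the size of swap file" task above, the suggested cast would look roughly like this (a sketch of the proposed change, not the answer's original code):
- name: Create or change the size of swap file
  command: dd if=/dev/zero of={{ swap_file_path }} count={{ swap_file_size_mb }} bs=1MiB
  register: swap_file_created
  when: >
    not swap_file_check.stat.exists
    or (swap_file_existing_size_mb | int) != (swap_file_size_mb | int)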
Here is the ansible-swap playbook that I use to install 4GB (or whatever I configure group_vars for dd_bs_size_mb * swap_count) of swap space on new servers:
https://github.com/tribou/ansible-swap
I also added a function in my ~/.bash_profile to help with the task:
# Path to where you clone the repo
ANSIBLE_SWAP_PLAYBOOK=$HOME/dev/ansible-swap

install-swap ()
{
    usage='Usage: install-swap HOST';
    if [ $# -ne 1 ]; then
        echo "$usage";
        return 1;
    fi;
    ansible-playbook $ANSIBLE_SWAP_PLAYBOOK/ansible-swap/site.yml --extra-vars "target=$1"
}
Just make sure to add your HOST to your ansible inventory first.
Then it's just install-swap HOST.
Here is a preexisting role on Galaxy which achieves the same thing: https://galaxy.ansible.com/geerlingguy/swap
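A minimal sketch of applying that Galaxy role after installing it with ansible-galaxy install geerlingguy.swap; its swap size and swappiness defaults can be overridden through the variables documented in the role's README:
- hosts: all
  become: true
  roles:
    - geerlingguy.swap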
What works for me is to add 4 GiB of swap through the playbook.
