Ansible validate docker-compose with env_file and send to host - docker

I created a role that has two files in its templates folder: docker-compose.yml.j2 and env.j2.
The env file is used by the docker-compose file:
version: "2"
services:
  service_name:
    image: {{ IMAGE | mandatory }}
    container_name: service_name
    mem_limit: 256m
    user: "2001"
    env_file: ".env"
Now my question: is there an Ansible module that copies the docker-compose file to the host and validates it there, since the .env file and docker-compose.yml would then be in the same folder on the host machine?
The following tasks return an error, because the env file is not in the templates folder on the controller; it only exists on the host.
- name: "Copy env file"
ansible.builtin.template:
src: "env.j2"
dest: "/opt/db_backup/.env"
mode: '770'
owner: deployment
group: deployment
- name: "Validate and copy docker compose file"
ansible.builtin.template:
src: "docker-compose.yml.j2"
dest: "/opt/db_backup/docker-compose.yml"
mode: '770'
owner: deployment
group: deployment
validate: docker-compose -f %s config

This probably falls into the complex validation configuration cases linked in the documentation for the template module's validate parameter.
In any case, unless you completely refactor your current file and pass more variables through your environment (e.g. to allow .env to live outside the current directory), you cannot validate docker-compose.yml until both files are in the same location.
An easy scenario would be to copy both files into place, validate them prior to doing anything with them, and roll back to the previous version in case of error. The below example is far from bulletproof but will give you an idea:
---
- hosts: localhost
  gather_facts: false

  vars:
    IMAGE: alpine:latest
    deploy_dir: /tmp/validate_compose

  tasks:
    - name: "make sure {{ deploy_dir }} directory exists"
      file:
        path: "{{ deploy_dir }}"
        state: directory

    - name: copy project file templates
      template:
        src: "{{ item }}"
        dest: "{{ deploy_dir }}/{{ item | regex_replace('^(.*)\\.j2', '\\g<1>') }}"
        mode: 0640
        backup: true
      loop:
        - .env.j2
        - docker-compose.yml.j2
      register: copy_files

    - block:
        - name: check docker-compose file validity
          command:
            cmd: docker-compose config
            chdir: "{{ deploy_dir }}"

      rescue:
        - name: rollback configuration to previous version for changed files
          copy:
            src: "{{ item.backup_file }}"
            dest: "{{ item.dest }}"
            remote_src: true
          loop: "{{ copy_files.results | selectattr('backup_file', 'defined') }}"

        - name: Give some info about error.
          debug:
            msg:
              - The compose file did not validate.
              - Please see previous error above for details
              - Files have been rolled back to the latest known version.

        - name: Fail
          fail:

    - name: Rest of the playbook using the above validated files
      debug:
        msg: Next tasks...

Related

Ansible molecule using docker - how to specify memory limit

I have a molecule test which spins up 2 Docker containers, for testing 2 application versions at once.
dependency:
  name: galaxy
driver:
  name: docker
lint:
  name: yamllint
platforms:
  - name: molecule1
    hostname: molecule1
    image: "geerlingguy/docker-${MOLECULE_DISTRO:-centos7}-ansible:latest"
    command: ${MOLECULE_DOCKER_COMMAND:-""}
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
    privileged: true
    pre_build_image: true
  - name: molecule2
    hostname: molecule2
    image: "geerlingguy/docker-${MOLECULE_DISTRO:-centos7}-ansible:latest"
    command: ${MOLECULE_DOCKER_COMMAND:-""}
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
    privileged: true
    pre_build_image: true
provisioner:
  name: ansible
  inventory:
    host_vars:
      molecule1:
        app_version: "v1"
      molecule2:
        app_version: "v2"
  lint:
    name: ansible-lint
scenario:
  name: default
  converge_sequence:
    - syntax
    - lint
    - create
    - prepare
    - converge
    - idempotence
    - verify
verifier:
  name: goss
  lint:
    name: yamllint
I am looking for a way to specify the memory like -m or --memory= as described here.
I understand that molecule makes use of the docker_container ansible module, which supports the memory parameter, but somehow I cannot find a way to make this work in molecule.
Any ideas how to accomplish this?
PS: My guess is that this parameter is not yet implemented in molecule, if my assumption is correct that this is the relevant implementation.
Thanks in advance.
++Update++
--memory is indeed not yet implemented in the molecule docker provisioner.
If anybody is interested, here is the relevant change to the source code:
diff --git a/molecule/provisioner/ansible/playbooks/docker/create.yml b/molecule/provisioner/ansible/playbooks/docker/create.yml
index 7a04b851..023a720a 100644
--- a/molecule/provisioner/ansible/playbooks/docker/create.yml
+++ b/molecule/provisioner/ansible/playbooks/docker/create.yml
@@ -121,6 +121,8 @@
         hostname: "{{ item.hostname | default(item.name) }}"
         image: "{{ item.pre_build_image | default(false) | ternary('', 'molecule_local/') }}{{ item.image }}"
         pull: "{{ item.pull | default(omit) }}"
+        kernel_memory: "{{ item.kernel_memory | default(omit) }}"
+        memory: "{{ item.memory | default(omit) }}"
         state: started
         recreate: false
         log_driver: json-file
My fork has now been merged to Molecule.
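With that change in place, a platforms entry can pass the limit straight through to the docker_container module. A minimal sketch of how the molecule.yml could then look (the 512m values are just examples, not taken from the original config):

platforms:
  - name: molecule1
    hostname: molecule1
    image: "geerlingguy/docker-centos7-ansible:latest"
    pre_build_image: true
    memory: 512m          # forwarded to docker_container's memory parameter
    kernel_memory: 512m   # optional, forwarded the same way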

Ansible - Download latest release binary from Github repo

With Ansible, please advise how I could download the latest release binary from a GitHub repository. As per my current understanding, the steps would be:
a. get the URL of the latest release
b. download the release
For a. I have something like the following, which does not provide the actual release version (e.g. v0.11.53):
- name: get latest Gogs release
  local_action:
    module: uri
    url: https://github.com/gogits/gogs/releases/latest
    method: GET
    follow_redirects: no
    status_code: 301
  register: release_url
For b. I have the below, which works but needs constant updating. Instead of the hard-coded version, I would need the variable set in a.:
- name: download latest
  become: yes
  become_user: "{{ gogs_user }}"
  get_url:
    url: https://github.com/gogs/gogs/releases/download/v0.11.53/linux_amd64.tar.gz
    dest: "/home/{{ gogs_user }}/linux_amd64.tar.gz"
Thank you!
GitHub has a documented API for manipulating releases.
So, imagine you want to get the latest release of ansible (which belongs to the project ansible). You would call the URL https://api.github.com/repos/ansible/ansible/releases/latest and get a JSON structure like this:
{
  "url": "https://api.github.com/repos/ansible/ansible/releases/5120666",
  "assets_url": "https://api.github.com/repos/ansible/ansible/releases/5120666/assets",
  "upload_url": "https://uploads.github.com/repos/ansible/ansible/releases/5120666/assets{?name,label}",
  "html_url": "https://github.com/ansible/ansible/releases/tag/v2.2.1.0-0.3.rc3",
  "id": 5120666,
  "node_id": "MDc6UmVsZWFzZTUxMjA2NjY=",
  "tag_name": "v2.2.1.0-0.3.rc3",
  "target_commitish": "devel",
  "name": "THESE ARE NOT OUR OFFICIAL RELEASES",
  ...
  },
  "prerelease": false,
  "created_at": "2017-01-09T16:49:01Z",
  "published_at": "2017-01-10T20:09:37Z",
  "assets": [
  ],
  "tarball_url": "https://api.github.com/repos/ansible/ansible/tarball/v2.2.1.0-0.3.rc3",
  "zipball_url": "https://api.github.com/repos/ansible/ansible/zipball/v2.2.1.0-0.3.rc3",
  "body": "For official tarballs go to https://releases.ansible.com\n"
}
Get the value of the key tarball_url, then download the URL you just retrieved. In Ansible code, that would be:
- hosts: localhost
  tasks:
    - uri:
        url: https://api.github.com/repos/ansible/ansible/releases/latest
        return_content: true
      register: json_response

    - get_url:
        url: "{{ json_response.json.tarball_url }}"
        dest: ./ansible-latest.tar.gz
I let you adapt the proper parameters to answer your question :)
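If you want the release tag itself rather than the tarball (as in the original question), the same response exposes it as tag_name. A sketch along those lines, assuming the asset keeps the linux_amd64.tar.gz naming used in the question:

- hosts: localhost
  tasks:
    - uri:
        url: https://api.github.com/repos/gogs/gogs/releases/latest
        return_content: true
      register: release_info

    - get_url:
        # the tag (e.g. v0.11.53) now comes from the API instead of being hard-coded
        url: "https://github.com/gogs/gogs/releases/download/{{ release_info.json.tag_name }}/linux_amd64.tar.gz"
        dest: "/home/{{ gogs_user }}/linux_amd64.tar.gz"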
I am using the following recipe to download and extract the latest watchexec binary for Linux from GitHub releases.
- hosts: localhost
  tasks:
    - name: check latest watchexec
      uri:
        url: https://api.github.com/repos/watchexec/watchexec/releases/latest
        return_content: true
      register: watchexec_latest

    - name: "installing watchexec {{ watchexec_latest.json.tag_name }}"
      loop: "{{ watchexec_latest.json.assets }}"
      when: "'x86_64-unknown-linux-musl.tar.xz' in item.name"
      unarchive:
        remote_src: yes
        src: "{{ item.browser_download_url }}"
        dest: "{{ ansible_env.HOME }}/bin/"
        keep_newer: yes
        extra_opts:
          - --strip=1
          - --no-anchored
          - watchexec
The tar extra_opts are explained here.
This still downloads the binary every time the playbook is called. As an improvement, it might be possible to use set_fact to cache the node_id attribute that corresponds to the unpacked file.
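One way such caching could look, as a rough sketch (the marker file path is hypothetical): write the release's node_id to a marker file with the idempotent copy module, and only run the unarchive task when that marker changed.

- name: remember which release is currently unpacked (hypothetical marker file)
  copy:
    content: "{{ watchexec_latest.json.node_id }}"
    dest: "{{ ansible_env.HOME }}/bin/.watchexec_release"
  register: release_marker

# ...then add "when: release_marker.changed" to the unarchive task above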
Another approach is to use the Ansible github_release module to get the latest tag. Example:
- name: Get gogs latest tag
  github_release:
    user: gogs
    repo: gogs
    action: latest_release
  register: gogs_latest

- name: Grab gogs latest binaries
  unarchive:
    src: "https://github.com/gogs/gogs/releases/download/{{ gogs_latest['tag'] }}/gogs_{{ gogs_latest['tag'] | regex_replace('^v','') }}_linux_amd64.zip"
    dest: /usr/local/bin
    remote_src: true
The regex part at the end strips the v at the beginning of the tag, since the asset name format on GitHub for gogs is now gogs_0.12.3_linux_armv7.zip while the latest tag includes a v.
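In isolation, the filter does just this (a throwaway debug example):

- debug:
    msg: "{{ 'v0.12.3' | regex_replace('^v', '') }}"   # prints 0.12.3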
I got inspired and extended the play. After downloading, I append the version number to the binary and symlink the program to it, so on the next run I can check whether the version is still up to date. Otherwise it is downloaded and linked again.
- hosts: localhost
  become: false
  vars:
    org: watchexec
    repo: watchexec
    filename: x86_64-unknown-linux-gnu.tar.xz
    version: latest
    project_url: https://api.github.com/repos/{{ org }}/{{ repo }}/releases
  tasks:
    - name: check {{ repo }} version
      uri:
        url: "{{ project_url }}/{{ version }}"
        return_content: true
      register: latest_version

    - name: check if {{ repo }}-{{ version }} already there
      stat:
        path: "{{ ansible_env.HOME }}/.local/bin/{{ repo }}-{{ latest_version.json.tag_name }}"
      register: newestbinary

    - name: download and link to version {{ version }}
      block:
        - name: create tempfile
          tempfile:
            state: directory
            suffix: dwnld
          register: tempfolder_1

        - name: "installing {{ repo }} {{ latest_version.json.tag_name }}"
          # idea from: https://stackoverflow.com/a/62672308/886659
          loop: "{{ latest_version.json.assets }}"
          when: "filename|string in item.name"
          unarchive:
            remote_src: yes
            src: "{{ item.browser_download_url }}"
            dest: "{{ tempfolder_1.path }}"
            keep_newer: yes
            extra_opts:
              - --strip=1
              - --no-anchored
              - "{{ repo }}"

        - name: command because no mv available
          command: mv "{{ tempfolder_1.path }}/{{ repo }}" "{{ ansible_env.HOME }}/.local/bin/{{ repo }}-{{ latest_version.json.tag_name }}"
          args:
            creates: "{{ ansible_env.HOME }}/.local/bin/{{ repo }}-{{ latest_version.json.tag_name }}"

        - name: "link {{ repo }}-{{ latest_version.json.tag_name }} -> {{ repo }}"
          file:
            src: "{{ ansible_env.HOME }}/.local/bin/{{ repo }}-{{ latest_version.json.tag_name }}"
            dest: "{{ ansible_env.HOME }}/.local/bin/{{ repo }}"
            state: link
            force: yes
      when: not newestbinary.stat.exists
      always:
        - name: delete {{ tempfolder_1.path|default("tempfolder") }}
          file:
            path: "{{ tempfolder_1.path }}"
            state: absent
          when: tempfolder_1.path is defined
          ignore_errors: true
      # vim:ft=yaml.ansible:
Here is the file on GitHub.
Download the latest release with the help of json_query.
Note: you might need to_json|from_json as a workaround for this issue.
- name: get the latest release details
  uri:
    url: https://api.github.com/repos/prometheus/node_exporter/releases/latest
    method: GET
  register: node_exp_release
  delegate_to: localhost

- name: set the release facts
  set_fact:
    file_name: "{{ node_exp_latest.name }}"
    download_url: "{{ node_exp_latest.browser_download_url }}"
  vars:
    node_exp_latest: "{{ node_exp_release.json|to_json|from_json|json_query('assets[?ends_with(name,`linux-amd64.tar.gz`)]')|first }}"

- name: download the latest release
  get_url:
    url: "{{ download_url }}"
    dest: /tmp/
I had a similar situation, but the project didn't use releases, so I found this workaround. It feels hackier than it should be, but it gets the latest tag and uses that in lieu of releases when they don't exist.
- name: set headers more location
  set_fact:
    headers_more_location: /srv/headers-more-nginx-module

- name: git clone headers-more-nginx-module
  git:
    repo: https://github.com/openresty/headers-more-nginx-module
    dest: "{{ headers_more_location }}"
    depth: 1
    update: no

- name: get new tags from remote
  command: "git fetch --tags"
  args:
    chdir: "{{ headers_more_location }}"

- name: get latest tag
  command: "git rev-list --tags --max-count=1"
  args:
    chdir: "{{ headers_more_location }}"
  register: tags_rev_list

- name: get latest tag name
  command: "git describe --tags {{ tags_rev_list.stdout }}"
  args:
    chdir: "{{ headers_more_location }}"
  register: latest_tag

- name: checkout the latest version
  command: "git checkout {{ latest_tag.stdout }}"
  args:
    chdir: "{{ headers_more_location }}"
I needed to download the latest docker-compose release from GitHub, so I came up with the solution below. I know there are other ways to do this, because the latest-release content that is returned also contains a direct download link, for example. However, I found this solution quite elegant.
- name: Get all docker-compose releases
  uri:
    url: "https://api.github.com/repos/docker/compose/releases/latest"
    return_content: yes
  register: release

- name: Set latest docker-compose release
  set_fact:
    latest_version: "{{ release.json.tag_name }}"

- name: Install docker-compose
  get_url:
    url: "https://github.com/docker/compose/releases/download/{{ latest_version }}/docker-compose-linux-{{ ansible_architecture }}"
    dest: /usr/local/bin/docker-compose
    mode: '755'
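An optional refinement, sketched under the assumption that the installed binary reports its tag via --version: skip the download when the installed version already matches the latest tag.

- name: Check installed docker-compose version (binary may not exist yet)
  command: /usr/local/bin/docker-compose --version
  register: compose_current
  changed_when: false
  failed_when: false

- name: Install docker-compose only when the tag is new
  get_url:
    url: "https://github.com/docker/compose/releases/download/{{ latest_version }}/docker-compose-linux-{{ ansible_architecture }}"
    dest: /usr/local/bin/docker-compose
    mode: '755'
  when: latest_version not in (compose_current.stdout | default(''))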

building docker container with startup scripts via ansible

I am trying to build a Docker container which should include startup scripts in the container's /etc/my_init.d directory via Ansible. I have difficulty finding any documentation on how to do this. Here is the relevant portion of my YAML file:
- name: Create container
  docker:
    name: myserver
    image: "{{ docker_repo }}/myserver:{{ server.version }}"
    state: started
    restart_policy: always
    docker_api_version: 1.18
    registry: "{{ docker_repo }}"
    username: "{{ registry_user }}"
    password: "{{ registry_password }}"
    links:
      - "mywebservices"
    ports:
      - "8000:8000"
      - "9899:9899"
    volumes:
      - "{{ myserver_home_dir }}/logs:/var/log/my_server"
    env:
      MY_ENVIRONMENT: "{{ my_environment }}"
  when: myserver_action == "create"
        or (myserver_action == "diff-create" and myserver.changed)
        or myserver_action == "update"
What should I add in here to tell ansible to put my files into container's /etc/my_init.d during build?
First of all, you can't build a container (you can only start one); you build images.
Second, the docker module is deprecated; use docker_image to build images.
You should copy your files into a build directory (with the copy or synchronize modules), for example:
/tmp/build
Then create a Dockerfile that will take them from the build directory and add them into your image, as sketched below.
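A minimal sketch of that step (the base image and the local script directory are assumptions, not taken from the question; /etc/my_init.d suggests a phusion/baseimage-style image):

- name: Copy startup scripts into the build directory
  copy:
    src: files/my_init.d/        # hypothetical local directory with your scripts
    dest: /tmp/build/my_init.d/

- name: Write a Dockerfile that bakes the scripts into the image
  copy:
    dest: /tmp/build/Dockerfile
    content: |
      FROM phusion/baseimage
      COPY my_init.d/ /etc/my_init.d/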
After that call docker_image:
docker_image:
  path: /tmp/build
  name: myimage
Finally start your container:
docker_container:
  image: myimage
  name: mycontainer
Unsure if it's relevant, as I don't know what your startup Ansible content is doing, but it's probably worth looking at the Ansible Container project.
https://github.com/ansible/ansible-container
You can build your container images using Ansible roles instead of a Dockerfile, orchestrate them locally, and deploy them to production Kubernetes or Red Hat OpenShift.

Ansible - Include environment variables from external YML

I'm attempting to store all my environment variables in a file called variables.yml that looks like so:
---
doo: "external"
Then I have a playbook like so:
---
- hosts: localhost
  tasks:
    - name: "i can totally echo"
      environment:
        include: variables.yml
        ugh: 'internal'
      shell: echo "$doo vs $ugh"
      register: results

    - debug: msg="{{ results.stdout }}"
The result of the echo is ' vs internal'.
How can I change this so that the result is 'external vs internal'? Many thanks!
Assuming the external variable file called variables.ext is structured as follows
---
EXTERNAL:
  DOO: "external"
then, according to Setting the remote environment and Load variables from files, dynamically within a task, a small test could look like
---
- hosts: localhost
  become: false
  gather_facts: false

  tasks:
    - name: Load environment variables
      include_vars:
        file: variables.ext

    - name: Echo variables
      shell:
        cmd: 'echo "${DOO} vs ${UGH}"'
      environment:
        DOO: "{{ EXTERNAL.DOO }}"
        UGH: "internal"
      register: result

    - name: Show result
      debug:
        msg: "{{ result.stdout }}"
resulting in an output of
TASK [Show result] ********
ok: [localhost] =>
  msg: external vs internal
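Since environment accepts a mapping, the whole loaded dict can also be passed in one go when its keys already are the variable names to export. A minimal variation of the echo task above, using the combine filter to merge in the internal variable:

- name: Echo variables
  shell:
    cmd: 'echo "${DOO} vs ${UGH}"'
  environment: "{{ EXTERNAL | combine({'UGH': 'internal'}) }}"
  register: result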

Add swap memory with ansible

I'm working on a project where swap memory on my servers is needed to keep some long-running Python processes from running out of memory, and I realized for the first time that my Ubuntu Vagrant boxes and AWS Ubuntu instances didn't already have any set up.
In https://github.com/ansible/ansible/issues/5241 a possible built-in solution was discussed but never implemented, so I'm guessing this should be a pretty common task to automate.
How would you set up file-based swap memory with Ansible in an idempotent way? What modules or variables does Ansible provide to help with this setup (like the ansible_swaptotal_mb variable)?
This is my current solution:
- name: Create swap file
  command: dd if=/dev/zero of={{ swap_file_path }} bs=1024 count={{ swap_file_size_mb }}k
           creates="{{ swap_file_path }}"
  tags:
    - swap.file.create

- name: Change swap file permissions
  file: path="{{ swap_file_path }}"
        owner=root
        group=root
        mode=0600
  tags:
    - swap.file.permissions

- name: "Check swap file type"
  command: file {{ swap_file_path }}
  register: swapfile
  tags:
    - swap.file.mkswap

- name: Make swap file
  command: "sudo mkswap {{ swap_file_path }}"
  when: swapfile.stdout.find('swap file') == -1
  tags:
    - swap.file.mkswap

- name: Write swap entry in fstab
  mount: name=none
         src={{ swap_file_path }}
         fstype=swap
         opts=sw
         passno=0
         dump=0
         state=present
  tags:
    - swap.fstab

- name: Mount swap
  command: "swapon {{ swap_file_path }}"
  when: ansible_swaptotal_mb < 1
  tags:
    - swap.file.swapon
I tried the answer above, but "Check swap file type" always came back as changed, so it isn't idempotent, which is encouraged as a best practice when writing Ansible tasks.
The role below has been tested on Ubuntu 14.04 Trusty and doesn't require gather_facts to be enabled.
- name: Set swap_file variable
  set_fact:
    swap_file: "{{ swap_file_path }}"
  tags:
    - swap.set.file.path

- name: Check if swap file exists
  stat:
    path: "{{ swap_file }}"
  register: swap_file_check
  tags:
    - swap.file.check

- name: Create swap file
  command: fallocate -l {{ swap_file_size }} {{ swap_file }}
  when: not swap_file_check.stat.exists
  tags:
    - swap.file.create

- name: Change swap file permissions
  file: path="{{ swap_file }}"
        owner=root
        group=root
        mode=0600
  tags:
    - swap.file.permissions

- name: Format swap file
  sudo: yes
  command: "mkswap {{ swap_file }}"
  when: not swap_file_check.stat.exists
  tags:
    - swap.file.mkswap

- name: Write swap entry in fstab
  mount: name=none
         src={{ swap_file }}
         fstype=swap
         opts=sw
         passno=0
         dump=0
         state=present
  tags:
    - swap.fstab

- name: Turn on swap
  sudo: yes
  command: swapon -a
  when: not swap_file_check.stat.exists
  tags:
    - swap.turn.on

- name: Set swappiness
  sudo: yes
  sysctl:
    name: vm.swappiness
    value: "{{ swappiness }}"
  tags:
    - swap.set.swappiness
As of Ansible 2.9, sudo declarations should be become:
- name: Set swap_file variable
  set_fact:
    swap_file: "{{ swap_file_path }}"
  tags:
    - swap.set.file.path

- name: Check if swap file exists
  stat:
    path: "{{ swap_file }}"
  register: swap_file_check
  tags:
    - swap.file.check

- name: Create swap file
  command: fallocate -l {{ swap_file_size }} {{ swap_file }}
  when: not swap_file_check.stat.exists
  tags:
    - swap.file.create

- name: Change swap file permissions
  file: path="{{ swap_file }}"
        owner=root
        group=root
        mode=0600
  tags:
    - swap.file.permissions

- name: Format swap file
  become: yes
  command: "mkswap {{ swap_file }}"
  when: not swap_file_check.stat.exists
  tags:
    - swap.file.mkswap

- name: Write swap entry in fstab
  mount: name=none
         src={{ swap_file }}
         fstype=swap
         opts=sw
         passno=0
         dump=0
         state=present
  tags:
    - swap.fstab

- name: Turn on swap
  become: yes
  command: swapon -a
  when: not swap_file_check.stat.exists
  tags:
    - swap.turn.on

- name: Set swappiness
  become: yes
  sysctl:
    name: vm.swappiness
    value: "{{ swappiness }}"
  tags:
    - swap.set.swappiness
Vars required:
swap_file_path: /swapfile
# Use any of the following suffixes
# c=1
# w=2
# b=512
# kB=1000
# K=1024
# MB=1000*1000
# M=1024*1024
# xM=M
# GB=1000*1000*1000
# G=1024*1024*1024
swap_file_size: 4G
swappiness: 1
I based my take on Ashley's great answer (please upvote it!) and added the following extra features:
global flag to manage the swap or not,
allow changing the swap size,
allow disabling swap,
...plus these technical improvements:
full idempotency (only ok & skipping when nothing is changed, otherwise some changed),
use dd instead of fallocate for compatibility with XFS fs (see this answer for more info),
Limitations:
changing the existing swap file works correctly only if you provide its path as swap_file_path.
Tested on CentOS 7.7 with Ansible 2.9 and Rocky 8 with Ansible 4.8.0.
Requires using privilege escalation as many of the commands need to run as root. (The easiest way to do it is to add --become to ansible-playbook arguments or to set the appropriate value in your ansible.cfg).
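For example, on the command line (the playbook name here is hypothetical):
ansible-playbook swap.yml --become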
Parameters:
swap_configure: true # or false
swap_enable: true # or false
swap_file_path: /swapfile
swap_file_size_mb: 4096
swappiness: 1
Code:
- name: Configure swap
  when: swap_configure | bool
  block:

    - name: Check if swap file exists
      stat:
        path: "{{ swap_file_path }}"
        get_checksum: false
        get_md5: false
      register: swap_file_check
      changed_when: false

    - name: Set variable for existing swap file size
      set_fact:
        swap_file_existing_size_mb: "{{ (swap_file_check.stat.size / 1024 / 1024) | int }}"
      when: swap_file_check.stat.exists

    - name: Check if swap is on
      shell: swapon --show | grep {{ swap_file_path }}
      register: swap_is_enabled
      changed_when: false
      failed_when: false

    - name: Disable swap
      command: swapoff {{ swap_file_path }}
      register: swap_disabled
      when: >
        swap_file_check.stat.exists
        and 'rc' in swap_is_enabled and swap_is_enabled.rc == 0
        and (not swap_enable or (swap_enable and swap_file_existing_size_mb != swap_file_size_mb))

    - name: Delete the swap file
      file:
        path: "{{ swap_file_path }}"
        state: absent
      when: not swap_enable

    - name: Remove swap entry from fstab
      mount:
        name: none
        src: "{{ swap_file_path }}"
        fstype: swap
        opts: sw
        passno: '0'
        dump: '0'
        state: absent
      when: not swap_enable

    - name: Configure swap
      when: swap_enable | bool
      block:

        - name: Create or change the size of swap file
          command: dd if=/dev/zero of={{ swap_file_path }} count={{ swap_file_size_mb }} bs=1MiB
          register: swap_file_created
          when: >
            not swap_file_check.stat.exists
            or swap_file_existing_size_mb != swap_file_size_mb

        - name: Change swap file permissions
          file:
            path: "{{ swap_file_path }}"
            mode: 0600

        - name: Check if swap is formatted
          shell: file {{ swap_file_path }} | grep 'swap file'
          register: swap_file_is_formatted
          changed_when: false
          failed_when: false

        - name: Format swap file if it's not formatted
          command: mkswap {{ swap_file_path }}
          when: >
            ('rc' in swap_file_is_formatted and swap_file_is_formatted.rc > 0)
            or swap_file_created.changed

        - name: Add swap entry to fstab
          mount:
            name: none
            src: "{{ swap_file_path }}"
            fstype: swap
            opts: sw
            passno: '0'
            dump: '0'
            state: present

        - name: Turn on swap
          shell: swapon -a
          # if swap was disabled from the start
          # or has been disabled to change its params
          when: >
            ('rc' in swap_is_enabled and swap_is_enabled.rc != 0)
            or swap_disabled.changed

        - name: Configure swappiness
          sysctl:
            name: vm.swappiness
            value: "{{ swappiness | string }}"
            state: present
I'm not able to reply to Greg Dubicki's answer, so I'll add my 2 cents here:
I think casting to int when doing the calculation swap_file_size_mb * 1024 * 1024, transforming it into swap_file_size_mb | int * 1024 * 1024, will make the code a bit smarter in case you want to use a fact to derive the final swap size from the amount of RAM installed, like:
swap_file_size_mb: "{{ ansible_memory_mb['real']['total'] * 2 }}"
Otherwise it will always resize the swap file even if the size is the same.
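Concretely, the size comparison in the "Create or change the size of swap file" task could cast both sides (a small sketch of the adjusted condition):

when: >
  not swap_file_check.stat.exists
  or swap_file_existing_size_mb | int != swap_file_size_mb | int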
Here is the ansible-swap playbook that I use to install 4GB (or whatever I configure in group_vars via dd_bs_size_mb * swap_count) of swap space on new servers:
https://github.com/tribou/ansible-swap
I also added a function in my ~/.bash_profile to help with the task:
# Path to where you clone the repo
ANSIBLE_SWAP_PLAYBOOK=$HOME/dev/ansible-swap

install-swap ()
{
    usage='Usage: install-swap HOST';
    if [ $# -ne 1 ]; then
        echo "$usage";
        return 1;
    fi;
    ansible-playbook $ANSIBLE_SWAP_PLAYBOOK/ansible-swap/site.yml --extra-vars "target=$1"
}
Just make sure to add your HOST to your ansible inventory first.
Then it's just install-swap HOST.
Here is a preexisting role on Galaxy which achieves the same thing: https://galaxy.ansible.com/geerlingguy/swap
What works for me is to add 4 GiB of swap through the playbook.
