Add swap memory with Ansible

I'm working on a project where swap memory on my servers is needed to keep some long-running Python processes from running out of memory, and I realized for the first time that my Ubuntu Vagrant boxes and AWS Ubuntu instances didn't already have any set up.
In https://github.com/ansible/ansible/issues/5241 a possible built-in solution was discussed but never implemented, so I'm guessing this should be a pretty common task to automate.
How would you set up a file-based swap memory with Ansible in an idempotent way? What modules or variables does Ansible provide to help with this setup (like the ansible_swaptotal_mb variable)?

This is my current solution:
- name: Create swap file
  command: dd if=/dev/zero of={{ swap_file_path }} bs=1024 count={{ swap_file_size_mb }}k
           creates="{{ swap_file_path }}"
  tags:
    - swap.file.create

- name: Change swap file permissions
  file: path="{{ swap_file_path }}"
        owner=root
        group=root
        mode=0600
  tags:
    - swap.file.permissions

- name: "Check swap file type"
  command: file {{ swap_file_path }}
  register: swapfile
  tags:
    - swap.file.mkswap

- name: Make swap file
  command: "sudo mkswap {{ swap_file_path }}"
  when: swapfile.stdout.find('swap file') == -1
  tags:
    - swap.file.mkswap

- name: Write swap entry in fstab
  mount: name=none
         src={{ swap_file_path }}
         fstype=swap
         opts=sw
         passno=0
         dump=0
         state=present
  tags:
    - swap.fstab

- name: Mount swap
  command: "swapon {{ swap_file_path }}"
  when: ansible_swaptotal_mb < 1
  tags:
    - swap.file.swapon

I tried the answer above, but the "Check swap file type" task always came back as changed, so the play isn't idempotent, which is encouraged as a best practice when writing Ansible tasks.
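If you keep a command-based check like that, one lightweight fix is to mark the task with changed_when: false; a minimal sketch (the role below sidesteps the issue by using stat instead):

- name: Check swap file type
  command: file {{ swap_file_path }}
  register: swapfile
  changed_when: false   # a read-only check never changes the host
  tags:
    - swap.file.mkswap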
The role below has been tested on Ubuntu 14.04 Trusty and doesn't require gather_facts to be enabled.
- name: Set swap_file variable
  set_fact:
    swap_file: "{{swap_file_path}}"
  tags:
    - swap.set.file.path

- name: Check if swap file exists
  stat:
    path: "{{swap_file}}"
  register: swap_file_check
  tags:
    - swap.file.check

- name: Create swap file
  command: fallocate -l {{swap_file_size}} {{swap_file}}
  when: not swap_file_check.stat.exists
  tags:
    - swap.file.create

- name: Change swap file permissions
  file: path="{{swap_file}}"
        owner=root
        group=root
        mode=0600
  tags:
    - swap.file.permissions

- name: Format swap file
  sudo: yes
  command: "mkswap {{swap_file}}"
  when: not swap_file_check.stat.exists
  tags:
    - swap.file.mkswap

- name: Write swap entry in fstab
  mount: name=none
         src={{swap_file}}
         fstype=swap
         opts=sw
         passno=0
         dump=0
         state=present
  tags:
    - swap.fstab

- name: Turn on swap
  sudo: yes
  command: swapon -a
  when: not swap_file_check.stat.exists
  tags:
    - swap.turn.on

- name: Set swappiness
  sudo: yes
  sysctl:
    name: vm.swappiness
    value: "{{swappiness}}"
  tags:
    - swap.set.swappiness
As of Ansible 2.9, the sudo declarations should be replaced with become:
- name: Set swap_file variable
  set_fact:
    swap_file: "{{swap_file_path}}"
  tags:
    - swap.set.file.path

- name: Check if swap file exists
  stat:
    path: "{{swap_file}}"
  register: swap_file_check
  tags:
    - swap.file.check

- name: Create swap file
  command: fallocate -l {{swap_file_size}} {{swap_file}}
  when: not swap_file_check.stat.exists
  tags:
    - swap.file.create

- name: Change swap file permissions
  file: path="{{swap_file}}"
        owner=root
        group=root
        mode=0600
  tags:
    - swap.file.permissions

- name: Format swap file
  become: yes
  command: "mkswap {{swap_file}}"
  when: not swap_file_check.stat.exists
  tags:
    - swap.file.mkswap

- name: Write swap entry in fstab
  mount: name=none
         src={{swap_file}}
         fstype=swap
         opts=sw
         passno=0
         dump=0
         state=present
  tags:
    - swap.fstab

- name: Turn on swap
  become: yes
  command: swapon -a
  when: not swap_file_check.stat.exists
  tags:
    - swap.turn.on

- name: Set swappiness
  become: yes
  sysctl:
    name: vm.swappiness
    value: "{{swappiness}}"
  tags:
    - swap.set.swappiness
Vars required:
swap_file_path: /swapfile
# Use any of the following suffixes
# c=1
# w=2
# b=512
# kB=1000
# K=1024
# MB=1000*1000
# M=1024*1024
# xM=M
# GB=1000*1000*1000
# G=1024*1024*1024
swap_file_size: 4G
swappiness: 1

I based my take on Ashley's great answer (please upvote it!) and added the following extra features:
global flag to manage the swap or not,
allow changing the swap size,
allow disabling swap,
...plus these technical improvements:
full idempotency (only ok & skipping when nothing is changed, otherwise some changed),
use dd instead of fallocate for compatibility with XFS fs (see this answer for more info),
Limitations:
changing an existing swap file works correctly only if you provide its path as swap_file_path.
Tested on CentOS 7.7 with Ansible 2.9 and Rocky 8 with Ansible 4.8.0.
Requires using privilege escalation as many of the commands need to run as root. (The easiest way to do it is to add --become to ansible-playbook arguments or to set the appropriate value in your ansible.cfg).
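For reference, a minimal ansible.cfg snippet that enables privilege escalation for every run (equivalent to passing --become on the command line):

[privilege_escalation]
become = True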
Parameters:
swap_configure: true # or false
swap_enable: true # or false
swap_file_path: /swapfile
swap_file_size_mb: 4096
swappiness: 1
Code:
- name: Configure swap
  when: swap_configure | bool
  block:

    - name: Check if swap file exists
      stat:
        path: "{{swap_file_path}}"
        get_checksum: false
        get_md5: false
      register: swap_file_check
      changed_when: false

    - name: Set variable for existing swap file size
      set_fact:
        swap_file_existing_size_mb: "{{ (swap_file_check.stat.size / 1024 / 1024) | int }}"
      when: swap_file_check.stat.exists

    - name: Check if swap is on
      shell: swapon --show | grep {{swap_file_path}}
      register: swap_is_enabled
      changed_when: false
      failed_when: false

    - name: Disable swap
      command: swapoff {{swap_file_path}}
      register: swap_disabled
      when: >
        swap_file_check.stat.exists
        and 'rc' in swap_is_enabled and swap_is_enabled.rc == 0
        and (not swap_enable or (swap_enable and swap_file_existing_size_mb != swap_file_size_mb))

    - name: Delete the swap file
      file:
        path: "{{swap_file_path}}"
        state: absent
      when: not swap_enable

    - name: Remove swap entry from fstab
      mount:
        name: none
        src: "{{swap_file_path}}"
        fstype: swap
        opts: sw
        passno: '0'
        dump: '0'
        state: absent
      when: not swap_enable

    - name: Configure swap
      when: swap_enable | bool
      block:

        - name: Create or change the size of swap file
          command: dd if=/dev/zero of={{swap_file_path}} count={{swap_file_size_mb}} bs=1MiB
          register: swap_file_created
          when: >
            not swap_file_check.stat.exists
            or swap_file_existing_size_mb != swap_file_size_mb

        - name: Change swap file permissions
          file:
            path: "{{swap_file_path}}"
            mode: 0600

        - name: Check if swap is formatted
          shell: file {{swap_file_path}} | grep 'swap file'
          register: swap_file_is_formatted
          changed_when: false
          failed_when: false

        - name: Format swap file if it's not formatted
          command: mkswap {{swap_file_path}}
          when: >
            ('rc' in swap_file_is_formatted and swap_file_is_formatted.rc > 0)
            or swap_file_created.changed

        - name: Add swap entry to fstab
          mount:
            name: none
            src: "{{swap_file_path}}"
            fstype: swap
            opts: sw
            passno: '0'
            dump: '0'
            state: present

        - name: Turn on swap
          shell: swapon -a
          # if swap was disabled from the start
          # or has been disabled to change its params
          when: >
            ('rc' in swap_is_enabled and swap_is_enabled.rc != 0)
            or swap_disabled.changed

        - name: Configure swappiness
          sysctl:
            name: vm.swappiness
            value: "{{ swappiness|string }}"
            state: present

I'm not able to reply to Greg Dubicki's answer so I'll add my 2 cents here:
I think adding a cast to int in the calculation, turning swap_file_size_mb * 1024 * 1024 into swap_file_size_mb | int * 1024 * 1024, will make the code a bit smarter in case you want to use a fact to derive the final swap size from the amount of RAM installed, like:
swap_file_size_mb: "{{ ansible_memory_mb['real']['total'] * 2 }}"
otherwise it will always resize the swap file even if the size is the same.
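With the cast applied, the size comparison in the role above could look roughly like this (a sketch only, reusing the same variable names):

- name: Create or change the size of swap file
  command: dd if=/dev/zero of={{ swap_file_path }} count={{ swap_file_size_mb | int }} bs=1MiB
  register: swap_file_created
  when: >
    not swap_file_check.stat.exists
    or swap_file_existing_size_mb | int != swap_file_size_mb | int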

Here is the ansible-swap playbook that I use to install 4 GB of swap space (or whatever dd_bs_size_mb * swap_count I configure in group_vars) on new servers:
https://github.com/tribou/ansible-swap
I also added a function in my ~/.bash_profile to help with the task:
# Path to where you clone the repo
ANSIBLE_SWAP_PLAYBOOK=$HOME/dev/ansible-swap

install-swap ()
{
    usage='Usage: install-swap HOST';
    if [ $# -ne 1 ]; then
        echo "$usage";
        return 1;
    fi;
    ansible-playbook $ANSIBLE_SWAP_PLAYBOOK/ansible-swap/site.yml --extra-vars "target=$1"
}
Just make sure to add your HOST to your ansible inventory first.
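For example, a hypothetical entry in your default inventory (the host name and connection details below are placeholders):

[servers]
myserver ansible_host=203.0.113.10 ansible_user=ubuntu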
Then it's just install-swap HOST.

Here is a preexisting role on Galaxy which achieves the same thing: https://galaxy.ansible.com/geerlingguy/swap
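To use it, a playbook along these lines should work; note that the variable names are assumptions based on the role's defaults, so verify them against the role's README for your version:

# ansible-galaxy install geerlingguy.swap
- hosts: all
  become: true
  vars:
    swap_file_size_mb: '4096'   # assumed variable name; check the role's defaults
    swap_swappiness: '1'        # assumed variable name; check the role's defaults
  roles:
    - geerlingguy.swap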

What works for me is to add 4 GiB of swap through the playbook.
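Purely as an illustration, a minimal sketch of one way to create 4 GiB of file-based swap (assuming /swapfile as the path and privilege escalation; not the poster's original playbook):

- hosts: all
  become: true
  tasks:
    - name: Create a 4 GiB swap file
      command: dd if=/dev/zero of=/swapfile bs=1MiB count=4096
      args:
        creates: /swapfile

    - name: Restrict swap file permissions
      file:
        path: /swapfile
        mode: '0600'

    - name: Format and enable the swap file if no swap is active yet
      shell: mkswap /swapfile && swapon /swapfile
      when: ansible_swaptotal_mb | int < 1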

Related

Ansible validate docker-compose with env_file and send to host

I created a role that has two files in its templates folder: docker-compose.yml.j2 and env.j2.
env.j2 is used in the docker-compose file:
version: "2"
services:
  service_name:
    image: {{ IMAGE | mandatory }}
    container_name: service_name
    mem_limit: 256m
    user: "2001"
    env_file: ".env"
Now my question: is there some Ansible module that sends the docker-compose file to the host and validates it there, since the env file and the docker-compose file are then in the same folder on the host machine?
This example Ansible task returns an error because the env file is not in the template folder but on the host.
- name: "Copy env file"
ansible.builtin.template:
src: "env.j2"
dest: "/opt/db_backup/.env"
mode: '770'
owner: deployment
group: deployment
- name: "Validate and copy docker compose file"
ansible.builtin.template:
src: "docker-compose.yml.j2"
dest: "/opt/db_backup/docker-compose.yml"
mode: '770'
owner: deployment
group: deployment
validate: docker-compose -f %s config
This probably falls into the Complex validation configuration cases linked in the documentation for the template module's validate parameter.
In any case, unless you completely refactor your current file and pass more variables in your environment (e.g. to allow .env to live outside the current directory), you cannot validate docker-compose.yml until both files are in the same location.
An easy scenario would be to copy both files in place, validate them prior to doing anything with them, and roll back to the previous version in case of error. The example below is far from bulletproof but will give you an idea:
---
- hosts: localhost
  gather_facts: false

  vars:
    IMAGE: alpine:latest
    deploy_dir: /tmp/validate_compose

  tasks:
    - name: "make sure {{ deploy_dir }} directory exists"
      file:
        path: "{{ deploy_dir }}"
        state: directory

    - name: copy project file templates
      template:
        src: "{{ item }}"
        dest: "{{ deploy_dir }}/{{ item | regex_replace('^(.*)\\.j2', '\\g<1>') }}"
        mode: 0640
        backup: true
      loop:
        - .env.j2
        - docker-compose.yml.j2
      register: copy_files

    - block:
        - name: check docker-compose file validity
          command:
            cmd: docker-compose config
            chdir: "{{ deploy_dir }}"

      rescue:
        - name: rollback configuration to previous version for changed files
          copy:
            src: "{{ item.backup_file }}"
            dest: "{{ item.dest }}"
            remote_src: true
          loop: "{{ copy_files.results | selectattr('backup_file', 'defined') }}"

        - name: Give some info about error.
          debug:
            msg:
              - The compose file did not validate.
              - Please see previous error above for details
              - Files have been rolled back to the latest known version.

        - name: Fail
          fail:

    - name: Rest of the playbook using the above validated files
      debug:
        msg: Next tasks...

Ansible molecule using docker - how to specify memory limit

I have a molecule test which spins up 2 Docker containers, for testing 2 application versions at once.
dependency:
  name: galaxy
driver:
  name: docker
lint:
  name: yamllint
platforms:
  - name: molecule1
    hostname: molecule1
    image: "geerlingguy/docker-${MOLECULE_DISTRO:-centos7}-ansible:latest"
    command: ${MOLECULE_DOCKER_COMMAND:-""}
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
    privileged: true
    pre_build_image: true
  - name: molecule2
    hostname: molecule2
    image: "geerlingguy/docker-${MOLECULE_DISTRO:-centos7}-ansible:latest"
    command: ${MOLECULE_DOCKER_COMMAND:-""}
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
    privileged: true
    pre_build_image: true
provisioner:
  name: ansible
  inventory:
    host_vars:
      molecule1:
        app_version: "v1"
      molecule2:
        app_version: "v2"
  lint:
    name: ansible-lint
scenario:
  name: default
  converge_sequence:
    - syntax
    - lint
    - create
    - prepare
    - converge
    - idempotence
    - verify
verifier:
  name: goss
  lint:
    name: yamllint
I am looking for a way to specify the memory limit, like -m or --memory=, as described here.
I understand that Molecule makes use of the docker_container Ansible module, which supports the memory parameter, but somehow I cannot find a way to make this work in Molecule.
Any ideas how to accomplish this?
PS: My guess is that this parameter is not yet implemented in Molecule, if my assumption is correct that this is the implementation.
Thanks in advance.
++Update++
--memory is indeed not yet implemented in the Molecule Docker provisioner.
If anybody is interested, here is the relevant change to the source code:
diff --git a/molecule/provisioner/ansible/playbooks/docker/create.yml b/molecule/provisioner/ansible/playbooks/docker/create.yml
index 7a04b851..023a720a 100644
--- a/molecule/provisioner/ansible/playbooks/docker/create.yml
+++ b/molecule/provisioner/ansible/playbooks/docker/create.yml
@@ -121,6 +121,8 @@
         hostname: "{{ item.hostname | default(item.name) }}"
         image: "{{ item.pre_build_image | default(false) | ternary('', 'molecule_local/') }}{{ item.image }}"
         pull: "{{ item.pull | default(omit) }}"
+        kernel_memory: "{{ item.kernel_memory | default(omit) }}"
+        memory: "{{ item.memory | default(omit) }}"
         state: started
         recreate: false
         log_driver: json-file
My fork has now been merged into Molecule.
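Assuming you are on a Molecule release that includes the change above, the limit should then be settable per platform in molecule.yml, along these lines (the 1g value is just an example; the key names mirror the diff):

platforms:
  - name: molecule1
    image: "geerlingguy/docker-centos7-ansible:latest"
    pre_build_image: true
    memory: 1g
    kernel_memory: 1g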

is there a way to grep or parse an ansible variable

I have a variable which I need to parse to pull out a version string. Is there a way to do this? Below is an example of the Ansible variable.
--xxx 1.2.3-102 --yyy 2.5.10-47 --zzz 10.4.2-193
Update: Adding ansible task format
---
- hosts: localhost
  tasks:
    - name: Get Version
      shell: echo '{{ version }}'
      register: results

    - set_fact:
        value: "{{ results.stdout | regex_search(regexp,'') }}"
      vars:
        regexp: ''

    - debug:
        var: value
Getting just the version number after "--yyy"; alter the regular expression as needed for your task:
- hosts: localhost
  tasks:
    - name: Get Version
      shell: echo '--xxx 1.2.3-102 --yyy 2.5.10-47 --zzz 10.4.2-193'
      register: results

    - name: set regex
      set_fact:
        re: '--yyy\s+(?P<digit>\d+\.\d+\.\d+-\d+)'

    - set_fact:
        value: "{{ results.stdout | regex_search(re, '\\g<digit>') }}"

    - debug:
        var: value[0]
The solution above works great for the latest version of Ansible; unfortunately I have to use a pre-2.0 Ansible (1.9.6), which doesn't seem to support regex_search for some strange reason.
In that case I will use the following
"{{ results | regex_replace ('((xxx|yyy)\\s[\\S]+)|(--|zzz|\\s)','') | join }}"

Reading multiple values from an env file with ansible and storing them as facts

I have the following code which reads values from an environment (.env) file and stores them as facts:
- name: Read values from environment
  shell: "source {{ env_path }}; echo $DB_PASSWORD"
  register: output
  args:
    executable: /bin/bash
  changed_when: false

- name: Store read password
  set_fact:
    db_password: "{{ output.stdout }}"
  when:
    - db_password is undefined
  changed_when: false

- name: Read values from environment
  shell: "source {{ env_path }}; echo $DB_USER"
  register: output
  args:
    executable: /bin/bash
  changed_when: false

- name: Store read user
  set_fact:
    db_user: "{{ output.stdout }}"
  when:
    - db_user is undefined
  changed_when: false

- name: Read values from environment
  shell: "source {{ env_path }}; echo $DB_NAME"
  register: output
  args:
    executable: /bin/bash
  changed_when: false

- name: Store read db_name
  set_fact:
    db_name: "{{ output.stdout }}"
  when:
    - db_name is undefined
  changed_when: false

- name: Container environment loaded; the following facts are now available for use by ansible
  debug: "var={{ item }}"
  with_items:
    - db_name
    - db_user
    - db_password
It's quite bulky and unwieldy. I would like to write it something like this, but I can't figure out how:
vars:
  values:
    - db_name
    - db_password
    - db_user

tasks:
  - name: Read values from environment
    shell: "source {{ env_path }}; echo {{ item|upper }}"
    register: output
    with_items: values
    args:
      executable: /bin/bash
    changed_when: false

  - name: Store read value
    set_fact:
      "{{ item.0 }}": "{{ item.1.stdout }}"
    when:
      - item.0 is undefined
    with_together:
      - values
      - output.results
    changed_when: false
Instead, I get this output:
ok: [default] => (item=values) => {"changed": false, "cmd": "source /var/www/mydomain.org/.env; echo VALUES", "delta": "0:00:00.002240", "end": "2017-02-15 15:25:15.338673", "item": "values", "rc": 0, "start": "2017-02-15 15:25:15.336433", "stderr": "", "stdout": "VALUES", "stdout_lines": ["VALUES"], "warnings": []}
TASK [sql-base : Store read password] ******************************************
skipping: [default] => (item=[u'values', u'output.results']) => {"changed": false, "item": ["values", "output.results"], "skip_reason": "Conditional check failed", "skipped": true}
Even more ideal of course would be if there is an ansible module I have overlooked that allows me to load values from an environment file.
Generally I would either put my variables into the inventory files themselves, or convert them to YAML format and use the include_vars module (you might be able to run a sed script to convert your environment file to YAML on the fly; a rough sketch of that follows the list below). Before using the code further down, make sure you really are required to use those environment files and cannot easily use some other mechanism like:
a YAML file with the include_vars module
putting the config inside the inventory (no modules required)
putting the config into an ansible vault file (no modules required, you need to store the decryption key somewhere though)
use some other secure storage mechanism, like Hashicorp's vault (with a plugin like https://github.com/jhaals/ansible-vault)
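A rough sed sketch for the on-the-fly conversion mentioned above; it assumes plain KEY=value lines with no comments, quoting or spaces, keeps the original key case, and the output filename is arbitrary:

# turn DB_NAME=abcd into DB_NAME: "abcd", then load the result with include_vars
sed -E 's/^([A-Za-z_][A-Za-z0-9_]*)=(.*)$/\1: "\2"/' test.env > env_vars.yml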
Back to your code: it is actually almost correct, you were just missing a dollar sign in the read task and some curly braces in the condition:
test.env
DB_NAME=abcd
DB_PASSWORD=defg
DB_USER=fghi
Note: make sure that this file adheres to sh standards, meaning you don't put spaces around the = sign. Having something like DB_NAME = abcd will fail.
play.yml
- hosts: all
  connection: local
  vars:
    env_path: 'test.env'
    values:
      - db_name
      - db_password
      - db_user
  tasks:
    - name: Read values from environment
      shell: "source {{ env_path }}; echo ${{ item|upper }}"
      register: output
      with_items: "{{ values }}"
      args:
        executable: /bin/bash
      changed_when: false

    - name: Store read value
      set_fact:
        "{{ item.0 }}": "{{ item.1.stdout }}"
      when: '{{ item.0 }} is undefined'
      with_together:
        - "{{ values }}"
        - "{{ output.results }}"
      changed_when: false

    - name: Debug
      debug:
        msg: "NAME: {{db_name}} PASS: {{db_password}} USER: {{db_user}}"
Running with ansible-playbook -i 127.0.0.1, play.yml:
TASK [Debug] *******************************************************************
ok: [127.0.0.1] => {
    "msg": "NAME: abcd PASS: defg USER: fghi"
}

Ansible - Include environment variables from external YML

I'm attempting to store all my environment variables in a file called variables.yml that looks like so:
---
doo: "external"
Then I have a playbook like so:
---
- hosts: localhost
  tasks:
    - name: "i can totally echo"
      environment:
        include: variables.yml
        ugh: 'internal'
      shell: echo "$doo vs $ugh"
      register: results

    - debug: msg="{{ results.stdout }}"
The result of the echo is ' vs internal'.
How can I change this so that the result is 'external vs internal'? Many thanks!
Assuming the external variable file called variables.ext is structured as follows
---
EXTERNAL:
  DOO: "external"
then, according to Setting the remote environment and Load variables from files, dynamically within a task, a small test could look like
---
- hosts: localhost
  become: false
  gather_facts: false

  tasks:
    - name: Load environment variables
      include_vars:
        file: variables.ext

    - name: Echo variables
      shell:
        cmd: 'echo "${DOO} vs ${UGH}"'
      environment:
        DOO: "{{ EXTERNAL.DOO }}"
        UGH: "internal"
      register: result

    - name: Show result
      debug:
        msg: "{{ result.stdout }}"
resulting in an output of
TASK [Show result] ********
ok: [localhost] =>
    msg: external vs internal
