Restart multiple Docker containers using Ansible

How do I dynamically restart all my Docker containers from Ansible? I know a way where I can define my containers in a variable and loop through them, but what I want to achieve is this:
Fetch the currently running containers and restart all or some of them one by one through some loop.
How can I achieve this with Ansible?

Docker explanation
Retrieve the name/image of all running containers:
docker container ls -a --format '{{.Names}} {{.Image}}'
You could also filter the output of the docker container command down to a specific image name, thanks to the --filter ancestor=image_name option:
docker container ls -a --filter ancestor=alpine --format '{{.Names}} {{.Image}}'
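For reference, each output line is just <container name> <image>, one container per line; with some hypothetical containers it would look like:
my_container_1 alpine
my_container_2 alpine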
Ansible integration:
First I would define some filters as Ansible variables:
vars:
  - image_v1: '--filter ancestor=my_image:v1'
  - image_v2: '--filter ancestor=my_image:v2'
Then I would execute the docker container command in a dedicated task and save the command output to an Ansible variable:
- name: Get images name
  command: docker container ls -a {{ image_v1 }} {{ image_v2 }} --format "{{ '{{' }}.Names {{ '}}' }} {{ '{{' }}.Image {{ '}}' }}"
  register: docker_images
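The {{ '{{' }} ... {{ '}}' }} constructs only protect the Go template braces from Ansible's own Jinja templating; once the task runs, the command actually executed on the host is simply:
docker container ls -a --filter ancestor=my_image:v1 --filter ancestor=my_image:v2 --format "{{.Names }} {{.Image }}"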
Finally, I would iterate over it with the docker_container Ansible module:
- name: Restart images
  docker_container:
    name: "{{ item.split(' ')[0] }}"
    image: "{{ item.split(' ')[1] }}"
    state: started
    restart: yes
  loop: "{{ docker_images.stdout_lines }}"
Final playbook.yml:
---
- hosts: localhost
  gather_facts: no
  vars:
    - image_v1: '--filter ancestor=my_image:v1'
    - image_v2: '--filter ancestor=my_image:v2'
  tasks:
    - name: Get images name
      command: docker container ls -a {{ image_v1 }} {{ image_v2 }} --format "{{ '{{' }}.Names {{ '}}' }} {{ '{{' }}.Image {{ '}}' }}"
      register: docker_images
    - name: Restart images
      docker_container:
        name: "{{ item.split(' ')[0] }}"
        image: "{{ item.split(' ')[1] }}"
        state: started
        restart: yes
      loop: "{{ docker_images.stdout_lines }}"

Related

Retrieve docker container output with Ansible using list

I need to retrieve the output of a docker command with Ansible. It's easy when running a single instance, but I'm running the command using with_dict:
I'm using something like
- name: Running task
  docker_container:
    command: <<here my command>>
    detach: false
    recreate: true
    restart: false
    restart_policy: "no"
  with_dict: "{{ mylist.config.validator_client.accounts }}"
  register: mycontainers
I've tried the following with no success:
- name: display logs
  debug:
    msg: "{{ item.ansible_facts.docker_container.Output }}"
  with_items: "{{ mycontainers.results }}"
Any idea?
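When the exact structure of the registered variable is unclear (it differs between docker_container versions), a simple first step is to dump the whole thing and inspect where the output actually lives:
- name: Inspect registered results
  debug:
    var: mycontainers.results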

How to escape JSON in Ansible playbook

I have the following YAML Ansible playbook file which I intend to use to capture some information from my docker containers:
---
# Syntax check: /usr/bin/ansible-playbook --syntax-check --inventory data/config/host_inventory.yaml data/config/ansible/docker_containers.yaml
- hosts: hosts
  gather_facts: no
  tasks:
    - name: Docker ps output - identify running containers
      shell: "/usr/bin/docker ps --format '{\"ID\":\"{{ .ID }}\", \"Image\": \"{{ .Image }}\", \"Names\":\"{{ .Names }}\"}'"
      register: docker_ps_output
    - name: Show content of docker_ps_output
      debug:
        msg: docker_ps_output.stdout_lines
But escaping is not working; Ansible gives me the middle finger when I try to run the playbook:
PLAY [hosts] ***********************************************************************************************************************************************************
TASK [Docker ps output - identify running containers] **********************************************************************************************************************************************
fatal: [myhost.com]: FAILED! => {"msg": "template error while templating string: unexpected '.'. String: /usr/bin/docker ps --format ''{\"ID\":\"{{ .ID }}\", \"Image\": \"{{ .Image }}\", \"Names\":\"{{ .Names }}\"}''"}
to retry, use: --limit @/tmp/docker_containers.retry
PLAY RECAP *****************************************************************************************************************************************************************************************
myhost.com : ok=0 changed=0 unreachable=0 failed=1
The original command I'm trying to run:
/usr/bin/docker ps --format '{"ID":"{{ .ID }}", "Image": "{{ .Image }}", "Names":"{{ .Names }}"}'
I would suggest using a block scalar. Your problem is that {{ .ID }} etc. is processed by Ansible's Jinja templating engine when it should not be. Probably the most readable way around this is:
---
# Syntax check: /usr/bin/ansible-playbook --syntax-check --inventory data/config/host_inventory.yaml data/config/ansible/docker_containers.yaml
- hosts: hosts
  gather_facts: no
  tasks:
    - name: Docker ps output - identify running containers
      shell: !unsafe >-
        /usr/bin/docker ps --format
        '{"ID":"{{ .ID }}", "Image": "{{ .Image }}", "Names":"{{ .Names }}"}'
      register: docker_ps_output
    - name: Show content of docker_ps_output
      debug:
        msg: docker_ps_output.stdout_lines
>- starts a folded block scalar, in which you do not need to escape anything and newlines are folded into spaces. The !unsafe tag prevents the value from being processed with Jinja.
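As a tiny standalone illustration (not from the original answer), these two keys end up with the same value, because the folded scalar joins its lines with a space:
cmd_folded: >-
  echo
  hello world
cmd_plain: "echo hello world"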
If you want to avoid the templating, you need to wrap the double braces in quoted Jinja string literals:
{{ thmthng }}
should look like:
{{ '{{' }} thmthng {{ '}}' }}
Your playbook:
---
- hosts: hosts
  gather_facts: no
  tasks:
    - name: Docker ps output - identify running containers
      shell: "docker ps -a --format '{\"ID\": \"{{ '{{' }} .ID {{ '}}' }}\", \"Image\": \"{{ '{{' }} .Image {{ '}}' }}\", \"Names\": \"{{ '{{' }} .Names {{ '}}' }}\"}'"
      register: docker_ps_output
    - name: Show content of docker_ps_output
      debug:
        var: docker_ps_output.stdout_lines
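With either variant, each line of stdout is a JSON object, so it can be turned back into a dictionary on the Ansible side with the from_json filter; a small follow-up sketch (not part of the original answers):
- name: Show container names only
  debug:
    msg: "{{ (item | from_json).Names }}"
  loop: "{{ docker_ps_output.stdout_lines }}"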

Ansible playbook script to destroy all containers and remove images from remote

I'm looking for an Ansible playbook script which can stop and destroy all containers on the remote host and remove the existing images as well. The code given below only stops the running containers:
---
- hosts: all
  gather_facts: false
  tasks:
    - name: Get running containers
      docker_host_info:
        containers: yes
      register: docker_info
    - name: Stop running containers
      docker_container:
        name: "{{ item }}"
        state: stopped
      loop: "{{ docker_info.containers | map(attribute='Id') | list }}"
This is the final solution for my question and it worked 100%. It will destroy all containers and remove the images as well.
---
- hosts: all
  gather_facts: false
  tasks:
    - name: Get running containers
      docker_host_info:
        containers: yes
      register: docker_info
    - name: Stop running containers
      docker_container:
        name: "{{ item }}"
        state: stopped
      loop: "{{ docker_info.containers | map(attribute='Id') | list }}"
    - name: Remove stopped docker containers
      shell: |
        docker rm $(docker ps -a -q);
      when: docker_info.containers | length > 0
    - name: Get details of all images
      docker_host_info:
        images: yes
        verbose_output: yes
      register: image_info
    - name: Remove all images
      docker_image:
        name: "{{ item }}"
        state: absent
      loop: "{{ image_info.images | map(attribute='Id') | list }}"

docker_container: How to add multiple Volumes

I am trying to execute with Ansible the following Docker command:
docker run --name soadb_test1 --network=soa_net --ip 172.16.1.10 -d -v $TEST1/SOADB-Volume/u01:/u01/ -v $TEST1/SOADB-Volume/u02:/u02/ -v $TEST1/SOADB-Volume/u03:/u03/ -v $TEST1/SOADB-Volume/u04:/u04/ -v $TEST1/SOADB-Volume/ORCL:/ORCL/ --env-file $ENV_HOME/db.env.list database/enterprise:12.2.0.1
This is my Ansible Script:
---
- name: Create DB container
  docker_container:
    name: "{{ name }}"
    image: "{{ image }}"
    env_file: "{{ env_file }}"
    detach: yes
    volumes:
      - "{{ src_vol }}:{{ dest_vol }}"
      - "{{ src_vol_2 }}:{{ dest_vol_2 }}"
      - "{{ src_vol_3 }}:{{ dest_vol_3 }}"
      - "{{ src_vol_4 }}:{{ dest_vol_4 }}"
      - "{{ src_vol_5 }}:{{ dest_vol_5 }}"
    networks:
      - name: soa_net
        ipv4_address: "{{ ip }}"
    vars_files:
      - vars.yml
When I run it I get the following error:
TASK [install_docker_DB : Create DB container] *******************************************************************************************************************************************************************
fatal: [soa_poc]: FAILED! => {"changed": false, "msg": "Unsupported parameters for (docker_container) module: vars_files Supported parameters include: api_version, auto_remove, blkio_weight, cacert_path, cap_drop, capabilities, cert_path, cleanup, command, cpu_period, cpu_quota, cpu_shares, cpuset_cpus, cpuset_mems, debug, detach, devices, dns_opts, dns_search_domains, dns_servers, docker_host, domainname, entrypoint, env, env_file, etc_hosts, exposed_ports, force_kill, groups, hostname, ignore_image, image, init, interactive, ipc_mode, keep_volumes, kernel_memory, key_path, kill_signal, labels, links, log_driver, log_options, mac_address, memory, memory_reservation, memory_swap, memory_swappiness, name, network_mode, networks, oom_killer, oom_score_adj, output_logs, paused, pid_mode, privileged, published_ports, pull, purge_networks, read_only, recreate, restart, restart_policy, restart_retries, security_opts, shm_size, ssl_version, state, stop_signal, stop_timeout, sysctls, timeout, tls, tls_hostname, tls_verify, tmpfs, trust_image_content, tty, ulimits, user, userns_mode, uts, volume_driver, volumes, volumes_from, working_dir"}
Am I declaring the volumes the wrong way?
It looks like your indentation level for the vars_files entry is wrong - please move it somewhere else:
---
- name: Create DB container
  docker_container:
    name: "{{ name }}"
    image: "{{ image }}"
    env_file: "{{ env_file }}"
    detach: yes
    volumes:
      - "{{ src_vol }}:{{ dest_vol }}"
      - "{{ src_vol_2 }}:{{ dest_vol_2 }}"
      - "{{ src_vol_3 }}:{{ dest_vol_3 }}"
      - "{{ src_vol_4 }}:{{ dest_vol_4 }}"
      - "{{ src_vol_5 }}:{{ dest_vol_5 }}"
    networks:
      - name: soa_net
        ipv4_address: "{{ ip }}"
The indentation for the first network entry was also wrong.
Depending on whether the above is from a playbook file, or from a role, the location of vars_files might differ. If this is a playbook, then vars_files should be at the same indentation level as tasks:
---
- hosts: all
  vars_files:
    - vars.yml
  tasks:
    - name: Create DB container
      docker_container: ...
This has nothing to do with the volumes...
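For completeness, the vars.yml referenced at play level would just define the variables used by the task; for example (values are placeholders, not taken from the question):
# vars.yml - example values only
name: soadb_test1
image: database/enterprise:12.2.0.1
env_file: /path/to/db.env.list
ip: 172.16.1.10
src_vol: /path/to/SOADB-Volume/u01
dest_vol: /u01/
src_vol_2: /path/to/SOADB-Volume/u02
dest_vol_2: /u02/
# ...and so on for volumes 3-5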

Why Ansible keeps recreating docker containers with state "started"

I have a docker container managed by Ansible. Every time I start the container with Ansible it is recreated instead of just started.
Here are the Ansible commands I use to stop/start the container:
ansible-playbook <playbook> -i <inventory> --extra-vars "state=stopped"
ansible-playbook <playbook> -i <inventory> --extra-vars "state=started"
Here's the Ansible task I use to manage the container. The only thing that changes between the "stop" and "start" commands is {{ state }}.
- docker:
    name: "{{ postgres_container_name }}"
    image: "{{ postgres_image_name }}"
    state: "{{ state }}"
    ports:
      - "{{ postgres_host_port }}:{{ postgres_guest_port }}"
    env:
      POSTGRES_USER: "{{ postgres_user }}"
      POSTGRES_PASSWORD: "{{ postgres_password }}"
      POSTGRES_DB: "{{ postgres_db }}"
When I start, stop and start the container I get the following verbose output from Ansible command:
changed: [127.0.0.1] => {"ansible_facts": {"docker_containers": [{"Id": "ab1c0f6cc30de33aba31ce93671267783ba08a1294df40556870e66e8bf77b6d", "Warnings": null}]}, "changed": true, "containers": [{"Id": "ab1c0f6cc30de33aba31ce93671267783ba08a1294df40556870e66e8bf77b6d", "Warnings": null}], "msg": "removed 1 container, started 1 container, created 1 container.", "reload_reasons": null, "summary": {"created": 1, "killed": 0, "pulled": 0, "removed": 1, "restarted": 0, "started": 1, "stopped": 0}}
It states that the container changed, was removed, created and started.
Could you tell me why Ansible sees my container as changed and recreates it instead of starts?
Ansible's docker module will first remove any stopped containers with the same name when you use it with the state of started.
The module docs don't really make it all that clear but there is a comment explaining this in the source code in the started function.
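That behaviour is specific to the legacy docker module. For what it's worth, the newer docker_container module (used elsewhere on this page) compares the existing container's configuration with the task parameters and only recreates it when something actually differs, so an equivalent task (a sketch, not the original answer) would normally just start or stop the existing container:
- docker_container:
    name: "{{ postgres_container_name }}"
    image: "{{ postgres_image_name }}"
    state: "{{ state }}"
    published_ports:
      - "{{ postgres_host_port }}:{{ postgres_guest_port }}"
    env:
      # env values must render to strings for docker_container
      POSTGRES_USER: "{{ postgres_user }}"
      POSTGRES_PASSWORD: "{{ postgres_password }}"
      POSTGRES_DB: "{{ postgres_db }}"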
