Restart multiple Docker containers using Ansible not happening

I am trying to restart my Docker containers one by one for a particular image using Ansible, but it doesn't seem to be happening. Below is my YAML; what it actually does is exit the currently running container.
---
- name: restart app servers
  hosts: shashank-VM
  connection: local
  become: yes
  become_method: sudo
  tasks:
    - name: Get info on the Container
      shell: docker ps | awk '/{{ item }}/{print $1}'
      register: list_of_containers
      with_items:
        - ubuntu

    - name: Restart Docker Service
      docker_container:
        name: "{{ item }}"
        # image: ubuntu
        state: started
        restart: yes
      with_items: "{{ list_of_containers.results | map(attribute='stdout_lines') | list }}"
As you can see in the output below, when I run docker ps there are no running containers.
TASK [Restart Docker Service] ****************************************************************************************************************
/usr/lib/python2.7/dist-packages/requests/__init__.py:80: RequestsDependencyWarning: urllib3 (1.25.9) or chardet (3.0.4) doesn't match a supported version!
RequestsDependencyWarning)
changed: [shashank-VM] => (item=c2310b76b005)
PLAY RECAP ***********************************************************************************************************************************
shashank-VM : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
shashank@shashank-VM:~/ansible$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
What am I doing wrong? Can someone help?

I don't think the docker_container module is designed to do what you want (i.e., restart an existing container). The module is designed to manage containers by name, not by id, and will check that the running container matches the options provided to docker_container.
You're probably better off simply using the docker command to restart your containers:
---
- name: restart app servers
  hosts: shashank-VM
  connection: local
  become: yes
  become_method: sudo
  tasks:
    - name: Get info on the Container
      shell: docker ps | awk '/{{ item }}/{print $1}'
      register: list_of_containers
      with_items:
        - ubuntu

    - name: Restart Docker Service
      command: docker restart {{ item }}
      with_items: "{{ list_of_containers.results | map(attribute='stdout_lines') | list }}"
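As an aside, if the containers can be addressed by a stable name instead of an id, the module approach from the question can be made to work. A minimal sketch, assuming a container literally named ubuntu whose options match the running container:
- name: Restart container by name
  docker_container:
    name: ubuntu          # container name, not an id
    image: ubuntu         # should match the running container so it is not recreated
    state: started
    restart: yes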

Related

Creating file via ansible directly in container

I want to create a file directly in a container directory.
I created the directory first:
- name: create private in container
  ansible.builtin.file:
    path: playcontainer:/etc/ssl/private/
    state: directory
    mode: 0755
But it doesn't let me create a file in that directory:
- name: openssl key
  openssl_privatekey:
    path: playcontainer:/etc/ssl/private/playkey.key
    size: "{{ key_size }}"
    type: "{{ key_type }}"
What am I missing?
Here is a from-scratch, full example of interacting with a container from Ansible.
Please note that this is not always what you want to do. In this specific case, unless you are testing an Ansible role for example, the key should be written into the image at build time by your Dockerfile, or bind mounted from the host when the container starts. You should not modify a container's filesystem once it is running in production.
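For illustration, the bind-mount alternative mentioned above could look roughly like this with the docker_container module (a sketch only; the host path /srv/ssl/playkey.key and the container name are hypothetical):
- name: Start container with the key bind mounted from the host
  docker_container:
    name: playcontainer                 # hypothetical container name
    image: python:latest
    command: sleep infinity
    volumes:
      - /srv/ssl/playkey.key:/etc/ssl/private/playkey.key:ro   # host path is an assumption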
First we create a container for our test:
docker run -d --rm --name so_example python:latest sleep infinity
Now we need an inventory to target that container (inventories/default/main.yml)
---
all:
  vars:
    ansible_connection: docker
  hosts:
    so_example:
Finally a test playbook.yml to achieve your goal:
---
- hosts: all
  gather_facts: false
  vars:
    key_path: /etc/ssl/private
    key_size: 4096
    key_type: RSA
  tasks:
    - name: Make sure package requirements are met
      apt:
        name: python3-pip
        state: present

    - name: Make sure python requirements are met
      pip:
        name: cryptography
        state: present

    - name: Create private directory
      file:
        path: "{{ key_path }}"
        state: directory
        owner: root
        group: root
        mode: 0750

    - name: Create a key
      openssl_privatekey:
        path: "{{ key_path }}/playkey.key"
        size: "{{ key_size }}"
        type: "{{ key_type }}"
Running the playbook gives:
$ ansible-playbook -i inventories/default/ playbook.yml
PLAY [all] *****************************************************************************************************************************************************************************************
TASK [Make sure package requirements are met] ******************************************************************************************************************************************************
changed: [so_example]
TASK [Make sure python requirements are met] *******************************************************************************************************************************************************
changed: [so_example]
TASK [Create private directory] ********************************************************************************************************************************************************************
changed: [so_example]
TASK [Create a key] ********************************************************************************************************************************************************************************
changed: [so_example]
PLAY RECAP *****************************************************************************************************************************************************************************************
so_example : ok=4 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
We can now check that the file is there
$ docker exec so_example ls -l /etc/ssl/private
total 5
-rw------- 1 root root 3243 Sep 15 13:28 playkey.key
$ docker exec so_example head -2 /etc/ssl/private/playkey.key
-----BEGIN RSA PRIVATE KEY-----
MIIJKQIBAAKCAgEA6xrz5kQuXbd59Bq0fqnwJ+dhkcHWCMh4sZO6UNCfodve7JP0
Clean-up:
docker stop so_example

Ansible can't import docker even though it's installed

I'm trying to build a server that runs a docker container using ansible, but I'm getting the error Failed to import the required Python library (Docker SDK for Python: docker (Python >= 2.7) or docker-py (Python 2.6)) on ubuntu-xenial's Python /usr/bin/python3.
The target machine is Ubuntu Xenial, currently in Vagrant, but I get the same error on an Azure VM. My version of Ansible is 2.9.15, and the Docker version Ansible installs is 20.10.1.
My test inventory file consists of
all:
  vars:
    ansible_user: vagrant
    ansible_become: yes
    ansible_python_interpreter: /usr/bin/python3
  children:
    a_servers:
      hosts:
        192.168.33.100:
    b_servers:
      hosts:
        192.168.33.100:
    prod:
      hosts:
        192.168.33.100:
The relevant part of my playbook is:
- name: Docker role
  include_role:
    name: nickjj.docker
  vars:
    docker__users: ['deploy']
    docker_registries:
      - #registry_url: "https://index.docker.io/v1/"
        username: "{{ docker_user.username }}"
        password: "{{ docker_user.password }}"

- name: Log in to docker
  docker_login:
    username: "{{ docker_user.username }}"
    password: "{{ docker_user.password }}"
  become: yes
  become_user: deploy
When I log in to the vagrant box as user deploy, I can run docker login and docker-compose up -d without any problems. I can also run /usr/bin/python3 and import docker without issue.
There is no other python on the system.
The error comes from the docker_login task, which can't seem to import docker.
Is there a configuration I'm missing, or something else I've overlooked that would cause it to fail? Any help is much appreciated.

How do Docker Swarm Workers Do Self Check?

I am having trouble checking whether a Docker Swarm worker node has already joined a swarm in Ansible.
- name: Check if Worker has already joined
  shell: docker node ls
  register: swarm_status
  ignore_errors: true

- name: Join Swarm
  shell: docker swarm join --token {{ hostvars[groups['leader'][0]]['worker_token']['stdout'] }} {{ hostvars[groups['leader'][0]]['ec2_public_ip']['stdout'] }}:2377
  when: swarm_status.rc != 0
  run_once: true
This doesn't work, because swarm_status always reports an error: a worker cannot inspect itself.
Thanks.
Edit: You can check from a manager node with docker_node_info. Debug the JSON output to find the information you need:
- name: Docker Node Info
  docker_node_info:
    name: worker
  register: worker_status

- name: Debug
  debug:
    msg: "{{ worker_status }}"
Next, use json_query to filter the results using JMESPath:
- name:
  debug:
    msg: "{{ worker_status | json_query('nodes[*].Spec.Role') }}"
Output:
worker
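Putting the two pieces together, a self-check could look roughly like the sketch below. This is an assumption based on the question's setup: it reuses the leader group and the worker_token/ec2_public_ip facts, runs docker_node_info on the manager via delegate_to, and presumes inventory hostnames match the swarm node hostnames.
- name: Collect swarm nodes from the manager
  docker_node_info:
  register: swarm_nodes
  delegate_to: "{{ groups['leader'][0] }}"   # manager host, reusing the question's 'leader' group

- name: Join Swarm
  shell: docker swarm join --token {{ hostvars[groups['leader'][0]]['worker_token']['stdout'] }} {{ hostvars[groups['leader'][0]]['ec2_public_ip']['stdout'] }}:2377
  when: inventory_hostname not in (swarm_nodes.nodes | map(attribute='Description.Hostname') | list)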

How to force Ansible to recreate a docker container if mounted files have changed

I'm trying to get Ansible to recreate an existing Docker container if one of the mounted files has changed. I tried to use docker_container to remove the container, if it exists and any file has changed, before I deploy it using docker_stack and a compose file. Here is the code:
- name: Template configuration files to destination
  template:
    ...
  loop:
    ...
  register: template_files_result

- name: Get docker container name
  shell: "docker ps -f 'name=some_name' -q"
  register: container_id

- name: Remove container
  docker_container:
    name: container_id.stdout
    force_kill: yes
    state: absent
  vars:
    files_changed: "{{ template_files_result | json_query('results[*].changed') }}"
  when: container_id.stdout and files_changed is any

- name: Deploy
  docker_stack:
    state: present
    name: stack_name
    compose:
      - "compose.yml"
    with_registry_auth: true
However, the Remove container task never does anything and I can't figure out why.
What am I missing?
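One thing worth double-checking here (an observation, not a verified fix): name: container_id.stdout is passed to docker_container as the literal string container_id.stdout rather than the registered value, so the module looks for a container with that literal name and finds nothing. Templating the value would look like this:
- name: Remove container
  docker_container:
    name: "{{ container_id.stdout }}"   # use the registered output, not a literal string
    force_kill: yes
    state: absent
  vars:
    files_changed: "{{ template_files_result | json_query('results[*].changed') }}"
  when: container_id.stdout and files_changed is any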

How to reload config with ansible docker_container module?

I am trying to accomplish docker kill -s HUP <container> in Ansible, but the options I have tried either restart the container, or attempt to, instead of reloading the config.
Running the following command allows me to reload the configuration without restarting the container:
docker kill -s HUP <container>
The Ansible docker_container docs suggest the following options:
force_kill: Use the kill command when stopping a running container.
kill_signal: Override default signal used to kill a running container.
Using the kill_signal in isolation did nothing.
Below is an example of what I hoped would work:
- name: Reload haproxy config
  docker_container:
    name: '{{ haproxy_docker_name }}'
    state: stopped
    image: '{{ haproxy_docker_image }}'
    force_kill: True
    kill_signal: HUP
I assumed overriding force_kill and kill_signal would give me the desired behaviour. I have also tried setting state to 'started' and 'present'.
What is the correct way to do this?
I needed to do the same with an HAProxy Docker instance to reload the configuration. The following worked in Ansible 2.11.2:
handlers:
  - name: Restart HAProxy
    docker_container:
      name: haproxy
      state: stopped
      force_kill: True
      kill_signal: HUP
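For context, a handler only runs when something notifies it; a minimal sketch of a task that would trigger the handler above (the haproxy.cfg template name and destination path are assumptions):
tasks:
  - name: Deploy HAProxy configuration
    template:
      src: haproxy.cfg.j2                # assumed template name
      dest: /etc/haproxy/haproxy.cfg     # assumed host path mounted into the container
    notify: Restart HAProxy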
I went with a simple shell command, which runs whenever the docker-compose file has my service:
---
- hosts: pis
  remote_user: pi
  tasks:
    - name: Get latest docker images
      docker_compose:
        project_src: dc
        remove_orphans: true
        pull: true
      register: docker_compose_output

    - name: reload prometheus
      command: docker kill --signal=HUP dc_prometheus_1
      when: '"prometheus" in ansible_facts'

    - name: reload blackbox
      command: docker kill --signal=HUP dc_blackbox_1
      when: '"blackbox" in ansible_facts'
Appendix
I found some examples using GitHub advanced search, but they didn't work for me:
https://github.com/search?q=kill_signal%3A+HUP+docker+extension%3Ayml&type=Code
An example:
- name: Reload HAProxy
  docker_container:
    name: "haproxy"
    state: started
    force_kill: true
    kill_signal: HUP
