How do Docker Swarm workers do a self-check?

I am having trouble checking, in Ansible, whether a Docker Swarm worker node has already joined a swarm.
- name: Check if Worker has already joined
  shell: docker node ls
  register: swarm_status
  ignore_errors: true

- name: Join Swarm
  shell: docker swarm join --token {{ hostvars[groups['leader'][0]]['worker_token']['stdout'] }} {{ hostvars[groups['leader'][0]]['ec2_public_ip']['stdout'] }}:2377
  when: swarm_status.rc != 0
  run_once: true
This doesn't work because swarm_status always reports an error: a worker cannot inspect the swarm, so docker node ls fails on worker nodes.
Thanks.
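For the worker-side check itself, here is a minimal sketch (an assumption on my part, not from the original post): docker info reports the local swarm state without needing manager access, and the {% raw %} block keeps Ansible's Jinja templating from parsing Docker's Go template.
- name: Check local swarm state on the worker
  shell: docker info --format {% raw %}'{{ .Swarm.LocalNodeState }}'{% endraw %}
  register: swarm_state
  changed_when: false

- name: Join Swarm
  shell: docker swarm join --token {{ hostvars[groups['leader'][0]]['worker_token']['stdout'] }} {{ hostvars[groups['leader'][0]]['ec2_public_ip']['stdout'] }}:2377
  when: swarm_state.stdout != 'active'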

Edit: You can check from a manager node with docker_node_info. Debug the returned JSON to find the information you need:
- name: Docker Node Info
  docker_node_info:
    name: worker
  register: worker_status

- name: Debug
  debug:
    msg: "{{ worker_status }}"
Next, use the json_query filter (which uses JMESPath) to extract the field you need:
- name:
  debug:
    msg: "{{ worker_status | json_query('nodes[*].Spec.Role') }}"
Output:
worker
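Building on that, a hedged sketch of wiring the manager-side lookup into the original join task (the hostname match and the 'leader' group name are assumptions, and json_query needs the jmespath Python package on the controller):
- name: Collect swarm nodes from the manager
  docker_node_info:
  register: swarm_nodes
  delegate_to: "{{ groups['leader'][0] }}"

- name: Join Swarm
  shell: docker swarm join --token {{ hostvars[groups['leader'][0]]['worker_token']['stdout'] }} {{ hostvars[groups['leader'][0]]['ec2_public_ip']['stdout'] }}:2377
  # Assumes the Ansible inventory_hostname matches the hostname registered in the swarm.
  when: inventory_hostname not in (swarm_nodes | json_query('nodes[*].Description.Hostname'))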

Related

Ansible Loop Register

I have an Ansible playbook to create new users and a Docker container for each user. The user information is gathered from a YAML file with more than 200 records, each consisting of a username and password. To create a container I have to pass the user's PUID and PGID to the docker run command. See below for an example command:
docker run --restart=always -it --init -td -p {port from ansible}:8443 -e PUID={PUID from ansible} -e PGID={PGID from Ansible} linuxserver/code-server:latest
I want to get the PUID and PGID of each user and register them in a variable so I can use them when creating the container. I tried the following task, but since a loop appends all of its output to a single registered dictionary, I am not able to match each username with its PUID/PGID.
- name: GET PUID
  shell: id -u "{{ item.username }}"
  loop: "{{ user }}"
  register: puid

- name: pgid Variable
  debug: msg="{{ pgid.stdout }}"
YAML for user:
user:
  - username: john.doe
    password: password
The docker image that I want to use: https://hub.docker.com/r/linuxserver/code-server
For example, get the UID and GID of user admin at host test_11:
shell> ssh admin@test_11 id admin
uid=1001(admin) gid=1001(admin) groups=1001(admin)
Use the getent module. The playbook below
- hosts: test_11
  tasks:
    - getent:
        database: passwd
    - set_fact:
        admin_uid: "{{ ansible_facts.getent_passwd.admin.1 }}"
        admin_gid: "{{ ansible_facts.getent_passwd.admin.2 }}"
    - debug:
        msg: |
          admin_uid: {{ admin_uid }}
          admin_gid: {{ admin_gid }}
gives (abridged)
TASK [debug] ********************************************************
ok: [test_11] =>
  msg: |-
    admin_uid: 1001
    admin_gid: 1001
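To tie this back to the original question, a hedged sketch that pairs each username with its UID/GID from the getent facts and starts one container per user (the container name, the port scheme, and the assumption that every user already exists on the target host are mine, not from the answer):
- name: Start one code-server container per user
  docker_container:
    name: "code-server-{{ item.username }}"
    image: linuxserver/code-server:latest
    restart_policy: always
    published_ports:
      - "{{ 8443 + idx }}:8443"   # hypothetical port scheme: one host port per user
    env:
      PUID: "{{ ansible_facts.getent_passwd[item.username].1 | string }}"
      PGID: "{{ ansible_facts.getent_passwd[item.username].2 | string }}"
  loop: "{{ user }}"
  loop_control:
    index_var: idx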

Restart multiple Docker containers using Ansible not happening

I am trying to restart my Docker containers one by one for a particular image using Ansible, but it doesn't seem to be happening. The yml below ends up exiting the currently running container instead.
---
- name: restart app servers
  hosts: shashank-VM
  connection: local
  become: yes
  become_method: sudo
  tasks:
    - name: Get info on the Container
      shell: docker ps | awk '/{{ item }}/{print $1}'
      register: list_of_containers
      with_items:
        - ubuntu
    - name: Restart Docker Service
      docker_container:
        name: "{{ item }}"
        # image: ubuntu
        state: started
        restart: yes
      with_items: "{{ list_of_containers.results | map(attribute='stdout_lines') | list }}"
As the output below shows, when I run docker ps afterwards there are no running containers.
TASK [Restart Docker Service] ****************************************************************************************************************
/usr/lib/python2.7/dist-packages/requests/__init__.py:80: RequestsDependencyWarning: urllib3 (1.25.9) or chardet (3.0.4) doesn't match a supported version!
RequestsDependencyWarning)
changed: [shashank-VM] => (item=c2310b76b005)
PLAY RECAP ***********************************************************************************************************************************
shashank-VM : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
shashank@shashank-VM:~/ansible$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
What am I doing wrong? Can someone help?
I don't think the docker_container module is designed to do what you want (i.e., restart an existing container). The module is designed to manage containers by name, not by id, and will check that the running container matches the options provided to docker_container.
You're probably better off simply using the docker command to restart your containers:
---
- name: restart app servers
  hosts: shashank-VM
  connection: local
  become: yes
  become_method: sudo
  tasks:
    - name: Get info on the Container
      shell: docker ps | awk '/{{ item }}/{print $1}'
      register: list_of_containers
      with_items:
        - ubuntu
    - name: Restart Docker Service
      command: docker restart {{ item }}
      with_items: "{{ list_of_containers.results | map(attribute='stdout_lines') | list }}"

How to force Ansible to recreate a docker container if mounted files have changed

I'm trying to get Ansible to recreate an existing Docker container if one of the mounted files has changed. I tried to use docker_container to remove the container, if it exists and any file has changed, before I deploy it using docker_stack and a compose file. Here is the code:
- name: Template configuration files to destination
  template:
    ...
  loop:
    ...
  register: template_files_result

- name: Get docker container name
  shell: "docker ps -f 'name=some_name' -q"
  register: container_id

- name: Remove container
  docker_container:
    name: container_id.stdout
    force_kill: yes
    state: absent
  vars:
    files_changed: "{{ template_files_result | json_query('results[*].changed') }}"
  when: container_id.stdout and files_changed is any

- name: Deploy
  docker_stack:
    state: present
    name: stack_name
    compose:
      - "compose.yml"
    with_registry_auth: true
However, the Remove container task never does anything and I can't figure out why.
What am I missing?
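One hedged guess (mine, not confirmed in the thread): name: container_id.stdout is a literal string rather than a templated value, so the module never matches the running container. A sketch with the expression templated:
- name: Remove container
  docker_container:
    # Template the registered value; the bare string "container_id.stdout" is not expanded by Ansible.
    name: "{{ container_id.stdout }}"
    force_kill: yes
    state: absent
  vars:
    files_changed: "{{ template_files_result | json_query('results[*].changed') }}"
  when: container_id.stdout and files_changed is any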

How to remove outdated containers using ansible?

I'm using with_sequence to iteratively create copies of a container on a single node using ansible. The number of containers is determined by a variable set at the time of deploy. This works well for increasing the number of containers to scale up, but when I reduce the number to deploy less containers the old containers are left running. Is there a way to stop the old containers? Prune won't seem to work correctly since the old containers aren't stopped.
One option is to move from Ansible to docker-compose, which knows how to scale up and scale down (and honestly provides a better user experience for managing complex Docker configurations).
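For reference, a minimal sketch of that approach (the service name web and the compose file contents are assumptions):
# docker-compose.yml
# Scale up or down with a single command, e.g.:
#   docker compose up -d --scale web=4
services:
  web:
    image: fedora
    command: sleep inf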
Another idea would be to include one loop for starting containers, and then a second loop that attempts to remove containers up to some maximum number, like this (assuming the number of containers you want to start is in the ansible variable container_count):
---
- hosts: localhost
  gather_facts: false
  vars:
    container_count: 4
    maximum_containers: 20
  tasks:
    - name: Start containers
      docker_container:
        state: present
        name: "service-{{ item }}"
        image: fedora
        command: "sleep inf"
      loop: "{{ range(container_count|int)|list }}"
    - name: Stop containers
      docker_container:
        state: absent
        name: "service-{{ item }}"
      loop: "{{ range(container_count|int, maximum_containers|int)|list }}"
Called with the default values defined in the playbook, it would create 4 containers and then attempt to delete 16 more. This is going to be a little slow, since Ansible doesn't provide any way to prematurely exit a loop, but it will work.
A third option is to replace the "Stop containers" task with a shell script, which might be slightly faster but less "ansible-like":
---
- hosts: localhost
  gather_facts: false
  vars:
    container_count: 4
  tasks:
    - name: Start containers
      docker_container:
        state: present
        name: "service-{{ item }}"
        image: fedora
        command: "sleep inf"
      loop: "{{ range(container_count|int)|list }}"
    - name: Stop containers
      shell: |
        let i={{ container_count }}
        while :; do
          name="service-$i"
          docker rm -f $name || break
          echo "removed $name"
          let i++
        done
        echo "all done."
Same idea, but somewhat faster and it doesn't require you to define a maximum container count.

How to reload config with ansible docker_container module?

I am trying to accomplish docker kill -s HUP <container> in Ansible, but whichever options I try, the container gets restarted (or Ansible attempts to restart it) instead of just reloading the config.
Running the following command allows me to reload the configuration without restarting the container:
docker kill -s HUP <container>
The Ansible docker_container docs suggest the following options:
force_kill    Use the kill command when stopping a running container.
kill_signal   Override default signal used to kill a running container.
Using the kill_signal in isolation did nothing.
Below is an example of what I hoped would work:
- name: Reload haproxy config
  docker_container:
    name: '{{ haproxy_docker_name }}'
    state: stopped
    image: '{{ haproxy_docker_image }}'
    force_kill: True
    kill_signal: HUP
I assumed overriding force_kill and kill_signal would give me the desired behaviour. I have also tried setting state to 'started' and 'present'.
What is the correct way to do this?
I needed to do the same with an HAProxy Docker instance to reload the configuration. The following worked in Ansible 2.11.2:
handlers:
  - name: Restart HAProxy
    docker_container:
      name: haproxy
      state: stopped
      force_kill: True
      kill_signal: HUP
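For context, a hedged sketch of a task that would trigger that handler (the template source and destination paths are assumptions, not from the answer):
- name: Deploy haproxy.cfg
  template:
    src: haproxy.cfg.j2          # hypothetical template
    dest: /etc/haproxy/haproxy.cfg
  notify: Restart HAProxy        # fires the handler above only when the file changes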
I went with a simple shell command, which runs whenever the docker-compose file includes my service:
---
- hosts: pis
  remote_user: pi
  tasks:
    - name: Get latest docker images
      docker_compose:
        project_src: dc
        remove_orphans: true
        pull: true
      register: docker_compose_output
    - name: reload prometheus
      command: docker kill --signal=HUP dc_prometheus_1
      when: '"prometheus" in ansible_facts'
    - name: reload blackbox
      command: docker kill --signal=HUP dc_blackbox_1
      when: '"blackbox" in ansible_facts'
Appendix
I found some examples using GitHub advanced search, but they didn't work for me:
https://github.com/search?q=kill_signal%3A+HUP+docker+extension%3Ayml&type=Code
An example:
- name: Reload HAProxy
  docker_container:
    name: "haproxy"
    state: started
    force_kill: true
    kill_signal: HUP
