I am trying to restart my Docker containers one by one for a particular image using Ansible, but it doesn't seem to happen. Below is my playbook; all it actually does is exit the currently running container.
---
- name: restart app servers
  hosts: shashank-VM
  connection: local
  become: yes
  become_method: sudo
  tasks:
    - name: Get info on the Container
      shell: docker ps | awk '/{{ item }}/{print $1}'
      register: list_of_containers
      with_items:
        - ubuntu

    - name: Restart Docker Service
      docker_container:
        name: "{{ item }}"
        # image: ubuntu
        state: started
        restart: yes
      with_items: "{{ list_of_containers.results | map(attribute='stdout_lines') | list }}"
As you can see in the output below, when I run docker ps afterwards there are no running containers.
TASK [Restart Docker Service] ****************************************************************************************************************
/usr/lib/python2.7/dist-packages/requests/__init__.py:80: RequestsDependencyWarning: urllib3 (1.25.9) or chardet (3.0.4) doesn't match a supported version!
RequestsDependencyWarning)
changed: [shashank-VM] => (item=c2310b76b005)
PLAY RECAP ***********************************************************************************************************************************
shashank-VM : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
shashank#shashank-VM:~/ansible$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
What am I doing wrong? Can someone help?
I don't think the docker_container module is designed to do what you want (i.e., restart an existing container). The module is designed to manage containers by name, not by id, and will check that the running container matches the options provided to docker_container.
You're probably better off simply using the docker command to restart your containers:
---
- name: restart app servers
  hosts: shashank-VM
  connection: local
  become: yes
  become_method: sudo
  tasks:
    - name: Get info on the Container
      shell: docker ps | awk '/{{ item }}/{print $1}'
      register: list_of_containers
      with_items:
        - ubuntu

    - name: Restart Docker Service
      command: docker restart {{ item }}
      with_items: "{{ list_of_containers.results | map(attribute='stdout_lines') | list }}"
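As a side note (my addition, not part of the original answer): with_items flattens the list-of-lists produced by map(attribute='stdout_lines') one level, which is why the loop above works. The newer loop keyword does not auto-flatten, so with it you would need something like this sketch:

- name: Restart Docker Service
  command: docker restart {{ item }}
  loop: "{{ list_of_containers.results | map(attribute='stdout_lines') | flatten }}"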
I'm trying to get Ansible to recreate an existing Docker container in case one of the mounted files has changed. I tried to use docker_container to remove the container, if it exists and any file has changed, before I deploy it using docker_stack and a compose file. Here is the code:
- name: Template configuration files to destination
  template:
    ...
  loop:
    ...
  register: template_files_result

- name: Get docker container name
  shell: "docker ps -f 'name=some_name' -q"
  register: container_id

- name: Remove container
  docker_container:
    name: container_id.stdout
    force_kill: yes
    state: absent
  vars:
    files_changed: "{{ template_files_result | json_query('results[*].changed') }}"
  when: container_id.stdout and files_changed is any

- name: Deploy
  docker_stack:
    state: present
    name: stack_name
    compose:
      - "compose.yml"
    with_registry_auth: true
However, the Remove container task never does anything and I can't figure out why.
What am I missing?
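One likely culprit, offered as a guess rather than a confirmed diagnosis: name: container_id.stdout passes the literal string container_id.stdout rather than the registered value, so the module looks for a container by that name, finds nothing, and reports no change. A minimal sketch of the corrected task, with the changed-files check rewritten using plain filters instead of json_query:

- name: Remove container
  docker_container:
    # template the registered value; an unquoted container_id.stdout is just a string
    name: "{{ container_id.stdout }}"
    force_kill: yes
    state: absent
  when:
    - container_id.stdout | length > 0
    # true if any templated file reported a change
    - template_files_result.results | map(attribute='changed') | select | list | length > 0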
I'm using with_sequence to iteratively create copies of a container on a single node using Ansible. The number of containers is determined by a variable set at deploy time. This works well for increasing the number of containers to scale up, but when I reduce the number to deploy fewer containers, the old containers are left running. Is there a way to stop the old containers? Prune doesn't seem to work correctly, since the old containers aren't stopped.
One option is to move from Ansible to docker-compose, which knows how to scale up and scale down (and honestly provides a better user experience for managing complex Docker configurations).
Another idea would be to include one loop for starting containers, and then a second loop that attempts to remove containers up to some maximum number, like this (assuming the number of containers you want to start is in the ansible variable container_count):
---
- hosts: localhost
  gather_facts: false
  vars:
    container_count: 4
    maximum_containers: 20
  tasks:
    - name: Start containers
      docker_container:
        state: present
        name: "service-{{ item }}"
        image: fedora
        command: "sleep inf"
      loop: "{{ range(container_count|int)|list }}"

    - name: Stop containers
      docker_container:
        state: absent
        name: "service-{{ item }}"
      loop: "{{ range(container_count|int, maximum_containers|int)|list }}"
Called with the default values defined in the playbook, it would create 4 containers and then attempt to delete 16 more. This is going to be a little slow, since Ansible doesn't provide any way to prematurely exit a loop, but it will work.
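For example, assuming the playbook is saved as scale.yml (a hypothetical file name), scaling a previous deployment of 4 containers down to 2 would look like this, removing service-2 through service-19:

ansible-playbook scale.yml -e container_count=2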
A third option is to replace the "Stop containers" task with a shell script, which might be slightly faster but less "ansible-like":
---
- hosts: localhost
  gather_facts: false
  vars:
    container_count: 4
  tasks:
    - name: Start containers
      docker_container:
        state: present
        name: "service-{{ item }}"
        image: fedora
        command: "sleep inf"
      loop: "{{ range(container_count|int)|list }}"

    - name: Stop containers
      # "let" is a bash builtin, so run the script under bash rather than
      # the default /bin/sh (which is dash on many systems)
      shell: |
        let i={{ container_count }}
        while :; do
          name="service-$i"
          docker rm -f $name || break
          echo "removed $name"
          let i++
        done
        echo "all done."
      args:
        executable: /bin/bash
Same idea, but somewhat faster and it doesn't require you to define a maximum container count.
I'm using ansible to execute some tasks on a local docker container, as such:
- hosts: name-of-docker-container
  connection: docker
  tasks:
    - name: setting up ssh_config
      template:
        src: ssh_config.j2
        dest: /home/user/.ssh/ssh_config
        mode: "0600"
        owner: user
        group: user
Something as simple as this takes 2-5 seconds to run. Shouldn't it take less than a second? How can I make Ansible run faster? I've tried pipelining, but it doesn't seem to help:
ansible-playbook -v -e 'pipelining=True' -i inventories/staging/hosts.yml staging-deploy.yml
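A note on that invocation, plus a minimal sketch assuming the slowdown comes from fact gathering and per-task connection overhead: -e sets a playbook variable, not a configuration option, so pipelining=True on the command line has no effect. Pipelining is configured in ansible.cfg (or via the ANSIBLE_PIPELINING environment variable), and fact gathering can be turned off when you don't need it:

# ansible.cfg
[defaults]
# skip automatic fact gathering unless a play explicitly asks for it
gathering = explicit

[ssh_connection]
# run modules over a single connection; note this applies to SSH-based
# connections and may not speed up the docker connection plugin
pipelining = True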
I'm trying to deploy Docker containers with Ansible. I have one database container, my web app lives in another container, and I'm trying to link the two. The problem is that the database container doesn't have time to configure itself before the web container is already started. My Ansible playbook looks something like this:
...
- name: run mysql in docker container
  docker:
    image: "mysql:5.5"
    name: database
    env: "MYSQL_ROOT_PASSWORD=password"
    state: running

- name: run application containers
  docker:
    name: "application"
    image: "myapp"
    ports:
      - "8080:8080"
    links:
      - "database:db"
    state: running
How can I determine whether the database has started? I tried the wait_for module, but that didn't work. I don't want to set a fixed timeout; that's not a good option for me.
wait_for does not work for the MySQL Docker container because it only checks that the port is connectable (which is true straight away for the Docker container). However, wait_for does not check that the service inside the container is listening on the port and sending responses to the client.
This is how I am waiting in the ansible playbook for the MySQL service becoming fully operational inside the Docker container:
- name: Start MySQL container
  docker:
    name: some-name
    image: mysql:latest
    state: started
    ports:
      - "8306:3306" # it's important to expose the port for waiting requests
    env:
      MYSQL_ROOT_PASSWORD: "{{ mysql_root_password }}"

- template: mode="a+rx,o=rwx" src=telnet.sh.j2 dest=/home/ubuntu/telnet.sh

# wait while MySQL is starting
- action: shell /home/ubuntu/telnet.sh
  register: result
  until: result.stdout.find("mysql_native_password") != -1
  retries: 10
  delay: 3
And the telnet.sh.j2 is
#!/bin/bash -e
telnet localhost 8306 || true
To avoid the shell script, and since I don't normally have telnet installed:
- name: Wait for database to be available
  shell: docker run --rm --link mysql:mysql mysql sh -c 'mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p{{ mysql_password }} || true'
  register: result
  until: result.stderr.find("Can't connect to MySQL") == -1
  retries: 10
  delay: 3
As etrubenok said:
wait_for does not work for the MySQL Docker container because it only checks that the port is connectable (which is true straight away for the Docker container). However, wait_for does not check that the service inside the container is listening on the port and sending responses to the client.
Using Andy Shinn's suggestion of FreshPow's answer, you can wait without needing a shell script or telnet:
- name: Wait for mariadb
  command: >
    docker exec {{ container|quote }}
    mysqladmin ping -u{{ superuser|quote }} -p{{ superuser_password|quote }}
  register: result
  until: not result.rc # or result.rc == 0 if you prefer
  retries: 20
  delay: 3
This runs mysqladmin ping ... until it succeeds (return code 0). Usually superuser is root. I tested using podman instead of docker, but I believe the command is the same regardless. |quote does shell escaping, which according to the Ansible docs should also be done when using command.
This works for me just fine:
- name: get mariadb IP address
  command: "docker inspect --format '{''{ .NetworkSettings.IPAddress }''}' mariadb-container"
  register: mariadb_ip_address

- name: wait for mariadb to become ready
  wait_for:
    host: "{{ mariadb_ip_address.stdout }}"
    port: 3306
    state: started
    delay: 5
    connect_timeout: 15
    timeout: 30
Use the wait_for module. I'm no expert on MySQL, but I assume there is some port, file, or log message you can check to find out whether the DB is up or not.
Here are some examples of wait_for, copied from the module documentation.
# wait 300 seconds for port 8000 to become open on the host, don't start checking for 10 seconds
- wait_for: port=8000 delay=10
# wait 300 seconds for port 8000 of any IP to close active connections, don't start checking for 10 seconds
- wait_for: host=0.0.0.0 port=8000 delay=10 state=drained
# wait 300 seconds for port 8000 of any IP to close active connections, ignoring connections for specified hosts
- wait_for: host=0.0.0.0 port=8000 state=drained exclude_hosts=10.2.1.2,10.2.1.3
# wait until the file /tmp/foo is present before continuing
- wait_for: path=/tmp/foo
# wait until the string "completed" is in the file /tmp/foo before continuing
- wait_for: path=/tmp/foo search_regex=completed
# wait until the lock file is removed
- wait_for: path=/var/lock/file.lock state=absent
# wait until the process is finished and pid was destroyed
- wait_for: path=/proc/3466/status state=absent
# wait 300 seconds for port 22 to become open and contain "OpenSSH", don't assume the inventory_hostname is resolvable
# and don't start checking for 10 seconds
- local_action: wait_for port=22 host="{{ ansible_ssh_host | default(inventory_hostname) }}" search_regex=OpenSSH delay=10
I was able to use wait_for like this:
- name: "MySQL - Check mysql - Wait for mysql to be up"
wait_for:
host: 127.0.0.1
port: 3306
search_regex: "(mysql_native_password|caching_sha2_password)"
This way it will wait for the port to be up and for the service to send some data. The drawback is that the output may change across MySQL versions and configurations. The example shows the strings for MySQL 5.5 and 8.0; adjust for your use case.
An alternative that avoids running wait_for, command or shell is to retry a MySQL module task several times until it succeeds:
- name: "MySQL - Check mysql - if it responds"
mysql_info:
login_user: root
login_password: "{{ mysql_root_password }}"
filter:
- version
register: mysql_result
until: mysql_result is not failed
retries: 5
delay: 10
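One caveat worth adding (my note, not from the original answer): mysql_info needs a Python MySQL driver such as PyMySQL on the host the task runs on. A minimal sketch installing it first, assuming pip is available:

- name: Ensure PyMySQL is available for the mysql_info module
  pip:
    name: PyMySQL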