Why is ansible with a local docker connection so slow?

I'm using ansible to execute some tasks on a local docker container, as such:
- hosts: name-of-docker-container
  connection: docker
  tasks:
    - name: setting up ssh_config
      template:
        src: ssh_config.j2
        dest: /home/user/.ssh/ssh_config
        mode: "0600"
        owner: user
        group: user
Something as simple as this takes 2-5 seconds to run. Shouldn't it take less than a second? How can I make Ansible run faster? I've tried pipelining, but it doesn't seem to help:
ansible-playbook -v -e 'pipelining=True' -i inventories/staging/hosts.yml staging-deploy.yml
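Two things are worth checking here. Fact gathering runs once per play and can easily account for a couple of seconds on its own, and passing pipelining=True with -e has no effect, because pipelining is an Ansible configuration setting (ansible.cfg or the ANSIBLE_PIPELINING environment variable), not a playbook variable. A minimal sketch of the same play with fact gathering skipped, assuming the tasks don't actually need facts:

- hosts: name-of-docker-container
  connection: docker
  gather_facts: false   # skip the per-play fact-gathering step
  tasks:
    - name: setting up ssh_config
      template:
        src: ssh_config.j2
        dest: /home/user/.ssh/ssh_config
        mode: "0600"
        owner: user
        group: user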

Related

How to remove outdated containers using ansible?

I'm using with_sequence to iteratively create copies of a container on a single node using ansible. The number of containers is determined by a variable set at deploy time. This works well for increasing the number of containers to scale up, but when I reduce the number to deploy fewer containers, the old containers are left running. Is there a way to stop the old containers? Prune doesn't seem to work correctly, since the old containers aren't stopped.
One option is to move from Ansible to docker-compose, which knows how to scale up and scale down (and honestly provides a better user experience for managing complex Docker configurations).
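If you do go the compose route, scaling down to a smaller count removes the surplus containers for you. A minimal sketch of driving that from Ansible with the command module (project path, service name, and count are illustrative, not taken from the question):

- name: Scale the service up or down with docker-compose
  command: "docker-compose up -d --scale service={{ container_count }}"
  args:
    chdir: /path/to/compose/project   # directory containing docker-compose.yml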
Another idea would be to include one loop for starting containers, and then a second loop that attempts to remove containers up to some maximum number, like this (assuming the number of containers you want to start is in the ansible variable container_count):
---
- hosts: localhost
  gather_facts: false
  vars:
    container_count: 4
    maximum_containers: 20
  tasks:
    - name: Start containers
      docker_container:
        state: present
        name: "service-{{ item }}"
        image: fedora
        command: "sleep inf"
      loop: "{{ range(container_count|int)|list }}"

    - name: Stop containers
      docker_container:
        state: absent
        name: "service-{{ item }}"
      loop: "{{ range(container_count|int, maximum_containers|int)|list }}"
Called with the default values defined in the playbook, it would create 4 containers and then attempt to delete 16 more. This is going to be a little slow, since Ansible doesn't provide any way to prematurely exit a loop, but it will work.
A third option is to replace the "Stop containers" task with a shell script, which might be slightly faster but less "ansible-like":
---
- hosts: localhost
  gather_facts: false
  vars:
    container_count: 4
  tasks:
    - name: Start containers
      docker_container:
        state: present
        name: "service-{{ item }}"
        image: fedora
        command: "sleep inf"
      loop: "{{ range(container_count|int)|list }}"

    - name: Stop containers
      shell: |
        let i={{ container_count }}
        while :; do
          name="service-$i"
          docker rm -f $name || break
          echo "removed $name"
          let i++
        done
        echo "all done."
      args:
        executable: /bin/bash   # 'let' is a bash builtin, so don't rely on /bin/sh
Same idea, but somewhat faster and it doesn't require you to define a maximum container count.

configure docker variables with ansible

I have a Docker image for an FTP server in a repository. This image will be used on several machines, and I need to deploy the container and change the PORT variable depending on the destination machine.
This is my image (I've deleted the lines for the proftpd installation because they are not relevant to this case):
FROM alpine:3.5
ARG vcs_ref="Unknown"
ARG build_date="Unknown"
ARG build_number="1"
LABEL org.label-schema.vcs-ref=$vcs_ref \
      org.label-schema.build-date=$build_date \
      org.label-schema.version="alpine-r${build_number}"
ENV PORT=10000
COPY assets/port.conf /usr/local/etc/ports.conf
COPY replace.sh /replace.sh
#It is for a proFTPD Server
CMD ["/replace.sh"]
My port.conf file (I've also deleted information not relevant to this case):
# This is a basic ProFTPD configuration file (rename it to
# 'proftpd.conf' for actual use. It establishes a single server
# and a single anonymous login. It assumes that you have a user/group
# "nobody" and "ftp" for normal operation and anon.
ServerName "ProFTPD Default Installation"
ServerType standalone
DefaultServer on
# Port 21 is the standard FTP port.
Port {{PORT}}
.
.
.
And the replace.sh script is:
#!/bin/bash
set -e
[ -z "${PORT}" ] && echo >&2 "PORT is not set" && exit 1
sed -i "s#{{PORT}}#$PORT#g" /usr/local/etc/ports.conf
/usr/local/sbin/proftpd -n -c /usr/local/etc/proftpd.conf
... Is there any way to avoid using replace.sh and have Ansible replace the PORT variable in the /usr/local/etc/proftpd.conf file inside the container?
My current Ansible task for the container is:
- name: (ftpd) Run container
  docker_container:
    name: "myimagename"
    image: "myimage"
    state: present
    pull: true
    restart_policy: always
    env:
      "PORT": "{{ myportUsingAnsible }}"
    networks:
      - name: "{{ network }}"
To sum up, all I need is to use Ansible to replace the configuration variable instead of using a shell script that replaces variables before running the services. Is this possible?
Many thanks
You are using the docker_container module, which needs a pre-built image. The file port.conf is baked inside the image. What you need to do is set a static port inside this file: inside the container you always use the static port 21, and depending on the machine, you map this port onto a different host port using Ansible.
Inside port.conf always use port 21
# Port 21 is the standard FTP port.
Port 21
The ansible task would look like:
- name: (ftpd) Run container
  docker_container:
    name: "myimagename"
    image: "myimage"
    state: present
    pull: true
    restart_policy: always
    networks:
      - name: "{{ network }}"
    ports:
      - "{{ myportUsingAnsible }}:21"
Now when you connect to the container, you need to use <hostname>:{{ myportUsingAnsible }}. This is the standard Docker way of doing things: the port inside the image is static, and you change the port mappings based on the available ports that you have.
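To vary the published port per destination machine, the usual pattern is to set the variable in inventory host_vars, so each host gets its own mapping (file names and port values below are illustrative):

# host_vars/ftp-host-a.yml
myportUsingAnsible: 2121

# host_vars/ftp-host-b.yml
myportUsingAnsible: 2221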

How to reload config with ansible docker_container module?

I am trying to accomplish docker kill -s HUP <container> in Ansible, but it looks like the options I try always restart the container (or attempt to) instead of reloading the config.
Running the following command allows me to reload the configuration without restarting the container:
docker kill -s HUP <container>
The Ansible docker_container docs suggest the following options:
force_kill: Use the kill command when stopping a running container.
kill_signal: Override default signal used to kill a running container.
Using the kill_signal in isolation did nothing.
Below is an example of what I hoped would work:
- name: Reload haproxy config
  docker_container:
    name: '{{ haproxy_docker_name }}'
    state: stopped
    image: '{{ haproxy_docker_image }}'
    force_kill: True
    kill_signal: HUP
I assumed overriding force_kill and kill_signal would give me the desired behaviour. I have also tried setting state to 'started' and 'present'.
What is the correct way to do this?
I needed to do the same with an HAProxy Docker instance to reload the configuration. The following worked for me in Ansible 2.11.2:
handlers:
  - name: Restart HAProxy
    docker_container:
      name: haproxy
      state: stopped
      force_kill: true
      kill_signal: HUP
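For the handler to fire, a task has to notify it, typically the task that deploys the configuration the reload is meant to pick up. A minimal sketch with an illustrative template task (the haproxy.cfg path would be whatever the container mounts):

tasks:
  - name: Deploy haproxy configuration
    template:
      src: haproxy.cfg.j2
      dest: /etc/haproxy/haproxy.cfg   # host path mounted into the container
    notify: Restart HAProxy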
I went with a simple shell command, which runs whenever the docker-compose file has my service:
---
- hosts: pis
  remote_user: pi
  tasks:
    - name: Get latest docker images
      docker_compose:
        project_src: dc
        remove_orphans: true
        pull: true
      register: docker_compose_output

    - name: reload prometheus
      command: docker kill --signal=HUP dc_prometheus_1
      when: '"prometheus" in ansible_facts'

    - name: reload blackbox
      command: docker kill --signal=HUP dc_blackbox_1
      when: '"blackbox" in ansible_facts'
Appendix
I found some examples using GitHub advanced search, but they didn't work for me:
https://github.com/search?q=kill_signal%3A+HUP+docker+extension%3Ayml&type=Code
An example:
- name: Reload HAProxy
  docker_container:
    name: "haproxy"
    state: started
    force_kill: true
    kill_signal: HUP

Ansible - playbook dynamic verbosity

I want to build a Docker image from a Dockerfile. I can do this using bash like this:
[root@srv01 ~]# docker build -t appname/tomcat:someTag /root/Documents/myDockerfiles/tomcat
The good thing about building the image with bash is that it prints to stdout what it executes, step by step:
Step 1 : FROM tomcat:8.0.32-jre8
8.0.32-jre8: Pulling from library/tomcat
fdd5d7827f33: Already exists
...
When using Ansible in the following fashion from bash:
[root@localhost ansiblescripts]# ansible-playbook -vvvvv build-docker-image.yml
Where the file build-docker-image.yml contains this content:
- name: "my build-docker-image.yml playbook"
hosts: myHost
tasks:
- name: "simple ping"
ping:
- name: "build the docker image"
become: yes
become_method: root
become_method: su
command: /bin/docker build -t something/tomcat:ver1 /home/docker/tomcat
#async: 1
#poll: 0
It waits for the whole build command to finish and then prints all the stdout as verbose output together in one piece.
Enabling async: 1 and poll: 0 doesn't solve my problem either, since then it doesn't print the stdout at all.

docker with ansible wait for database

I'm trying to deploy Docker containers with Ansible. I have one database container and another container with my web app, and I try to link the two containers. The problem is that the database container doesn't have time to configure itself before the web container starts. My Ansible playbook looks something like:
...
- name: run mysql in docker container
  docker:
    image: "mysql:5.5"
    name: database
    env: "MYSQL_ROOT_PASSWORD=password"
    state: running

- name: run application containers
  docker:
    name: "application"
    image: "myapp"
    ports:
      - "8080:8080"
    links:
      - "database:db"
    state: running
How can I determine whether the database has started? I tried the wait_for module, but that didn't work. I don't want to set a fixed timeout; that's not a good option for me.
wait_for does not work for the MySQL Docker container because it only checks that the port is connectable (which is true straight away for the Docker container). However, wait_for does not check that the service inside the container listens on the port and sends responses to the client.
This is how I am waiting in the ansible playbook for the MySQL service becoming fully operational inside the Docker container:
- name: Start MySQL container
  docker:
    name: some-name
    image: mysql:latest
    state: started
    ports:
      - "8306:3306" # it's important to expose the port for waiting requests
    env:
      MYSQL_ROOT_PASSWORD: "{{ mysql_root_password }}"

- template: mode="a+rx,o=rwx" src=telnet.sh.j2 dest=/home/ubuntu/telnet.sh

# wait while MySQL is starting
- action: shell /home/ubuntu/telnet.sh
  register: result
  until: result.stdout.find("mysql_native_password") != -1
  retries: 10
  delay: 3
And the telnet.sh.j2 is
#!/bin/bash -e
telnet localhost 8306 || true
To avoid the shell script (and because I don't normally have telnet installed)...
- name: Wait for database to be available
  shell: docker run --rm --link mysql:mysql mysql sh -c 'mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p{{mysql_password}} || true'
  register: result
  until: result.stderr.find("Can't connect to MySQL") == -1
  retries: 10
  delay: 3
As etrubenok said:
wait_for does not work for the MySQL Docker container because it only checks that the port is connectable (which is true straight away for the Docker container). However, wait_for does not check that the service inside the container listens on the port and sends responses to the client.
Using Andy Shinn's suggestion on FreshPow's answer, you can wait without needing a shell script or telnet:
- name: Wait for mariadb
  command: >
    docker exec {{ container|quote }}
    mysqladmin ping -u{{ superuser|quote }} -p{{ superuser_password|quote }}
  register: result
  until: not result.rc # or result.rc == 0 if you prefer
  retries: 20
  delay: 3
This runs mysqladmin ping ... until it succeeds (return code 0). Usually superuser is root. I tested using podman instead of docker but I believe the command is the same regardless. |quote does shell escaping, which according to the Ansible docs should also be done when using command:
This works for me just fine:
- name: get mariadb IP address
  command: "docker inspect --format '{''{ .NetworkSettings.IPAddress }''}' mariadb-container"
  register: mariadb_ip_address

- name: wait for mariadb to become ready
  wait_for:
    host: "{{ mariadb_ip_address.stdout }}"
    port: 3306
    state: started
    delay: 5
    connect_timeout: 15
    timeout: 30
Use the wait_for module. I'm no expert on MySQL, but I assume there is some port, file, or log message you can check to find out whether the DB is up or not.
Here are examples of wait_for copied from the module documentation.
# wait 300 seconds for port 8000 to become open on the host, don't start checking for 10 seconds
- wait_for: port=8000 delay=10
# wait 300 seconds for port 8000 of any IP to close active connections, don't start checking for 10 seconds
- wait_for: host=0.0.0.0 port=8000 delay=10 state=drained
# wait 300 seconds for port 8000 of any IP to close active connections, ignoring connections for specified hosts
- wait_for: host=0.0.0.0 port=8000 state=drained exclude_hosts=10.2.1.2,10.2.1.3
# wait until the file /tmp/foo is present before continuing
- wait_for: path=/tmp/foo
# wait until the string "completed" is in the file /tmp/foo before continuing
- wait_for: path=/tmp/foo search_regex=completed
# wait until the lock file is removed
- wait_for: path=/var/lock/file.lock state=absent
# wait until the process is finished and pid was destroyed
- wait_for: path=/proc/3466/status state=absent
# wait 300 seconds for port 22 to become open and contain "OpenSSH", don't assume the inventory_hostname is resolvable
# and don't start checking for 10 seconds
- local_action: wait_for port=22 host="{{ ansible_ssh_host | default(inventory_hostname) }}" search_regex=OpenSSH delay=10
I was able to use wait_for like this:
- name: "MySQL - Check mysql - Wait for mysql to be up"
wait_for:
host: 127.0.0.1
port: 3306
search_regex: "(mysql_native_password|caching_sha2_password)"
This way it will wait for the port to be up and for the service to send some data.
The drawback is that the output may change with MySQL versions and configurations; the example shows the strings for MySQL 5.5 and 8.0. Adjust for your use case.
An alternative, avoiding running wait_for, command or shell, is to retry some MySQL check several times until it succeeds:
- name: "MySQL - Check mysql - if it responds"
mysql_info:
login_user: root
login_password: "{{ mysql_root_password }}"
filter:
- version
register: mysql_result
until: mysql_result is not failed
retries: 5
delay: 10
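One caveat: mysql_info, like the other mysql_* modules, needs a MySQL Python driver such as PyMySQL on the machine the task runs on, so you may need something like this first (a sketch, not part of the original answer):

- name: Ensure PyMySQL is available for the mysql_* modules
  pip:
    name: PyMySQL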
