I need to retrieve the output of a docker command with Ansible. It's easy when running a single instance, but I'm running the command using with_dict. I'm using something like this:
- name: Running task
  docker_container:
    command: <<here my command>>
    detach: false
    recreate: true
    restart: false
    restart_policy: "no"
  with_dict: "{{ mylist.config.validator_client.accounts }}"
  register: mycontainers
I've tried the following with no success:
- name: display logs
  debug:
    msg: "{{ item.ansible_facts.docker_container.Output }}"
  with_items: "{{ mycontainers.results }}"
Any idea?
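For reference, a minimal sketch of one way to read the per-item output, assuming a docker_container version whose registered result exposes the run output under container.Output when detach is false (older releases put the same data under ansible_facts.docker_container.Output, so the exact key may differ):

- name: display logs
  debug:
    msg: "{{ item.container.Output }}"  # on older module versions: item.ansible_facts.docker_container.Output
  loop: "{{ mycontainers.results }}"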
How do I dynamically restart all my Docker containers from Ansible? I know I can define my containers in a variable and loop through them, but what I want to achieve is this:
Fetch the currently running containers and restart all or some of them, one by one, through some loop.
How can I achieve this using Ansible?
Docker explanation:
Retrieve the name/image for all containers (the -a flag includes stopped ones as well):
docker container ls -a --format '{{.Names}} {{.Image}}'
You could also restrict the output of the docker container command to a specific image name, thanks to the --filter ancestor=image_name option:
docker container ls -a --filter ancestor=alpine --format '{{.Names}} {{.Image}}'
Ansible integration:
First I would define some filters as Ansible variables:
vars:
  - image_v1: '--filter ancestor=my_image:v1'
  - image_v2: '--filter ancestor=my_image:v2'
Then I would execute the docker container command in a dedicated task and save the command output in an Ansible variable:
- name: Get images name
  command: docker container ls -a {{ image_v1 }} {{ image_v2 }} --format "{{ '{{' }}.Names {{ '}}' }} {{ '{{' }}.Image {{ '}}' }}"
  register: docker_images
Finally I would iterate over it and use it in the docker_container Ansible module:
- name: Restart images
  docker_container:
    name: "{{ item.split(' ')[0] }}"
    image: "{{ item.split(' ')[1] }}"
    state: started
    restart: yes
  loop: "{{ docker_images.stdout_lines }}"
The final playbook.yml:
---
- hosts: localhost
  gather_facts: no
  vars:
    - image_v1: '--filter ancestor=my_image:v1'
    - image_v2: '--filter ancestor=my_image:v2'
  tasks:
    - name: Get images name
      command: docker container ls -a {{ image_v1 }} {{ image_v2 }} --format "{{ '{{' }}.Names {{ '}}' }} {{ '{{' }}.Image {{ '}}' }}"
      register: docker_images

    - name: Restart images
      docker_container:
        name: "{{ item.split(' ')[0] }}"
        image: "{{ item.split(' ')[1] }}"
        state: started
        restart: yes
      loop: "{{ docker_images.stdout_lines }}"
This is my Ansible playbook; the tasks are copied from the docker_swarm module documentation, so it should work:
- name: Init a new swarm with default parameters
  docker_swarm:
    state: present
    advertise_addr: "{{ manager_ip }}:2377"
  register: rezult
  when: "ansible_default_ipv4.address == '{{ manager_ip }}'"

- name: Add nodes
  docker_swarm:
    state: join
    advertise_addr: "{{ manager_ip }}"
    join_token: rezult.swarm_facts.JoinTokens.Worker
    remote_addrs: "{{ manager_ip }}:2377"
  when: "ansible_default_ipv4.address != '{{ manager_ip }}'"
It initializes a swarm manager with the "manager_ip" --extra-var, but the "Add nodes" task fails with this error:
fatal: [vm2]: FAILED! => {"changed": false, "msg": "Can not join the Swarm Cluster: 500 Server Error: Internal Server Error (\"invalid join token\")"}
If I put "{{ }}" around rezult.swarm_facts.JoinTokens.Worker after join_token, I get this:
fatal: [vm2]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'swarm_facts'\n\nThe error appears to be in '/home/ansible/docker-ansible/docker.yml': line 47, column 5, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: Add nodes\n ^ here\n"}
If I print rezult.swarm_facts.JoinTokens.Worker with a debug msg, I get the correct token:
ok: [opensuse1] => {
    "msg": "SWMTKN-1-5p7brhxxz4gzu716t78tt5woj7h6aflq0kdwvzwlbbe7ct0ba7-e59bg0t79q67ogd61ydwxc5yq"
}
And if I use that token manually with the docker swarm join command on the server I want to join to the manager, it works. So the variable has the correct value and the connection between the nodes works, but I just can't get join_token to work. I am running Ansible 2.8.5 with Python 2.7.5.
I know I could use the shell module, but I do not want to do that.
Something like this works for me:
---
- name: Init swarm on the first node
  community.general.docker_swarm:
    state: present
    advertise_addr: "{{ ansible_host }}"
  register: result
  when: inventory_hostname == groups['swarm_managers'][0]

- name: Get join-token for manager nodes
  set_fact:
    join_token_manager: "{{ hostvars[groups['swarm_managers'][0]].result.swarm_facts.JoinTokens.Manager }}"

- name: Get join-token for worker nodes
  set_fact:
    join_token_worker: "{{ hostvars[groups['swarm_managers'][0]].result.swarm_facts.JoinTokens.Worker }}"

- name: Join other managers
  community.general.docker_swarm:
    state: join
    join_token: "{{ join_token_manager }}"
    advertise_addr: "{{ ansible_host }}"
    remote_addrs: "{{ hostvars[groups['swarm_managers'][0]].ansible_host }}"
  when:
    - inventory_hostname in groups['swarm_managers']
    - inventory_hostname != groups['swarm_managers'][0]

- name: Join workers
  community.general.docker_swarm:
    state: join
    join_token: "{{ join_token_worker }}"
    advertise_addr: "{{ ansible_host }}"
    remote_addrs: "{{ hostvars[groups['swarm_managers'][0]].ansible_host }}"
  when:
    - inventory_hostname not in groups['swarm_managers']
The swarm_managers group contains the managers; all other hosts in this inventory are workers.
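For context, a minimal inventory sketch matching that layout (host names and addresses are placeholders):

all:
  children:
    swarm_managers:
      hosts:
        manager1:
          ansible_host: 10.0.0.11
        manager2:
          ansible_host: 10.0.0.12
  hosts:
    worker1:
      ansible_host: 10.0.0.21
    worker2:
      ansible_host: 10.0.0.22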
I think achempion was correct: the issue was that the OP's variable rezult.swarm_facts.JoinTokens.Worker was not being evaluated, but rather passed along as a literal string.
Replace rezult.swarm_facts.JoinTokens.Worker with "{{ rezult.swarm_facts.JoinTokens.Worker }}" and it should work.
I realise the OP has probably already moved on, but I spent ages trying to figure out a very similar problem and this appeared to resolve it for me.
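Applied to the task from the question, the change looks like this (a sketch only; note that in a multi-host play rezult is registered on the manager host, so the other hosts may need to read it via hostvars, as in the answer above):

- name: Add nodes
  docker_swarm:
    state: join
    advertise_addr: "{{ manager_ip }}"
    join_token: "{{ rezult.swarm_facts.JoinTokens.Worker }}"  # quoted Jinja2, so the token value is evaluated
    remote_addrs: "{{ manager_ip }}:2377"
  when: ansible_default_ipv4.address != manager_ip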
As I don't have enough reputation to comment: here is another mistake in your playbook's task. You use:
- name: Add nodes
  docker_swarm:
    state: join
    advertise_addr: "{{ manager_ip }}"
    join_token: rezult.swarm_facts.JoinTokens.Worker
    remote_addrs: "{{ manager_ip }}:2377"
  when: "ansible_default_ipv4.address != '{{ manager_ip }}'"
...which assigns the wrong IP address to advertise_addr. The nodes will still be able to join, but their overlay network configuration will be broken (no node can ping the others, resulting in constant network failures). I would suggest using the SSH connection IP instead:
- name: Add nodes
  docker_swarm:
    state: join
    advertise_addr: "{{ ansible_ssh_host }}"
    join_token: rezult.swarm_facts.JoinTokens.Worker
    remote_addrs: "{{ manager_ip }}:2377"
  when: "ansible_default_ipv4.address != '{{ manager_ip }}'"
Also, take a look at the documentation examples:
- name: Add nodes
  community.docker.docker_swarm:
    state: join
    advertise_addr: 192.168.1.2
    join_token: SWMTKN-1--xxxxx
    remote_addrs: [ '192.168.1.1:2377' ]
...which also use different IP addresses.
I have to admit: I fell for it as well, and it took several hours to solve. I hope someone else sees this answer before making the same mistake by just copying your snippet.
I think the issue here comes from rezult.swarm_facts.JoinTokens.Worker. In the debug output it appears as "msg": "SWMTKN-1-5p7...", but the join_token: option expects just the plain token, without additional wrappers such as "msg":.
I am trying to execute the following Docker command with Ansible:
docker run --name soadb_test1 --network=soa_net --ip 172.16.1.10 -d -v $TEST1/SOADB-Volume/u01:/u01/ -v $TEST1/SOADB-Volume/u02:/u02/ -v $TEST1/SOADB-Volume/u03:/u03/ -v $TEST1/SOADB-Volume/u04:/u04/ -v $TEST1/SOADB-Volume/ORCL:/ORCL/ --env-file $ENV_HOME/db.env.list database/enterprise:12.2.0.1
This is my Ansible script:
---
- name: Create DB container
  docker_container:
    name: "{{ name }}"
    image: "{{ image }}"
    env_file: "{{ env_file }}"
    detach: yes
    volumes:
      - "{{ src_vol }}:{{ dest_vol }}"
      - "{{ src_vol_2 }}:{{ dest_vol_2 }}"
      - "{{ src_vol_3 }}:{{ dest_vol_3 }}"
      - "{{ src_vol_4 }}:{{ dest_vol_4 }}"
      - "{{ src_vol_5 }}:{{ dest_vol_5 }}"
    networks:
    - name: soa_net
      ipv4_address: "{{ ip }}"
    vars_files:
      - vars.yml
When I run it, I get the following error:
TASK [install_docker_DB : Create DB container] *******************************************************************************************************************************************************************
fatal: [soa_poc]: FAILED! => {"changed": false, "msg": "Unsupported parameters for (docker_container) module: vars_files Supported parameters include: api_version, auto_remove, blkio_weight, cacert_path, cap_drop, capabilities, cert_path, cleanup, command, cpu_period, cpu_quota, cpu_shares, cpuset_cpus, cpuset_mems, debug, detach, devices, dns_opts, dns_search_domains, dns_servers, docker_host, domainname, entrypoint, env, env_file, etc_hosts, exposed_ports, force_kill, groups, hostname, ignore_image, image, init, interactive, ipc_mode, keep_volumes, kernel_memory, key_path, kill_signal, labels, links, log_driver, log_options, mac_address, memory, memory_reservation, memory_swap, memory_swappiness, name, network_mode, networks, oom_killer, oom_score_adj, output_logs, paused, pid_mode, privileged, published_ports, pull, purge_networks, read_only, recreate, restart, restart_policy, restart_retries, security_opts, shm_size, ssl_version, state, stop_signal, stop_timeout, sysctls, timeout, tls, tls_hostname, tls_verify, tmpfs, trust_image_content, tty, ulimits, user, userns_mode, uts, volume_driver, volumes, volumes_from, working_dir"}
Am I declaring the volumes the wrong way?
It looks like the indentation level of your vars_files entry is wrong; please move it somewhere else:
---
- name: Create DB container
  docker_container:
    name: "{{ name }}"
    image: "{{ image }}"
    env_file: "{{ env_file }}"
    detach: yes
    volumes:
      - "{{ src_vol }}:{{ dest_vol }}"
      - "{{ src_vol_2 }}:{{ dest_vol_2 }}"
      - "{{ src_vol_3 }}:{{ dest_vol_3 }}"
      - "{{ src_vol_4 }}:{{ dest_vol_4 }}"
      - "{{ src_vol_5 }}:{{ dest_vol_5 }}"
    networks:
      - name: soa_net
        ipv4_address: "{{ ip }}"
The indentation for the first network entry was also wrong.
Depending on whether the above comes from a playbook file or from a role, the location of vars_files may differ. If this is a playbook, then vars_files should be at the same indentation level as tasks:
---
- hosts: all
  vars_files:
    - vars.yml
  tasks:
    - name: Create DB container
      docker_container: ...
This has nothing to do with the volumes...
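If the task lives in a role instead (the error output above suggests a role called install_docker_DB), a common alternative is to drop vars_files entirely and put the values in the role's vars/main.yml, which Ansible loads automatically. A sketch, with values taken from the docker run command in the question:

# roles/install_docker_DB/vars/main.yml -- loaded automatically with the role
name: soadb_test1
image: database/enterprise:12.2.0.1
# ...plus env_file, ip and the src_vol*/dest_vol* variables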
I am trying to build a Docker container which should include startup scripts in the container's /etc/my_init.d directory, via Ansible. I am having difficulty finding any documentation on how to do this. Here is the relevant portion of my YAML file:
- name: Create container
  docker:
    name: myserver
    image: "{{ docker_repo }}/myserver:{{ server.version }}"
    state: started
    restart_policy: always
    docker_api_version: 1.18
    registry: "{{ docker_repo }}"
    username: "{{ registry_user }}"
    password: "{{ registry_password }}"
    links:
      - "mywebservices"
    ports:
      - "8000:8000"
      - "9899:9899"
    volumes:
      - "{{ myserver_home_dir }}/logs:/var/log/my_server"
    env:
      MY_ENVIRONMENT: "{{ my_environment }}"
  when: myserver_action == "create"
        or (myserver_action == "diff-create" and myserver.changed)
        or myserver_action == "update"
What should I add here to tell Ansible to put my files into the container's /etc/my_init.d during the build?
First of all, you don't build containers (you start them); you build images.
Second, the docker module is deprecated; use docker_image to build images.
You should copy your files into a build directory (with the copy or synchronize modules), for example:
/tmp/build
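A sketch of that copy step (the source path is just a placeholder):

- name: Copy the startup scripts into the build directory
  copy:
    src: files/my_init.d/   # placeholder: wherever the scripts live on the control node
    dest: /tmp/build/my_init.d/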
Then create a Dockerfile that takes them from the build directory and adds them to your image.
After that, call docker_image:
docker_image:
  path: /tmp/build
  name: myimage
Finally, start your container:
docker_container:
  image: myimage
  name: mycontainer
I'm unsure whether it's relevant, as I don't know what your startup Ansible content is doing, but it's probably worth looking at the Ansible Container project.
https://github.com/ansible/ansible-container
You can build your container images using Ansible roles instead of a Dockerfile, orchestrate them locally, and deploy them to production Kubernetes or Red Hat OpenShift.