Docker_swarm module - join_token parameter for ansible not working - docker

This is my Ansible playbook; the tasks are copied from the docker_swarm module documentation, so it should work:
- name: Init a new swarm with default parameters
  docker_swarm:
    state: present
    advertise_addr: "{{ manager_ip }}:2377"
  register: rezult
  when: "ansible_default_ipv4.address == '{{ manager_ip }}'"

- name: Add nodes
  docker_swarm:
    state: join
    advertise_addr: "{{ manager_ip }}"
    join_token: rezult.swarm_facts.JoinTokens.Worker
    remote_addrs: "{{ manager_ip }}:2377"
  when: "ansible_default_ipv4.address != '{{ manager_ip }}'"
It initializes a swarm manager with the manager_ip extra var, but the "Add nodes" task fails with this error:
fatal: [vm2]: FAILED! => {"changed": false, "msg": "Can not join the Swarm Cluster: 500 Server Error: Internal Server Error (\"invalid join token\")"}
If I wrap rezult.swarm_facts.JoinTokens.Worker in "{{ }}" after join_token, I get this instead:
fatal: [vm2]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'swarm_facts'\n\nThe error appears to be in '/home/ansible/docker-ansible/docker.yml': line 47, column 5, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: Add nodes\n ^ here\n"}
If I print rezult.swarm_facts.JoinTokens.Worker with the debug module, I get the correct token:
ok: [opensuse1] => {
    "msg": "SWMTKN-1-5p7brhxxz4gzu716t78tt5woj7h6aflq0kdwvzwlbbe7ct0ba7-e59bg0t79q67ogd61ydwxc5yq"
}
And if I use that token manually with the docker swarm join command on the server I want to join to the manager, it works. So the variable has the correct value and the connection between the nodes works; I just can't get join_token to work. I am running Ansible 2.8.5 with Python 2.7.5.
I know I can use the shell module but I do not want to do that.

Something like this works for me:
---
- name: Init swarm on the first node
  community.general.docker_swarm:
    state: present
    advertise_addr: "{{ ansible_host }}"
  register: result
  when: inventory_hostname == groups['swarm_managers'][0]

- name: Get join-token for manager nodes
  set_fact:
    join_token_manager: "{{ hostvars[groups['swarm_managers'][0]].result.swarm_facts.JoinTokens.Manager }}"

- name: Get join-token for worker nodes
  set_fact:
    join_token_worker: "{{ hostvars[groups['swarm_managers'][0]].result.swarm_facts.JoinTokens.Worker }}"

- name: Join other managers
  community.general.docker_swarm:
    state: join
    join_token: "{{ join_token_manager }}"
    advertise_addr: "{{ ansible_host }}"
    remote_addrs: "{{ hostvars[groups['swarm_managers'][0]].ansible_host }}"
  when:
    - inventory_hostname in groups['swarm_managers']
    - inventory_hostname != groups['swarm_managers'][0]

- name: Join workers
  community.general.docker_swarm:
    state: join
    join_token: "{{ join_token_worker }}"
    advertise_addr: "{{ ansible_host }}"
    remote_addrs: "{{ hostvars[groups['swarm_managers'][0]].ansible_host }}"
  when:
    - inventory_hostname not in groups['swarm_managers']
The swarm_managers group contains the manager nodes; all other hosts in the inventory are workers.

I think achempion was correct: the issue is that the OP's variable rezult.swarm_facts.JoinTokens.Worker was not being evaluated, but was instead passed as a literal string.
Replace rezult.swarm_facts.JoinTokens.Worker with "{{ rezult.swarm_facts.JoinTokens.Worker }}" and it should work.
I realise the OP has probably moved on, but I spent ages trying to figure out a very similar problem, and this resolved it for me.
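Applied to the OP's playbook, a minimal sketch of the corrected task. One caveat: rezult is registered on the manager host only, so on the other nodes it has to be looked up through hostvars (as the first answer does); manager_host below is a hypothetical variable naming the manager's inventory hostname:

```yaml
- name: Add nodes
  docker_swarm:
    state: join
    advertise_addr: "{{ manager_ip }}"
    # The token must be templated, and read from the host that registered it
    join_token: "{{ hostvars[manager_host].rezult.swarm_facts.JoinTokens.Worker }}"
    remote_addrs: "{{ manager_ip }}:2377"
  when: ansible_default_ipv4.address != manager_ip
```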

As I don't have enough reputation to comment: there is another mistake in your playbook's task. You use:
- name: Add nodes
  docker_swarm:
    state: join
    advertise_addr: "{{ manager_ip }}"
    join_token: rezult.swarm_facts.JoinTokens.Worker
    remote_addrs: "{{ manager_ip }}:2377"
  when: "ansible_default_ipv4.address != '{{ manager_ip }}'"
...which assigns the wrong IP address (the manager's) to advertise_addr on every node. The nodes will still join, but their overlay network configuration will be broken (nodes cannot reach each other, resulting in constant network failures). I suggest using the SSH connection IP instead:
- name: Add nodes
  docker_swarm:
    state: join
    advertise_addr: "{{ ansible_ssh_host }}"
    join_token: rezult.swarm_facts.JoinTokens.Worker
    remote_addrs: "{{ manager_ip }}:2377"
  when: "ansible_default_ipv4.address != '{{ manager_ip }}'"
Also take a look at the documentation examples:
- name: Add nodes
  community.docker.docker_swarm:
    state: join
    advertise_addr: 192.168.1.2
    join_token: SWMTKN-1--xxxxx
    remote_addrs: [ '192.168.1.1:2377' ]
...which also use different IP addresses for advertise_addr and remote_addrs.
I have to admit I also fell for this, and it took me several hours to solve. I hope someone else sees this answer before making the same mistake by copying your snippet.

I think the issue is with rezult.swarm_facts.JoinTokens.Worker. In the debug output it appears as "msg": "SWMTKN-1-5p7...", but the join_token option expects just the plain token string, without any wrapper such as "msg":.


Ansible known_hosts module ssh key propagation question

I'm trying to craft a playbook that will update known_hosts for a machine/user; however, I'm getting an error I can't make sense of.
---
- name: Keys
  hosts: adminslaves
  gather_facts: false
  no_log: false
  remote_user: test
  #pre_tasks:
  #  - setup:
  #      gather_subset:
  #        - '!all'
  tasks:
    - name: Scan for SSH host keys.
      shell: ssh-keyscan myhost.mydomain.com 2>/dev/null
      changed_when: False
      register: ssh_scan

    #- name: show vars
    #  debug:
    #    msg: "{{ ssh_scan.stdout_lines }}"

    - name: Update known_hosts.
      known_hosts:
        key: "{{ item }}"
        name: "{{ ansible_host }}"
        state: present
      with_items: "{{ ssh_scan.stdout_lines }}"
My error is: "msg": "Host parameter does not match hashed host field in supplied key"
I think the variable has the right information (at least it does when I debug it).
My end goal is a playbook that will add ssh keys of a list of hosts to a list of hosts for Jenkins auth.
Appreciate any help.
The problem is that the output of ssh-keyscan myhost.mydomain.com 2>/dev/null usually contains more than one key (one line per key type), so you need to process it.
Someone with the same error message raised an issue, but again the problem was with the SSH key format. I understood it better by reading the code used by the known_hosts module.
Here is the code I use:
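For the OP's single-host case, a minimal sketch of the same idea: take the name from each scanned line itself, so the name parameter always matches the host field of the supplied key:

```yaml
- name: Update known_hosts
  known_hosts:
    # First whitespace-separated field of a keyscan line is the host name
    name: "{{ item.split()[0] }}"
    key: "{{ item }}"
    state: present
  with_items: "{{ ssh_scan.stdout_lines }}"
```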
- name: Populate known_hosts
  hosts: spectrum_scale
  tags: set_known_hosts
  become: true
  tasks:
    - name: Scan for SSH keys
      ansible.builtin.shell:
        cmd: "ssh-keyscan {{ hostvars[spectrum_scale].ansible_fqdn }}
          {{ hostvars[spectrum_scale].ansible_hostname }}
          {{ hostvars[spectrum_scale].ansible_default_ipv4.address }}
          2>/dev/null"
      loop: "{{ groups['spectrum_scale'] }}"
      loop_control:
        loop_var: spectrum_scale
      register: ssh_scan

    - name: Set stdout_lines array for ssh_scan
      set_fact:
        ssout: []

    - name: Fill ssout
      set_fact:
        ssout: "{{ ssout + ss_r.stdout_lines }}"
      loop: "{{ ssh_scan.results }}"
      loop_control:
        loop_var: ss_r
      when: ss_r.stdout_lines is defined

    - name: Add client ssh keys to known_hosts
      ansible.builtin.known_hosts:
        name: "{{ hk.split()[0] }}"
        key: "{{ hk }}"
        state: present
      loop: "{{ ssout }}"
      loop_control:
        loop_var: hk

Retrieve docker container output with ansible using list

I need to retrieve the output of a Docker command with Ansible. It's easy when running a single instance, but I'm running the command using with_dict:
I'm using something like
- name: Running task
  docker_container:
    command: <<here my command>>
    detach: false
    recreate: true
    restart: false
    restart_policy: "no"
  with_dict: "{{ mylist.config.validator_client.accounts }}"
  register: mycontainers
I've tried the following with no success:
- name: display logs
  debug:
    msg: "{{ item.ansible_facts.docker_container.Output }}"
  with_items: "{{ mycontainers.results }}"
Any idea?
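With detach: false the module should capture the command's output, but where it lands in the registered result depends on the module version: newer releases expose the inspect data under a container key rather than under ansible_facts. A hedged sketch of that variant (an assumption, not verified against the OP's version):

```yaml
- name: display logs
  debug:
    # container.Output is where newer docker_container versions
    # put the run output when detach is false
    msg: "{{ item.container.Output }}"
  with_items: "{{ mycontainers.results }}"
```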

Ansible playbook to destroy all containers and remove images from a remote host

I'm looking for an Ansible playbook that can stop and destroy all containers on a remote host and remove the existing images as well. The code below only stops the running containers:
---
- hosts: all
  gather_facts: false
  tasks:
    - name: Get running containers
      docker_host_info:
        containers: yes
      register: docker_info

    - name: Stop running containers
      docker_container:
        name: "{{ item }}"
        state: stopped
      loop: "{{ docker_info.containers | map(attribute='Id') | list }}"
This is the final solution to my question, and it works: it destroys all containers and removes the images as well.
---
- hosts: all
  gather_facts: false
  tasks:
    - name: Get running containers
      docker_host_info:
        containers: yes
      register: docker_info

    - name: Stop running containers
      docker_container:
        name: "{{ item }}"
        state: stopped
      loop: "{{ docker_info.containers | map(attribute='Id') | list }}"

    - name: Remove stopped docker containers
      shell: |
        docker rm $(docker ps -a -q);
      when: docker_info.containers | length > 0

    - name: Get details of all images
      docker_host_info:
        images: yes
        verbose_output: yes
      register: image_info

    - name: Remove all images
      docker_image:
        name: "{{ item }}"
        state: absent
      loop: "{{ image_info.images | map(attribute='Id') | list }}"
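As an alternative, assuming the community.docker collection is available, the docker_prune module can remove stopped containers and unused images in a single task (it only prunes stopped containers, so the stop step above is still needed):

```yaml
- name: Prune stopped containers and unused images
  community.docker.docker_prune:
    containers: true
    images: true
    # dangling: false prunes all unused images, not just dangling ones
    images_filters:
      dangling: false
```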

docker_container: How to add multiple Volumes

I am trying to execute with Ansible the following Docker command:
docker run --name soadb_test1 --network=soa_net --ip 172.16.1.10 -d -v $TEST1/SOADB-Volume/u01:/u01/ -v $TEST1/SOADB-Volume/u02:/u02/ -v $TEST1/SOADB-Volume/u03:/u03/ -v $TEST1/SOADB-Volume/u04:/u04/ -v $TEST1/SOADB-Volume/ORCL:/ORCL/ --env-file $ENV_HOME/db.env.list database/enterprise:12.2.0.1
This is my Ansible Script:
---
- name: Create DB container
  docker_container:
    name: "{{ name }}"
    image: "{{ image }}"
    env_file: "{{ env_file }}"
    detach: yes
    volumes:
      - "{{ src_vol }}:{{ dest_vol }}"
      - "{{ src_vol_2 }}:{{ dest_vol_2 }}"
      - "{{ src_vol_3 }}:{{ dest_vol_3 }}"
      - "{{ src_vol_4 }}:{{ dest_vol_4 }}"
      - "{{ src_vol_5 }}:{{ dest_vol_5 }}"
    networks:
      - name: soa_net
        ipv4_address: "{{ ip }}"
    vars_files:
      - vars.yml
When I run it I get the following error:
TASK [install_docker_DB : Create DB container] *******************************************************************************************************************************************************************
fatal: [soa_poc]: FAILED! => {"changed": false, "msg": "Unsupported parameters for (docker_container) module: vars_files Supported parameters include: api_version, auto_remove, blkio_weight, cacert_path, cap_drop, capabilities, cert_path, cleanup, command, cpu_period, cpu_quota, cpu_shares, cpuset_cpus, cpuset_mems, debug, detach, devices, dns_opts, dns_search_domains, dns_servers, docker_host, domainname, entrypoint, env, env_file, etc_hosts, exposed_ports, force_kill, groups, hostname, ignore_image, image, init, interactive, ipc_mode, keep_volumes, kernel_memory, key_path, kill_signal, labels, links, log_driver, log_options, mac_address, memory, memory_reservation, memory_swap, memory_swappiness, name, network_mode, networks, oom_killer, oom_score_adj, output_logs, paused, pid_mode, privileged, published_ports, pull, purge_networks, read_only, recreate, restart, restart_policy, restart_retries, security_opts, shm_size, ssl_version, state, stop_signal, stop_timeout, sysctls, timeout, tls, tls_hostname, tls_verify, tmpfs, trust_image_content, tty, ulimits, user, userns_mode, uts, volume_driver, volumes, volumes_from, working_dir"}
Am I declaring the volumes the wrong way?
It looks like the indentation level of your vars_files entry is wrong: it is nested under the docker_container module's parameters. Please move it elsewhere:
---
- name: Create DB container
  docker_container:
    name: "{{ name }}"
    image: "{{ image }}"
    env_file: "{{ env_file }}"
    detach: yes
    volumes:
      - "{{ src_vol }}:{{ dest_vol }}"
      - "{{ src_vol_2 }}:{{ dest_vol_2 }}"
      - "{{ src_vol_3 }}:{{ dest_vol_3 }}"
      - "{{ src_vol_4 }}:{{ dest_vol_4 }}"
      - "{{ src_vol_5 }}:{{ dest_vol_5 }}"
    networks:
      - name: soa_net
        ipv4_address: "{{ ip }}"
The indentation for the first network entry was also wrong.
Depending on whether the above is from a playbook file or from a role, the location of vars_files might differ. If this is a playbook, vars_files should be at the same indentation level as tasks:
---
- hosts: all
  vars_files:
    - vars.yml
  tasks:
    - name: Create DB container
      docker_container: ...
This has nothing to do with the volumes...

Ansible docker_container etc_hosts with variable key

I have an Ansible task that spawns a Docker container and adds a few hosts entries to it. etc_hosts takes the host name as the key and the corresponding IP address as the value, and in my case both the host name and the IP address need to be driven by variables, for instance:
docker_container:
  name: image-name
  image: image-to-be-pulled
  state: started
  restart_policy: always
  etc_hosts:
    "{{ domain_1 }}": "{{ domain_1_ip }}"
    domain_2: "{{ domain_2_ip }}"
    domain_3: "{{ domain_3_ip }}"
With the configuration above, the entry written to the hosts file is:
xx.xx.xx.xxx {{ domain_1 }}
Ideally the hosts file should contain the host name against the IP. Can someone suggest how I can achieve this? Thanks in advance.
Try this syntax:
docker_container:
  name: image-name
  image: image-to-be-pulled
  state: started
  restart_policy: always
  etc_hosts: >
    {
      "{{ domain_1 }}": "{{ domain_1_ip }}",
      "domain_2": "{{ domain_2_ip }}",
      "domain_3": "{{ domain_3_ip }}"
    }
This forms a dict-like string that Ansible's templating engine will evaluate into a dict.
Note that every key should be quoted and the pairs are separated by commas.
I had success with...
in my play(book):
- vars:
    my_etc_hosts:
      {
        "host1.example.com host1": 10.3.1.5,
        "host2.example.com host2": 10.3.1.3
      }
(no ">" character, compared to the accepted answer)
Separate from the playbook, in the role...
in the task:
- name: have the container created
  docker_container:
    etc_hosts: "{{ my_etc_hosts | default({}) }}"
in defaults/main.yml:
my_etc_hosts: {}
Additionally (not needed by the task above, but part of another template task):
in a template, with jinja2:
{% for host in my_etc_hosts | default([]) %}
--add-host "{{ host }}":{{ my_etc_hosts[host] }} \
{% endfor %}
(As a plus, you can see how two hostnames for one IP address are handled: "fqdn alias1". If you instead split them into two entries, they form two lines in /etc/hosts with the same IP, which is not correct according to the man page.)
