Ansible docker_container etc_hosts with variable key

I have an Ansible playbook through which I spawn a Docker container and add a few hosts entries to it. etc_hosts takes the host name as key and the corresponding IP address as value. In my case I need both the host name and the IP address to be driven by variables, for instance:
docker_container:
  name: image-name
  image: image-to-be-pulled
  state: started
  restart_policy: always
  etc_hosts:
    "{{ domain_1 }}": "{{ domain_1_ip }}"
    domain_2: "{{ domain_2_ip }}"
    domain_3: "{{ domain_3_ip }}"
With the above configuration, the entry written to the container's hosts file is:
xx.xx.xx.xxx {{ domain_1 }}
Ideally the hosts file should contain the actual host name against the IP. Can someone suggest how I can achieve this? Thanks in advance.

Try this syntax:
docker_container:
  name: image-name
  image: image-to-be-pulled
  state: started
  restart_policy: always
  etc_hosts: >
    {
      "{{ domain_1 }}": "{{ domain_1_ip }}",
      "domain_2": "{{ domain_2_ip }}",
      "domain_3": "{{ domain_3_ip }}"
    }
This forms a dict-like string that Ansible's templating engine will evaluate into a dict.
Note that every key and value should be quoted, and the pairs are separated by commas.
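For illustration, with hypothetical variable values like these:

vars:
  domain_1: app.example.local   # hypothetical host name
  domain_1_ip: 172.17.0.10      # hypothetical IP address

the etc_hosts string above renders to { "app.example.local": "172.17.0.10", ... }, and the container's /etc/hosts gains the desired line:

172.17.0.10 app.example.local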

I had success with...
in my play(book):
- vars:
    my_etc_hosts:
      {
        "host1.example.com host1": 10.3.1.5,
        "host2.example.com host2": 10.3.1.3
      }
(no ">" character, compared to the accepted answer)
Separated from the playbook, the role with...
in the task:
- name: have the container created
  docker_container:
    etc_hosts: "{{ my_etc_hosts | default({}) }}"
in defaults/main.yml:
my_etc_hosts: {}
Additionally (not needed by the task above, but part of another template task):
in a template, with jinja2:
{% for host in my_etc_hosts | default([]) %}
--add-host "{{ host }}":{{ my_etc_hosts[host] }} \
{% endfor %}
(As a plus, you see how two hostnames for one IP address can be handled: "fqdn alias1". If you instead split them into two values, they will form two lines in /etc/hosts with the same IP, which is not correct according to the man page.)
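For illustration, with the my_etc_hosts value above, that template renders to:

--add-host "host1.example.com host1":10.3.1.5 \
--add-host "host2.example.com host2":10.3.1.3 \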

Related

Ansible known_hosts module ssh key propagation question

I'm trying to craft a playbook that will update known_hosts for a machine/user, however I'm getting an error I can't make sense of.
---
- name: Keys
  hosts: adminslaves
  gather_facts: false
  no_log: false
  remote_user: test
  #pre_tasks:
  #  - setup:
  #      gather_subset:
  #        - '!all'
  tasks:
    - name: Scan for SSH host keys.
      shell: ssh-keyscan myhost.mydomain.com 2>/dev/null
      changed_when: False
      register: ssh_scan
    # - name: show vars
    #   debug:
    #     msg: "{{ ssh_scan.stdout_lines }}"
    - name: Update known_hosts.
      known_hosts:
        key: "{{ item }}"
        name: "{{ ansible_host }}"
        state: present
      with_items: "{{ ssh_scan.stdout_lines }}"
My error is "msg": "Host parameter does not match hashed host field in supplied key".
I think the variable has the right information (at least it does when I debug it).
My end goal is a playbook that will add ssh keys of a list of hosts to a list of hosts for Jenkins auth.
Appreciate any help.
The problem is that the output of ssh-keyscan myhost.mydomain.com 2>/dev/null usually contains more than one key, so you need to process it.
Someone with the same error message raised an issue, but there, again, the problem was with the ssh-key format. I understood it better after checking the code used by the known_hosts module.
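A minimal sketch of a fix for the question's task, assuming each line of ssh-keyscan output has the usual "hostname keytype base64-key" form, so its first field can serve as the name (the same split trick appears in the full playbook below):

- name: Update known_hosts.
  known_hosts:
    # first field of the scanned line, e.g. "myhost.mydomain.com"
    name: "{{ item.split()[0] }}"
    key: "{{ item }}"
    state: present
  with_items: "{{ ssh_scan.stdout_lines }}"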
Here is the code I use:
- name: Populate known_hosts
  hosts: spectrum_scale
  tags: set_known_hosts
  become: true
  tasks:
    - name: Scan for SSH keys
      ansible.builtin.shell:
        cmd: "ssh-keyscan {{ hostvars[spectrum_scale].ansible_fqdn }}
          {{ hostvars[spectrum_scale].ansible_hostname }}
          {{ hostvars[spectrum_scale].ansible_default_ipv4.address }}
          2>/dev/null"
      loop: "{{ groups['spectrum_scale'] }}"
      loop_control:
        loop_var: spectrum_scale
      register: ssh_scan

    - name: Set stdout_lines array for ssh_scan
      set_fact:
        ssout: []

    - name: Fill ssout
      set_fact:
        ssout: "{{ ssout + ss_r.stdout_lines }}"
      loop: "{{ ssh_scan.results }}"
      loop_control:
        loop_var: ss_r
      when: ss_r.stdout_lines is defined

    - name: Add client ssh keys to known_hosts
      ansible.builtin.known_hosts:
        name: "{{ hk.split()[0] }}"
        key: "{{ hk }}"
        state: present
      loop: "{{ ssout }}"
      loop_control:
        loop_var: hk

Port mapping in docker container with multiple networks

Having used Docker on multiple occasions, I am familiar with the concepts of Docker networks and port mapping. However, I haven't found any case online where you'd want to mix the two. Hopefully there are people who can help me out.
I use Traefik in many situations. I also run Pi-hole as a private DNS server. I would like to standardize all services behind Traefik to use TLS and custom (internal) domains. The Pi-hole admin interface works perfectly together with Traefik.
The biggest issue with Pi-hole behind an edge router is that Docker uses NAT for the internal network, so Pi-hole is not able to see where DNS requests are made from. The only way to overcome this, I guess, is to map the DNS ports (53 & 853) directly to the host, bypassing the internal Traefik network and any NAT.
I can attach the Pi-hole container to multiple networks, but how am I able to attach :80 to the Traefik network and :53 to the host network?
Eventually this turned out to be quite simple, although I didn't think it would work: simply publish the ports while the Pi-hole container is connected to the Traefik network.
This is the Ansible config I used:
- name: Create the pihole container
  docker_container:
    name: "{{ pihole_docker_container }}"
    image: "{{ pihole_docker_tag }}"
    pull: yes
    restart_policy: unless-stopped
    networks_cli_compatible: yes
    networks:
      - name: "{{ traefik_docker_network }}"
    volumes:
      - "{{ pihole_config_dir }}:/etc/pihole/"
      - "{{ pihole_dnsmasq_dir }}:/etc/dnsmasq.d/"
    env:
      TZ: "{{ pihole_tz }}"
      WEBPASSWORD: ""
      DNS1: "{{ pihole_container_dns1 }}"
      DNS2: "{{ pihole_container_dns2 }}"
      REV_SERVER: "{{ pihole_server_rev }}"
      REV_SERVER_DOMAIN: "{{ pihole_server_domain }}"
      REV_SERVER_TARGET: "{{ pihole_server_gateway }}"
      REV_SERVER_CIDR: "{{ pihole_server_subnet }}"
    dns_servers:
      - 127.0.0.1
      - "{{ pihole_container_dns1 }}"
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "853:853"
    labels:
      traefik.enable: "true"
      traefik.http.routers.pihole.entrypoints: "websecure"
      traefik.http.routers.pihole.rule: "Host(`{{ pihole_public_domain }}`)"
      traefik.http.routers.pihole.middlewares: "pihole-admin"
      traefik.http.routers.pihole.service: "pihole"
      traefik.http.routers.pihole.tls: "true"
      traefik.http.routers.pihole.tls.certresolver: "le"
      traefik.http.middlewares.pihole-admin.addprefix.prefix: "/admin"
      traefik.http.routers.pihole_http.entrypoints: "web"
      traefik.http.routers.pihole_http.rule: "Host(`{{ pihole_public_domain }}`)"
      traefik.http.routers.pihole_http.middlewares: "redirect-to-https"
      traefik.http.services.pihole.loadBalancer.server.port: "80"
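Reduced to a minimal sketch (image and network names hypothetical), the pattern is simply that the networks: attachment and the ports: publication are independent settings on the same container; published ports are handled by DNAT on the host, which preserves the client source addresses that Pi-hole needs:

- name: Minimal pattern - proxy network plus host-published DNS ports
  docker_container:
    name: pihole
    image: pihole/pihole          # hypothetical image
    networks:
      - name: traefik_net         # HTTP stays on the reverse-proxy network
    ports:
      - "53:53/tcp"               # DNS published directly on the host
      - "53:53/udp"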

Docker_swarm module - join_token parameter for ansible not working

This is my Ansible playbook; the tasks are copied from the docker_swarm module documentation, so they should work:
- name: Init a new swarm with default parameters
  docker_swarm:
    state: present
    advertise_addr: "{{ manager_ip }}:2377"
  register: rezult
  when: "ansible_default_ipv4.address == '{{ manager_ip }}'"

- name: Add nodes
  docker_swarm:
    state: join
    advertise_addr: "{{ manager_ip }}"
    join_token: rezult.swarm_facts.JoinTokens.Worker
    remote_addrs: "{{ manager_ip }}:2377"
  when: "ansible_default_ipv4.address != '{{ manager_ip }}'"
It inits a swarm manager with the "manager_ip" extra-var,
but it fails in the "Add nodes" task with this error:
fatal: [vm2]: FAILED! => {"changed": false, "msg": "Can not join the Swarm Cluster: 500 Server Error: Internal Server Error (\"invalid join token\")"}
If I put "{{ }}" around rezult.swarm_facts.JoinTokens.Worker after join_token, I get this:
fatal: [vm2]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'swarm_facts'\n\nThe error appears to be in '/home/ansible/docker-ansible/docker.yml': line 47, column 5, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: Add nodes\n ^ here\n"}
If I print a debug message for rezult.swarm_facts.JoinTokens.Worker, I get the correct token:
ok: [opensuse1] => {
    "msg": "SWMTKN-1-5p7brhxxz4gzu716t78tt5woj7h6aflq0kdwvzwlbbe7ct0ba7-e59bg0t79q67ogd61ydwxc5yq"
}
and if I use that token manually with the docker swarm join command on the server I wish to join to the manager, it works. So the variable has the correct value and the connection between the nodes works. But I just can't get join_token to work. I am running Ansible 2.8.5 with Python 2.7.5.
I know I can use the shell module, but I do not want to do that.
Something like this works for me:
---
- name: Init swarm on the first node
  community.general.docker_swarm:
    state: present
    advertise_addr: "{{ ansible_host }}"
  register: result
  when: inventory_hostname == groups['swarm_managers'][0]

- name: Get join-token for manager nodes
  set_fact:
    join_token_manager: "{{ hostvars[groups['swarm_managers'][0]].result.swarm_facts.JoinTokens.Manager }}"

- name: Get join-token for worker nodes
  set_fact:
    join_token_worker: "{{ hostvars[groups['swarm_managers'][0]].result.swarm_facts.JoinTokens.Worker }}"

- name: Join other managers
  community.general.docker_swarm:
    state: join
    join_token: "{{ join_token_manager }}"
    advertise_addr: "{{ ansible_host }}"
    remote_addrs: "{{ hostvars[groups['swarm_managers'][0]].ansible_host }}"
  when:
    - inventory_hostname in groups['swarm_managers']
    - inventory_hostname != groups['swarm_managers'][0]

- name: Join workers
  community.general.docker_swarm:
    state: join
    join_token: "{{ join_token_worker }}"
    advertise_addr: "{{ ansible_host }}"
    remote_addrs: "{{ hostvars[groups['swarm_managers'][0]].ansible_host }}"
  when:
    - inventory_hostname not in groups['swarm_managers']
The swarm_managers group contains the managers; all other hosts in the inventory are workers.
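For illustration, a hypothetical inventory matching this layout:

# hypothetical inventory (YAML format)
all:
  children:
    swarm_managers:
      hosts:
        manager1:
          ansible_host: 192.168.1.1
    workers:
      hosts:
        worker1:
          ansible_host: 192.168.1.2
        worker2:
          ansible_host: 192.168.1.3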
I think achempion was correct: the issue was that the OP's variable rezult.swarm_facts.JoinTokens.Worker was not being evaluated, but was passed along as a literal string.
Replace rezult.swarm_facts.JoinTokens.Worker with "{{ rezult.swarm_facts.JoinTokens.Worker }}" and it should work.
I realise the OP has probably already moved on, but I spent ages trying to figure out a very similar problem and this appeared to resolve it for me.
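A minimal sketch of the question's task with that fix applied (note the OP's own follow-up error shows rezult is undefined on the joining hosts, since it was registered only on the manager; reading it via hostvars, as in the answer above, avoids that):

- name: Add nodes
  docker_swarm:
    state: join
    advertise_addr: "{{ manager_ip }}"
    # templated expression instead of a literal string
    join_token: "{{ rezult.swarm_facts.JoinTokens.Worker }}"
    remote_addrs: "{{ manager_ip }}:2377"
  when: ansible_default_ipv4.address != manager_ip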
As I don't have enough reputation to comment: here is another mistake in your playbook's task. You use:
- name: Add nodes
  docker_swarm:
    state: join
    advertise_addr: "{{ manager_ip }}"
    join_token: rezult.swarm_facts.JoinTokens.Worker
    remote_addrs: "{{ manager_ip }}:2377"
  when: "ansible_default_ipv4.address != '{{ manager_ip }}'"
...which assigns advertise_addr to the wrong IP address. This will still allow your nodes to join, but it messes up their overlay network configuration (no node can ping another, resulting in constant network failures). I would suggest using the SSH connection IP instead:
- name: Add nodes
  docker_swarm:
    state: join
    advertise_addr: "{{ ansible_ssh_host }}"
    join_token: rezult.swarm_facts.JoinTokens.Worker
    remote_addrs: "{{ manager_ip }}:2377"
  when: "ansible_default_ipv4.address != '{{ manager_ip }}'"
Also take a look at the documentation examples:
- name: Add nodes
  community.docker.docker_swarm:
    state: join
    advertise_addr: 192.168.1.2
    join_token: SWMTKN-1--xxxxx
    remote_addrs: [ '192.168.1.1:2377' ]
...which also use different IP addresses.
I have to admit: I also fell for it, and it took several hours to solve. I hope someone else sees this answer before making the same mistake by just copying your snippet.
I think the issue here comes from rezult.swarm_facts.JoinTokens.Worker. In the debug info it appears as "msg": "SWMTKN-1-5p7..." but the join_token option expects the plain token, without additional wrappers such as "msg": and so on.

docker_container: How to add multiple Volumes

I am trying to execute the following Docker command with Ansible:
docker run --name soadb_test1 --network=soa_net --ip 172.16.1.10 -d -v $TEST1/SOADB-Volume/u01:/u01/ -v $TEST1/SOADB-Volume/u02:/u02/ -v $TEST1/SOADB-Volume/u03:/u03/ -v $TEST1/SOADB-Volume/u04:/u04/ -v $TEST1/SOADB-Volume/ORCL:/ORCL/ --env-file $ENV_HOME/db.env.list database/enterprise:12.2.0.1
This is my Ansible script:
---
- name: Create DB container
  docker_container:
    name: "{{ name }}"
    image: "{{ image }}"
    env_file: "{{ env_file }}"
    detach: yes
    volumes:
      - "{{ src_vol }}:{{ dest_vol }}"
      - "{{ src_vol_2 }}:{{ dest_vol_2 }}"
      - "{{ src_vol_3 }}:{{ dest_vol_3 }}"
      - "{{ src_vol_4 }}:{{ dest_vol_4 }}"
      - "{{ src_vol_5 }}:{{ dest_vol_5 }}"
    networks:
     - name: soa_net
       ipv4_address: "{{ ip }}"
    vars_files:
      - vars.yml
When I run it I get the following error:
TASK [install_docker_DB : Create DB container] *******************************************************************************************************************************************************************
fatal: [soa_poc]: FAILED! => {"changed": false, "msg": "Unsupported parameters for (docker_container) module: vars_files Supported parameters include: api_version, auto_remove, blkio_weight, cacert_path, cap_drop, capabilities, cert_path, cleanup, command, cpu_period, cpu_quota, cpu_shares, cpuset_cpus, cpuset_mems, debug, detach, devices, dns_opts, dns_search_domains, dns_servers, docker_host, domainname, entrypoint, env, env_file, etc_hosts, exposed_ports, force_kill, groups, hostname, ignore_image, image, init, interactive, ipc_mode, keep_volumes, kernel_memory, key_path, kill_signal, labels, links, log_driver, log_options, mac_address, memory, memory_reservation, memory_swap, memory_swappiness, name, network_mode, networks, oom_killer, oom_score_adj, output_logs, paused, pid_mode, privileged, published_ports, pull, purge_networks, read_only, recreate, restart, restart_policy, restart_retries, security_opts, shm_size, ssl_version, state, stop_signal, stop_timeout, sysctls, timeout, tls, tls_hostname, tls_verify, tmpfs, trust_image_content, tty, ulimits, user, userns_mode, uts, volume_driver, volumes, volumes_from, working_dir"}
Am I declaring the volumes the wrong way?
It looks like the indentation level of your vars_files entry is wrong - please move it somewhere else:
---
- name: Create DB container
  docker_container:
    name: "{{ name }}"
    image: "{{ image }}"
    env_file: "{{ env_file }}"
    detach: yes
    volumes:
      - "{{ src_vol }}:{{ dest_vol }}"
      - "{{ src_vol_2 }}:{{ dest_vol_2 }}"
      - "{{ src_vol_3 }}:{{ dest_vol_3 }}"
      - "{{ src_vol_4 }}:{{ dest_vol_4 }}"
      - "{{ src_vol_5 }}:{{ dest_vol_5 }}"
    networks:
      - name: soa_net
        ipv4_address: "{{ ip }}"
The indentation of the first networks entry was also wrong.
Depending on whether the above comes from a playbook file or from a role, the location of vars_files might differ. If this is a playbook, then vars_files should be at the same indentation level as tasks:
---
- hosts: all
  vars_files:
    - vars.yml
  tasks:
    - name: Create DB container
      docker_container: ...
This has nothing to do with the volumes...
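For completeness, a hypothetical vars.yml mirroring the docker run command from the question (paths built from the same environment variables):

# vars.yml - hypothetical values
name: soadb_test1
image: database/enterprise:12.2.0.1
env_file: "{{ lookup('env', 'ENV_HOME') }}/db.env.list"
ip: 172.16.1.10
src_vol: "{{ lookup('env', 'TEST1') }}/SOADB-Volume/u01"
dest_vol: /u01/
src_vol_2: "{{ lookup('env', 'TEST1') }}/SOADB-Volume/u02"
dest_vol_2: /u02/
# ...u03, u04 and ORCL follow the same pattern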

How to use Docker link in Ansible when link var defined

I want a playbook that will start a container (in a task) and only link it to another container if the link is provided in a variable. For example:
- name: Start container
  docker_container:
    image: "somerepo/app-server:{{ var_tag }}"
    name: odoo-server
    state: started
    log_opt: "tag=app-server-{{ var_tag }}"
    expose:
      - 8080
    links:
      - "{{ var_db_link }}"
  when: var_db_link is defined
But of course this does not work. (I know, links without a value is invalid ~ this is just pseudo-code.)
The whole task is actually quite a bit larger because it includes other directives, so I really don't want to have two versions of the task defined, one for starting with a link and another without.
When you use '-' under links, it implies there is a definite value, so here is a way to avoid that:
---
- hosts: localhost
  tasks:
    - name: Start container
      docker_container:
        image: centos
        name: odoo-server
        state: started
        expose:
          - 8080
        links: "{{ var_db_link | default([]) }}"
Then test it with:
ansible-playbook ha.yml -e var_db_link="redis-master:centos"
ansible-playbook ha.yml
It runs normally!
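A closely related idiom, not used in the answer above but standard Ansible, is the special omit placeholder, which drops the parameter entirely when the variable is undefined:

- name: Start container
  docker_container:
    image: centos
    name: odoo-server
    state: started
    expose:
      - 8080
    # the links parameter is omitted altogether when var_db_link is undefined
    links: "{{ var_db_link | default(omit) }}"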
