Retrieve docker container output with Ansible when using a loop

I need to retrieve the output of a docker command with Ansible. It's easy when running a single instance, but I'm running the command using with_dict. I'm using something like:
- name: Running task
  docker_container:
    command: <<here my command>>
    detach: false
    recreate: true
    restart: false
    restart_policy: "no"
  with_dict: "{{ mylist.config.validator_client.accounts }}"
  register: mycontainers
I've tried the following with no success:
- name: display logs
  debug:
    msg: "{{ item.ansible_facts.docker_container.Output }}"
  with_items: "{{ mycontainers.results }}"
Any idea?
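For what it's worth, a minimal sketch of one thing to check: on recent versions of the module (the community.docker collection), each loop result exposes the container data under container rather than ansible_facts.docker_container, so the lookup would change accordingly (this assumes detach: false, so the module captures the command output):

- name: display logs
  debug:
    # 'container' is the return key in newer community.docker releases
    msg: "{{ item.container.Output }}"
  with_items: "{{ mycontainers.results }}"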


Ansible known_hosts module ssh key propagation question

I'm trying to craft a playbook that will update the known_hosts for a machine/user; however, I'm getting an error I can't make sense of.
---
- name: Keys
  hosts: adminslaves
  gather_facts: false
  no_log: false
  remote_user: test
  #pre_tasks:
  #  - setup:
  #      gather_subset:
  #        - '!all'
  tasks:
    - name: Scan for SSH host keys.
      shell: ssh-keyscan myhost.mydomain.com 2>/dev/null
      changed_when: False
      register: ssh_scan
    #- name: show vars
    #  debug:
    #    msg: "{{ ssh_scan.stdout_lines }}"
    - name: Update known_hosts.
      known_hosts:
        key: "{{ item }}"
        name: "{{ ansible_host }}"
        state: present
      with_items: "{{ ssh_scan.stdout_lines }}"
The error I get is: "msg": "Host parameter does not match hashed host field in supplied key"
I think the variable has the right information (at least it does when I debug it).
My end goal is a playbook that will add ssh keys of a list of hosts to a list of hosts for Jenkins auth.
Appreciate any help.
The problem is that the output of ssh-keyscan myhost.mydomain.com 2>/dev/null usually contains more than one key, so you need to process it.
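For illustration (hypothetical host, truncated keys), a single scan typically yields one line per key type:

myhost.mydomain.com ssh-rsa AAAAB3NzaC1yc2E...
myhost.mydomain.com ecdsa-sha2-nistp256 AAAAE2VjZHNh...
myhost.mydomain.com ssh-ed25519 AAAAC3NzaC1lZDI1...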
Someone with the same error message raised an issue, but there again the problem was with the ssh-key format. I understood it better after reading the code used by the known_hosts module.
Here is the code I use:
- name: Populate known_hosts
  hosts: spectrum_scale
  tags: set_known_hosts
  become: true
  tasks:
    - name: Scan for SSH keys
      ansible.builtin.shell:
        cmd: "ssh-keyscan {{ hostvars[spectrum_scale].ansible_fqdn }}
              {{ hostvars[spectrum_scale].ansible_hostname }}
              {{ hostvars[spectrum_scale].ansible_default_ipv4.address }}
              2>/dev/null"
      loop: "{{ groups['spectrum_scale'] }}"
      loop_control:
        loop_var: spectrum_scale
      register: ssh_scan

    - name: Set stdout_lines array for ssh_scan
      set_fact:
        ssout: []

    - name: Fill ssout
      set_fact:
        ssout: "{{ ssout + ss_r.stdout_lines }}"
      loop: "{{ ssh_scan.results }}"
      loop_control:
        loop_var: ss_r
      when: ss_r.stdout_lines is defined

    - name: Add client ssh keys to known_hosts
      ansible.builtin.known_hosts:
        name: "{{ hk.split()[0] }}"
        key: "{{ hk }}"
        state: present
      loop: "{{ ssout }}"
      loop_control:
        loop_var: hk
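As an aside on the original error message: ssh-keyscan -H emits hashed host fields, which the known_hosts module cannot match against a plain name. A minimal sketch, assuming you still want hashed entries on disk, is to scan without -H and let the module hash on write via its hash_host option:

- name: Add key, hashing the host on write
  ansible.builtin.known_hosts:
    name: "{{ hk.split()[0] }}"
    key: "{{ hk }}"   # key scanned without -H, i.e. a plain host field
    hash_host: true   # the module hashes the entry when writing it
    state: present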

Ansible playbook script to destroy all containers and remove images from a remote host

I'm looking for an Ansible playbook that can stop and destroy all containers on a remote host and remove the existing images as well. The code below only stops the running containers:
---
- hosts: all
  gather_facts: false
  tasks:
    - name: Get running containers
      docker_host_info:
        containers: yes
      register: docker_info

    - name: Stop running containers
      docker_container:
        name: "{{ item }}"
        state: stopped
      loop: "{{ docker_info.containers | map(attribute='Id') | list }}"
This is the final solution to my question, and it worked 100%: it destroys all containers and removes the images as well.
---
- hosts: all
  gather_facts: false
  tasks:
    - name: Get running containers
      docker_host_info:
        containers: yes
      register: docker_info

    - name: Stop running containers
      docker_container:
        name: "{{ item }}"
        state: stopped
      loop: "{{ docker_info.containers | map(attribute='Id') | list }}"

    - name: Remove stopped docker containers
      shell: |
        docker rm $(docker ps -a -q);
      when: docker_info.containers | length > 0

    - name: Get details of all images
      docker_host_info:
        images: yes
        verbose_output: yes
      register: image_info

    - name: Remove all images
      docker_image:
        name: "{{ item }}"
        state: absent
      loop: "{{ image_info.images | map(attribute='Id') | list }}"

Restart multiple Docker containers using Ansible

How do I dynamically restart all my Docker containers from Ansible? I know I could define my containers in a variable and loop through them, but what I want to achieve is this:
Fetch the currently running containers and restart all or some of them one by one through some loop.
How to achieve this using Ansible?
Docker explanation
Retrieve the name/image for all running containers:
docker container ls -a --format '{{.Names}} {{.Image}}'
You could also filter the output of the docker container command to a specific image name, thanks to the --filter ancestor=image_name option:
docker container ls -a --filter ancestor=alpine --format '{{.Names}} {{.Image}}'
Ansible integration:
First I would define some filters as Ansible variables:
vars:
  - image_v1: '--filter ancestor=my_image:v1'
  - image_v2: '--filter ancestor=my_image:v2'
Then I would execute the docker container command in a dedicated task and save the command output to an Ansible variable:
- name: Get images name
  command: docker container ls -a {{ image_v1 }} {{ image_v2 }} --format "{{ '{{' }}.Names {{ '}}' }} {{ '{{' }}.Image {{ '}}' }}"
  register: docker_images
Finally I would iterate over it and feed it into the docker_container Ansible module:
- name: Restart images
  docker_container:
    name: "{{ item.split(' ')[0] }}"
    image: "{{ item.split(' ')[1] }}"
    state: started
    restart: yes
  loop: "{{ docker_images.stdout_lines }}"
final playbook.yml
---
- hosts: localhost
  gather_facts: no
  vars:
    - image_v1: '--filter ancestor=my_image:v1'
    - image_v2: '--filter ancestor=my_image:v2'
  tasks:
    - name: Get images name
      command: docker container ls -a {{ image_v1 }} {{ image_v2 }} --format "{{ '{{' }}.Names {{ '}}' }} {{ '{{' }}.Image {{ '}}' }}"
      register: docker_images

    - name: Restart images
      docker_container:
        name: "{{ item.split(' ')[0] }}"
        image: "{{ item.split(' ')[1] }}"
        state: started
        restart: yes
      loop: "{{ docker_images.stdout_lines }}"

Ansible molecule using docker - how to specify memory limit

I have a molecule test which spins up 2 Docker containers, for testing 2 application versions at once.
dependency:
  name: galaxy
driver:
  name: docker
lint:
  name: yamllint
platforms:
  - name: molecule1
    hostname: molecule1
    image: "geerlingguy/docker-${MOLECULE_DISTRO:-centos7}-ansible:latest"
    command: ${MOLECULE_DOCKER_COMMAND:-""}
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
    privileged: true
    pre_build_image: true
  - name: molecule2
    hostname: molecule2
    image: "geerlingguy/docker-${MOLECULE_DISTRO:-centos7}-ansible:latest"
    command: ${MOLECULE_DOCKER_COMMAND:-""}
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
    privileged: true
    pre_build_image: true
provisioner:
  name: ansible
  inventory:
    host_vars:
      molecule1:
        app_version: "v1"
      molecule2:
        app_version: "v2"
  lint:
    name: ansible-lint
scenario:
  name: default
  converge_sequence:
    - syntax
    - lint
    - create
    - prepare
    - converge
    - idempotence
    - verify
verifier:
  name: goss
  lint:
    name: yamllint
I am looking for a way to specify the memory limit, like -m or --memory=, as described here.
I understand that Molecule makes use of the docker_container Ansible module, which supports the memory parameter, but somehow I cannot find a way to make this work in Molecule.
Any ideas how to accomplish this?
PS: My guess is that this parameter is not yet implemented in Molecule, assuming my guess is correct that this is the relevant implementation.
Thanks in advance.
++Update++
--memory is indeed not yet implemented in molecule docker provisioner.
If anybody is interested, here is the relevant change to the source code:
diff --git a/molecule/provisioner/ansible/playbooks/docker/create.yml b/molecule/provisioner/ansible/playbooks/docker/create.yml
index 7a04b851..023a720a 100644
--- a/molecule/provisioner/ansible/playbooks/docker/create.yml
+++ b/molecule/provisioner/ansible/playbooks/docker/create.yml
@@ -121,6 +121,8 @@
         hostname: "{{ item.hostname | default(item.name) }}"
         image: "{{ item.pre_build_image | default(false) | ternary('', 'molecule_local/') }}{{ item.image }}"
         pull: "{{ item.pull | default(omit) }}"
+        kernel_memory: "{{ item.kernel_memory | default(omit) }}"
+        memory: "{{ item.memory | default(omit) }}"
         state: started
         recreate: false
         log_driver: json-file
My fork has since been merged into Molecule.
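With that change merged, the limit should be settable per platform in molecule.yml. A sketch (the value format follows the docker_container module's memory parameters; the figures here are hypothetical):

platforms:
  - name: molecule1
    image: "geerlingguy/docker-centos7-ansible:latest"
    pre_build_image: true
    memory: 512m          # passed through to docker_container's memory
    kernel_memory: 256m   # passed through to kernel_memory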

docker_container: How to add multiple Volumes

I am trying to execute with Ansible the following Docker command:
docker run --name soadb_test1 --network=soa_net --ip 172.16.1.10 -d -v $TEST1/SOADB-Volume/u01:/u01/ -v $TEST1/SOADB-Volume/u02:/u02/ -v $TEST1/SOADB-Volume/u03:/u03/ -v $TEST1/SOADB-Volume/u04:/u04/ -v $TEST1/SOADB-Volume/ORCL:/ORCL/ --env-file $ENV_HOME/db.env.list database/enterprise:12.2.0.1
This is my Ansible Script:
---
- name: Create DB container
  docker_container:
    name: "{{ name }}"
    image: "{{ image }}"
    env_file: "{{ env_file }}"
    detach: yes
    volumes:
      - "{{ src_vol }}:{{ dest_vol }}"
      - "{{ src_vol_2 }}:{{ dest_vol_2 }}"
      - "{{ src_vol_3 }}:{{ dest_vol_3 }}"
      - "{{ src_vol_4 }}:{{ dest_vol_4 }}"
      - "{{ src_vol_5 }}:{{ dest_vol_5 }}"
    networks:
      - name: soa_net
        ipv4_address: "{{ ip }}"
    vars_files:
      - vars.yml
When I run it I get the following error:
TASK [install_docker_DB : Create DB container] *******************************************************************************************************************************************************************
fatal: [soa_poc]: FAILED! => {"changed": false, "msg": "Unsupported parameters for (docker_container) module: vars_files Supported parameters include: api_version, auto_remove, blkio_weight, cacert_path, cap_drop, capabilities, cert_path, cleanup, command, cpu_period, cpu_quota, cpu_shares, cpuset_cpus, cpuset_mems, debug, detach, devices, dns_opts, dns_search_domains, dns_servers, docker_host, domainname, entrypoint, env, env_file, etc_hosts, exposed_ports, force_kill, groups, hostname, ignore_image, image, init, interactive, ipc_mode, keep_volumes, kernel_memory, key_path, kill_signal, labels, links, log_driver, log_options, mac_address, memory, memory_reservation, memory_swap, memory_swappiness, name, network_mode, networks, oom_killer, oom_score_adj, output_logs, paused, pid_mode, privileged, published_ports, pull, purge_networks, read_only, recreate, restart, restart_policy, restart_retries, security_opts, shm_size, ssl_version, state, stop_signal, stop_timeout, sysctls, timeout, tls, tls_hostname, tls_verify, tmpfs, trust_image_content, tty, ulimits, user, userns_mode, uts, volume_driver, volumes, volumes_from, working_dir"}
Am I declaring the volumes the wrong way?
It looks like your indentation level for the vars_files entry is wrong - please move it somewhere else:
---
- name: Create DB container
  docker_container:
    name: "{{ name }}"
    image: "{{ image }}"
    env_file: "{{ env_file }}"
    detach: yes
    volumes:
      - "{{ src_vol }}:{{ dest_vol }}"
      - "{{ src_vol_2 }}:{{ dest_vol_2 }}"
      - "{{ src_vol_3 }}:{{ dest_vol_3 }}"
      - "{{ src_vol_4 }}:{{ dest_vol_4 }}"
      - "{{ src_vol_5 }}:{{ dest_vol_5 }}"
    networks:
      - name: soa_net
        ipv4_address: "{{ ip }}"
The indentation for the first network entry was also wrong.
Depending on whether the above is from a playbook file, or from a role, the location of vars_files might differ. If this is a playbook, then vars_files should be at the same indentation level as tasks:
---
- hosts: all
  vars_files:
    - vars.yml
  tasks:
    - name: Create DB container
      docker_container: ...
This has nothing to do with the volumes...
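If the task lives in a role instead, one option (a sketch, not the only way) is to load the file with include_vars from within the role's tasks:

- name: Load volume definitions from vars.yml
  ansible.builtin.include_vars:
    file: vars.yml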
