docker_container: How to add multiple Volumes

I am trying to execute the following Docker command with Ansible:
docker run --name soadb_test1 --network=soa_net --ip 172.16.1.10 -d -v $TEST1/SOADB-Volume/u01:/u01/ -v $TEST1/SOADB-Volume/u02:/u02/ -v $TEST1/SOADB-Volume/u03:/u03/ -v $TEST1/SOADB-Volume/u04:/u04/ -v $TEST1/SOADB-Volume/ORCL:/ORCL/ --env-file $ENV_HOME/db.env.list database/enterprise:12.2.0.1
This is my Ansible Script:
---
- name: Create DB container
  docker_container:
    name: "{{ name }}"
    image: "{{ image }}"
    env_file: "{{ env_file }}"
    detach: yes
    volumes:
      - "{{ src_vol }}:{{ dest_vol }}"
      - "{{ src_vol_2 }}:{{ dest_vol_2 }}"
      - "{{ src_vol_3 }}:{{ dest_vol_3 }}"
      - "{{ src_vol_4 }}:{{ dest_vol_4 }}"
      - "{{ src_vol_5 }}:{{ dest_vol_5 }}"
    networks:
    - name: soa_net
      ipv4_address: "{{ ip }}"
    vars_files:
      - vars.yml
When I run it I get the following error:
TASK [install_docker_DB : Create DB container] *******************************************************************************************************************************************************************
fatal: [soa_poc]: FAILED! => {"changed": false, "msg": "Unsupported parameters for (docker_container) module: vars_files Supported parameters include: api_version, auto_remove, blkio_weight, cacert_path, cap_drop, capabilities, cert_path, cleanup, command, cpu_period, cpu_quota, cpu_shares, cpuset_cpus, cpuset_mems, debug, detach, devices, dns_opts, dns_search_domains, dns_servers, docker_host, domainname, entrypoint, env, env_file, etc_hosts, exposed_ports, force_kill, groups, hostname, ignore_image, image, init, interactive, ipc_mode, keep_volumes, kernel_memory, key_path, kill_signal, labels, links, log_driver, log_options, mac_address, memory, memory_reservation, memory_swap, memory_swappiness, name, network_mode, networks, oom_killer, oom_score_adj, output_logs, paused, pid_mode, privileged, published_ports, pull, purge_networks, read_only, recreate, restart, restart_policy, restart_retries, security_opts, shm_size, ssl_version, state, stop_signal, stop_timeout, sysctls, timeout, tls, tls_hostname, tls_verify, tmpfs, trust_image_content, tty, ulimits, user, userns_mode, uts, volume_driver, volumes, volumes_from, working_dir"}
Am I declaring the volumes the wrong way?

It looks like the indentation level of your vars_files entry is wrong - it needs to move somewhere else:
---
- name: Create DB container
  docker_container:
    name: "{{ name }}"
    image: "{{ image }}"
    env_file: "{{ env_file }}"
    detach: yes
    volumes:
      - "{{ src_vol }}:{{ dest_vol }}"
      - "{{ src_vol_2 }}:{{ dest_vol_2 }}"
      - "{{ src_vol_3 }}:{{ dest_vol_3 }}"
      - "{{ src_vol_4 }}:{{ dest_vol_4 }}"
      - "{{ src_vol_5 }}:{{ dest_vol_5 }}"
    networks:
      - name: soa_net
        ipv4_address: "{{ ip }}"
The indentation for the first network entry was also wrong.
Depending on whether the above is from a playbook file or from a role, the location of vars_files might differ. If this is a playbook, then vars_files should be at the same indentation level as tasks:
---
- hosts: all
  vars_files:
    - vars.yml
  tasks:
    - name: Create DB container
      docker_container: ...
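If it is a role tasks file instead, attach the vars to the play that applies the role; a minimal sketch, assuming the role is called install_docker_DB (the name shown in your error output) and the target host is soa_poc:
---
- hosts: soa_poc
  vars_files:
    - vars.yml
  roles:
    - install_docker_DB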
This has nothing to do with the volumes...

Related

community.docker_image failed to tag image - 404 Client Error

I created an Ansible role for use in our pipelines. It logs in to AWS ECR, builds a docker image, pushes that image, re-tags the image and then pushes it again.
The role has 3 execution routes, which are nearly identical: one for each of the 3 architectures that we build for (x86, armv7, arm64).
This role is the only thing that runs in the playbook. When I commit to the master branch, a job runs for each of the 3 architectures. Each execution path tags the image with the branch name (with -arm and -arm64 appended respectively) and then with the latest tag (or latest-arm, latest-arm64).
All execution paths work, except in one very specific circumstance: the x86 execution path fails on the staging branch only. This role is used across multiple repositories with the same result. With each repository (on the staging branch only), the image will be built, tagged, and pushed as ecr.registry/image-name:staging without issue. The next task tags the local ecr.registry/image-name:staging image as ecr.registry/image-name:latest and pushes it.
The fact that it works with the tags latest-arm and latest-arm64 makes me think this has something to do with the latest tag specifically.
Here are the playbook tasks for each execution path. They are all almost identical.
Note: The playbook uses only localhost for the hosts: parameter, so it simply runs on the CI runner EC2 instance that executes the playbook.
Note: The x86 execution path runs on an x86 machine. Both of the arm execution paths run on an arm64 machine.
x86.yml
---
- name: "[x86] Build Image x86"
  community.docker.docker_image:
    state: present
    source: build
    push: "{{ push_image }}"
    name: "{{ ecr_registry }}/{{ ecr_image }}"
    tag: "{{ ecr_tag }}"
    build:
      dockerfile: "{{ dockerfile }}"
      nocache: yes
      pull: yes
      path: "{{ context_build_dir }}"
      platform: linux/amd64
  tags:
    - x86
    - never

- name: "[x86] Push Image x86 with 'latest' tag"
  community.docker.docker_image:
    source: local
    push: "{{ push_image }}"
    name: "{{ ecr_registry }}/{{ ecr_image }}:{{ ecr_tag }}"
    force_tag: true
    repository: "{{ ecr_registry }}/{{ ecr_image }}:latest"
  tags:
    - x86
    - never
arm64.yml
---
- name: "[ARM64] Build Image ARM64"
  community.docker.docker_image:
    state: present
    source: build
    push: "{{ push_image }}"
    name: "{{ ecr_registry }}/{{ ecr_image }}"
    tag: "{{ ecr_tag }}-arm64"
    build:
      dockerfile: "{{ dockerfile }}"
      nocache: yes
      pull: yes
      path: "{{ context_build_dir }}"
      platform: linux/arm64
  tags:
    - arm64
    - never

- name: "[ARM64] Push Image ARM64 with 'latest-arm64' tag"
  community.docker.docker_image:
    state: present
    source: local
    push: "{{ push_image }}"
    name: "{{ ecr_registry }}/{{ ecr_image }}:{{ ecr_tag }}-arm64"
    force_tag: true
    repository: "{{ ecr_registry }}/{{ ecr_image }}:latest-arm64"
  tags:
    - arm64
    - never
armv7.yml
---
- name: "[ARMv7] Start QEMU Container"
  community.docker.docker_container:
    name: qemu
    privileged: yes
    auto_remove: yes
    image: multiarch/qemu-user-static
    command: "--reset -p yes"
  tags:
    - armv7
    - never

- name: "[ARMv7] Build Image ARM"
  community.docker.docker_image:
    state: present
    source: build
    push: "{{ push_image }}"
    name: "{{ ecr_registry }}/{{ ecr_image }}"
    tag: "{{ ecr_tag }}-arm"
    build:
      dockerfile: "{{ dockerfile }}"
      nocache: yes
      pull: yes
      path: "{{ context_build_dir }}"
      platform: linux/arm/v7
  tags:
    - armv7
    - never

- name: "[ARMv7] Push Image ARMv7 with 'latest' tag"
  community.docker.docker_image:
    state: present
    source: local
    push: "{{ push_image }}"
    name: "{{ ecr_registry }}/{{ ecr_image }}:{{ ecr_tag }}-arm"
    force_tag: true
    repository: "{{ ecr_registry }}/{{ ecr_image }}:latest-arm"
  tags:
    - armv7
    - never

Ansible known_hosts module ssh key propagation question

I'm trying to craft a playbook that will update the known_hosts for a machine/user; however, I'm getting an error I can't make sense of.
---
- name: Keys
  hosts: adminslaves
  gather_facts: false
  no_log: false
  remote_user: test
  #pre_tasks:
  #  - setup:
  #      gather_subset:
  #        - '!all'
  tasks:
    - name: Scan for SSH host keys.
      shell: ssh-keyscan myhost.mydomain.com 2>/dev/null
      changed_when: False
      register: ssh_scan

    # - name: show vars
    #   debug:
    #     msg: "{{ ssh_scan.stdout_lines }}"

    - name: Update known_hosts.
      known_hosts:
        key: "{{ item }}"
        name: "{{ ansible_host }}"
        state: present
      with_items: "{{ ssh_scan.stdout_lines }}"
My error is: "Host parameter does not match hashed host field in supplied key".
I think the variable has the right information (at least it does when I debug it).
My end goal is a playbook that will add ssh keys of a list of hosts to a list of hosts for Jenkins auth.
Appreciate any help.
The problem is that the output of ssh-keyscan myhost.mydomain.com 2>/dev/null usually contains more than one key, so you need to process it.
Someone with the same error message raised an issue, but again the problem was with the ssh-key format. I understood it better by checking the code used by the known_hosts task.
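For reference, each line printed by ssh-keyscan has the form hostname key-type base64-key, something like the (made-up) lines below; this is why the host name can be recovered with hk.split()[0] in the last task of the playbook that follows:
myhost.mydomain.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQAB...
myhost.mydomain.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5...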
Here is the code I use:
- name: Populate known_hosts
  hosts: spectrum_scale
  tags: set_known_hosts
  become: true
  tasks:
    - name: Scan for SSH keys
      ansible.builtin.shell:
        cmd: "ssh-keyscan {{ hostvars[spectrum_scale].ansible_fqdn }}
          {{ hostvars[spectrum_scale].ansible_hostname }}
          {{ hostvars[spectrum_scale].ansible_default_ipv4.address }}
          2>/dev/null"
      loop: "{{ groups['spectrum_scale'] }}"
      loop_control:
        loop_var: spectrum_scale
      register: ssh_scan

    - name: Set stdout_lines array for ssh_scan
      set_fact:
        ssout: []

    - name: Fill ssout
      set_fact:
        ssout: "{{ ssout + ss_r.stdout_lines }}"
      loop: "{{ ssh_scan.results }}"
      loop_control:
        loop_var: ss_r
      when: ss_r.stdout_lines is defined

    - name: Add client ssh keys to known_hosts
      ansible.builtin.known_hosts:
        name: "{{ hk.split()[0] }}"
        key: "{{ hk }}"
        state: present
      loop: "{{ ssout }}"
      loop_control:
        loop_var: hk

Retrieve docker container output with ansible using list

I need to retrieve the output of a docker command with Ansible. It's easy when running a single instance, but I'm running the command using with_dict.
I'm using something like:
- name: Running task
  docker_container:
    command: <<here my command>>
    detach: false
    recreate: true
    restart: false
    restart_policy: "no"
  with_dict: "{{ mylist.config.validator_client.accounts }}"
  register: mycontainers
I've tried the following with no success:
- name: display logs
  debug:
    msg: "{{ item.ansible_facts.docker_container.Output }}"
  with_items: "{{ mycontainers.results }}"
Any idea?
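Not part of the original thread, but one way to narrow this down is to dump a single registered result and read off the actual key path, because where the output lives depends on the version of the docker_container module in use (on recent versions it is container.Output, and it is only populated when detach: false); a sketch:
- name: Inspect the first result to see the returned structure
  debug:
    var: mycontainers.results[0]

- name: Display logs (key path assumed, adjust to whatever the debug above shows)
  debug:
    msg: "{{ item.container.Output }}"
  with_items: "{{ mycontainers.results }}"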

Ansible playbook script to destroy all containers and remove images from remote

I'm looking for an Ansible playbook script which can stop and destroy all containers on a remote host and remove the existing images as well. The code below only stops the running containers.
---
- hosts: all
  gather_facts: false
  tasks:
    - name: Get running containers
      docker_host_info:
        containers: yes
      register: docker_info

    - name: Stop running containers
      docker_container:
        name: "{{ item }}"
        state: stopped
      loop: "{{ docker_info.containers | map(attribute='Id') | list }}"
This is the final solution for my question, and it worked 100%. It will destroy all containers and remove the images as well.
---
- hosts: all
  gather_facts: false
  tasks:
    - name: Get running containers
      docker_host_info:
        containers: yes
      register: docker_info

    - name: Stop running containers
      docker_container:
        name: "{{ item }}"
        state: stopped
      loop: "{{ docker_info.containers | map(attribute='Id') | list }}"

    - name: Remove stopped docker containers
      shell: |
        docker rm $(docker ps -a -q);
      # only run when at least one container was found
      when: docker_info.containers | length > 0

    - name: Get details of all images
      docker_host_info:
        images: yes
        verbose_output: yes
      register: image_info

    - name: Remove all images
      docker_image:
        name: "{{ item }}"
        state: absent
      loop: "{{ image_info.images | map(attribute='Id') | list }}"

Restart multiple Docker containers using Ansible

How do I dynamically restart all my Docker containers from Ansible? I know a way where I can define my containers in a variable and loop through them, but what I want to achieve is this:
Fetch the currently running containers and restart all or some of them one by one through some loop.
How to achieve this using Ansible?
Docker explanation
Retrieve the name/image of all the running containers:
docker container ls -a --format '{{.Names}} {{.Image}}'
You could also filter the output of the docker container command to a specific image name, thanks to the --filter ancestor=image_name option:
docker container ls -a --filter ancestor=alpine --format '{{.Names}} {{.Image}}'
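With that format string, every output line is simply the container name and its image separated by a space, for example (illustrative values):
my_container_1 my_image:v1
my_container_2 my_image:v2
This is what makes the split(' ') trick in the tasks below work.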
Ansible integration:
First I would define some filters as Ansible variables:
vars:
  - image_v1: '--filter ancestor=my_image:v1'
  - image_v2: '--filter ancestor=my_image:v2'
Then I will execute the docker container command in a dedicated task and save the command output to an Ansible variable. Note that the literal Go-template braces have to be escaped as {{ '{{' }} and {{ '}}' }} so that Jinja2 does not try to evaluate them:
- name: Get images name
  command: docker container ls -a {{ image_v1 }} {{ image_v2 }} --format "{{ '{{' }}.Names {{ '}}' }} {{ '{{' }}.Image {{ '}}' }}"
  register: docker_images
Finally, I will iterate over it and use it in the docker_container Ansible module:
- name: Restart images
  docker_container:
    name: "{{ item.split(' ')[0] }}"
    image: "{{ item.split(' ')[1] }}"
    state: started
    restart: yes
  loop: "{{ docker_images.stdout_lines }}"
final playbook.yml
---
- hosts: localhost
  gather_facts: no
  vars:
    - image_v1: '--filter ancestor=my_image:v1'
    - image_v2: '--filter ancestor=my_image:v2'
  tasks:
    - name: Get images name
      command: docker container ls -a {{ image_v1 }} {{ image_v2 }} --format "{{ '{{' }}.Names {{ '}}' }} {{ '{{' }}.Image {{ '}}' }}"
      register: docker_images

    - name: Restart images
      docker_container:
        name: "{{ item.split(' ')[0] }}"
        image: "{{ item.split(' ')[1] }}"
        state: started
        restart: yes
      loop: "{{ docker_images.stdout_lines }}"
