I am trying to execute the following Docker command with Ansible:
docker run --name soadb_test1 --network=soa_net --ip 172.16.1.10 -d -v $TEST1/SOADB-Volume/u01:/u01/ -v $TEST1/SOADB-Volume/u02:/u02/ -v $TEST1/SOADB-Volume/u03:/u03/ -v $TEST1/SOADB-Volume/u04:/u04/ -v $TEST1/SOADB-Volume/ORCL:/ORCL/ --env-file $ENV_HOME/db.env.list database/enterprise:12.2.0.1
This is my Ansible script:
---
- name: Create DB container
  docker_container:
    name: "{{ name }}"
    image: "{{ image }}"
    env_file: "{{ env_file }}"
    detach: yes
    volumes:
      - "{{ src_vol }}:{{ dest_vol }}"
      - "{{ src_vol_2 }}:{{ dest_vol_2 }}"
      - "{{ src_vol_3 }}:{{ dest_vol_3 }}"
      - "{{ src_vol_4 }}:{{ dest_vol_4 }}"
      - "{{ src_vol_5 }}:{{ dest_vol_5 }}"
    networks:
    - name: soa_net
      ipv4_address: "{{ ip }}"
    vars_files:
      - vars.yml
When I run it I get the following error:
TASK [install_docker_DB : Create DB container] *******************************************************************************************************************************************************************
fatal: [soa_poc]: FAILED! => {"changed": false, "msg": "Unsupported parameters for (docker_container) module: vars_files Supported parameters include: api_version, auto_remove, blkio_weight, cacert_path, cap_drop, capabilities, cert_path, cleanup, command, cpu_period, cpu_quota, cpu_shares, cpuset_cpus, cpuset_mems, debug, detach, devices, dns_opts, dns_search_domains, dns_servers, docker_host, domainname, entrypoint, env, env_file, etc_hosts, exposed_ports, force_kill, groups, hostname, ignore_image, image, init, interactive, ipc_mode, keep_volumes, kernel_memory, key_path, kill_signal, labels, links, log_driver, log_options, mac_address, memory, memory_reservation, memory_swap, memory_swappiness, name, network_mode, networks, oom_killer, oom_score_adj, output_logs, paused, pid_mode, privileged, published_ports, pull, purge_networks, read_only, recreate, restart, restart_policy, restart_retries, security_opts, shm_size, ssl_version, state, stop_signal, stop_timeout, sysctls, timeout, tls, tls_hostname, tls_verify, tmpfs, trust_image_content, tty, ulimits, user, userns_mode, uts, volume_driver, volumes, volumes_from, working_dir"}
Am I declaring the volumes the wrong way?
It looks like the indentation level of your vars_files entry is wrong; it needs to be moved elsewhere:
---
- name: Create DB container
  docker_container:
    name: "{{ name }}"
    image: "{{ image }}"
    env_file: "{{ env_file }}"
    detach: yes
    volumes:
      - "{{ src_vol }}:{{ dest_vol }}"
      - "{{ src_vol_2 }}:{{ dest_vol_2 }}"
      - "{{ src_vol_3 }}:{{ dest_vol_3 }}"
      - "{{ src_vol_4 }}:{{ dest_vol_4 }}"
      - "{{ src_vol_5 }}:{{ dest_vol_5 }}"
    networks:
      - name: soa_net
        ipv4_address: "{{ ip }}"
The indentation for the first network entry was also wrong.
Depending on whether the above is from a playbook file or from a role, the location of vars_files differs. If this is a playbook, then vars_files should sit at the same indentation level as tasks:
---
- hosts: all
  vars_files:
    - vars.yml
  tasks:
    - name: Create DB container
      docker_container: ...
This has nothing to do with the volumes...
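If the above is from a role instead, there is no task-level vars_files at all; the usual options are to put the variables in the role's vars/main.yml (loaded automatically) or to load the file explicitly with include_vars. A minimal sketch of the latter, assuming vars.yml is reachable from the role:

- name: Load DB variables
  include_vars:
    file: vars.yml

- name: Create DB container
  docker_container: ...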
I'm trying to write conditional properties in the Ansible docker_container module, for example adding log_options only when a variable is set. Any idea? Ansible doesn't seem very flexible for this task.
- name: vault
  become: yes
  docker_container:
    name: vault
    image: vault
    state: started
    restart: yes
    network_mode: host
    command: server
    capabilities:
      - IPC_LOCK
You may want to read about omitting parameters:
docker_container:
  ...
  log_options: "{{ my_log_opts | default(omit) }}"
  ...
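Applied to the vault task above, a minimal sketch (my_log_opts is a hypothetical variable; when it is undefined, the special omit placeholder drops the parameter as if it had never been written):

- name: vault
  become: yes
  docker_container:
    name: vault
    image: vault
    state: started
    restart: yes
    network_mode: host
    command: server
    capabilities:
      - IPC_LOCK
    # Passed to the module only when my_log_opts is defined.
    log_options: "{{ my_log_opts | default(omit) }}"

The same pattern works for any optional parameter, so a single task definition can cover both the "variable set" and "variable unset" cases.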
I want a playbook that will start a container (in a task) and only link it to another container if the link is provided in a variable. For example:
- name: Start container
  docker_container:
    image: "somerepo/app-server:{{ var_tag }}"
    name: odoo-server
    state: started
    log_opt: "tag=app-server-{{ var_tag }}"
    expose:
      - 8080
    links:
      - "{{ var_db_link }}"
        when: var_db_link is defined
But of course this does not work. (I know a "-" without a value is invalid; this is just pseudo-code.)
The whole task is actually quite a bit larger because it includes other directives, so I really don't want to have two versions of the task defined, one for starting with a link and another without.
When you use "-", it implies there is always a value, so here is a way to avoid it:
---
- hosts: localhost
  tasks:
    - name: Start container
      docker_container:
        image: centos
        name: odoo-server
        state: started
        expose:
          - 8080
        links: "{{ var_db_link | default([]) }}"
Then test it with:
ansible-playbook ha.yml -e var_db_link="redis-master:centos"
ansible-playbook ha.yml
It runs normally!
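An alternative is the omit pattern shown earlier in this thread, which drops the parameter entirely instead of passing an empty list; a sketch of the same task:

- name: Start container
  docker_container:
    image: centos
    name: odoo-server
    state: started
    expose:
      - 8080
    # Removed from the module arguments when var_db_link is undefined.
    links: "{{ var_db_link | default(omit) }}"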
I have a situation where I have to run the node/chrome and selenium/hub images on different host machines. The problem is that although I am linking them in the Ansible role as below:
- name: seleniumchromenode container
  docker:
    name: seleniumhubchromenode
    image: "{{ seleniumchromenode_image }}"
    state: "{{ 'started' }}"
    pull: always
    restart_policy: always
    links: seleniumhub:hub
It doesn't get linked; in other words, the hub is not discovering the node. Please let me know if linking works only when the hub and node are on the same host machine.
Links don't work across machines. You can either specify the IP address/hostname and let it connect through that, or you can use Docker Swarm Mode to deploy your containers - that lets you do something very close to linking (it sets up a mesh network across the swarm nodes, so services can find each other).
Simplest: just pass the hostname in Ansible.
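If you go the Swarm route, the shape of it looks roughly like this; treat it as a sketch, not a drop-in (docker_swarm_service only exists in newer Ansible releases, and the network name, service names, and HUB_HOST variable are illustrative assumptions, not taken from the question):

- name: Create an overlay network spanning the swarm
  docker_network:
    name: selenium-net
    driver: overlay

- name: Hub service
  docker_swarm_service:
    name: hub
    image: selenium/hub
    networks:
      - selenium-net

- name: Chrome node service
  docker_swarm_service:
    name: chrome
    image: selenium/node-chrome
    networks:
      - selenium-net
    env:
      # On the overlay network the service name "hub" resolves via DNS;
      # older Selenium node images read the hub address from HUB_HOST.
      - "HUB_HOST=hub"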
Below is what finally worked for me. Note that SE_OPTS is necessary for the node to be able to link successfully to a hub that is on a different host.
- name: seleniumchromenode container
  docker_container:
    name: seleniumhubchromenode
    image: "{{ seleniumchromenode_image }}"
    state: started
    pull: true
    restart_policy: always
    exposed_ports:
      - "{{ seleniumnode_port }}"
    published_ports:
      - "{{ seleniumnode_port }}:{{ seleniumnode_port }}"
    env:
      HUB_PORT_4444_TCP_ADDR: "{{ seleniumhub_host }}"
      HUB_PORT_4444_TCP_PORT: "{{ seleniumhub_port }}"
      SE_OPTS: "-host {{ seleniumnode_host }} -port {{ seleniumnode_port }}"
      NODE_MAX_INSTANCES: "5"
      NODE_MAX_SESSION: "5"
I have an Ansible script through which I spawn a Docker container and add a few hosts entries to it. etc_hosts takes the host name as key and the corresponding IP address as value. In my case I need both the host name and the IP address to be driven by variables, for instance:
docker_container:
  name: image-name
  image: image-to-be-pulled
  state: started
  restart_policy: always
  etc_hosts:
    "{{ domain_1 }}": "{{ domain_1_ip }}"
    domain_2: "{{ domain_2_ip }}"
    domain_3: "{{ domain_3_ip }}"
With the above configuration, the entry in the hosts file comes out as
xx.xx.xx.xxx {{ domain_1 }}
Ideally the hosts file should contain the host name against the IP. Can someone suggest how I can achieve this? Thanks in advance.
Try this syntax:
docker_container:
  name: image-name
  image: image-to-be-pulled
  state: started
  restart_policy: always
  etc_hosts: >
    {
      "{{ domain_1 }}": "{{ domain_1_ip }}",
      "domain_2": "{{ domain_2_ip }}",
      "domain_3": "{{ domain_3_ip }}"
    }
This will form a dict-like string that Ansible's templating engine will evaluate into a dict.
Pay attention that every item should be quoted and pairs are separated by commas.
I had success with...
in my play(book):
- vars:
    my_etc_hosts:
      {
        "host1.example.com host1": 10.3.1.5,
        "host2.example.com host2": 10.3.1.3
      }
(no ">" character, compared to the accepted answer)
Separated from the playbook, in the role...
in the task:
- name: have the container created
  docker_container:
    etc_hosts: "{{ my_etc_hosts | default({}) }}"
in defaults/main.yml:
my_etc_hosts: {}
Additionally (not needed by the task above, but part of another template task):
in a template, with jinja2:
{% for host in my_etc_hosts | default([]) %}
--add-host "{{ host }}":{{ my_etc_hosts[host] }} \
{% endfor %}
(As a plus, you can see how two hostnames for one IP address are handled: "fqdn alias1". If you instead split them into two entries, they will form two lines in /etc/hosts with the same IP, which is not correct according to the man page.)
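For illustration, with the my_etc_hosts value above, that loop renders roughly:

--add-host "host1.example.com host1":10.3.1.5 \
--add-host "host2.example.com host2":10.3.1.3 \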