I'm attempting to store all my environment variables in a file called variables.yml that looks like so:
---
doo: "external"
Then I have a playbook like so:
---
- hosts: localhost
  tasks:
    - name: "i can totally echo"
      environment:
        include: variables.yml
        ugh: 'internal'
      shell: echo "$doo vs $ugh"
      register: results
    - debug: msg="{{ results.stdout }}"
The result of the echo is ' vs internal'.
How can I change this so that the result is 'external vs internal'? Many thanks!
Assuming the external variable file called variables.ext is structured as follows
---
EXTERNAL:
  DOO: "external"
then, according to Setting the remote environment and Load variables from files, dynamically within a task, a small test could look like
---
- hosts: localhost
  become: false
  gather_facts: false

  tasks:
    - name: Load environment variables
      include_vars:
        file: variables.ext

    - name: Echo variables
      shell:
        cmd: 'echo "${DOO} vs ${UGH}"'
      environment:
        DOO: "{{ EXTERNAL.DOO }}"
        UGH: "internal"
      register: result

    - name: Show result
      debug:
        msg: "{{ result.stdout }}"
resulting in an output of
TASK [Show result] ********
ok: [localhost] =>
  msg: external vs internal
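If the external file holds nothing but plain key/value pairs, the whole dictionary can also be passed to environment at once; a minimal sketch, assuming the same variables.ext structure and using the combine filter to mix in the internal value:

```yaml
- name: Echo variables, passing the whole dictionary as the environment
  shell:
    cmd: 'echo "${DOO} vs ${UGH}"'
  # EXTERNAL comes from include_vars above; combine() merges in the extra key
  environment: "{{ EXTERNAL | combine({'UGH': 'internal'}) }}"
  register: result
```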
I created a role whose templates folder contains two files: docker-compose.yml.j2 and env.j2
env.j2 is referenced in the docker-compose file:
version: "2"
services:
  service_name:
    image: {{ IMAGE | mandatory }}
    container_name: service_name
    mem_limit: 256m
    user: "2001"
    env_file: ".env"
Now my question: is there some Ansible module that sends the docker-compose file to the host and validates it there, since the env and docker-compose files are then in the same folder on the host machine?
This example of an Ansible task returns an error, because the env file is not in the templates folder but on the host.
- name: "Copy env file"
  ansible.builtin.template:
    src: "env.j2"
    dest: "/opt/db_backup/.env"
    mode: '770'
    owner: deployment
    group: deployment

- name: "Validate and copy docker compose file"
  ansible.builtin.template:
    src: "docker-compose.yml.j2"
    dest: "/opt/db_backup/docker-compose.yml"
    mode: '770'
    owner: deployment
    group: deployment
    validate: docker-compose -f %s config
This probably falls into the Complex configuration validation cases linked in the documentation for the template module's validate parameter.
In any case, unless you completely refactor your current file and pass more variables into your environment (e.g. to allow .env to live outside the current directory), you cannot validate docker-compose.yml until both files are in the same location.
An easy scenario would be to copy both files in place, validate them prior to doing anything with them, and roll back to the previous version in case of error. The below example is far from bulletproof but will give you an idea:
---
- hosts: localhost
  gather_facts: false

  vars:
    IMAGE: alpine:latest
    deploy_dir: /tmp/validate_compose

  tasks:
    - name: "make sure {{ deploy_dir }} directory exists"
      file:
        path: "{{ deploy_dir }}"
        state: directory

    - name: copy project file templates
      template:
        src: "{{ item }}"
        dest: "{{ deploy_dir }}/{{ item | regex_replace('^(.*)\\.j2', '\\g<1>') }}"
        mode: 0640
        backup: true
      loop:
        - .env.j2
        - docker-compose.yml.j2
      register: copy_files

    - block:
        - name: check docker-compose file validity
          command:
            cmd: docker-compose config
            chdir: "{{ deploy_dir }}"
      rescue:
        - name: rollback configuration to previous version for changed files
          copy:
            src: "{{ item.backup_file }}"
            dest: "{{ item.dest }}"
            remote_src: true
          loop: "{{ copy_files.results | selectattr('backup_file', 'defined') }}"

        - name: Give some info about the error
          debug:
            msg:
              - The compose file did not validate.
              - Please see the previous error above for details.
              - Files have been rolled back to the latest known good version.

        - name: Fail
          fail:

    - name: Rest of the playbook using the above validated files
      debug:
        msg: Next tasks...
The OSMC media player needs a specific PATH for playbooks:
https://github.com/osmc/osmc/issues/319
environment:
  PATH: "{{ ansible_env.PATH }}:/sbin:/usr/sbin"
I was wondering whether I can set this as an environment variable in the inventory for those machines, rather than have it in every playbook or create separate playbooks.
In common usage, is that path likely to cause problems for general *nix machines if it is implemented on non-OSMC installations?
If you can't set this as an inventory variable:
Is that just because it's not implemented/useful to most?
Or because the inventory has no relation to PATH, e.g. it's not invoked at that point?
Or is a better way for all of this to have it as a machine-specific variable/task in a role?
How would that look, please?
New to ansible and still trying to get my head round some of the concepts.
As said, the environment keyword can be used only at task or playbook level.
You will be able to use a standard playbook by just adding the following:
---
- name: Environment
  hosts: localhost
  connection: local
  gather_facts: False

  tasks:
    - name: Setup
      setup:
        gather_subset:
          - "!all"
or
---
- name: Environment
  hosts: localhost
  connection: local
  gather_facts: True
  gather_subset:
    - "!all"
If you debug the variable:
---
- name: Environment
  hosts: localhost
  connection: local
  gather_facts: False

  tasks:
    - name: Setup
      setup:
        gather_subset:
          - "!all"

    - name: Debug
      debug:
        var: ansible_env.PATH
You will get something like:
TASK [Setup] *******************************************************************************************************************************************************
ok: [localhost]

TASK [Debug] *******************************************************************************************************************************************************
ok: [localhost] => {
    "ansible_env.PATH": "/Users/imjoseangel/source/venv/ansible/bin:/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin"
}
And what if you want to pass that variable to another play with different inventory?
Just use hostvars.localhost.ansible_env.PATH:
- name: Environment2
  hosts: windows
  connection: local
  gather_facts: False
  tasks:
    - name: Debug
      debug:
        var: hostvars.localhost.ansible_env.PATH
So the

environment:
  PATH: "{{ ansible_env.PATH }}:/sbin:/usr/sbin"

will be valid only with gather_facts enabled or the setup module run against the defined inventory, but you don't need to split playbooks.
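To keep the suffix out of individual playbooks, the extra path can still live in the inventory as an ordinary variable and be referenced wherever the environment keyword is set; a minimal sketch, with osmc as a hypothetical group name:

```yaml
# group_vars/osmc.yml (hypothetical group name)
extra_path: "/sbin:/usr/sbin"

# playbook - the environment keyword itself still has to sit at play or task level
- hosts: osmc
  gather_facts: true
  environment:
    PATH: "{{ ansible_env.PATH }}:{{ extra_path | default('') }}"
  tasks:
    - name: Commands in this play see the extended PATH
      shell: echo "$PATH"
```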
I was trying to use vars_prompt in Ansible with default values taken from facts (or otherwise a previously defined variable). The playbook is intended to be used as an ad-hoc one for initial provisioning.
My playbook:
---
- hosts: server01
  gather_facts: True
  vars_prompt:
    - name: new_hostname
      prompt: please enter the name for the target
      default: "{{ ansible_hostname }}"
      private: no
  tasks:
    - debug: msg="{{ new_hostname }}"
Current result:
please enter the name for the target [{{ ansible_hostname }}]:
ERROR! 'ansible_hostname' is undefined
Expected result (assuming ansible_hostname=server01):
please enter the name for the target [server01]:
Is it possible to achieve in Ansible?
This can be implemented using the pause module:
---
- hosts: server01
  gather_facts: True
  tasks:
    - pause:
        prompt: please enter the name for the target [{{ ansible_hostname }}]
      register: prompt
    - debug:
        msg: "{{ prompt.user_input if prompt.user_input else ansible_hostname }}"
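The fallback can also be folded into the default filter, whose second parameter set to true makes an empty answer count as undefined; a small sketch along those lines:

```yaml
- set_fact:
    # empty user input falls back to the gathered hostname
    new_hostname: "{{ prompt.user_input | default(ansible_hostname, true) }}"
```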
I have the following code which reads values from an environment (.env) file and stores them as facts:
- name: Read values from environment
  shell: "source {{ env_path }}; echo $DB_PASSWORD"
  register: output
  args:
    executable: /bin/bash
  changed_when: false

- name: Store read password
  set_fact:
    db_password: "{{ output.stdout }}"
  when:
    - db_password is undefined
  changed_when: false

- name: Read values from environment
  shell: "source {{ env_path }}; echo $DB_USER"
  register: output
  args:
    executable: /bin/bash
  changed_when: false

- name: Store read user
  set_fact:
    db_user: "{{ output.stdout }}"
  when:
    - db_user is undefined
  changed_when: false

- name: Read values from environment
  shell: "source {{ env_path }}; echo $DB_NAME"
  register: output
  args:
    executable: /bin/bash
  changed_when: false

- name: Store read db_name
  set_fact:
    db_name: "{{ output.stdout }}"
  when:
    - db_name is undefined
  changed_when: false

- name: Container environment loaded; the following facts are now available for use by ansible
  debug: "var={{ item }}"
  with_items:
    - db_name
    - db_user
    - db_password
It's quite bulky and unwieldy. I would like to write it something like this, but I can't figure out how:
vars:
  values:
    - db_name
    - db_password
    - db_user
tasks:
  - name: Read values from environment
    shell: "source {{ env_path }}; echo {{ item|upper }}"
    register: output
    with_items: values
    args:
      executable: /bin/bash
    changed_when: false

  - name: Store read value
    set_fact:
      "{{ item.0 }}": "{{ item.1.stdout }}"
    when:
      - item.0 is undefined
    with_together:
      - values
      - output.results
    changed_when: false
Instead, I get this output:
ok: [default] => (item=values) => {"changed": false, "cmd": "source /var/www/mydomain.org/.env; echo VALUES", "delta": "0:00:00.002240", "end": "2017-02-15 15:25:15.338673", "item": "values", "rc": 0, "start": "2017-02-15 15:25:15.336433", "stderr": "", "stdout": "VALUES", "stdout_lines": ["VALUES"], "warnings": []}
TASK [sql-base : Store read password] ******************************************
skipping: [default] => (item=[u'values', u'output.results']) => {"changed": false, "item": ["values", "output.results"], "skip_reason": "Conditional check failed", "skipped": true}
Even more ideal of course would be if there is an ansible module I have overlooked that allows me to load values from an environment file.
Generally I would either put my variables into the inventory files themselves, or convert them to yml format and use the include_vars module (you might be able to run a sed script to convert your environment file to yml on the fly). Before using the below code make sure you are really required to use those environment files, and you cannot easily use some other mechanism like:
a YAML file with the include_vars module
putting the config inside the inventory (no modules required)
putting the config into an ansible vault file (no modules required, you need to store the decryption key somewhere though)
use some other secure storage mechanism, like Hashicorp's vault (with a plugin like https://github.com/jhaals/ansible-vault)
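For instance, if the environment file were converted to YAML once (db_name: abcd, db_password: defg, ...), the whole fact-gathering block would collapse into a single task; a minimal sketch, with variables.yml as a hypothetical file name:

```yaml
- name: Load variables converted from the .env file
  include_vars:
    file: variables.yml
```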
Back to your code, it is actually almost correct; you were missing a dollar sign in the read task and some curly braces in the condition:
test.env
DB_NAME=abcd
DB_PASSWORD=defg
DB_USER=fghi
Note: make sure that this file adheres to sh standards, meaning you don't put spaces around the = sign. Something like DB_NAME = abcd will fail.
play.yml
- hosts: all
  connection: local
  vars:
    env_path: 'test.env'
    values:
      - db_name
      - db_password
      - db_user
  tasks:
    - name: Read values from environment
      shell: "source {{ env_path }}; echo ${{ item|upper }}"
      register: output
      with_items: "{{ values }}"
      args:
        executable: /bin/bash
      changed_when: false

    - name: Store read value
      set_fact:
        "{{ item.0 }}": "{{ item.1.stdout }}"
      when: '{{ item.0 }} is undefined'
      with_together:
        - "{{ values }}"
        - "{{ output.results }}"
      changed_when: false

    - name: Debug
      debug:
        msg: "NAME: {{db_name}} PASS: {{db_password}} USER: {{db_user}}"
Running with ansible-playbook -i 127.0.0.1, play.yml:
TASK [Debug] *******************************************************************
ok: [127.0.0.1] => {
    "msg": "NAME: abcd PASS: defg USER: fghi"
}
This has been answered before here on Stack.
ansible get aws ebs volume id which already exist
Get volume id from newly created ebs volume using ansible
For the life of me I am trying ec2_vol.volume_id and some other JMESPath query bits but not getting the right output. Help! I just want the volume id, nothing more.
---
- hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: get associated vols
      ec2_vol:
        instance: i-xxxxxxxxxxxxx
        state: list
        profile: default
        region: us-east-1
      register: ec2_vol

    - debug:
        msg: "{{ ec2_vol.volume_id }}"
This also doesn't work:
---
- hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: get associated vols
      ec2_vol:
        instance: i-xxxxxxxxxxxxxx
        state: list
        profile: default
        region: us-east-1
      register: ec2_vol

    - debug: msg="{{ item.volume_id }}"
      with_items: ec2_vol.results
Tested on Ansible 2.2 and 2.3.
Taking bits from the prior answers, you will need to understand JMESPath-style filtering to get what you want out of the output.
Here is the answer:
---
- hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: get associated vols
      ec2_vol:
        instance: i-xxxxxxxxxxxxxx
        state: list
        profile: default
        region: us-east-1
      register: ec2_vol

    - debug: msg="{{ ec2_vol.volumes | map(attribute='id') | list }}"
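If the instance is known to have exactly one attached volume and only the bare id string is wanted, the first filter can be appended; a sketch under that assumption:

```yaml
- debug:
    # picks the single id out of the one-element list
    msg: "{{ ec2_vol.volumes | map(attribute='id') | list | first }}"
```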