Ansible known_hosts module SSH key propagation question - Jenkins

I'm trying to craft a playbook that will update the known_hosts for a machine/user; however, I'm getting an error I can't make sense of.
---
- name: Keys
  hosts: adminslaves
  gather_facts: false
  no_log: false
  remote_user: test
  #pre_tasks:
  #  - setup:
  #      gather_subset:
  #        - '!all'
  tasks:
    - name: Scan for SSH host keys.
      shell: ssh-keyscan myhost.mydomain.com 2>/dev/null
      changed_when: False
      register: ssh_scan

    #- name: show vars
    #  debug:
    #    msg: "{{ ssh_scan.stdout_lines }}"

    - name: Update known_hosts.
      known_hosts:
        key: "{{ item }}"
        name: "{{ ansible_host }}"
        state: present
      with_items: "{{ ssh_scan.stdout_lines }}"
My error is: "msg": "Host parameter does not match hashed host field in supplied key"
I think the variable has the right information (at least it does when I debug it).
My end goal is a playbook that will add the SSH keys of a list of hosts to the known_hosts file on another list of hosts for Jenkins auth.
Appreciate any help.

The problem is that the output of ssh-keyscan myhost.mydomain.com 2>/dev/null usually contains more than one key, so you need to process it; the name you pass to known_hosts also has to match the host field at the start of each returned line, and in your playbook the scanned host (myhost.mydomain.com) and {{ ansible_host }} do not necessarily match.
Someone with the same error message raised an issue, but again the problem was with the SSH key format. I understood it better after checking the code used by the known_hosts module.
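A minimal fix for the playbook above (a sketch, mirroring the split() trick used in the full playbook below) is to take the name from the scanned line itself:

- name: Update known_hosts.
  known_hosts:
    # the host field of the scanned line always matches the key it belongs to
    name: "{{ item.split()[0] }}"
    key: "{{ item }}"
    state: present
  with_items: "{{ ssh_scan.stdout_lines }}"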
Here is the full playbook I use:
- name: Populate known_hosts
  hosts: spectrum_scale
  tags: set_known_hosts
  become: true
  tasks:
    - name: Scan for SSH keys
      ansible.builtin.shell:
        cmd: "ssh-keyscan {{ hostvars[spectrum_scale].ansible_fqdn }}
          {{ hostvars[spectrum_scale].ansible_hostname }}
          {{ hostvars[spectrum_scale].ansible_default_ipv4.address }}
          2>/dev/null"
      loop: "{{ groups['spectrum_scale'] }}"
      loop_control:
        loop_var: spectrum_scale
      register: ssh_scan

    - name: Set stdout_lines array for ssh_scan
      set_fact:
        ssout: []

    - name: Fill ssout
      set_fact:
        ssout: "{{ ssout + ss_r.stdout_lines }}"
      loop: "{{ ssh_scan.results }}"
      loop_control:
        loop_var: ss_r
      when: ss_r.stdout_lines is defined

    - name: Add client ssh keys to known_hosts
      ansible.builtin.known_hosts:
        name: "{{ hk.split()[0] }}"
        key: "{{ hk }}"
        state: present
      loop: "{{ ssout }}"
      loop_control:
        loop_var: hk

Related

Retrieve docker container output with ansible using list

I need to retrieve the output of a docker command with Ansible. It's easy when running a single instance, but I'm running the command using with_dict.
I'm using something like:
- name: Running task
  docker_container:
    command: <<here my command>>
    detach: false
    recreate: true
    restart: false
    restart_policy: "no"
  with_dict: "{{ mylist.config.validator_client.accounts }}"
  register: mycontainers
I've tried the following with no success:
- name: display logs
  debug:
    msg: "{{ item.ansible_facts.docker_container.Output }}"
  with_items: "{{ mycontainers.results }}"
Any idea?
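One thing worth checking, depending on the module version in use: newer releases of docker_container return the container inspection data under a container key (which includes an Output field when detach: false) rather than under ansible_facts.docker_container. A hedged variant to try:

- name: display logs
  debug:
    # container.Output is where newer docker_container versions expose run output
    msg: "{{ item.container.Output }}"
  with_items: "{{ mycontainers.results }}"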

hashi_vault doesn't work through a Web Application Firewall

I want to retrieve a Vault secret with Ansible using the hashi_vault module, which doesn't seem to work through a WAF.
The hashi_vault module works when the Vault server is mapped to the root URL (https://address/) in the WAF, but when we use a custom path (https://address/vault) the Ansible playbook returns this error: "Invalid Hashicorp VaultToken Specified for hashi_vault lookup"
- hosts: localhost
  tasks:
    - name: set variables
      set_fact:
        vault_token: !vault |
          $ANSIBLE_VAULT;1.1;AES256
          SOMETOKEN

    - name: get Configuration token from Vault
      set_fact:
        vault_result: "{{ lookup('hashi_vault', 'secret=secret/data/all/default/token token={{ vault_token }} url=https://address validate_certs=False') }}"

    - name: parse result
      set_fact:
        TOKEN: "{{ vault_result.data.token }}"
      register: TOKEN

    - name: show result
      debug:
        msg: "{{ TOKEN }}"
I'd love to find a way to keep my custom URL (https://address/vault) and get my secret!
It's not related to your WAF, it's just a variable issue: moustaches don't nest, so the literal text {{ vault_token }} inside the already-templated lookup term is never expanded and is sent as the token.
Try something like this:
- name: get Configuration token from Vault
  set_fact:
    vault_result: "{{ lookup('hashi_vault', 'secret=secret/data/all/default/token token=' + vault_token + ' url=https://address validate_certs=False') }}"
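And since the failure was the unexpanded token rather than the WAF, the same concatenated form should in principle also take the custom path (untested sketch, assuming the WAF forwards the /vault prefix unchanged):

- name: get Configuration token from Vault (custom path)
  set_fact:
    vault_result: "{{ lookup('hashi_vault', 'secret=secret/data/all/default/token token=' + vault_token + ' url=https://address/vault validate_certs=False') }}"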

docker_container: How to add multiple Volumes

I am trying to execute the following Docker command with Ansible:
docker run --name soadb_test1 --network=soa_net --ip 172.16.1.10 -d -v $TEST1/SOADB-Volume/u01:/u01/ -v $TEST1/SOADB-Volume/u02:/u02/ -v $TEST1/SOADB-Volume/u03:/u03/ -v $TEST1/SOADB-Volume/u04:/u04/ -v $TEST1/SOADB-Volume/ORCL:/ORCL/ --env-file $ENV_HOME/db.env.list database/enterprise:12.2.0.1
This is my Ansible script:
---
- name: Create DB container
  docker_container:
    name: "{{ name }}"
    image: "{{ image }}"
    env_file: "{{ env_file }}"
    detach: yes
    volumes:
      - "{{ src_vol }}:{{ dest_vol }}"
      - "{{ src_vol_2 }}:{{ dest_vol_2 }}"
      - "{{ src_vol_3 }}:{{ dest_vol_3 }}"
      - "{{ src_vol_4 }}:{{ dest_vol_4 }}"
      - "{{ src_vol_5 }}:{{ dest_vol_5 }}"
    networks:
      - name: soa_net
        ipv4_address: "{{ ip }}"
    vars_files:
      - vars.yml
When I run it I get the following error:
TASK [install_docker_DB : Create DB container] *******************************************************************************************************************************************************************
fatal: [soa_poc]: FAILED! => {"changed": false, "msg": "Unsupported parameters for (docker_container) module: vars_files Supported parameters include: api_version, auto_remove, blkio_weight, cacert_path, cap_drop, capabilities, cert_path, cleanup, command, cpu_period, cpu_quota, cpu_shares, cpuset_cpus, cpuset_mems, debug, detach, devices, dns_opts, dns_search_domains, dns_servers, docker_host, domainname, entrypoint, env, env_file, etc_hosts, exposed_ports, force_kill, groups, hostname, ignore_image, image, init, interactive, ipc_mode, keep_volumes, kernel_memory, key_path, kill_signal, labels, links, log_driver, log_options, mac_address, memory, memory_reservation, memory_swap, memory_swappiness, name, network_mode, networks, oom_killer, oom_score_adj, output_logs, paused, pid_mode, privileged, published_ports, pull, purge_networks, read_only, recreate, restart, restart_policy, restart_retries, security_opts, shm_size, ssl_version, state, stop_signal, stop_timeout, sysctls, timeout, tls, tls_hostname, tls_verify, tmpfs, trust_image_content, tty, ulimits, user, userns_mode, uts, volume_driver, volumes, volumes_from, working_dir"}
Am I declaring the volumes the wrong way?
It looks like the indentation level of your vars_files entry is wrong; it needs to move somewhere else:
---
- name: Create DB container
  docker_container:
    name: "{{ name }}"
    image: "{{ image }}"
    env_file: "{{ env_file }}"
    detach: yes
    volumes:
      - "{{ src_vol }}:{{ dest_vol }}"
      - "{{ src_vol_2 }}:{{ dest_vol_2 }}"
      - "{{ src_vol_3 }}:{{ dest_vol_3 }}"
      - "{{ src_vol_4 }}:{{ dest_vol_4 }}"
      - "{{ src_vol_5 }}:{{ dest_vol_5 }}"
    networks:
      - name: soa_net
        ipv4_address: "{{ ip }}"
The indentation of the first network entry was also wrong.
Depending on whether the above comes from a playbook file or from a role, the correct location of vars_files differs. If this is a playbook, then vars_files belongs at the same indentation level as tasks:
---
- hosts: all
  vars_files:
    - vars.yml
  tasks:
    - name: Create DB container
      docker_container: ...
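If this is a role instead, vars_files is not available at that level; a sketch (role name taken from the error output, values from the docker run command in the question) would be to put the values in the role's own vars file:

# roles/install_docker_DB/vars/main.yml
name: soadb_test1
image: database/enterprise:12.2.0.1
ip: 172.16.1.10
# env_file and the src_vol/dest_vol pairs go here as well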
This has nothing to do with the volumes...

Ansible - Download latest release binary from GitHub repo

With Ansible, please advise how I could download the latest release binary from a GitHub repository. As per my current understanding the steps would be:
a. get the URL of the latest release
b. download the release
For a. I have something like the following, which does not provide the actual release version (e.g. v0.11.53):
- name: get latest Gogs release
  local_action:
    module: uri
    url: https://github.com/gogits/gogs/releases/latest
    method: GET
    follow_redirects: no
    status_code: 301
  register: release_url
For b. I have the below, which works but needs constant updating. Instead of a hard-coded version I would need a variable set in a.:
- name: download latest
  become: yes
  become_user: "{{ gogs_user }}"
  get_url:
    url: https://github.com/gogs/gogs/releases/download/v0.11.53/linux_amd64.tar.gz
    dest: "/home/{{ gogs_user }}/linux_amd64.tar.gz"
Thank you!
GitHub has a documented API for manipulating releases.
So imagine you want to get the latest release of ansible (which belongs to the project ansible): you would call the URL https://api.github.com/repos/ansible/ansible/releases/latest and get back a JSON structure like this:
{
  "url": "https://api.github.com/repos/ansible/ansible/releases/5120666",
  "assets_url": "https://api.github.com/repos/ansible/ansible/releases/5120666/assets",
  "upload_url": "https://uploads.github.com/repos/ansible/ansible/releases/5120666/assets{?name,label}",
  "html_url": "https://github.com/ansible/ansible/releases/tag/v2.2.1.0-0.3.rc3",
  "id": 5120666,
  "node_id": "MDc6UmVsZWFzZTUxMjA2NjY=",
  "tag_name": "v2.2.1.0-0.3.rc3",
  "target_commitish": "devel",
  "name": "THESE ARE NOT OUR OFFICIAL RELEASES",
  ...
  "prerelease": false,
  "created_at": "2017-01-09T16:49:01Z",
  "published_at": "2017-01-10T20:09:37Z",
  "assets": [],
  "tarball_url": "https://api.github.com/repos/ansible/ansible/tarball/v2.2.1.0-0.3.rc3",
  "zipball_url": "https://api.github.com/repos/ansible/ansible/zipball/v2.2.1.0-0.3.rc3",
  "body": "For official tarballs go to https://releases.ansible.com\n"
}
Then get the value of the tarball_url key and download what it points to. In Ansible code that would be:
- hosts: localhost
  tasks:
    - uri:
        url: https://api.github.com/repos/ansible/ansible/releases/latest
        return_content: true
      register: json_response

    - get_url:
        url: "{{ json_response.json.tarball_url }}"
        dest: ./ansible-latest.tar.gz
I'll let you adapt the parameters to your own repository :)
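To adapt it to the Gogs case from the question: tarball_url is the source archive, while the question wants the prebuilt linux_amd64 asset, so filter the assets list instead (a sketch; the asset-name match is an assumption):

- hosts: localhost
  tasks:
    - name: get latest Gogs release info
      uri:
        url: https://api.github.com/repos/gogs/gogs/releases/latest
        return_content: true
      register: gogs_release

    - name: download the linux_amd64 asset
      get_url:
        # first asset whose name mentions linux_amd64
        url: "{{ gogs_release.json.assets | selectattr('name', 'search', 'linux_amd64') | map(attribute='browser_download_url') | first }}"
        dest: "/home/{{ gogs_user }}/linux_amd64.tar.gz"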
I am using the following recipe to download and extract the latest watchexec binary for Linux from GitHub releases.
- hosts: localhost
  tasks:
    - name: check latest watchexec
      uri:
        url: https://api.github.com/repos/watchexec/watchexec/releases/latest
        return_content: true
      register: watchexec_latest

    - name: "installing watchexec {{ watchexec_latest.json.tag_name }}"
      loop: "{{ watchexec_latest.json.assets }}"
      when: "'x86_64-unknown-linux-musl.tar.xz' in item.name"
      unarchive:
        remote_src: yes
        src: "{{ item.browser_download_url }}"
        dest: "{{ ansible_env.HOME }}/bin/"
        keep_newer: yes
        extra_opts:
          - --strip=1
          - --no-anchored
          - watchexec
The tar extra_opts are explained here.
This still downloads the archive every time the playbook runs. As an improvement, it might be possible to use set_fact for caching the node_id attribute that corresponds to the unpacked file.
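A minimal sketch of that caching idea (the marker file path is my own assumption; note that a plain set_fact would not survive between playbook runs, so the id is persisted to disk):

- name: read the previously installed release id, if any
  slurp:
    src: "{{ ansible_env.HOME }}/.watchexec_release_id"
  register: cached_release
  failed_when: false

# guard the unarchive task above with an extra condition such as:
#   (cached_release.content | default('') | b64decode) != (watchexec_latest.json.node_id | string)

- name: remember the installed release id
  copy:
    content: "{{ watchexec_latest.json.node_id }}"
    dest: "{{ ansible_env.HOME }}/.watchexec_release_id"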
Another approach is to use the Ansible github_release module to get the latest tag.
Example:
- name: Get gogs latest tag
  github_release:
    user: gogs
    repo: gogs
    action: latest_release
  register: gogs_latest

- name: Grab gogs latest binaries
  unarchive:
    src: "https://github.com/gogs/gogs/releases/download/{{ gogs_latest['tag'] }}/gogs_{{ gogs_latest['tag'] | regex_replace('^v','') }}_linux_amd64.zip"
    dest: /usr/local/bin
    remote_src: true
The regex_replace at the end strips the v from the beginning of the tag, since the file name format on GitHub for gogs is now gogs_0.12.3_linux_armv7.zip while the latest tag includes a v.
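For example, a quick check of what that filter produces:

- debug:
    msg: "{{ 'v0.12.3' | regex_replace('^v','') }}"  # prints 0.12.3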
I got inspired and extended the play.
After downloading, I add the version number to the binary's name and create a link to the program, so on the next run I can check whether the version is still up to date. Otherwise it is downloaded and linked again.
- hosts: localhost
  become: false
  vars:
    org: watchexec
    repo: watchexec
    filename: x86_64-unknown-linux-gnu.tar.xz
    version: latest
    project_url: https://api.github.com/repos/{{ org }}/{{ repo }}/releases
  tasks:

    - name: check {{ repo }} version
      uri:
        url: "{{ project_url }}/{{ version }}"
        return_content: true
      register: latest_version

    - name: check if {{ repo }}-{{ version }} already there
      stat:
        path: "{{ ansible_env.HOME }}/.local/bin/{{ repo }}-{{ latest_version.json.tag_name }}"
      register: newestbinary

    - name: download and link to version {{ version }}
      block:
        - name: create tempfile
          tempfile:
            state: directory
            suffix: dwnld
          register: tempfolder_1

        - name: "installing {{ repo }} {{ latest_version.json.tag_name }}"
          # idea from: https://stackoverflow.com/a/62672308/886659
          loop: "{{ latest_version.json.assets }}"
          when: "filename|string in item.name"
          unarchive:
            remote_src: yes
            src: "{{ item.browser_download_url }}"
            dest: "{{ tempfolder_1.path }}"
            keep_newer: yes
            extra_opts:
              - --strip=1
              - --no-anchored
              - "{{ repo }}"

        - name: command because no mv available
          command: mv "{{ tempfolder_1.path }}/{{ repo }}" "{{ ansible_env.HOME }}/.local/bin/{{ repo }}-{{ latest_version.json.tag_name }}"
          args:
            creates: "{{ ansible_env.HOME }}/.local/bin/{{ repo }}-{{ latest_version.json.tag_name }}"

        - name: "link {{ repo }}-{{ latest_version.json.tag_name }} -> {{ repo }}"
          file:
            src: "{{ ansible_env.HOME }}/.local/bin/{{ repo }}-{{ latest_version.json.tag_name }}"
            dest: "{{ ansible_env.HOME }}/.local/bin/{{ repo }}"
            state: link
            force: yes
      when: not newestbinary.stat.exists
      always:
        - name: delete {{ tempfolder_1.path|default("tempfolder") }}
          file:
            path: "{{ tempfolder_1.path }}"
            state: absent
          when: tempfolder_1.path is defined
          ignore_errors: true

# vim:ft=yaml.ansible:
Here is the file on GitHub.
Download the latest release with the help of json_query.
Note: you might need to_json|from_json as a workaround for this issue.
- name: get the latest release details
  uri:
    url: https://api.github.com/repos/prometheus/node_exporter/releases/latest
    method: GET
  register: node_exp_release
  delegate_to: localhost

- name: set the release facts
  set_fact:
    file_name: "{{ node_exp_latest.name }}"
    download_url: "{{ node_exp_latest.browser_download_url }}"
  vars:
    node_exp_latest: "{{ node_exp_release.json|to_json|from_json|json_query('assets[?ends_with(name,`linux-amd64.tar.gz`)]')|first }}"

- name: download the latest release
  get_url:
    url: "{{ download_url }}"
    dest: /tmp/
I had a similar situation, but the repository didn't use releases, so I found this workaround. It feels hackier than it should be, but it gets the latest tag and uses that in lieu of releases when they don't exist.
- name: set headers more location
  set_fact:
    headers_more_location: /srv/headers-more-nginx-module

- name: git clone headers-more-nginx-module
  git:
    repo: https://github.com/openresty/headers-more-nginx-module
    dest: "{{ headers_more_location }}"
    depth: 1
    update: no

- name: get new tags from remote
  command: "git fetch --tags"
  args:
    chdir: "{{ headers_more_location }}"

- name: get latest tag
  command: "git rev-list --tags --max-count=1"
  args:
    chdir: "{{ headers_more_location }}"
  register: tags_rev_list

- name: get latest tag name
  command: "git describe --tags {{ tags_rev_list.stdout }}"
  args:
    chdir: "{{ headers_more_location }}"
  register: latest_tag

- name: checkout the latest version
  command: "git checkout {{ latest_tag.stdout }}"
  args:
    chdir: "{{ headers_more_location }}"
I needed to download the latest docker-compose release from GitHub, so I came up with the solution below. I know there are other ways to do this (the latest-release response also contains a direct download link, for example), but I found this solution quite elegant.
- name: Get all docker-compose releases
  uri:
    url: "https://api.github.com/repos/docker/compose/releases/latest"
    return_content: yes
  register: release

- name: Set latest docker-compose release
  set_fact:
    latest_version: "{{ release.json.tag_name }}"

- name: Install docker-compose
  get_url:
    url: "https://github.com/docker/compose/releases/download/{{ latest_version }}/docker-compose-linux-{{ ansible_architecture }}"
    dest: /usr/local/bin/docker-compose
    mode: '755'
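For reference, the direct-download variant mentioned above could reuse the assets list from the same response (a sketch; the docker-compose-linux-<arch> asset naming is an assumption based on the URL pattern used above):

- name: Install docker-compose from the release asset directly
  get_url:
    # pick the asset matching this machine's architecture
    url: "{{ release.json.assets | selectattr('name', 'equalto', 'docker-compose-linux-' + ansible_architecture) | map(attribute='browser_download_url') | first }}"
    dest: /usr/local/bin/docker-compose
    mode: '755'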

Reading multiple values from an env file with ansible and storing them as facts

I have the following code, which reads values from an environment (.env) file and stores them as facts:
- name: Read values from environment
  shell: "source {{ env_path }}; echo $DB_PASSWORD"
  register: output
  args:
    executable: /bin/bash
  changed_when: false

- name: Store read password
  set_fact:
    db_password: "{{ output.stdout }}"
  when:
    - db_password is undefined
  changed_when: false

- name: Read values from environment
  shell: "source {{ env_path }}; echo $DB_USER"
  register: output
  args:
    executable: /bin/bash
  changed_when: false

- name: Store read user
  set_fact:
    db_user: "{{ output.stdout }}"
  when:
    - db_user is undefined
  changed_when: false

- name: Read values from environment
  shell: "source {{ env_path }}; echo $DB_NAME"
  register: output
  args:
    executable: /bin/bash
  changed_when: false

- name: Store read db_name
  set_fact:
    db_name: "{{ output.stdout }}"
  when:
    - db_name is undefined
  changed_when: false

- name: Container environment loaded; the following facts are now available for use by ansible
  debug: "var={{ item }}"
  with_items:
    - db_name
    - db_user
    - db_password
It's quite bulky and unwieldy. I would like to write something like the following instead, but I can't figure out how:
vars:
  values:
    - db_name
    - db_password
    - db_user
tasks:
  - name: Read values from environment
    shell: "source {{ env_path }}; echo {{ item|upper }}"
    register: output
    with_items: values
    args:
      executable: /bin/bash
    changed_when: false

  - name: Store read value
    set_fact:
      "{{ item.0 }}": "{{ item.1.stdout }}"
    when:
      - item.0 is undefined
    with_together:
      - values
      - output.results
    changed_when: false
Instead, I get this output:
ok: [default] => (item=values) => {"changed": false, "cmd": "source /var/www/mydomain.org/.env; echo VALUES", "delta": "0:00:00.002240", "end": "2017-02-15 15:25:15.338673", "item": "values", "rc": 0, "start": "2017-02-15 15:25:15.336433", "stderr": "", "stdout": "VALUES", "stdout_lines": ["VALUES"], "warnings": []}
TASK [sql-base : Store read password] ******************************************
skipping: [default] => (item=[u'values', u'output.results']) => {"changed": false, "item": ["values", "output.results"], "skip_reason": "Conditional check failed", "skipped": true}
Even more ideal, of course, would be an Ansible module I have overlooked that loads values from an environment file.
Generally I would either put my variables into the inventory files themselves, or convert them to YAML format and use the include_vars module (you might be able to run a sed script to convert your environment file to YAML on the fly; a sketch of that follows the list below). Before using the code further down, make sure you really are required to use those environment files, and that you cannot easily use some other mechanism like:
a YAML file with the include_vars module
putting the config inside the inventory (no modules required)
putting the config into an ansible-vault file (no modules required, though you need to store the decryption key somewhere)
some other secure storage mechanism, like HashiCorp's Vault (with a plugin like https://github.com/jhaals/ansible-vault)
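As for the sed idea mentioned above, here is a minimal sketch (my own, untested; it assumes simple KEY=value lines with no quoting or multiline values, keeps the uppercase key names, and loads the result via from_yaml instead of writing a temp file for include_vars):

- name: convert the env file to YAML key/value pairs
  command: "sed 's/=/: /' {{ env_path }}"
  register: env_yaml
  changed_when: false

- name: load the converted variables as facts
  set_fact:
    env_vars: "{{ env_yaml.stdout | from_yaml }}"

- name: use them
  debug:
    msg: "NAME: {{ env_vars.DB_NAME }} PASS: {{ env_vars.DB_PASSWORD }} USER: {{ env_vars.DB_USER }}"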
Back to your code: it is actually almost correct. You were missing a dollar sign in the read task and some curly braces in the condition:
test.env
DB_NAME=abcd
DB_PASSWORD=defg
DB_USER=fghi
Note: make sure this file adheres to sh standards, meaning you don't put spaces around the = sign. Something like DB_NAME = abcd will fail.
play.yml
- hosts: all
  connection: local
  vars:
    env_path: 'test.env'
    values:
      - db_name
      - db_password
      - db_user
  tasks:
    - name: Read values from environment
      shell: "source {{ env_path }}; echo ${{ item|upper }}"
      register: output
      with_items: "{{ values }}"
      args:
        executable: /bin/bash
      changed_when: false

    - name: Store read value
      set_fact:
        "{{ item.0 }}": "{{ item.1.stdout }}"
      when: '{{ item.0 }} is undefined'
      with_together:
        - "{{ values }}"
        - "{{ output.results }}"
      changed_when: false

    - name: Debug
      debug:
        msg: "NAME: {{db_name}} PASS: {{db_password}} USER: {{db_user}}"
Running with ansible-playbook -i 127.0.0.1, play.yml (the trailing comma makes 127.0.0.1 an inline inventory list):
TASK [Debug] *******************************************************************
ok: [127.0.0.1] => {
"msg": "NAME: abcd PASS: defg USER: fghi"
}
