Ansible - Download latest release binary from Github repo - url

With Ansible, please advise how I could download the latest release binary from a GitHub repository. As per my current understanding, the steps would be:
a. get URL of latest release
b. download the release
For a. I have something like the following, which does not provide the actual release (e.g. v0.11.53):
- name: get latest Gogs release
  local_action:
    module: uri
    url: https://github.com/gogits/gogs/releases/latest
    method: GET
    follow_redirects: no
    status_code: 301
  register: release_url
For b. I have the below, which works but needs constant updating. Instead of the hard-coded version I would need a variable set in a.:
- name: download latest
  become: yes
  become_user: "{{ gogs_user }}"
  get_url:
    url: https://github.com/gogs/gogs/releases/download/v0.11.53/linux_amd64.tar.gz
    dest: "/home/{{ gogs_user }}/linux_amd64.tar.gz"
Thank you!

GitHub has an API for working with releases, which is documented.
So, imagine you want to get the latest release of ansible (which belongs to the ansible project). You would:
call the URL https://api.github.com/repos/ansible/ansible/releases/latest
and get a JSON structure like this:
{
  "url": "https://api.github.com/repos/ansible/ansible/releases/5120666",
  "assets_url": "https://api.github.com/repos/ansible/ansible/releases/5120666/assets",
  "upload_url": "https://uploads.github.com/repos/ansible/ansible/releases/5120666/assets{?name,label}",
  "html_url": "https://github.com/ansible/ansible/releases/tag/v2.2.1.0-0.3.rc3",
  "id": 5120666,
  "node_id": "MDc6UmVsZWFzZTUxMjA2NjY=",
  "tag_name": "v2.2.1.0-0.3.rc3",
  "target_commitish": "devel",
  "name": "THESE ARE NOT OUR OFFICIAL RELEASES",
  ...
  },
  "prerelease": false,
  "created_at": "2017-01-09T16:49:01Z",
  "published_at": "2017-01-10T20:09:37Z",
  "assets": [
  ],
  "tarball_url": "https://api.github.com/repos/ansible/ansible/tarball/v2.2.1.0-0.3.rc3",
  "zipball_url": "https://api.github.com/repos/ansible/ansible/zipball/v2.2.1.0-0.3.rc3",
  "body": "For official tarballs go to https://releases.ansible.com\n"
}
Then get the value of the key tarball_url
and download the URL retrieved just above.
In Ansible code, that would be:
- hosts: localhost
  tasks:
    - uri:
        url: https://api.github.com/repos/ansible/ansible/releases/latest
        return_content: true
      register: json_response
    - get_url:
        url: "{{ json_response.json.tarball_url }}"
        dest: ./ansible-latest.tar.gz
I'll let you adapt the parameters to answer your question :)
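For the original Gogs question, the adaptation could look something like this (a sketch only; it assumes the asset is still published as linux_amd64.tar.gz, as in the question):
- hosts: localhost
  tasks:
    - name: get latest Gogs release info
      uri:
        url: https://api.github.com/repos/gogs/gogs/releases/latest
        return_content: true
      register: gogs_release
    - name: download the latest Gogs linux_amd64 archive
      get_url:
        # build the download URL from the tag_name returned by the API
        url: "https://github.com/gogs/gogs/releases/download/{{ gogs_release.json.tag_name }}/linux_amd64.tar.gz"
        dest: "/home/{{ gogs_user }}/linux_amd64.tar.gz"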

I am using the following recipe to download and extract the latest watchexec binary for Linux from GitHub releases.
- hosts: localhost
  tasks:
    - name: check latest watchexec
      uri:
        url: https://api.github.com/repos/watchexec/watchexec/releases/latest
        return_content: true
      register: watchexec_latest
    - name: "installing watchexec {{ watchexec_latest.json.tag_name }}"
      loop: "{{ watchexec_latest.json.assets }}"
      when: "'x86_64-unknown-linux-musl.tar.xz' in item.name"
      unarchive:
        remote_src: yes
        src: "{{ item.browser_download_url }}"
        dest: "{{ ansible_env.HOME }}/bin/"
        keep_newer: yes
        extra_opts:
          - --strip=1
          - --no-anchored
          - watchexec
The tar extra_opts are explained here: --strip=1 drops the leading directory in the archive, --no-anchored lets the pattern match at any depth, and the final watchexec argument limits extraction to that file.
This still downloads the archive every time the playbook is run. As an improvement, it might be possible to use set_fact to cache the node_id attribute that corresponds to the unpacked file and skip the download when it has not changed.
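A rough sketch of that caching idea, assuming the last installed release id is persisted in a marker file next to the binary (the path ~/bin/.watchexec_release is only a hypothetical choice):
- name: read the previously installed release marker
  slurp:
    src: "{{ ansible_env.HOME }}/bin/.watchexec_release"
  register: installed_release
  ignore_errors: true
- name: "installing watchexec {{ watchexec_latest.json.tag_name }}"
  loop: "{{ watchexec_latest.json.assets }}"
  when:
    - "'x86_64-unknown-linux-musl.tar.xz' in item.name"
    # skip the download when the stored node_id matches the latest release
    - "installed_release is failed or (installed_release.content | b64decode | trim) != watchexec_latest.json.node_id"
  unarchive:
    remote_src: yes
    src: "{{ item.browser_download_url }}"
    dest: "{{ ansible_env.HOME }}/bin/"
- name: record the installed release marker
  copy:
    content: "{{ watchexec_latest.json.node_id }}"
    dest: "{{ ansible_env.HOME }}/bin/.watchexec_release"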

Another approach is to use the Ansible github_release module to get the latest tag.
Example:
- name: Get gogs latest tag
  github_release:
    user: gogs
    repo: gogs
    action: latest_release
  register: gogs_latest
- name: Grab gogs latest binaries
  unarchive:
    src: "https://github.com/gogs/gogs/releases/download/{{ gogs_latest['tag'] }}/gogs_{{ gogs_latest['tag'] | regex_replace('^v','') }}_linux_amd64.zip"
    dest: /usr/local/bin
    remote_src: true
The regex part at the end replaces the v at the beginning of the tag, since the asset name format on GitHub for gogs is now gogs_0.12.3_linux_armv7.zip while the latest tag includes a leading v.
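For illustration, the filter can be checked on its own with a debug task (the literal tag is just an example value):
- name: show what regex_replace does to the tag
  debug:
    msg: "{{ 'v0.12.3' | regex_replace('^v', '') }}"   # prints 0.12.3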

I got inspired and extended the play.
After downloading, I add the version number to the binary name and link that to the program name,
so I can check on the next run whether the version is still up to date. Otherwise it is downloaded and linked again.
- hosts: localhost
  become: false
  vars:
    org: watchexec
    repo: watchexec
    filename: x86_64-unknown-linux-gnu.tar.xz
    version: latest
    project_url: https://api.github.com/repos/{{ org }}/{{ repo }}/releases
  tasks:
    - name: check {{ repo }} version
      uri:
        url: "{{ project_url }}/{{ version }}"
        return_content: true
      register: latest_version
    - name: check if {{ repo }}-{{ version }} already there
      stat:
        path: "{{ ansible_env.HOME }}/.local/bin/{{ repo }}-{{ latest_version.json.tag_name }}"
      register: newestbinary
    - name: download and link to version {{ version }}
      block:
        - name: create tempfile
          tempfile:
            state: directory
            suffix: dwnld
          register: tempfolder_1
        - name: "installing {{ repo }} {{ latest_version.json.tag_name }}"
          # idea from: https://stackoverflow.com/a/62672308/886659
          loop: "{{ latest_version.json.assets }}"
          when: "filename|string in item.name"
          unarchive:
            remote_src: yes
            src: "{{ item.browser_download_url }}"
            dest: "{{ tempfolder_1.path }}"
            keep_newer: yes
            extra_opts:
              - --strip=1
              - --no-anchored
              - "{{ repo }}"
        - name: command because no mv available
          command: mv "{{ tempfolder_1.path }}/{{ repo }}" "{{ ansible_env.HOME }}/.local/bin/{{ repo }}-{{ latest_version.json.tag_name }}"
          args:
            creates: "{{ ansible_env.HOME }}/.local/bin/{{ repo }}-{{ latest_version.json.tag_name }}"
        - name: "link {{ repo }}-{{ latest_version.json.tag_name }} -> {{ repo }}"
          file:
            src: "{{ ansible_env.HOME }}/.local/bin/{{ repo }}-{{ latest_version.json.tag_name }}"
            dest: "{{ ansible_env.HOME }}/.local/bin/{{ repo }}"
            state: link
            force: yes
      when: not newestbinary.stat.exists
      always:
        - name: delete {{ tempfolder_1.path|default("tempfolder") }}
          file:
            path: "{{ tempfolder_1.path }}"
            state: absent
          when: tempfolder_1.path is defined
          ignore_errors: true
# vim:ft=yaml.ansible:
Here is the file on GitHub.

Download the latest release with the help of json_query.
Note: you might need to_json|from_json as a workaround for this issue.
- name: get the latest release details
  uri:
    url: https://api.github.com/repos/prometheus/node_exporter/releases/latest
    method: GET
  register: node_exp_release
  delegate_to: localhost
- name: set the release facts
  set_fact:
    file_name: "{{ node_exp_latest.name }}"
    download_url: "{{ node_exp_latest.browser_download_url }}"
  vars:
    node_exp_latest: "{{ node_exp_release.json|to_json|from_json|json_query('assets[?ends_with(name,`linux-amd64.tar.gz`)]')|first }}"
- name: download the latest release
  get_url:
    url: "{{ download_url }}"
    dest: /tmp/

I had a similar situation, but the project didn't use releases, so I found this workaround. It feels more hacky than it should be, but it gets the latest tag and uses that in lieu of releases when they don't exist.
- name: set headers more location
  set_fact:
    headers_more_location: /srv/headers-more-nginx-module
- name: git clone headers-more-nginx-module
  git:
    repo: https://github.com/openresty/headers-more-nginx-module
    dest: "{{ headers_more_location }}"
    depth: 1
    update: no
- name: get new tags from remote
  command: "git fetch --tags"
  args:
    chdir: "{{ headers_more_location }}"
- name: get latest tag
  command: "git rev-list --tags --max-count=1"
  args:
    chdir: "{{ headers_more_location }}"
  register: tags_rev_list
- name: get latest tag name
  command: "git describe --tags {{ tags_rev_list.stdout }}"
  args:
    chdir: "{{ headers_more_location }}"
  register: latest_tag
- name: checkout the latest version
  command: "git checkout {{ latest_tag.stdout }}"
  args:
    chdir: "{{ headers_more_location }}"

I needed to download the latest docker-compose release from GitHub, so I came up with the solution below. I know there are other ways to do this, for example because the latest-release response also contains a direct download link for each asset. However, I found this solution quite elegant.
- name: Get all docker-compose releases
  uri:
    url: "https://api.github.com/repos/docker/compose/releases/latest"
    return_content: yes
  register: release
- name: Set latest docker-compose release
  set_fact:
    latest_version: "{{ release.json.tag_name }}"
- name: Install docker-compose
  get_url:
    url: "https://github.com/docker/compose/releases/download/{{ latest_version }}/docker-compose-linux-{{ ansible_architecture }}"
    dest: /usr/local/bin/docker-compose
    mode: '755'
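For reference, the direct download link mentioned above could also be taken straight from the assets list instead of rebuilding the URL; the asset name pattern docker-compose-linux-<arch> is an assumption based on how the releases are currently named:
- name: Pick the matching asset download URL from the release
  set_fact:
    compose_download_url: >-
      {{ release.json.assets
         | selectattr('name', 'equalto', 'docker-compose-linux-' ~ ansible_architecture)
         | map(attribute='browser_download_url')
         | first }}
- name: Install docker-compose from the asset URL
  get_url:
    url: "{{ compose_download_url }}"
    dest: /usr/local/bin/docker-compose
    mode: '755'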

community.docker_image failed to tag image - 404 Client Error

I created an Ansible role for use in our pipelines. It logs in to AWS ECR, builds a docker image, pushes that image, re-tags the image and then pushes it again.
The role has 3 execution routes which are nearly identical, one for each of the 3 architectures we build for: x86, armv7, arm64.
This role is the only thing that runs in the playbook. When I commit to the master branch, a job runs for each of the 3 architectures. Each execution path tags the image with the branch name (with -arm and -arm64 appended respectively) and then with the latest tag (or latest-arm, latest-arm64).
All execution paths work, except for one very specific circumstance. The x86 execution path fails on the staging branch only. This role is used across multiple repositories with the same result. With each repository (on the staging branch only) the image will be built, tagged, and pushed as ecr.registry/image-name:staging without issue. The next task tags the local ecr.registry/image-name:staging image as ecr.registry/image-name:latest and pushes it.
The fact that it works with the tags latest-arm and latest-arm64 makes me think this has something to do with the latest tag specifically.
Here are the playbook tasks for each execution path. They are all almost identical.
Note: The playbook uses only localhost for the hosts: parameter so this playbook simply runs on the CI runner EC2 instance which executes the playbook.
Note: The x86 execution path runs on an x86 machine. Both of the arm execution paths run on an arm64 machine.
x86.yml
---
- name: "[x86] Build Image x86"
  community.docker.docker_image:
    state: present
    source: build
    push: "{{ push_image }}"
    name: "{{ ecr_registry }}/{{ ecr_image }}"
    tag: "{{ ecr_tag }}"
    build:
      dockerfile: "{{ dockerfile }}"
      nocache: yes
      pull: yes
      path: "{{ context_build_dir }}"
      platform: linux/amd64
  tags:
    - x86
    - never
- name: "[x86] Push Image x86 with 'latest' tag"
  community.docker.docker_image:
    source: local
    push: "{{ push_image }}"
    name: "{{ ecr_registry }}/{{ ecr_image }}:{{ ecr_tag }}"
    force_tag: true
    repository: "{{ ecr_registry }}/{{ ecr_image }}:latest"
  tags:
    - x86
    - never
arm64.yml
---
- name: "[ARM64] Build Image ARM64"
  community.docker.docker_image:
    state: present
    source: build
    push: "{{ push_image }}"
    name: "{{ ecr_registry }}/{{ ecr_image }}"
    tag: "{{ ecr_tag }}-arm64"
    build:
      dockerfile: "{{ dockerfile }}"
      nocache: yes
      pull: yes
      path: "{{ context_build_dir }}"
      platform: linux/arm64
  tags:
    - arm64
    - never
- name: "[ARM64] Push Image ARM64 with 'latest-arm64' tag"
  community.docker.docker_image:
    state: present
    source: local
    push: "{{ push_image }}"
    name: "{{ ecr_registry }}/{{ ecr_image }}:{{ ecr_tag }}-arm64"
    force_tag: true
    repository: "{{ ecr_registry }}/{{ ecr_image }}:latest-arm64"
  tags:
    - arm64
    - never
armv7.yml
---
- name: "[ARMv7] Start QEMU Container"
  community.docker.docker_container:
    name: qemu
    privileged: yes
    auto_remove: yes
    image: multiarch/qemu-user-static
    command: "--reset -p yes"
  tags:
    - armv7
    - never
- name: "[ARMv7] Build Image ARM"
  community.docker.docker_image:
    state: present
    source: build
    push: "{{ push_image }}"
    name: "{{ ecr_registry }}/{{ ecr_image }}"
    tag: "{{ ecr_tag }}-arm"
    build:
      dockerfile: "{{ dockerfile }}"
      nocache: yes
      pull: yes
      path: "{{ context_build_dir }}"
      platform: linux/arm/v7
  tags:
    - armv7
    - never
- name: "[ARMv7] Push Image ARMv7 with 'latest' tag"
  community.docker.docker_image:
    state: present
    source: local
    push: "{{ push_image }}"
    name: "{{ ecr_registry }}/{{ ecr_image }}:{{ ecr_tag }}-arm"
    force_tag: true
    repository: "{{ ecr_registry }}/{{ ecr_image }}:latest-arm"
  tags:
    - armv7
    - never

Ansible known_hosts module ssh key propagation question

I'm trying to craft a playbook that will update the known_hosts for a machine/user; however, I'm getting an error I can't make sense of.
---
- name: Keys
  hosts: adminslaves
  gather_facts: false
  no_log: false
  remote_user: test
  #pre_tasks:
  #  - setup:
  #      gather_subset:
  #        - '!all'
  tasks:
    - name: Scan for SSH host keys.
      shell: ssh-keyscan myhost.mydomain.com 2>/dev/null
      changed_when: False
      register: ssh_scan
    # - name: show vars
    #   debug:
    #     msg: "{{ ssh_scan.stdout_lines }}"
    #
    - name: Update known_hosts.
      known_hosts:
        key: "{{ item }}"
        name: "{{ ansible_host }}"
        state: present
      with_items: "{{ ssh_scan.stdout_lines }}"
My error is "msg": "Host parameter does not match hashed host field in supplied key".
I think the variable has the right information (at least it does when I debug it).
My end goal is a playbook that will add ssh keys of a list of hosts to a list of hosts for Jenkins auth.
Appreciate any help.
The problem is that the output of ssh-keyscan myhost.mydomain.com 2>/dev/null usually contains more than one key, so you need to process it; the name parameter has to match the host field of each supplied key.
Someone with the same error message raised an issue, but again the problem was with the ssh-key format. I understood it better after checking the code used by the known_hosts module.
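As a minimal sketch of that processing, the last task of the question could take the host name from each scanned line instead of ansible_host (assuming the default, non-hashed ssh-keyscan output):
- name: Update known_hosts.
  known_hosts:
    # the first field of each ssh-keyscan line is the host the key belongs to
    name: "{{ item.split()[0] }}"
    key: "{{ item }}"
    state: present
  with_items: "{{ ssh_scan.stdout_lines }}"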
Here is the code I use:
- name: Populate known_hosts
  hosts: spectrum_scale
  tags: set_known_hosts
  become: true
  tasks:
    - name: Scan for SSH keys
      ansible.builtin.shell:
        cmd: "ssh-keyscan {{ hostvars[spectrum_scale].ansible_fqdn }}
          {{ hostvars[spectrum_scale].ansible_hostname }}
          {{ hostvars[spectrum_scale].ansible_default_ipv4.address }}
          2>/dev/null"
      loop: "{{ groups['spectrum_scale'] }}"
      loop_control:
        loop_var: spectrum_scale
      register: ssh_scan
    - name: Set stdout_lines array for ssh_scan
      set_fact:
        ssout: []
    - name: Fill ssout
      set_fact:
        ssout: "{{ ssout + ss_r.stdout_lines }}"
      loop: "{{ ssh_scan.results }}"
      loop_control:
        loop_var: ss_r
      when: ss_r.stdout_lines is defined
    - name: Add client ssh keys to known_hosts
      ansible.builtin.known_hosts:
        name: "{{ hk.split()[0] }}"
        key: "{{ hk }}"
        state: present
      loop: "{{ ssout }}"
      loop_control:
        loop_var: hk

Ansible validate docker-compose with env_file and send to host

I created a role that has two files in its templates folder: docker-compose.yml.j2 and env.j2.
env.j2 is used in the docker-compose file:
version: "2"
services:
service_name:
image: {{ IMAGE | mandatory }}
container_name: service_name
mem_limit: 256m
user: "2001"
env_file: ".env"
Now my question: is there some Ansible module that sends the docker-compose file to the host and validates it there, since the env and docker-compose files are then in the same folder on the host machine?
This example of an Ansible task returns an error because the env file is not in the template folder but on the host.
- name: "Copy env file"
ansible.builtin.template:
src: "env.j2"
dest: "/opt/db_backup/.env"
mode: '770'
owner: deployment
group: deployment
- name: "Validate and copy docker compose file"
ansible.builtin.template:
src: "docker-compose.yml.j2"
dest: "/opt/db_backup/docker-compose.yml"
mode: '770'
owner: deployment
group: deployment
validate: docker-compose -f %s config
This probably falls into the complex validation configuration cases linked in the documentation for the template module's validate parameter.
In any case, unless you completely refactor your current file and pass more variables in your environment (e.g. to allow .env to live outside the current directory), you cannot validate docker-compose.yml until both files are in the same location.
An easy scenario would be to copy both files in place, validate them before doing anything else with them, and roll back to the previous version in case of error. The example below is far from bulletproof, but it will give you an idea:
---
- hosts: localhost
  gather_facts: false
  vars:
    IMAGE: alpine:latest
    deploy_dir: /tmp/validate_compose
  tasks:
    - name: "make sure {{ deploy_dir }} directory exists"
      file:
        path: "{{ deploy_dir }}"
        state: directory
    - name: copy project file templates
      template:
        src: "{{ item }}"
        dest: "{{ deploy_dir }}/{{ item | regex_replace('^(.*)\\.j2', '\\g<1>') }}"
        mode: 0640
        backup: true
      loop:
        - .env.j2
        - docker-compose.yml.j2
      register: copy_files
    - block:
        - name: check docker-compose file validity
          command:
            cmd: docker-compose config
            chdir: "{{ deploy_dir }}"
      rescue:
        - name: rollback configuration to previous version for changed files
          copy:
            src: "{{ item.backup_file }}"
            dest: "{{ item.dest }}"
            remote_src: true
          loop: "{{ copy_files.results | selectattr('backup_file', 'defined') | list }}"
        - name: Give some info about error.
          debug:
            msg:
              - The compose file did not validate.
              - Please see previous error above for details
              - Files have been rolled back to the latest known version.
        - name: Fail
          fail:
    - name: Rest of the playbook using the above validated files
      debug:
        msg: Next tasks...

Ansible - Jenkins pipeline too slow to copy files to network location

Any ideas / suggestions to speed up automatic deployment of files to a network location? I have a 200 MB folder containing multiple files and subfolders. However, when I try to deploy using the pipeline, it takes around 4 hrs to deploy.
Any suggestions to improve performance, please?
Pipeline steps:
Zip files on a Windows server (< 5 mins)
Move .zip file to destination directory (around 10-12 mins)
Unzip on destination (takes around 3.5 to 4 hrs)
My script is below:
- name: "Download maven artifact {{ group_id }}:{{ artifact }}:{{ version }}"
win_get_url:
url: "{{ nexus_repository_url }}&g={{ group_id }}&a={{ artifact }}&v={{ version }}&c={{ classifier }}&p={{ packformat }}"
dest: "{{ staging }}\\{{ artifact }}-{{ version }}.zip"
username: "{{ lookup('env', 'DSNEXUS_USERNAME') }}"
password: "{{ lookup('env', 'DSNEXUS_PASSWORD') }}"
validate_certs: no
when: nexusSource
- name: Find directories to deploy
win_find:
paths: "{{ staging }}"
file_type: directory
register: folders_to_copy
- name: Create the target Folder at NAS root
win_file:
path: "{{ root }}\\{{ target_folder }}"
state: directory
- name : Perform deployment here.
win_copy:
remote_src: yes
src: "{{ staging }}\\{{ artifact }}-{{ version }}.zip"
dest: "{{ root }}\\{{ target_folder }}\\{{ artifact }}-{{ version }}.zip"
- name : unzip here on destination.
win_unzip:
remote_src: yes
src: "{{ root }}\\{{ target_folder }}\\{{ artifact }}-{{ version }}.zip"
dest: "{{ root }}\\{{ target_folder }}"

docker_container: How to add multiple Volumes

I am trying to execute the following Docker command with Ansible:
docker run --name soadb_test1 --network=soa_net --ip 172.16.1.10 -d -v $TEST1/SOADB-Volume/u01:/u01/ -v $TEST1/SOADB-Volume/u02:/u02/ -v $TEST1/SOADB-Volume/u03:/u03/ -v $TEST1/SOADB-Volume/u04:/u04/ -v $TEST1/SOADB-Volume/ORCL:/ORCL/ --env-file $ENV_HOME/db.env.list database/enterprise:12.2.0.1
This is my Ansible script:
---
- name: Create DB container
  docker_container:
    name: "{{ name }}"
    image: "{{ image }}"
    env_file: "{{ env_file }}"
    detach: yes
    volumes:
      - "{{ src_vol }}:{{ dest_vol }}"
      - "{{ src_vol_2 }}:{{ dest_vol_2 }}"
      - "{{ src_vol_3 }}:{{ dest_vol_3 }}"
      - "{{ src_vol_4 }}:{{ dest_vol_4 }}"
      - "{{ src_vol_5 }}:{{ dest_vol_5 }}"
    networks:
      - name: soa_net
        ipv4_address: "{{ ip }}"
    vars_files:
      - vars.yml
When I run it I get the following error:
TASK [install_docker_DB : Create DB container] *******************************************************************************************************************************************************************
fatal: [soa_poc]: FAILED! => {"changed": false, "msg": "Unsupported parameters for (docker_container) module: vars_files Supported parameters include: api_version, auto_remove, blkio_weight, cacert_path, cap_drop, capabilities, cert_path, cleanup, command, cpu_period, cpu_quota, cpu_shares, cpuset_cpus, cpuset_mems, debug, detach, devices, dns_opts, dns_search_domains, dns_servers, docker_host, domainname, entrypoint, env, env_file, etc_hosts, exposed_ports, force_kill, groups, hostname, ignore_image, image, init, interactive, ipc_mode, keep_volumes, kernel_memory, key_path, kill_signal, labels, links, log_driver, log_options, mac_address, memory, memory_reservation, memory_swap, memory_swappiness, name, network_mode, networks, oom_killer, oom_score_adj, output_logs, paused, pid_mode, privileged, published_ports, pull, purge_networks, read_only, recreate, restart, restart_policy, restart_retries, security_opts, shm_size, ssl_version, state, stop_signal, stop_timeout, sysctls, timeout, tls, tls_hostname, tls_verify, tmpfs, trust_image_content, tty, ulimits, user, userns_mode, uts, volume_driver, volumes, volumes_from, working_dir"}
Am I declaring the volumes the wrong way?
It looks like your indentation level for the vars_files entry is wrong - please move it somewhere else:
---
- name: Create DB container
  docker_container:
    name: "{{ name }}"
    image: "{{ image }}"
    env_file: "{{ env_file }}"
    detach: yes
    volumes:
      - "{{ src_vol }}:{{ dest_vol }}"
      - "{{ src_vol_2 }}:{{ dest_vol_2 }}"
      - "{{ src_vol_3 }}:{{ dest_vol_3 }}"
      - "{{ src_vol_4 }}:{{ dest_vol_4 }}"
      - "{{ src_vol_5 }}:{{ dest_vol_5 }}"
    networks:
      - name: soa_net
        ipv4_address: "{{ ip }}"
The indentation for the first network entry was also wrong.
Depending on whether the above is from a playbook file or from a role, the location of vars_files might differ. If this is a playbook, then vars_files should be at the same indentation level as tasks:
---
- hosts: all
  vars_files:
    - vars.yml
  tasks:
    - name: Create DB container
      docker_container: ...
This has nothing to do with the volumes...
