Error while using vagrant to deploy ruby/rbenv using railsbox - ruby-on-rails

I used railsbox to create a configuration that I can deploy through Vagrant.
However, the setup is stopped by the following error.
==> myapp: fatal: [127.0.0.1]: FAILED! => {"failed": true, "msg": "the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'ansible.vars.unsafe_proxy.AnsibleUnsafeText object' has no attribute 'git'\n\nThe error appears to have been in '/ansible/roles/ruby/tasks/rbenv.yml': line 15, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Install plugins\n ^ here\n"}
The content of rbenv.yml:
---
- name: Install libffi-dev
  apt: name=libffi-dev
- name: Clone rbenv repository to ~/.rbenv
  git: repo={{ rbenv_repo }} dest={{ rbenv_path }} version={{ rbenv_version }} accept_hostkey=yes
  sudo_user: '{{ user_name }}'
- name: Create rbenv.sh
  template: src=rbenv.sh.j2 dest={{ profile_d_path }}/rbenv.sh owner={{ user_name }} group={{ group_name }}
- name: Create plugins directory
  file: path={{ rbenv_plugins_path }} state=directory owner={{ user_name }} group={{ group_name }}
- name: Install plugins
  git: repo={{ item.git }} dest={{ rbenv_plugins_path }}/{{ item.name }} version={{ item.version }} accept_hostkey=yes
  sudo_user: '{{ user_name }}'
  with_items: rbenv_plugins
- name: Check if ruby installed
  shell: '{{ rbenv_bin }} versions | grep -q {{ rbenv_ruby_version }}'
  register: ruby_installed
  ignore_errors: yes
  sudo_user: '{{ user_name }}'
- name: Install ruby
  command: '{{ rbenv_bin }} install {{ rbenv_ruby_version }}'
  sudo_user: '{{ user_name }}'
  when: ruby_installed|failed
- name: Set global ruby version
  command: '{{ rbenv_bin }} global {{ rbenv_ruby_version }}'
  sudo_user: '{{ user_name }}'
- name: Rehash rbenv
  command: '{{ rbenv_bin }} rehash'
  sudo_user: '{{ user_name }}'
What is wrong with the YAML file?

If you run a recent version of Ansible (I believe this changed in 2.2), you need to write the with_items value with Jinja2 syntax:
- name: Install plugins
  git: repo={{ item.git }} dest={{ rbenv_plugins_path }}/{{ item.name }} version={{ item.version }} accept_hostkey=yes
  sudo_user: '{{ user_name }}'
  with_items: '{{ rbenv_plugins }}'
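The original error message ('AnsibleUnsafeText object' has no attribute 'git') also hints at the cause: with the bare with_items: rbenv_plugins syntax, newer Ansible passed the literal string rather than the list, so each item had no .git attribute. With the quoted Jinja2 syntax the variable itself must be a list of dicts; a hypothetical example of the expected shape (the plugin names and URLs here are illustrative, not taken from railsbox):

```yaml
# Hypothetical shape of rbenv_plugins: each entry must provide the
# git, name and version keys that the task references.
rbenv_plugins:
  - name: ruby-build
    git: https://github.com/rbenv/ruby-build.git
    version: master
  - name: rbenv-vars
    git: https://github.com/rbenv/rbenv-vars.git
    version: master
```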

Related

How to escape JSON in Ansible playbook

I have the following YAML Ansible playbook file, which I intend to use to capture some information from my Docker containers:
---
# Syntax check: /usr/bin/ansible-playbook --syntax-check --inventory data/config/host_inventory.yaml data/config/ansible/docker_containers.yaml
- hosts: hosts
  gather_facts: no
  tasks:
    - name: Docker ps output - identify running containers
      shell: "/usr/bin/docker ps --format '{\"ID\":\"{{ .ID }}\", \"Image\": \"{{ .Image }}\", \"Names\":\"{{ .Names }}\"}'"
      register: docker_ps_output
    - name: Show content of docker_ps_output
      debug:
        msg: docker_ps_output.stdout_lines
But escaping is not working, Ansible gives me the middle finger when I try to run the playbook:
PLAY [hosts] ***********************************************************************************************************************************************************
TASK [Docker ps output - identify running containers] **********************************************************************************************************************************************
fatal: [myhost.com]: FAILED! => {"msg": "template error while templating string: unexpected '.'. String: /usr/bin/docker ps --format ''{\"ID\":\"{{ .ID }}\", \"Image\": \"{{ .Image }}\", \"Names\":\"{{ .Names }}\"}''"}
to retry, use: --limit #/tmp/docker_containers.retry
PLAY RECAP *****************************************************************************************************************************************************************************************
myhost.com : ok=0 changed=0 unreachable=0 failed=1
The original command I'm trying to run:
/usr/bin/docker ps --format '{"ID":"{{ .ID }}", "Image": "{{ .Image }}", "Names":"{{ .Names }}"}'
I would suggest using a block scalar. Your problem is that {{ .ID }} etc. is processed by Ansible's Jinja templating engine when it should not be. Probably the most readable way around this is:
---
# Syntax check: /usr/bin/ansible-playbook --syntax-check --inventory data/config/host_inventory.yaml data/config/ansible/docker_containers.yaml
- hosts: hosts
  gather_facts: no
  tasks:
    - name: Docker ps output - identify running containers
      shell: !unsafe >-
        /usr/bin/docker ps --format
        '{"ID":"{{ .ID }}", "Image": "{{ .Image }}", "Names":"{{ .Names }}"}'
      register: docker_ps_output
    - name: Show content of docker_ps_output
      debug:
        msg: docker_ps_output.stdout_lines
>- starts a folded block scalar, in which you do not need to escape anything and newlines are folded into spaces. The !unsafe tag prevents the value from being processed with Jinja.
If you want to avoid templating, you can escape each pair of double braces with quoted literal braces:
{{ thmthng }}
should look like:
{{ '{{' }} thmthng {{ '}}' }}
Your playbook:
---
- hosts: hosts
  gather_facts: no
  tasks:
    - name: Docker ps output - identify running containers
      shell: "docker ps -a --format '{\"ID\": \"{{ '{{' }} .ID {{ '}}' }}\", \"Image\": \"{{ '{{' }} .Image {{ '}}' }}\", \"Names\" : \"{{ '{{' }} .Names {{ '}}' }}\"}'"
      register: docker_ps_output
    - name: Show content of docker_ps_output
      debug:
        var: docker_ps_output.stdout_lines
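A third option, used elsewhere on this page as well, is to wrap the Go-template braces in a Jinja2 raw block, which disables templating for everything between the markers. A sketch combining it with a folded scalar:

```yaml
- name: Docker ps output - identify running containers
  shell: >-
    /usr/bin/docker ps --format
    '{% raw %}{"ID":"{{ .ID }}", "Image": "{{ .Image }}", "Names":"{{ .Names }}"}{% endraw %}'
  register: docker_ps_output
```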

ansible aws ecr login without using docker command

I want to log in to the AWS ECR Docker registry using Ansible:
# returns: docker login -u AWS -p <token>
- name: get docker login command
  shell: "aws ecr get-login --region {{ aws_region }}"
  register: docker_login_command
- name: docker login
  shell: "{{ docker_login_command.stdout }}"
This requires the Docker CLI to be installed on the machine, but we are using a Docker container to run Ansible with a shared Docker socket. Is there a way to avoid the Docker CLI for this?
Try this; it works for me.
- name: ecr docker get-authorization-token
  shell: "aws ecr get-authorization-token \
    --profile {{ envsettings.infra.aws_profile }} --region {{ envsettings.infra.aws_region }}"
  register: ecr_command

- set_fact:
    ecr_authorization_data: "{{ (ecr_command.stdout | from_json).authorizationData[0] }}"

- set_fact:
    ecr_credentials: "{{ (ecr_authorization_data.authorizationToken | b64decode).split(':') }}"

- name: docker_repository - Log into ECR registry and force re-authorization
  docker_login:
    registry_url: "{{ ecr_authorization_data.proxyEndpoint.rpartition('//')[2] }}"
    username: "{{ ecr_credentials[0] }}"
    password: "{{ ecr_credentials[1] }}"
    reauthorize: yes
It requires the docker pip Python module; install it before running the code above:
- name: install required packages for this role
  pip:
    state: present
    name: docker
    executable: /usr/bin/pip3
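As a sanity check on the decoding step above: the authorization token is base64 for a string of the form AWS:<password>, so after the split the first element should always be AWS. An optional, purely illustrative assertion task:

```yaml
# Illustrative sanity check: the decoded ECR token is "AWS:<password>"
- name: verify decoded ECR credentials
  assert:
    that:
      - ecr_credentials | length == 2
      - ecr_credentials[0] == 'AWS'
```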
Another solution, maybe easier, is to rely on get-login-password rather than get-authorization-token.
For example, based on the instance profile:
- name: Get instance profile info
  amazon.aws.aws_caller_info:
  register: aws_info

- set_fact:
    ecr_registry_url: "{{ aws_info.account }}.dkr.ecr.eu-west-1.amazonaws.com"

- name: Get ECR token
  shell: "aws ecr get-login-password --region eu-west-1"
  register: ecr_token

- name: Log into ECR registry
  docker_login:
    registry_url: "{{ ecr_registry_url }}"
    debug: yes
    username: "AWS"
    password: "{{ ecr_token.stdout }}"
    reauthorize: yes
This worked for me
- name: "Teili e zaga"
shell: "{{ item }}"
with_items:
- $(aws ecr get-login --no-include-email --region us-east-1)

Restart multiple Docker containers using Ansible

How do I dynamically restart all my Docker containers from Ansible? I know a way where I can define my containers in a variable and loop through them, but what I want to achieve is this:
Fetch the currently running containers and restart all or some of them one by one through some loop.
How can I achieve this using Ansible?
Docker explanation
Retrieve the name/image for all running containers:
docker container ls -a --format '{{.Names}} {{.Image}}'
You can also filter the output of the docker container command to a specific image name, thanks to the --filter ancestor=image_name option:
docker container ls -a --filter ancestor=alpine --format '{{.Names}} {{.Image}}'
Ansible integration:
First I would define some filters as Ansible variables:
vars:
  - image_v1: '--filter ancestor=my_image:v1'
  - image_v2: '--filter ancestor=my_image:v2'
Then I would execute the docker container command in a dedicated task and save the command output to an Ansible variable:
- name: Get images name
  command: docker container ls -a {{ image_v1 }} {{ image_v2 }} --format "{{ '{{' }}.Names {{ '}}' }} {{ '{{' }}.Image {{ '}}' }}"
  register: docker_images
Finally I would iterate over it and use it in the docker_container Ansible module:
- name: Restart images
  docker_container:
    name: "{{ item.split(' ')[0] }}"
    image: "{{ item.split(' ')[1] }}"
    state: started
    restart: yes
  loop: "{{ docker_images.stdout_lines }}"
Final playbook.yml:
---
- hosts: localhost
  gather_facts: no
  vars:
    - image_v1: '--filter ancestor=my_image:v1'
    - image_v2: '--filter ancestor=my_image:v2'
  tasks:
    - name: Get images name
      command: docker container ls -a {{ image_v1 }} {{ image_v2 }} --format "{{ '{{' }}.Names {{ '}}' }} {{ '{{' }}.Image {{ '}}' }}"
      register: docker_images
    - name: Restart images
      docker_container:
        name: "{{ item.split(' ')[0] }}"
        image: "{{ item.split(' ')[1] }}"
        state: started
        restart: yes
      loop: "{{ docker_images.stdout_lines }}"

Ansible - Download latest release binary from Github repo

With Ansible, please advise how I could download the latest release binary from a GitHub repository. As per my current understanding the steps would be:
a. get URL of latest release
b. download the release
For a. I have something like the following, which does not provide the actual release version (e.g. v0.11.53):
- name: get latest Gogs release
  local_action:
    module: uri
    url: https://github.com/gogits/gogs/releases/latest
    method: GET
    follow_redirects: no
    status_code: 301
  register: release_url
For b. I have the below, which works but needs constant updating. Instead of the hard-coded version I would need a variable set in a.:
- name: download latest
  become: yes
  become_user: "{{ gogs_user }}"
  get_url:
    url: https://github.com/gogs/gogs/releases/download/v0.11.53/linux_amd64.tar.gz
    dest: "/home/{{ gogs_user }}/linux_amd64.tar.gz"
Thank you!
GitHub has a documented API for working with releases.
So imagine you want to get the latest release of ansible (which belongs to the ansible project): you would
call the URL https://api.github.com/repos/ansible/ansible/releases/latest
and get a JSON structure like this:
{
  "url": "https://api.github.com/repos/ansible/ansible/releases/5120666",
  "assets_url": "https://api.github.com/repos/ansible/ansible/releases/5120666/assets",
  "upload_url": "https://uploads.github.com/repos/ansible/ansible/releases/5120666/assets{?name,label}",
  "html_url": "https://github.com/ansible/ansible/releases/tag/v2.2.1.0-0.3.rc3",
  "id": 5120666,
  "node_id": "MDc6UmVsZWFzZTUxMjA2NjY=",
  "tag_name": "v2.2.1.0-0.3.rc3",
  "target_commitish": "devel",
  "name": "THESE ARE NOT OUR OFFICIAL RELEASES",
  ...
  "prerelease": false,
  "created_at": "2017-01-09T16:49:01Z",
  "published_at": "2017-01-10T20:09:37Z",
  "assets": [
  ],
  "tarball_url": "https://api.github.com/repos/ansible/ansible/tarball/v2.2.1.0-0.3.rc3",
  "zipball_url": "https://api.github.com/repos/ansible/ansible/zipball/v2.2.1.0-0.3.rc3",
  "body": "For official tarballs go to https://releases.ansible.com\n"
}
Then get the value of the key tarball_url and download the file it points to. In Ansible code that would be:
- hosts: localhost
  tasks:
    - uri:
        url: https://api.github.com/repos/ansible/ansible/releases/latest
        return_content: true
      register: json_response
    - get_url:
        url: "{{ json_response.json.tarball_url }}"
        dest: ./ansible-latest.tar.gz
I'll let you adapt the proper parameters to answer your question :)
I am using the following recipe to download and extract the latest watchexec binary for Linux from GitHub releases.
- hosts: localhost
  tasks:
    - name: check latest watchexec
      uri:
        url: https://api.github.com/repos/watchexec/watchexec/releases/latest
        return_content: true
      register: watchexec_latest
    - name: "installing watchexec {{ watchexec_latest.json.tag_name }}"
      loop: "{{ watchexec_latest.json.assets }}"
      when: "'x86_64-unknown-linux-musl.tar.xz' in item.name"
      unarchive:
        remote_src: yes
        src: "{{ item.browser_download_url }}"
        dest: "{{ ansible_env.HOME }}/bin/"
        keep_newer: yes
        extra_opts:
          - --strip=1
          - --no-anchored
          - watchexec
tar extra_opts explained here.
This still downloads the binary every time a playbook is called. As an improvement, it might be possible to use set_fact for caching node_id attribute that corresponds to the unpacked file.
Another approach is to use the Ansible github_release module to get the latest tag.
Example:
- name: Get gogs latest tag
  github_release:
    user: gogs
    repo: gogs
    action: latest_release
  register: gogs_latest

- name: Grab gogs latest binaries
  unarchive:
    src: "https://github.com/gogs/gogs/releases/download/{{ gogs_latest['tag'] }}/gogs_{{ gogs_latest['tag'] | regex_replace('^v','') }}_linux_amd64.zip"
    dest: /usr/local/bin
    remote_src: true
The regex_replace at the end strips the v from the beginning of the tag, since the file name format on GitHub for gogs is now gogs_0.12.3_linux_armv7.zip while the latest tag includes a v.
I got inspired and extended the play.
After downloading, I append the version number to the binary and link to the program,
so on the next run I can check whether the version is still up to date. Otherwise it is downloaded and linked again.
- hosts: localhost
  become: false
  vars:
    org: watchexec
    repo: watchexec
    filename: x86_64-unknown-linux-gnu.tar.xz
    version: latest
    project_url: https://api.github.com/repos/{{ org }}/{{ repo }}/releases
  tasks:
    - name: check {{ repo }} version
      uri:
        url: "{{ project_url }}/{{ version }}"
        return_content: true
      register: latest_version
    - name: check if {{ repo }}-{{ version }} already there
      stat:
        path: "{{ ansible_env.HOME }}/.local/bin/{{ repo }}-{{ latest_version.json.tag_name }}"
      register: newestbinary
    - name: download and link to version {{ version }}
      block:
        - name: create tempfile
          tempfile:
            state: directory
            suffix: dwnld
          register: tempfolder_1
        - name: "installing {{ repo }} {{ latest_version.json.tag_name }}"
          # idea from: https://stackoverflow.com/a/62672308/886659
          loop: "{{ latest_version.json.assets }}"
          when: "filename|string in item.name"
          unarchive:
            remote_src: yes
            src: "{{ item.browser_download_url }}"
            dest: "{{ tempfolder_1.path }}"
            keep_newer: yes
            extra_opts:
              - --strip=1
              - --no-anchored
              - "{{ repo }}"
        - name: command because no mv available
          command: mv "{{ tempfolder_1.path }}/{{ repo }}" "{{ ansible_env.HOME }}/.local/bin/{{ repo }}-{{ latest_version.json.tag_name }}"
          args:
            creates: "{{ ansible_env.HOME }}/.local/bin/{{ repo }}-{{ latest_version.json.tag_name }}"
        - name: "link {{ repo }}-{{ latest_version.json.tag_name }} -> {{ repo }}"
          file:
            src: "{{ ansible_env.HOME }}/.local/bin/{{ repo }}-{{ latest_version.json.tag_name }}"
            dest: "{{ ansible_env.HOME }}/.local/bin/{{ repo }}"
            state: link
            force: yes
      when: not newestbinary.stat.exists
      always:
        - name: delete {{ tempfolder_1.path|default("tempfolder") }}
          file:
            path: "{{ tempfolder_1.path }}"
            state: absent
          when: tempfolder_1.path is defined
          ignore_errors: true
# vim:ft=yaml.ansible:
here is the file on github
Download the latest release with the help of json_query.
Note: you might need to_json|from_json as a workaround for this issue.
- name: get the latest release details
  uri:
    url: https://api.github.com/repos/prometheus/node_exporter/releases/latest
    method: GET
  register: node_exp_release
  delegate_to: localhost

- name: set the release facts
  set_fact:
    file_name: "{{ node_exp_latest.name }}"
    download_url: "{{ node_exp_latest.browser_download_url }}"
  vars:
    node_exp_latest: "{{ node_exp_release.json|to_json|from_json|json_query('assets[?ends_with(name,`linux-amd64.tar.gz`)]')|first }}"

- name: download the latest release
  get_url:
    url: "{{ download_url }}"
    dest: /tmp/
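If jmespath (which json_query requires on the controller) is not available, the same asset selection can be sketched with the built-in selectattr filter and the search test; the variable names below mirror the example above:

```yaml
# jmespath-free sketch of the same asset selection
- name: set the release facts
  set_fact:
    download_url: "{{ node_exp_release.json.assets
                      | selectattr('name', 'search', 'linux-amd64.tar.gz')
                      | map(attribute='browser_download_url')
                      | first }}"
```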
I had a similar situation, but the project didn't use releases, so I found this workaround. It feels more hacky than it should be, but it gets the latest tag and uses that in lieu of releases when they don't exist.
- name: set headers more location
  set_fact:
    headers_more_location: /srv/headers-more-nginx-module

- name: git clone headers-more-nginx-module
  git:
    repo: https://github.com/openresty/headers-more-nginx-module
    dest: "{{ headers_more_location }}"
    depth: 1
    update: no

- name: get new tags from remote
  command: "git fetch --tags"
  args:
    chdir: "{{ headers_more_location }}"

- name: get latest tag
  command: "git rev-list --tags --max-count=1"
  args:
    chdir: "{{ headers_more_location }}"
  register: tags_rev_list

- name: get latest tag name
  command: "git describe --tags {{ tags_rev_list.stdout }}"
  args:
    chdir: "{{ headers_more_location }}"
  register: latest_tag

- name: checkout the latest version
  command: "git checkout {{ latest_tag.stdout }}"
  args:
    chdir: "{{ headers_more_location }}"
I needed to download the latest docker-compose release from GitHub, so I came up with the solution below. I know there are other ways to do this, since the latest-release content that is returned also contains a direct download link, for example. However, I found this solution quite elegant.
- name: Get all docker-compose releases
  uri:
    url: "https://api.github.com/repos/docker/compose/releases/latest"
    return_content: yes
  register: release

- name: Set latest docker-compose release
  set_fact:
    latest_version: "{{ release.json.tag_name }}"

- name: Install docker-compose
  get_url:
    url: "https://github.com/docker/compose/releases/download/{{ latest_version }}/docker-compose-linux-{{ ansible_architecture }}"
    dest: /usr/local/bin/docker-compose
    mode: '755'

Ansible workflow for building Docker image and recreating Docker container if image was changed?

I've been struggling with figuring out what the proper Ansible workflow is for deploying a Docker image and recreating a Docker container if the image has changed.
Here's the task list of a role I initially thought would work:
- name: Deploy Source
  synchronize:
    archive: yes
    checksum: yes
    compress: yes
    dest: '/tmp/{{ app_name }}'
    src: ./

- name: Build Docker Image
  docker_image:
    name: '{{ docker_image_name }}'
    path: '/tmp/{{ app_name }}'
    rm: yes
    state: present
  register: build_docker_image

- name: Create Docker Container
  docker_container:
    image: '{{ docker_image_name }}'
    keep_volumes: yes
    name: '{{ docker_container_name }}'
    recreate: '{{ true if build_docker_image.changed else omit }}'
    state: started
This does not work because the Ansible docker_image module does not offer a state: latest option. state: present only checks if the image exists and not if it's up to date. This means that even if the Dockerfile has changed, the image will not be rebuilt. docker_image does offer a force: yes option, but this will always recreate the image regardless of whether there was a change to the Dockerfile. When force: yes is used, it makes sense to me that it's better to always recreate containers running the image to prevent them from pointing to dangling Docker images.
What am I missing? Is there a better alternative?
User viggeh provided a workaround on the Ansible GitHub which I've adapted to my needs as follows:
- name: Deploy Source
  synchronize:
    archive: yes
    checksum: yes
    compress: yes
    dest: '/tmp/{{ app_name }}'
    src: ./

- name: Get Existing Image ID
  command: 'docker images --format {% raw %}"{{.ID}}"{% endraw %} --no-trunc {{ docker_image_name }}:{{ docker_image_tag }}'
  register: image_id
  changed_when: image_id.rc != 0

- name: Build Docker Image
  docker_image:
    force: yes
    name: '{{ docker_image_name }}'
    path: '/tmp/{{ app_name }}'
    rm: yes
    state: present
    tag: '{{ docker_image_tag }}'
  register: image_build
  changed_when: image_id.stdout != image_build.image.Id

- name: Create Docker Container
  docker_container:
    image: '{{ docker_image_name }}'
    keep_volumes: yes
    name: '{{ docker_container_name }}'
    recreate: '{{ True if image_build.changed else omit }}'
    state: started
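For what it's worth, newer versions of the module (Ansible 2.8+, now community.docker) reshape this workflow: docker_image takes source: build with a build dict, and force_source: yes replaces force: yes. docker_container also compares the running container's image against the current ID behind the name:tag and recreates it when they differ, so the manual recreate flag may no longer be needed. A hedged sketch, not a drop-in replacement for the exact module versions used above:

```yaml
# Sketch assuming Ansible >= 2.8 / the community.docker collection
- name: Build Docker Image
  docker_image:
    name: '{{ docker_image_name }}'
    tag: '{{ docker_image_tag }}'
    build:
      path: '/tmp/{{ app_name }}'
    source: build
    force_source: yes

- name: Create Docker Container
  docker_container:
    image: '{{ docker_image_name }}:{{ docker_image_tag }}'
    keep_volumes: yes
    name: '{{ docker_container_name }}'
    state: started
```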
