Get parent directory in Ansible?

Is there a way to evaluate a relative path in Ansible?
tasks:
  - name: Run docker containers
    include: tasks/dockerup.yml src_code='..'
Essentially I want to pass the source code path to my task. It happens that the source code lives in the parent directory of the inventory, but there doesn't seem to be anything out of the box to accomplish that.
---- further info ----
Project structure:
myproj
  app
  deploy
    deploy.yml
So I am trying to access app from deploy.yml.

You can use the dirname filter:
{{ inventory_dir | dirname }}
For reference, see Managing file names and path names in the docs.
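For illustration, a minimal sketch of how the filter could be used in a play (the `src_code` variable and include style are taken from the question):

```yaml
- name: Show the source code path (parent of the inventory directory)
  debug:
    msg: "{{ inventory_dir | dirname }}"

- name: Run docker containers
  include: tasks/dockerup.yml src_code={{ inventory_dir | dirname }}
```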

You can use {{ playbook_dir }} for the absolute path of the currently running playbook.
For me that's the best way, because you normally know where your playbook is located.
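For instance, in the question's layout (deploy.yml lives inside deploy/), the app directory could be reached like this — a sketch using only names from the question:

```yaml
# playbook_dir is myproj/deploy, so its dirname is myproj
- name: Run docker containers
  include: tasks/dockerup.yml src_code={{ playbook_dir | dirname }}/app
```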

OK, a workaround is to use a separate task just for this:
tasks:
  - name: Get source code absolute path
    shell: dirname '{{ inventory_dir }}'
    register: dirname
  - name: Run docker containers
    include: tasks/dockerup.yml src_code={{ dirname.stdout }}
Thanks to udondan for hinting me on inventory_dir.

Related

Configure script to write content of config.log into stdout and/or stderr

A bit of a complicated case: I'm trying to debug a configure script that is run by Maven, which runs inside a Docker container, which is run by a GitHub Action. It fails and asks me to look into config.log. Of course, I don't have access to that config.log (I could probably copy it out of the container and save it as a GitHub Actions artifact, but that takes too long...). Is there a way to make it write that output just to stdout/stderr instead of config.log?
It is also possible to print the contents of the file.
Try this in the workflow:
steps:
  - name: Read config.log
    id: configlog
    uses: juliangruber/read-file-action@v1
    with:
      path: /path/to/config.log
  - name: Echo config.log
    run: echo "${{ steps.configlog.outputs.content }}"
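If you would rather avoid a third-party action, a plain shell step can do the same; a sketch (the path is assumed, as in the answer above), using `if: failure()` so it runs only when an earlier step failed:

```yaml
steps:
  - name: Print config.log
    if: failure()
    run: cat /path/to/config.log
```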

CircleCI insert environment variable

I created my first pipeline yesterday and I wanted to replace a placeholder in my build.gradle file with the CIRCLE_BUILD_NUM environment variable. The only method I found was writing my own 'sed' command and executing the regex in a run statement. This worked fine to get up and running, since there was only one variable to replace, but this method obviously won't scale down the road. Is there a CircleCI feature/orb or other method to do a more comprehensive placeholder/env var swap throughout my project?
- run:
    name: Increment build id
    command: sed "s/_buildNum/${CIRCLE_BUILD_NUM}/g" -i build.gradle
EDIT
Looking for a utility/tool/orb/CircleCI best practice similar to what Azure DevOps has (Jenkins offers a similar feature as well): simply replace all placeholders in specified files with the environment variables of the same name.
https://marketplace.visualstudio.com/items?itemName=qetza.replacetokens
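For reference, the single-variable sed approach from the question can be sketched end to end outside CircleCI (the file contents, placeholder name, and build number are assumed for the demo):

```shell
# Simulate the CircleCI-provided environment variable (assumed value).
CIRCLE_BUILD_NUM=42

# A build.gradle with a placeholder, as in the question.
printf 'version "_buildNum"\n' > build.gradle

# Replace the placeholder; redirect+mv instead of -i for portability.
sed "s/_buildNum/${CIRCLE_BUILD_NUM}/g" build.gradle > build.gradle.tmp
mv build.gradle.tmp build.gradle

cat build.gradle   # prints: version "42"
```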
There is the envtpl tool, with a myriad of implementations in various languages.
It interpolates variables in templates with values set in environment variables.
The command defined below installs an implementation written in Rust.
commands:
  replace-vars-from-env:
    description: Replace variables in file from environment variables.
    parameters:
      filename:
        type: string
    steps:
      - run:
          name: Replace variables in <<parameters.filename>> file
          command: |
            if ! [ -x /usr/local/bin/envtpl ]; then
              curl -L https://github.com/niquola/envtpl/releases/download/0.0.3/envtpl.linux > /usr/local/bin/envtpl
              chmod +x /usr/local/bin/envtpl
            fi
            mv <<parameters.filename>> <<parameters.filename>>.tpl
            cat <<parameters.filename>>.tpl | envtpl > <<parameters.filename>>
            rm <<parameters.filename>>.tpl
and use that in other commands or as part of your jobs. For example:
executors:
  linux:
    machine:
      image: ubuntu-1604:201903-01
jobs:
  build:
    executor: linux
    steps:
      - replace-vars-from-env:
          filename: build.gradle
You could use envsubst, which provides that basically out of the box.
Depending on your primary container you can install envsubst on top of alpine/your distro, or use an image that already has it, like datasailors/envsubst.
In that case, you would just need a run step like:
- run:
    name: Increment build id
    command: envsubst < build.gradle.template > build.gradle
And in your template file you can have ${CIRCLE_BUILD_NUM}, and as many other variables, directly.

how to link yaml file in concourse?

In my task I have
file: tasks/build-task-config.yml
and I get this error:
unknown artifact source: 'tasks' in task config file path 'tasks/build-task-config.yml'
I'm running concourse via docker-compose
ci/
  pipeline.yml
  tasks/
    build-task-config.yml
Above is my directory structure.
This is how I run fly
fly -t tutorial set-pipeline -c ./ci/main-pipeline.yml -p test-frontend
How can I resolve this issue?
How do paths work in Concourse?
Edit:
I've tried with path ci/tasks/build-task-config.yml but it's also not working
You need an input to the task called tasks. This may come from a get: step, or be the output of a previous task. Most likely you have a get of your repo that contains this code (let's pretend it's called source). If that's the case, your task should look like this:
- task: build-task-config # Or whatever name you want
  file: source/ci/tasks/build-task-config.yml
  ...
Everything has to be relative to an input in a task, if it's not part of the base image.
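Putting the answer together, a minimal pipeline sketch (the resource name `source` and the repo URI are assumptions for illustration):

```yaml
resources:
  - name: source
    type: git
    source:
      uri: https://example.com/your/repo.git  # assumed

jobs:
  - name: build
    plan:
      - get: source
      - task: build-task-config
        file: source/ci/tasks/build-task-config.yml
```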

How to copy a file or jar file that has built from jenkins to a diff host server

I have a Jenkins job where I'm building a jar file. After the build is done I need to copy that jar file to a different server and deploy it there.
I'm trying this YAML file to achieve that, but it looks for the file on the other server instead of on the Jenkins server.
---
# ansible_ssh_private_key_file: "{{ inventory_dir }}/private_key"
- hosts: host
  remote_user: xuser
  tasks:
    - service: name=nginx state=started
      become: yes
      become_method: sudo
    - name: test a shell script
      command: sh /home/user/test.sh
    - name: copy files
      synchronize:
        src: /var/jenkins_home/hadoop_id_rsa
        dest: /home/user/
Could you please suggest another way, or what the approach should be, to copy a build file to the server and deploy it using Jenkins?
Thanks.
Hi, as far as I know you can use the Publish Over SSH plugin in Jenkins. I'm actually not clear on your problem, but hoping this can help you. Plugin details: https://wiki.jenkins-ci.org/display/JENKINS/Publish+Over+SSH+Plugin. If it doesn't help, please comment and be more specific (a screenshot if possible).
Use a remote SSH script in the build step; no plugin is required:
scp -P 22 Desktop/url.txt user@192.168.1.50:~/Desktop/url.txt
Set up passwordless authentication; use the link below for help:
https://www.howtogeek.com/66776/how-to-remotely-copy-files-over-ssh-without-entering-your-password/

Ansible - is there an elegant way to tar.gz and rsync a large directory?

I'm trying to avoid a list of 'command' modules in my Ansible play, but there seems to be a void in the Ansible docs regarding tar/gzip, and the synchronize module seems... incomplete.
I'd like to gzip a tarball of a big directory, then rsync it to another host. Even using 'command' doesn't work for me :<
"warnings": ["Consider using unarchive module rather than running tar"]}
[WARNING]: Consider using unarchive module rather than running tar
PLAY RECAP *********************************************************************
ctarlctarl : ok=2 changed=0 unreachable=0 failed=1
The 'unarchive' module seems to expect an already compressed/archived directory and doesn't appear to be the solution I want.
Related but unanswered: ansible playbook unable to continue as the `tar` fails due to `file change as we read`
(Edit: showing the task, since it was asked whether I remembered the z. =)
- name: tar ball app dir, exclude conf files
  command: "tar -zcvf {{ item.code_dir }}.tar.gz --exclude '*config*' ."
  args:
    chdir: "{{ apps_home }}/{{ item.code_dir }}"
  with_items:
    - "{{ processes }}"
In version 2.2, Ansible will get an archive module: https://docs.ansible.com/ansible/archive_module.html
With that you can archive, transfer, and unarchive entirely with Ansible modules, and don't have to fiddle with command-line arguments.
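A sketch of that module-based flow, reusing the variable names from the question's task (the unarchive destination is an assumption):

```yaml
- name: Create a gzipped tarball of the app dir
  archive:
    path: "{{ apps_home }}/{{ item.code_dir }}"
    dest: "{{ apps_home }}/{{ item.code_dir }}.tar.gz"
  with_items:
    - "{{ processes }}"

- name: Unpack the tarball on the target host
  unarchive:
    src: "{{ apps_home }}/{{ item.code_dir }}.tar.gz"
    dest: /var/lib/app   # assumed destination
  with_items:
    - "{{ processes }}"
```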
So, I got it working... just inelegantly.
With the v flag set, my Ansible stdout was horrendously verbose (even without calling -vvvv with ansible-playbook). Turning off that switch inside the tar command made tracking down the problem easier.
The problem was actually the error "file changed as we read it", no doubt because I was writing the archive file into the very directory it was compressing. I solved it by simply writing the archive one directory up.
This, however, leaves me with the 'command after command' solution, which is what I hope I'll eventually be able to avoid. Half way there, though.
This warning is a tip that you can use the Ansible unarchive module instead of the tar command. The syntax is very easy, just like below:
- unarchive: src=foo.tgz dest=/var/lib/foo
More detailed info is available here: unarchive_module