Ansible Tower CLI: pass launch parameters in one command without a prompt (Jenkins)

I am trying to launch an Ansible Tower CLI job through Jenkins, but I don't want the prompt that appears on Ansible Tower. I want to pass those parameters in the same command so that the prompt is not required.
I have tried:
tower-cli job launch --job-template=33 -e "param1" -e "param2"
This is the error I get:
Error: failed to pass some of the extra variables

According to the Ansible Tower CLI documentation, the parameter -e is wrong; you need to use --extra-vars. This differs from the ansible-playbook command. So an easy example is
tower-cli job launch --job-template 1 --extra-vars '{"x":"y"}'
Be aware that you write all variables in a single argument; --extra-vars expects JSON or YAML format.
Also be aware that the given job template MUST be configured to prompt for extra variables. Otherwise the argument is ignored on the Ansible Tower side.
Also, not part of the question but good advice: if your Jenkins job needs to wait for the job result, add --monitor to the tower-cli command. The CLI then waits for the response code, and the stage can fail if there is a problem.
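Putting it together for the Jenkins use case, the shell step could look like this (a sketch; the template ID and variable names are taken from the question, and the values are placeholders):
# Launch the job with all extra variables in one JSON argument and
# wait for the result so the Jenkins stage fails if the job fails.
tower-cli job launch --job-template 33 \
  --extra-vars '{"param1": "value1", "param2": "value2"}' \
  --monitor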

Related

Is it possible to run a command in a Docker image as a test in Bazel?

I would like to run a command inside a container to test that it works. It should be invoked by bazel test.
Something like this:
container_test(
    image = "//:my_image",
    test_command = "exit 1",
)
I noticed this: https://github.com/bazelbuild/rules_docker/blob/master/contrib/test.bzl#L125
However it isn't documented.
How should I approach this in Bazel?
Take a look at the sample test rule here.
This is a test rule which creates a script (script) that can be invoked from the CLI.
The script exits with a non-zero exit code to indicate that the test failed (or 0 for success).
The script is written as an executable output (ctx.actions.write), and the rule declares the list of files the script needs available at runtime (runfiles).
This Python-like function is then wrapped as a Bazel rule (see the full guide here).
So, how would you proceed towards creating your container test rule?
The script you want to generate is probably some use of docker run --rm IMAGE [COMMAND] [ARG...] to create a container from an image, run a command, and remove the container when done (see the sketch after these steps).
Don't forget to set the script's exit status based on the exit status of the docker command (as done in the example, where the exit status of grep becomes the exit status of the overall script).
Update the sample above to use this docker command, and plant the path to the image accordingly.
See f.path in the script above, which shows how to access the path of an individual source file.
You will also need to make sure docker is available on the machine where your Bazel tests run.
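To make the first step concrete, here is a minimal sketch of the script such a rule could generate (the image tag and test command are placeholders, and it assumes the image is already available to the local Docker daemon):
#!/bin/bash
# Run the test command in a throwaway container; --rm removes the
# container once the command exits.
docker run --rm my_image:latest /bin/sh -c "exit 1"
# Propagate the container's exit status as the test result
# (0 = success, non-zero = failure).
exit $?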
I haven't done this fully myself, since I don't have a computer with both Bazel and Docker, but this should be enough to get you started :)
Good luck!

How to run repo from a script inside a container in a Jenkins job

I am unable to run repo non-interactively inside a container as part of a freestyle job.
It prompts for the username and email. I got around that by doing a git config --global inside the job.
But then it does the color test, and that hangs indefinitely.
Looking at the source code for repo, I see this:
if os.isatty(0) and os.isatty(1) and not self.manifest.IsMirror:
  if opt.config_name or self._ShouldConfigureUser():
    self._ConfigureUser()
  self._ConfigureColor()
So, I ran the following inside the container:
python -c "import os; print os.isatty(0), os.isatty(1)"
and, sure enough, it printed out True True
Looking at the Jenkins log, it launches the container with --tty specified, and there seems no way to configure that option.
I can't find a bash option to force a script to be run in a non-interactive shell. If I put the above python line in a file and execute it with almost any combination of commands and options, it still prints out True True
The only way I see something different is if I use I/O redirection
bash <a.sh
which prints out False True - i.e. stdin is not a tty, and
bash <a.sh >a.log
which prints False False.
For a complex script, are there any problems using the bash <script approach?
Does anyone know any Jenkins magic to prevent docker being launched with --tty?
I know that the --tty is the culprit. I built the container locally and ran the following
$ docker run repotest python -c "import os;print os.isatty(0), os.isatty(1)"
False False
$ docker run --tty repotest python -c "import os;print os.isatty(0), os.isatty(1)"
True True
Running Versions:
repo: 1.12.37 (per Ubuntu 16.04 apt-get)
Jenkins: 2.149
Cloudbees Docker Plugin: 1.7.3
Container base is ubuntu:xenial
I'm using the "Build inside a docker container" option.
To run the bash script repo_script.sh "non-interactively", or more precisely without terminals associated with the standard streams, you could run your script simply as
repo_script.sh < /dev/null 2>&1 | cat
assuming you want to see the output the way you would if you ran repo_script.sh directly. By piping standard output and error to a different process, the file descriptors appear to repo_script.sh as pipes rather than TTYs. You could also direct the output to a file, or even to /dev/null if you do not care about it:
log_file=/dev/null
repo_script.sh < /dev/null > "${log_file}" 2>&1
Running the script as
bash < repo_script.sh | cat
might work too, though it is a very unorthodox and, to my mind, hackish way of running a script just to break the association of the TTY with standard input. From the script engine's point of view, reading a script program from a file is different from reading it from standard input (which, if it is a terminal, is typically not seekable), so there might be subtle differences that could bite you in unexpected ways. This way also does not clearly communicate your intention to the next person who needs to understand your code, and may lead to partial hair loss in that person due to extraneous head scratching.
There is no need for any bash options; using the stream redirections from within the interpreting shell, as described above, is an easy-to-comprehend, multi-platform-compatible standard convention for changing the standard stream associations.
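Applied to the Jenkins shell step, the job could run something like this (a sketch; the manifest URL is a placeholder):
# Confirm repo will see neither stdin nor stdout as a TTY...
python -c "import os; print os.isatty(0), os.isatty(1)" < /dev/null | cat
# ...then run repo the same way so it skips the interactive setup.
repo init -u https://example.com/manifest.git < /dev/null 2>&1 | cat
repo sync < /dev/null 2>&1 | cat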
P.S. I think it should be enough for your repo script to test only whether standard input is a TTY. It looks to me like the author of that script did not think this through. There is simply no use waiting for input if there is no terminal device associated with standard input; from that alone the script could determine that everything needs to run without user interaction, or stop with an error if that is not possible.

How to find output of shell command run by fastlane action?

Sometimes it happens that a fastlane action throws an error like:
ERROR [2016-11-06 03:34:20.16]: Shell command exited with exit status 48 instead of 0.
I found troubleshooting difficult, as --verbose is not verbose enough. By "action" I don't mean the sh action, which is a rather special case, but other fastlane actions, e.g. create_keychain. The create_keychain action calls the shell command security create-keychain, and when it fails there is no clue as to what happened.
How do I find output of a shell command run by fastlane's action?
How do I find what shell command including all parameters is fastlane actually trying to run?
The output of the shell command is printed by default when you use the sh action. Alternatively, you can run the shell command directly yourself using backticks (standard Ruby):
puts `ls`
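For an action like create_keychain, one way to investigate is to run the underlying command yourself in a terminal and inspect its output and exit status directly (a sketch; the keychain name and password are placeholders):
# Run the same command the action wraps to see its full output.
security create-keychain -p "password" "debug.keychain"
echo "exit status: $?"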
The answer is that there is no such option at the moment, but it should be easy to add.
I have created GitHub issue #6878 for it.

How do I capture output from one Rundeck step to be used in a later step?

I'm attempting to build, launch, and link a set of docker containers using Rundeck. In short (for those not familiar with docker), when an image is launched, it returns a container ID. I would like to use this container ID in the launching of subsequent jobs.
When run from the command line, it would look something like this (example only!!):
# docker run -Pd 23ABCD45
34DEF123
# docker run -Pd --link 34DEF123:host1 ABC123EF
321CB456
(note the use of the first return value in the second command line)
At this point, there would be two containers running. The second would be linked to the first by the --link option, and it would be addressable using the hostname host1 from inside the second container. To be fair, docker generates (or may be given) a specific container name which can be used in place of the container id. I would prefer to use the container ID to avoid the hassle of having to create/track unique names.
I would like to be able to capture the output of the first command (the container ID) so that it can be reused in the second command. Is this possible?
Edit: These images are being used for testing immediately following a "docker build" (which also outputs a similar ID I would like to include in my chain) and might be followed by "docker rm" and "docker rmi" commands, so there are a number of uses for capturing this type of output and carrying it through a related set of operations. This is not just about launching/linking containers.
There is no direct Rundeck feature that lets you pass the output of one job to another job as input, but there are workarounds I've tried in the past, and I've settled on the second approach.
1. Use a file to pass data
Save the ID/output into a tmp file in the first job.
Have the second job read that file (see the sketch just below).
Things can go wrong since you depend on a file, but careful coding can mitigate that.
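A minimal sketch of this approach, reusing the docker commands from the question (the file path is arbitrary):
# JobA: start the first container and save its ID to a shared file.
docker run -Pd 23ABCD45 > /tmp/container_a.id
# JobB: read the ID back and link the second container to the first.
CONTAINER_A=$(cat /tmp/container_a.id)
docker run -Pd --link "${CONTAINER_A}:host1" ABC123EF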
2. Call the two jobs using the Rundeck CLI from another job
This is the approach I am using.
JobA prints out two random numbers:
echo $RANDOM;echo $RANDOM
JobB prints out the second random number produced by JobA, which is passed in as the option "number":
echo "$RD_OPTION_NUMBER is the number JobB received"
JobC calls JobA, saves the last line of its output to a variable, and passes it to JobB:
#!/bin/bash
OUTPUT_FROM_JOB_A=`run -f --id <ID of JobA> | tail -n 1`
run -f --id <ID of JobB> -- -number $OUTPUT_FROM_JOB_A
Output:
[5394] execution status: succeeded
Job execution started:
[5395] JobB <https://hostname:4443/project/Project/execution/show/5395>
6186 is the number JobB received
[5395] execution status: succeeded
This is just a primitive code sample; you can do a lot with Python's subprocess module, or just use bash.

fpm is not recognised when executing a script with Jenkins and SSH

I am trying to execute a script over an SSH connection with Jenkins. I am using the SSH plugin and it is well configured. I manage to execute the first part of the script, but when I try to execute an fpm command it says:
fpm: command not found
If I connect to the instance and run the same script that I call via Jenkins it runs and there is no error (fpm is installed).
So, I have created a test script, test.sh:
#!/bin/bash -x
fpm
but, with Jenkins, I get the same error: fpm: command not found, while if I execute it manually I get the normal "parameters needed" output:
Missing required -s flag. What package source did you want? {:level=>:warn}
Missing required -t flag. What package output did you want? {:level=>:warn}
No parameters given. You need to pass additional command arguments so that I know what you want to build packages from. For example, for '-s dir' you would pass a list of files and directories. For '-s gem' you would pass a one or more gems to package from. As a full example, this will make an rpm of the 'json' rubygem: `fpm -s gem -t rpm json` {:level=>:warn}
Fix the above problems, and you'll be rolling packages in no time! {:level=>:fatal}
What am I missing? Why can't it find fpm if it is installed?
Make sure fpm is in /usr/bin.
It turned out that the problem was that fpm was installed in /home/user2connect/bin/, so the command was not recognised. To fix this I had to call it with the full path:
/home/user2connect/bin/fpm ...
Instead, I have chosen to reinstall fpm using sudo, so now it works.
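Alternatively, since a non-interactive SSH session typically gets a minimal PATH and does not source the same profile files as a login shell, you could extend PATH at the top of the script instead (a sketch; the directory matches the question, so adjust it to your install):
#!/bin/bash
# Make user-local installs visible to the non-interactive SSH session.
export PATH="$HOME/bin:$PATH"
fpm --version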
