I have a simple Jenkins function / procedure:
def StartContainer() {
    def SqlPort = BranchToPort[env.BRANCH_NAME]
    bat "docker run -e \"ACCEPT_EULA=Y\" -e \"SA_PASSWORD=P#ssword1\" --name SQLLinux${env.BRANCH_NAME} -d -i -p $SqlPort:1433 microsoft/mssql-server-linux"
}
BranchToPort does exactly what I want it to do; the problem is plugging the value it returns into the following call to bat. I've tried all sorts of things, and they either result in Groovy compile errors or in the bat command ending immediately after the -p flag. There is obviously something really simple I'm missing here.
My issue is down to scoping: the Groovy map is declared outside the scope of the method that spins up the container. If I move the declaration of the map into the same method as the one that starts the container, it works.
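For illustration, a minimal sketch of the working layout (the branch names and port numbers here are made up):

def StartContainer() {
    // Declaring the map inside the method keeps it in scope for the bat step
    def BranchToPort = [master: '1433', develop: '1434']
    def SqlPort = BranchToPort[env.BRANCH_NAME]
    bat "docker run -e \"ACCEPT_EULA=Y\" -e \"SA_PASSWORD=P#ssword1\" --name SQLLinux${env.BRANCH_NAME} -d -i -p ${SqlPort}:1433 microsoft/mssql-server-linux"
}

Alternatively, annotating the top-level declaration with @groovy.transform.Field makes the map visible inside methods without moving it.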
I was running the following command:
docker run -e 'SOME_ENV_VARIABLE=somevalue' somecommand
And this did not pass the env variable to somecommand. (Note: SOME_ENV_VARIABLE and somevalue did not contain un-escaped single quotes; they were upper snake case and lowercase respectively, with no special characters.)
Later, I tried:
docker run -e "SOME_ENV_VARIABLE=somevalue" somecommand
And this somehow worked (note the double quotes).
Why is this? I have seen documentation with single quotes.
I tried this on Windows.
Quoting is handled by your shell, and since you are on Windows, that is cmd.exe. From what I understand, cmd.exe does not treat single quotes as quoting characters, so it passes that argument to docker with the quotes still attached.
It would work on Linux, though (where the shell is usually bash), and since that is the environment most people run Docker on, it is the form you see in a lot of documentation.
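You can see the difference directly from cmd.exe (assuming an image with env available, e.g. alpine):

docker run --rm -e 'SOME_ENV_VARIABLE=somevalue' alpine env
REM prints 'SOME_ENV_VARIABLE=somevalue' - the quotes end up in the name and value
docker run --rm -e "SOME_ENV_VARIABLE=somevalue" alpine env
REM prints SOME_ENV_VARIABLE=somevalue as intended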
I am trying to launch an ansible-tower-cli job through Jenkins, but I don't want the prompt that appears on Ansible Tower; I want to pass those parameters in the same command so that a prompt is not required.
I have tried:
tower-cli job launch --job-template=33 -e "param1" -e "param2"
This is the error I get:
Error: failed to pass some of the extra variables
According to the Ansible Tower-CLI documentation, the parameter -e is wrong; you need to use --extra-vars. This differs from the ansible-playbook command. So an easy example is
tower-cli job launch --job-template 1 --extra-vars '{"x":"y"}'
Be aware that you pass all vars in one argument; --extra-vars expects JSON or YAML format.
Be aware also that the given job template MUST be configured to prompt for extra variables, otherwise the argument is ignored on the Ansible Tower side.
Also - not the question, but good advice - if your Jenkins job needs to wait for the job result, add --monitor to the tower-cli command. The CLI then waits for the response, and the stage can "fail" if there is a problem.
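Putting that together for the template from the question (the parameter names here are placeholders):

tower-cli job launch --job-template 33 --monitor \
    --extra-vars '{"param1": "value1", "param2": "value2"}'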
I am unable to run repo non-interactively inside a container as part of a freestyle job.
It prompts for the username and email. I got round that by doing a git config --global inside the job.
But then it does the color test, and that hangs indefinitely.
Looking at the source code for repo I see this
if os.isatty(0) and os.isatty(1) and not self.manifest.IsMirror:
  if opt.config_name or self._ShouldConfigureUser():
    self._ConfigureUser()
  self._ConfigureColor()
So, I ran the following inside the container:
python -c "import os; print os.isatty(0), os.isatty(1)"
and, sure enough, it printed out True True
Looking at the Jenkins log, it launches the container with --tty specified, and there seems no way to configure that option.
I can't find a bash option to force a script to be run in a non-interactive shell. If I put the above python line in a file and execute it with almost any combination of commands and options, it still prints out True True
The only way I see something different is if I use I/O redirection:
bash <a.sh
which prints out False True - i.e. stdin is not a tty, and
bash <a.sh >a.log
which prints False False.
For a complex script, are there any problems using the bash <script approach?
Does anyone know any Jenkins magic to prevent docker being launched with --tty?
I know that the --tty is the culprit. I built the container locally and ran the following
$ docker run repotest python -c "import os;print os.isatty(0), os.isatty(1)"
False False
$ docker run --tty repotest python -c "import os;print os.isatty(0), os.isatty(1)"
True True
Running Versions:
repo: 1.12.37 (per Ubuntu 16.04 apt-get)
Jenkins: 2.149
Cloudbees Docker Plugin: 1.7.3
Container base is ubuntu:xenial
I'm using the "Build inside a docker container" option.
To run the bash script repo_script.sh "non-interactively", or more exactly without terminals associated with its standard streams, you could simply run it as
repo_script.sh < /dev/null 2>&1 | cat
assuming you want to see the output the way you would when running it simply as repo_script.sh. By piping standard output and error to a different process, those file descriptors appear to repo_script.sh as pipes rather than TTYs. You could also direct the output to a file, or even to /dev/null if you do not care about it:
log_file=/dev/null
repo_script.sh < /dev/null > "${log_file}" 2>&1
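In a Jenkins freestyle "Execute shell" build step this might look like the following (the manifest URL is a placeholder; substitute whatever your job actually runs):

repo init -u "https://example.com/manifest.git" < /dev/null 2>&1 | cat
repo sync < /dev/null 2>&1 | cat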
Running the script as
bash < repo_script.sh | cat
might work too, though it is a very unorthodox and, to my mind, hackish way of running a script just to break the association of the TTY with standard input. From the script engine's point of view, reading a script program from a file is different from reading it from standard input (which, if it is a terminal, is typically not seekable), so there might be some subtle differences that could bite you in unexpected ways. This way also does not clearly communicate your intention to the next person who needs to understand your code, and may lead to partial hair loss in that person due to extraneous head scratching.
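One such subtle difference can be shown with a hypothetical demo.sh that itself reads standard input:

printf 'echo one\nread line\necho three\n' > demo.sh
bash demo.sh < /dev/null   # prints "one" and "three"; read just gets EOF
bash < demo.sh             # prints only "one"; read swallows the next script line

In the second invocation the read built-in competes with bash itself for the script text, so the echo three line is consumed as input data instead of being executed.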
There is no need for any bash options; just using output redirections from within the interpreting shell, as described above, is an easy-to-comprehend, multi-platform standard convention for changing the standard stream associations.
P.S. I think it should be enough for your repo script to just test whether the standard input is a TTY. It looks to me like the author of that script did not think it through: there is simply no use waiting for input if no terminal device is associated with standard input, so the script could determine from that alone that everything needs to run without user interaction, or stop with an error if that is not possible.
Let us say that I have environment variable PO, with value 1. If I use the Linux echo command I get:
>echo $PO
1
However, if I use TCL and exec, I do not get interpolation:
>exec echo "\$PO"
$PO
Now, if I do something more elaborate, using regsub to replace every ${varname} with [ lindex [ array get env varname ] 1 ] and then using subst, it works:
>subst [ regsub -all {\$\{(\S+?)\}} "\${PO}/1" "\[ lindex \[ array get env \\1 \] 1 \]" ]
1/1
I have some corner cases, sure. But why is the exec not giving back what the shell would do?
why is the exec not giving back what the shell would do?
Because exec is not a shell.
When you do echo $PO from a shell, echo is not responsible for resolving the value. It is the shell that converts $PO to the value 1 before calling echo; echo never sees $PO when called from the shell.
If you are trying to emulate what the shell does, then you need to do the same work as the shell (or, invoke an actual shell to do the work for you).
Tcl is a lot more careful about where it does interpolation than Unix shells normally are. It keeps environment variables out of the way so that you don't trip over them by accident, and does far less processing when it invokes a subprocess. This is totally by design!
As much as possible (with a few exceptions) Tcl passes the arguments to exec through to the subprocesses it creates. It also has standard mechanisms for quoting strings so that you can control exactly what substitutions happen before the arguments are actually passed to exec. This means that when you do:
exec echo "\$PO"
Tcl is going to apply its normal substitution rules and get these exact arguments to the command dispatch: exec, echo, and $PO. This then calls into the exec command, which launches the echo program with the one argument $PO, and that is exactly what it prints. (Shells usually substitute the value first.) If you'd instead done:
exec echo {$PO}
you would have got the same effect. Or even if you'd done:
exec {*}{echo $PO}
You still end up feeding the exact same characters into exec as its arguments. If you want to run the shell on it, you should explicitly ask for it:
exec /bin/sh -c {echo $PO}
The bit in the braces there is a full (small) shell script, and will be evaluated as such. And you could do this even:
exec /bin/sh -c {exec echo '$PO'}
It's a bit of a useless thing to do but it works.
You can also do whatever substitutions you want from your own code. My current favourite from Tcl 8.7 (in development) is this:
exec echo [regsub -all -command {\$(\w+)} "\$PO" {apply {{- name} {
    global env
    return $env($name)
}}}]
OK, total overkill for this but since you can use any old complex RE and script to do the substitutions, it's a major power tool. (You can do similar things with string map, regsub and subst in older Tcl, but that's quite a bit harder to do.) The sky and your imagination are the only limits.
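For older Tcl versions, a rough equivalent (a sketch; error handling for unset variables is omitted) is to rewrite each $NAME into $::env(NAME) with regsub and then let subst perform only variable substitution:

set input "\$PO"
set rewritten [regsub -all {\$(\w+)} $input {$::env(\1)}]
puts [subst -nocommands -nobackslashes $rewritten]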
I'm using PBSPro and am trying to use qsub command line to submit a job but can't seem to get the output and error files to be named how I want them. Currently using:
qsub -N ${subjobname_short} \
-o ${path}.o{$PBS_JOBID} -e ${path}.e${PBS_JOBID}
... submission_script.sc
Where $path=fulljobname (i.e. more than 15 characters)
I'm aware that $PBS_JOBID won't be set until after the job is submitted...
Any ideas?
Thanks
The solution I came up with was following the qsub command with a qalter command like so:
jobid=$(qsub -N ${subjobname_short} submission_script.sc)
qalter -o ${path}.o${jobid} -e ${path}.e${jobid} ${jobid}
This way, PBS Pro does not need to resolve the variables itself; it failed to do so in our install (which may be a configuration issue).
If you want the ${PBS_JOBID} to be resolved by PBSPro, you need to escape it on the command line:
qsub -o \$PBS_JOBID
Otherwise, bash will attempt to resolve $PBS_JOBID before it gets to the qsub command. I don't know if $subjobname_short and $path are actual environment variables or ones you want PBS to resolve, but if you want PBS to resolve them, you'll also need to escape them or place them inside the job script.
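Putting that together (assuming your PBS Pro install does resolve the variable server-side, which the workaround above suggests may not always hold):

qsub -N ${subjobname_short} \
     -o ${path}.o\${PBS_JOBID} -e ${path}.e\${PBS_JOBID} \
     submission_script.sc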
NOTE: I also notice that your -o argument says {$PBS_JOBID} and I'm pretty sure you want ${PBS_JOBID}. I don't know if that's a typo in the question or what you tried to pass to qsub.