I have a file named volumes with the list of volumes, separated by space. Here is an example of the file content
vol-0e9cd38819c7a6cb8 vol-0baba5cee0c7fc89a vol-0e7fae905aaffe3a1
I can delete all of them by using this loop in sh:
for volume in $(cat volumes); do; aws ec2 delete-volume --volume-id $volume; done
But when I try to do it inside of the Jenkinsfile like this:
sh "for volume in \$(cat volumes); do; aws ec2 delete-volume --volume-id \$volume; done"
I get the following error:
syntax error near unexpected token `;'
I've tried escaping the characters in different ways and also different forms of the sh block, but it doesn't help; I still get the same error.
Please help me to resolve this issue.
There is no ; after do. It's:
for volume in $(cat volumes); do aws ec2 delete-volume --volume-id $volume; done
for i in $(cat ...) is an anti-pattern (see https://mywiki.wooledge.org/BashFAQ/001). In this case I would use xargs:
xargs -n1 aws ec2 delete-volume --volume-id < volumes
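Putting that back into the Jenkinsfile, a minimal sketch of the step could look like this (using a single-quoted Groovy string, so nothing needs to be escaped for Groovy; the xargs form avoids the loop entirely):
sh 'xargs -n1 aws ec2 delete-volume --volume-id < volumes'
or, if you prefer to keep the loop, with the stray ; after do removed:
sh 'for volume in $(cat volumes); do aws ec2 delete-volume --volume-id $volume; done'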
I am trying to put a Spring Boot Java .jar file into an image of openJDK and commit the change to it in Docker, but the docker command does not seem to work.
I am following this article for steps : https://dzone.com/articles/docker-tutorial-for-beginners-with-java-and-spring
What argument is docker expecting that I did not give in the command?
docker container commit --change='CMD ["java","-jar","/tmp/mytroubleartifact-0.jar"]' upbeat_brahmagupta a-repo-name-of-choice/some-app-name:tagname2
The issue is with the apostrophe ('): you should be using quotation marks (") instead.
This command seems to be working for me
(no changes needed on your behalf):
docker container commit --change="CMD ['java','-jar','/tmp/mytroubleartifact-0.jar']" upbeat_brahmagupta a-repo-name-of-choice/some-app-name:tagname2
Though I've seen other people use a single apostrophe, in your situation it didn't work for me either. The docker docs example also didn't work when I just copied the --change part, with both options (-c, --change) and with just one of them; only using double quotes did the trick, and I'm not exactly sure why (I tried replacing names and making them shorter ¯\_(ツ)_/¯).
docker commit --change='CMD ["apachectl", "-DFOREGROUND"]' -c "EXPOSE 80" c3f279d17e0a svendowideit/testimage:version4
thanks to these posts for the working example:
https://adamtheautomator.com/docker-commit/
https://docs.oracle.com/cd/E37670_01/E75728/html/ch04s18.html
Dipping my toes into Bash coding for the first time (not the most experienced person with Linux either) and I'm trying to read the version from the version.php inside a container at:
/config/www/nextcloud/version.php
To do so, I run:
docker exec -it 1c8c05daba19 grep -eo "(0|[1-9]\d*)\.(0|[1-9]\d*)\.(0|[1-9]\d*)(?:-((?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*)(?:\.(?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*))*))?(?:\+([0-9a-zA-Z-]+(?:\.[0-9a-zA-Z-]+)*))?" /config/www/nextcloud/version.php
This uses a semantic versioning RegEx pattern (I know, a bit overkill, but it works for now) to read and extract the version from the line:
$OC_VersionString = '20.0.1';
However, when I run the command it tells me No such file or directory (I've confirmed the file does exist at that path inside the container) and then proceeds to spit out the entire contents of the file it just said doesn't exist:
grep: (0|[1-9]\d*).(0|[1-9]\d*).(0|[1-9]\d*)(?:-((?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-])(?:.(?:0|[1-9]\d|\d*[a-zA-Z-][0-9a-zA-Z-]))))?(?:+([0-9a-zA-Z-]+(?:.[0-9a-zA-Z-]+)*))?: No such file or directory
/config/www/nextcloud/version.php:$OC_Version = array(20,0,1,1);
/config/www/nextcloud/version.php:$OC_VersionString = '20.0.1';
/config/www/nextcloud/version.php:$OC_Edition = '';
/config/www/nextcloud/version.php:$OC_VersionCanBeUpgradedFrom = array (
/config/www/nextcloud/version.php: 'nextcloud' =>
/config/www/nextcloud/version.php: 'owncloud' =>
/config/www/nextcloud/version.php:$vendor = 'nextcloud';
Anyone able to spot the problem?
Update 1:
For the sake of clarity, I'm trying to run this from a bash script. I just want to fetch the version number from that file, to use it in other areas of the script.
Update 2:
Responding to the comments, I tried logging in to the container first and then running the grep, and I still get the same result. Then I cat that file and it shows its contents no problem.
Many containers don't have the GNU versions of Unix tools and their various extensions. It's popular to base containers on Alpine Linux, which in turn uses a very lightweight single-binary tool called BusyBox to provide the base tools. Those tend to have the set of options required in the POSIX specs, and no more.
POSIX grep(1) in particular doesn't have an -o option. So the command you're running is
grep \
  -eo \                                # specify "o" as the regexp to match
  "(regexps are write-only)" \         # a filename
  /config/www/nextcloud/version.php    # a second filename
Notice that the grep output in the interactive shell only contains lines with the letter "o", but not for example the line just containing array.
POSIX grep doesn't have an equivalent for GNU grep's -o option
Print only the matched (non-empty) parts of matching lines, with each such part on a separate output line. Output lines use the same delimiters as input....
but it's easy to do that with sed(1) instead. Ask it to match some stuff, the regexp in question, and some stuff, and replace it with the matched group.
sed -e 's/.*\(any regexp here\).*/\1/' input-file
(POSIX sed only accepts basic regular expressions, so you'll have to escape more of the parentheses.)
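Applied to this particular file, a sketch of that approach (reusing the container ID and path from the question, with a much simpler version pattern than the full semver one) could be:
docker exec 1c8c05daba19 sed -n "s/.*OC_VersionString = '\([0-9][0-9]*\.[0-9][0-9]*\.[0-9][0-9]*\)'.*/\1/p" /config/www/nextcloud/version.php
The -n option together with the p flag prints only the line where the substitution happened, so the output is just 20.0.1.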
Well, for any potential future readers, I had no luck getting grep to do it, I'm sure it was my fault somehow and not grep's, but thanks to the help in this post I was able to use awk instead of grep, like so:
docker exec -it 1c8c05daba19 awk '/^\$OC_VersionString/ && match($0,/\047[0-9]+\.[0-9]+\.[0-9]+\047/){print substr($0,RSTART+1,RLENGTH-2)}' /config/www/nextcloud/version.php
That ended up doing exactly what I needed:
It logs into a docker container.
Scans and returns just the version number from the line I am looking for at: /config/www/nextcloud/version.php inside the container.
Exits stage left from the container with just the info I needed.
I can get right back to eating my Hot Cheetos.
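Since the goal (per Update 1) was to use the number elsewhere in a bash script, a small sketch of capturing it into a variable (same container ID and awk program as above; -it dropped because no interactive terminal is needed when capturing output, and a TTY can add carriage returns to the captured text) might be:
version=$(docker exec 1c8c05daba19 awk '/^\$OC_VersionString/ && match($0,/\047[0-9]+\.[0-9]+\.[0-9]+\047/){print substr($0,RSTART+1,RLENGTH-2)}' /config/www/nextcloud/version.php)
echo "Nextcloud version: ${version}"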
I'm trying to use snakemake with a docker image, but am having trouble with the docker volume. Unfortunately, there are no details on how to use 'singularity-args' to do this.
My snakemake file is:
rule all:
    input:
        'a/file3.txt'

rule step1:
    output:
        touch('a/file1.txt')

rule step2:
    input:
        rules.step1.output[0]
    output:
        'a/file2.txt'
    params:
        text = 'this is a test',
        path = '/data/file2.txt'
    singularity:
        "docker://XXX/test"
    shell:
        "python test.py {params.text} {params.path}"

rule step3:
    input:
        rules.step2.output[0]
    output:
        touch('a/file3.txt')
The docker image is basically a python file that writes a string to file (for testing purposes). I'm trying to mount my home directory to the docker /data directory. With docker, I'm able to mount a volume using '-v'.
What is the correct way of doing this with snakemake?
I've tried the following commands (on MacOS and Ubuntu 18.04) and both have failed.
snakemake -s pipeline.py --use-singularity --singularity-args “-B /home/XXX/snakemake/a:/data”
snakemake -s pipeline.py --use-singularity --singularity-args “-B /home/XXX/snakemake/a”
The error message is:
No rule to produce /home/XXX/snakemake/a:/data” (if you use input functions make sure that they don't raise unexpected exceptions).
Am I missing a step?
Thanks in advance!
Just a trivial check... In your command lines you have tilted double quotes (“) instead of the straight ones ("), e.g.:
snakemake -s pipeline.py --use-singularity --singularity-args “-B /home/XXX/snakemake/a”
Maybe you are copying and pasting from a text editor that uses the tilted quotes? I would use straight quotes, as the other type would probably be interpreted in the wrong way.
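With straight quotes, the first command from the question would read (everything else unchanged):
snakemake -s pipeline.py --use-singularity --singularity-args "-B /home/XXX/snakemake/a:/data"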
I was able to get it working on Ubuntu 18.04 with the following command:
SINGULARITY_BINDPATH="/home/XXX/snakemake/a:/data" snakemake -s pipeline.py --latency-wait 10 --use-singularity
Unfortunately I wasn’t able to get the flag “--singularity-args” to work. Regardless of using ‘--bind’ or ‘-B’, I got the error “No rule to produce /Users/XXX/Devel/snakemake/a:/data”.
I’m using Snakemake 5.6.0 inside a Python3 virtual environment.
Also, on a side note, I don’t believe the MacOS singularity binary works. It had issues with Snakemake.
This work-around is good enough for now.
UPDATE
While this solution worked, the real solution (a typo) was provided by @dariober.
I am unable to run repo non-interactively inside a container as part of a freestyle job.
It prompts for the user-name and email. I got round that by doing a git config --global inside the job.
But then it does the color test, and that hangs indefinitely.
Looking at the source code for repo I see this
if os.isatty(0) and os.isatty(1) and not self.manifest.IsMirror:
    if opt.config_name or self._ShouldConfigureUser():
        self._ConfigureUser()
    self._ConfigureColor()
So, I ran the following inside the container:
python -c "import os; print os.isatty(0), os.isatty(1)"
and, sure enough, it printed out True True
Looking at the Jenkins log, it launches the container with --tty specified, and there seems no way to configure that option.
I can't find a bash option to force a script to be run in a non-interactive shell. If I put the above python line in a file and execute it with almost any combination of commands and options, it still prints out True True
The only way I see something different is if I use I/O redirection
bash <a.sh
which prints out False True - i.e. stdin is not a tty, and
bash <a.sh >a.log
which prints False False.
For a complex script, are there any problems using the bash <script approach?
Does anyone know any jenkins magic to prevent docker being launched using --tty?
I know that the --tty is the culprit. I built the container locally and ran the following
$ docker run repotest python -c "import os;print os.isatty(0), os.isatty(1)"
False False
$ docker run --tty repotest python -c "import os;print os.isatty(0), os.isatty(1)"
True True
Running Versions:
repo: 1.12.37 (per Ubuntu 16.04 apt-get)
Jenkins: 2.149
Cloudbees Docker Plugin: 1.7.3
Container base is ubuntu:xenial
I'm using the "Build inside a docker container" option.
To run the bash script repo_script.sh "non-interactively", or more precisely without terminals associated with the standard streams, you could run your script simply as
repo_script.sh < /dev/null 2>&1 | cat
assuming you want to see the output the way you would see it when running simply repo_script.sh. By piping standard output and error to a different process, the file descriptors appear as pipes and not TTYs to repo_script.sh. You could also direct the output to a file, or even to /dev/null if you do not care about it:
log_file=/dev/null
repo_script.sh < /dev/null > "${log_file}" 2>&1
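If you want to convince yourself that the redirection is what changes the isatty() result, a quick check along the lines of the question's own test (assuming the same repotest image with Python 2) would be:
docker run --tty repotest sh -c 'python -c "import os; print os.isatty(0), os.isatty(1)" < /dev/null 2>&1 | cat'
This should print False False even though the container itself was started with --tty.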
Running the script as
bash < repo_script.sh | cat
might work too, though it is a very unorthodox and, to my mind, hackish way of running a script just to break the association of TTY to the standard input. From the script engine's point of view, reading a script from a file is different from reading it from standard input (which typically, if it is a terminal, is not seekable), so there might be some subtle differences that could possibly bite you in unexpected ways. This way also does not communicate your intention as clearly to the next person who needs to understand your code, and may lead to partial hair loss in that person due to extraneous head scratching.
There is no need for any bash options, just using the output directions from within the interpreting shell as above described is an easy-to-comprehend, multi-platform compatible standard convention for changing the standard stream associations.
P.S. I think it should be enough for your repo script to just test whether the standard input is a TTY. It looks to me like the author of that script did not think deeply enough there. There is simply no use waiting for input if you do not have a terminal device associated with standard input, and you could determine from there that everything needs to run without user interaction, or stop with an error if that is not possible.
I'm using PBSPro and am trying to use qsub command line to submit a job but can't seem to get the output and error files to be named how I want them. Currently using:
qsub -N ${subjobname_short} \
-o ${path}.o{$PBS_JOBID} -e ${path}.e${PBS_JOBID}
... submission_script.sc
Where $path=fulljobname (i.e. more than 15 characters)
I'm aware that $PBS_JOBID won't be set until after the job is submitted...
Any ideas?
Thanks
The solution I came up with was following the qsub command with a qalter command like so:
jobid=$(qsub -N ${subjobname_short} submission_script.sc)
qalter -o ${path}.o${jobid} -e ${path}.e${jobid} ${jobid}
This way, PBS Pro does not need to resolve the variables, as it failed to do so in our install (this may be a configuration issue)
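For reference, PBS Pro's qsub prints the full job identifier (typically of the form 1234.servername), so the file names above include the server suffix. If you only want the numeric part, a small variation on the same idea (hypothetical, using the same names) would be:
jobid=$(qsub -N ${subjobname_short} submission_script.sc)
jobnum=${jobid%%.*}    # e.g. turns 1234.servername into 1234
qalter -o ${path}.o${jobnum} -e ${path}.e${jobnum} ${jobid}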
If you want the ${PBS_JOBID} to be resolved by PBSPro, you need to escape it on the command line:
qsub -o \$PBS_JOBID
Otherwise, bash will attempt to resolve $PBS_JOBID before it gets to the qsub command. I don't know if $subjobname_short and $path are actual environment variables or ones you want PBS to resolve, but if you want PBS to resolve them you'll also need to escape them or place them inside the job script.
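Putting that together, and assuming PBS Pro in your installation does resolve ${PBS_JOBID} in the output paths as described (the accepted workaround suggests this may depend on configuration), the submission might look like:
qsub -N ${subjobname_short} \
     -o ${path}.o\${PBS_JOBID} -e ${path}.e\${PBS_JOBID} \
     submission_script.sc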
NOTE: I also notice that your -o argument says {$PBS_JOBID} and I'm pretty sure you want ${PBS_JOBID}. I don't know if that's a typo in the question or what you tried to pass to qsub.