GNU parallel: How can I set the working directory when using "--nonall"?

When not using --nonall, the --workdir ... option ("create working dirs under ~/.parallel/tmp/ on the remote computers") works as expected:
$ parallel -S target-server --workdir ... pwd ::: ""
/home/myuser/.parallel/tmp/my-machine-7285-1
However, when I add the --nonall option, it no longer has any effect:
$ parallel --nonall -S target-server --workdir ... pwd ::: ""
/home/myuser
The same happens when specifying an explicit working directory such as /home. This works:
$ parallel -S target-server --workdir /home pwd ::: ""
/home
...but this doesn't:
$ parallel --nonall -S target-server --workdir /home pwd ::: ""
/home/myuser
Any ideas why parallel is ignoring --workdir when using --nonall?

It is clearly a bug. Now registered in the bug list: https://savannah.gnu.org/bugs/index.php?46819
Feel free to add yourself as Cc if you want to be kept informed.
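Until the bug is fixed, a possible workaround (a sketch; it assumes the target directory exists on the remote host) is to drop --workdir and put an explicit cd inside the command, since --nonall still runs the given command verbatim on each server:
$ parallel --nonall -S target-server 'cd /home && pwd'
This should print /home rather than the login directory.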

Related

Jenkins use expression/parenthesis in shell command

How to make file expressions work with Jenkinsfile?
I want to run the simple command rm -rv !(_deps) to remove everything except the _deps directory. What I've tried so far:
sh '''rm -rv !(_deps)''' which caused script.sh: line 1: syntax error near unexpected token `('
sh 'rm -rv !\\(_deps\\)' which caused rm: cannot remove '!(_deps)': No such file or directory (but it DOES exist)
sh 'rm -rv !\(_deps\)' which caused unexpected char: '\'
sh 'rm -rv !(_deps)' which caused syntax error near unexpected token `('
In Bash, to use extended pattern matching such as !(_deps) you have to enable the extglob shell option.
So the following will work. Make sure your shell executor is set to bash.
sh """
shopt -s extglob
rm -rv !(_deps)
"""
If you don't want to enable extglob, you can use a different command to delete the files. The following is one option, though note that it will misbehave on file names containing whitespace:
ls | grep -v _deps | xargs rm -rv
In Jenkins
sh """
ls | grep -v _deps | xargs rm -rv
"""

execlineb fails when the container starts. Docker for Windows

I'm trying to run a simple script inside a Docker container after it starts. The previous developer decided to use s6 inside.
#!/usr/bin/execlineb -P
foreground { sleep 2 }
nginx
When I try to start it, I get this message:
execlineb: usage: execlineb [ -p | -P | -S nmin | -s nmin ] [ -q | -w | -W ] [ -c commandline ] script args
It looks like something is wrong with executing this script, or with execline itself.
I'm using Docker for Windows under Windows 10; however, if somebody else builds this container on Ubuntu (or any other Linux), everything is OK.
Can anybody help with this kind of problem?
DockerImage: simple alpine
After researching this "HUGE" problem, we found two ways to solve it. It is definitely a problem with special characters, namely the carriage returns ('\r') that Windows line endings add.
Option 1, dos2unix:
Install dos2unix in your container (in the Dockerfile):
RUN apk --no-cache add dos2unix
Run it against your shell scripts:
RUN for file in {PathToYourFiles}; do \
    dos2unix "$file"; \
    chmod a+xwr "$file"; \
done
enjoy your scripts.
Option 2, VS Code (or any text editor):
Change the 'End of Line Sequence' from CRLF to LF (in VS Code, use the line-ending indicator in the bottom status bar).
enjoy your scripts.
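To confirm that line endings are the culprit before converting, or to strip them without installing anything (a sketch assuming a busybox or GNU userland, with /your/script as a placeholder path):
$ head -n 1 /your/script | od -c | head -n 2    # a \r before the \n confirms CRLF endings
$ sed -i 's/\r$//' /your/script                 # strip carriage returns in place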

Issues in accessing Docker environment variables in systemd service files

1) I am running a Docker container with the following command (passing a few env variables with the -e option):
$ docker run --name=xyz -d -e CONTAINER_NAME=xyz -e SSH_PORT=22 -e NWMODE=HOST -e XDG_RUNTIME_DIR=/run/user/0 --net=host -v /mnt:/mnt -v /dev:/dev -v /etc/sysconfig/network-scripts:/etc/sysconfig/network-scripts -v /:/hostroot/ -v /etc/hostname:/etc/host_hostname -v /etc/localtime:/etc/localtime -v /var/run/docker.sock:/var/run/docker.sock --privileged=true cf3681e04bfb
2) After running the container as above, I check the env variable NWMODE inside the container, and it shows up correctly:
$ docker exec -it xyz bash
$ env | grep NWMODE
NWMODE=HOST
3) Now, I created a sample service 'b', shown below, which executes a script b.sh (where I try to access NWMODE):
root@ubuntu16:/etc/systemd/system# cat b.service
[Unit]
Description=testing service b
[Service]
ExecStart=/bin/bash /etc/systemd/system/b.sh
root@ubuntu16:/etc/systemd/system# cat b.sh
#!/bin/bash
systemctl import-environment
echo "NWMODE:" $NWMODE
4) Now if I start service 'b' and check its logs, it shows that it is not able to access the NWMODE env variable:
$ systemctl start b
$ journalctl -fu b
...
systemd[1]: Started testing service b.
bash[641]: NWMODE:    (blank for $NWMODE here)
5) Now, rather than having 'systemctl import-environment' in b.sh, if I do the following, the b.service logs show the correct value of the NWMODE env variable:
$ systemctl import-environment
$ systemctl start b
Though step 5 above works, I can't go for it, as all the services in my system will be started automatically by systemd. In that case, can anyone please let me know how I can access the environment variables (passed using the 'docker run ...' command above) in a service file (e.g. in b.sh above)? Can this be achieved somehow with systemctl import-environment, or is there some other way?
systemd unsets all environment variables to provide a clean environment. AFAIK that is intended as a security feature.
Workaround: Create a file /etc/systemd/system.conf.d/myenvironment.conf:
[Manager]
DefaultEnvironment=CONTAINER_NAME=xyz NWMODE=HOST XDG_RUNTIME_DIR=/run/user/0
systemd will set the environment variables declared in this file.
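To check that the manager actually picked the file up (assuming systemd runs as PID 1 in the container), you can query the property, which should print something like:
$ systemctl show -p DefaultEnvironment
DefaultEnvironment=CONTAINER_NAME=xyz NWMODE=HOST XDG_RUNTIME_DIR=/run/user/0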
You can set up an ENTRYPOINT script that automatically creates this file before running systemd. Example:
RUN echo '#! /bin/bash \n\
echo "[Manager] \n\
DefaultEnvironment=$(while read -r Line; do echo -n "$Line " ; done < <(env)) \n\
" >/etc/systemd/system.conf.d/myenvironment.conf \n\
exec /lib/systemd/systemd \n\
' >/usr/local/bin/setmyenv && chmod +x /usr/local/bin/setmyenv
ENTRYPOINT /usr/local/bin/setmyenv
Instead of creating the script within the Dockerfile, you can store it outside and add it with COPY:
#! /bin/bash
echo "[Manager]
DefaultEnvironment=$(while read -r Line; do echo -n "$Line " ; done < <(env))
" >/etc/systemd/system.conf.d/myenvironment.conf
exec /lib/systemd/systemd
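The corresponding Dockerfile lines would then look something like this (assuming the script is saved next to the Dockerfile as setmyenv):
COPY setmyenv /usr/local/bin/setmyenv
RUN chmod +x /usr/local/bin/setmyenv
ENTRYPOINT ["/usr/local/bin/setmyenv"]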
TL;DR
Run the command using bash: first store the Docker environment variables to a file (or just pipe them to awk), extract and export the variable, and finally run your main script.
ExecStart=/bin/bash -c "cat /proc/1/environ | tr '\0' '\n' > /home/env_file; export MY_ENV_VARIABLE=$(awk -F= -v key="MY_ENV_VARIABLE" '$1==key {print $2}' /home/env_file); /usr/bin/python3 /usr/bin/my_python_script.py"
What @mviereck says is true; still, I have found another solution to this problem.
My use case is to pass an environment variable to my systemd container in the docker run command (docker run -e MY_ENV_VARIABLE="some_val") and use it in the Python script that is run through the systemd unit file.
According to this post (https://forums.docker.com/t/where-are-stored-the-environment-variables/65762), the container environment variables can be found in the running process /proc/1/environ inside the container. Performing a cat does show that the environment variable MY_ENV_VARIABLE=some_val exists, though in a mangled form (the entries are NUL-separated, so they appear concatenated):
$ cat /proc/1/environ
HOSTNAME=271fb0d986bdMY_ENV_VARIABLE=some_valcontainer=dockerLC_ALL=CDEBIAN_FRONTEND=noninteractiveHOME=/root
The main task now is to extract the MY_ENV_VARIABLE="some_val" value and pass it to the ExecStart directive in the systemd unit file.
(extraction code referenced from How to grep for value in a key-value store from plain text)
# this outputs a nice key,value pair
$ cat /proc/1/environ | tr '\0' '\n'
HOSTNAME=861f23cd1b33
MY_ENV_VARIABLE=some_val
container=docker
LC_ALL=C
DEBIAN_FRONTEND=noninteractive
HOME=/root
# we can store this in a file for use, too
$ cat /proc/1/environ | tr '\0' '\n' > /home/env_file
# we can then reuse the file to extract the value of interest against a key
$ awk -F= -v key="MY_ENV_VARIABLE" '$1==key {print $2}' /home/env_file
some_val
Now, in the ExecStart directive of the systemd unit file, we can do this:
[Service]
Type=simple
ExecStart=/bin/bash -c "cat /proc/1/environ | tr '\0' '\n' > /home/env_file; export MY_ENV_VARIABLE=$(awk -F= -v key="MY_ENV_VARIABLE" '$1==key {print $2}' /home/env_file); /usr/bin/python3 /usr/bin/my_python_script.py"
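A related variant (a sketch, not from the original answer) is to let systemd itself load the dumped file through the EnvironmentFile= directive, which reads KEY=value lines; environment files are re-read before each spawned process, so a file written by ExecStartPre is visible to ExecStart, and the '-' prefix keeps a missing file from being fatal:
[Service]
Type=simple
ExecStartPre=/bin/sh -c "tr '\0' '\n' < /proc/1/environ > /home/env_file"
EnvironmentFile=-/home/env_file
ExecStart=/usr/bin/python3 /usr/bin/my_python_script.py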

Jenkins pipeline: multiline shell commands with pipe

I am trying to create a Jenkins pipeline where I need to execute multiple shell commands and use the result of one command in the next. I found that wrapping the commands in a pair of triple single quotes (''') can accomplish this. However, I am facing issues when using a pipe to feed the output of one command to another. For example:
stage('Test') {
    sh '''
        echo "Executing Tests"
        URL=`curl -s "http://localhost:4040/api/tunnels/command_line" | jq -r '.public_url'`
        echo $URL
        RESULT=`curl -sPOST "https://api.ghostinspector.com/v1/suites/[redacted]/execute/?apiKey=[redacted]&startUrl=$URL" | jq -r '.code'`
        echo $RESULT
    '''
}
Commands with a pipe are not working properly. Here is the Jenkins console output:
+ echo Executing Tests
Executing Tests
+ curl -s http://localhost:4040/api/tunnels/command_line
+ jq -r .public_url
+ URL=null
+ echo null
null
+ curl -sPOST https://api.ghostinspector.com/v1/suites/[redacted]/execute/?apiKey=[redacted]&startUrl=null
I tried entering all these commands in the Jenkins snippet generator for the pipeline, and it gave the following output:
sh ''' echo "Executing Tests"
URL=`curl -s "http://localhost:4040/api/tunnels/command_line" | jq -r \'.public_url\'`
echo $URL
RESULT=`curl -sPOST "https://api.ghostinspector.com/v1/suites/[redacted]/execute/?apiKey=[redacted]&startUrl=$URL" | jq -r \'.code\'`
echo $RESULT
'''
Notice the escaped single quotes in the commands jq -r \'.public_url\' and jq -r \'.code\'. Using the code this way solved the problem.
UPDATE: After a while, even that started to give problems. Certain commands were executing prior to these ones: one was grunt serve and the other was ./ngrok http 9000. I added some delay after each of these commands, which solved the problem for now.
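An alternative that sidesteps the quoting problems entirely (a sketch built on the standard sh step's returnStdout option, reusing the same placeholder URLs) is to capture each command's output in Groovy and interpolate it into the next call:
stage('Test') {
    echo "Executing Tests"
    // capture the tunnel URL in Groovy instead of a shell variable
    def url = sh(script: "curl -s 'http://localhost:4040/api/tunnels/command_line' | jq -r '.public_url'", returnStdout: true).trim()
    echo url
    def result = sh(script: "curl -sPOST 'https://api.ghostinspector.com/v1/suites/[redacted]/execute/?apiKey=[redacted]&startUrl=${url}' | jq -r '.code'", returnStdout: true).trim()
    echo result
}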
The following scenario shows a real example that may need multiline shell commands: say you are using a plugin like Publish Over SSH and need to execute a set of commands on the destination host in a single SSH session:
stage ('Prepare destination host') {
sh '''
ssh -t -t user@host 'bash -s << 'ENDSSH'
if [[ -d "/path/to/some/directory/" ]];
then
rm -f /path/to/some/directory/*.jar
else
sudo mkdir -p /path/to/some/directory/
sudo chmod -R 755 /path/to/some/directory/
sudo chown -R user:user /path/to/some/directory/
fi
ENDSSH'
'''
}
Special Notes:
The last ENDSSH' should not have any characters before it; it must be at the very start of a new line.
Use ssh -t -t if you have sudo within the remote shell command.
I split the commands with &&
node {
    FOO = "world"
    stage('Preparation') { // for display purposes
        sh "ls -a && pwd && echo ${FOO}"
    }
}
The example outputs:
- ls -a (the files in your workspace)
- pwd (the workspace location)
- echo world

Why is Capistrano interpreting a flag passed with a command to `run` as input?

I'm trying to do this:
run "echo -n 'foo' > bar.txt"
and the contents of bar.txt end up being:
-n foo \n
(With \n representing an actual newline)
I use run for other commands like rm -rf and, to my knowledge, it works fine.
I just found this in man echo:
Some shells may provide a builtin echo command which is similar or identical to this utility. Most notably, the builtin echo in sh(1) does not accept the -n option. Consult the builtin(1) manual page.
My version of bash has an echo builtin, and it does seem to respect the -n flag. It looks like the shell on your deployment machine doesn't, in which case using the full path to the echo binary might do what you want here:
run "/bin/echo -n 'foo' > bar.txt"
It appears as though the -n flag isn't being interpreted as an option by the shell's echo, but simply echoed back. If, from the command line, one executes echo -Y hi, the output will be -Y hi.
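A portable way to avoid the problem altogether (a standard POSIX idiom, not from the original answers) is printf, whose format-string argument removes the option ambiguity:
run "printf '%s' 'foo' > bar.txt"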
