Shell scripting input redirection oddities - dash-shell

Can anyone explain this behavior?
Running:
#!/bin/sh
echo "hello world" | read var1 var2
echo $var1
echo $var2
results in nothing being output, while:
#!/bin/sh
echo "hello world" > test.file
read var1 var2 < test.file
echo $var1
echo $var2
produces the expected output:
hello
world
Shouldn't the pipe do in one step what the redirection to test.file did in the second example? I tried the same code with both the dash and bash shells and got the same behavior from both of them.

A recent addition to bash (version 4.2) is the lastpipe option, which allows the last command in a pipeline to run in the current shell, not a subshell, when job control is deactivated.
#!/bin/bash
set +m # Deactivate job control
shopt -s lastpipe
echo "hello world" | read var1 var2
echo $var1
echo $var2
will indeed output
hello
world

This has already been answered correctly, but the solution has not been stated yet. Use ksh, not bash. Compare:
$ echo 'echo "hello world" | read var1 var2
echo $var1
echo $var2' | bash -s
To:
$ echo 'echo "hello world" | read var1 var2
echo $var1
echo $var2' | ksh -s
hello
world
ksh is a superior programming shell because of little niceties like this. (bash is the better interactive shell, in my opinion.)

#!/bin/sh
echo "hello world" | read var1 var2
echo $var1
echo $var2
produces no output because pipelines run each of their components inside a subshell. Subshells inherit copies of the parent shell's variables, rather than sharing them. Try this:
#!/bin/sh
foo="contents of shell variable foo"
echo $foo
(
echo $foo
foo="foo contents modified"
echo $foo
)
echo $foo
The parentheses define a region of code that gets run in a subshell, so outside them $foo retains its original value despite having been modified inside.
Now try this:
#!/bin/sh
foo="contents of shell variable foo"
echo $foo
{
echo $foo
foo="foo contents modified"
echo $foo
}
echo $foo
The braces are purely for grouping; no subshell is created, so the $foo modified inside the braces is the same $foo modified outside them.
Now try this:
#!/bin/sh
echo "hello world" | {
read var1 var2
echo $var1
echo $var2
}
echo $var1
echo $var2
Inside the braces, the read builtin creates $var1 and $var2 properly and you can see that they get echoed. Outside the braces, they don't exist any more. All the code within the braces has been run in a subshell because it's one component of a pipeline.
You can put arbitrary amounts of code between braces, so you can use this piping-into-a-block construction whenever you need to run a block of shell script that parses the output of something else.
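As a sketch of that pattern (the input and field names here are invented for illustration), the block can both consume the piped output and accumulate state across lines:

```shell
#!/bin/sh
# Pipe hypothetical "name value" pairs into a braced block; the variables
# set by read and the running total are all visible within the block.
printf 'alpha 1\nbeta 2\n' | {
    total=0
    while read -r name value; do
        echo "$name=$value"
        total=$((total + value))
    done
    echo "total=$total"
}
```

As before, total and the other variables vanish once the block, and with it the subshell, exits.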

In bash, process substitution keeps read running in the current shell:
read var1 var2 < <(echo "hello world")

The post has been properly answered, but I would like to offer an alternative one-liner that may be of some use.
For assigning space separated values from echo (or stdout for that matter) to shell variables, you could consider using shell arrays:
$ var=( $( echo 'hello world' ) )
$ echo ${var[0]}
hello
$ echo ${var[1]}
world
In this example var is an array and its contents can be accessed using the construct ${var[index]}, where index is the array index (starting at 0).
That way you can have as many parameters as you want assigned to the relevant array index.

Alright, I figured it out!
This is a hard bug to catch, but it results from the way pipes are handled by the shell. Every element of a pipeline runs in a separate process. When the read command sets var1 and var2, it sets them in its own subshell, not the parent shell. So when the subshell exits, the values of var1 and var2 are lost. You can, however, try doing
var1=$(echo "Hello")
echo $var1
which returns the expected answer. Unfortunately, this only works for a single variable; you can't set several at once. In order to set multiple variables at a time you must either read into one variable and chop it up into multiple variables, or use something like this:
set -- $(echo "Hello World")
var1="$1" var2="$2"
echo $var1
echo $var2
While I admit it's not as elegant as using a pipe, it works. Keep in mind that read itself handles standard input just fine; it's the subshell created by the pipe that throws the values away.
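A sketch of the "read into one variable and chop it up" alternative, using only POSIX parameter expansion (the variable names are the question's; the input string is invented):

```shell
#!/bin/sh
# Split a two-word line without a pipe, using parameter expansion.
line="Hello World"
var1=${line%% *}    # strip everything from the first space onward
var2=${line#* }     # strip everything up to the first space
echo $var1
echo $var2
```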

It's because the pipe version creates a subshell, which reads the variable into its own local environment, which is then destroyed when the subshell exits.
Execute this command
$ echo $$;cat | read a
10637
then use pstree -p to look at the running processes; you will see an extra shell hanging off of your main shell:
| |-bash(10637)-+-bash(10786)
| | `-cat(10785)

My take on this issue (using Bash):
read var1 var2 <<< "hello world"
echo $var1 $var2

Try:
echo "hello world" | (read var1 var2 ; echo $var1 ; echo $var2 )
The problem, as multiple people have stated, is that var1 and var2 are created in a subshell environment that is destroyed when that subshell exits. The above avoids destroying the subshell until the result has been echoed. Another solution is:
result=`echo "hello world"`
read var1 var2 <<EOF
$result
EOF
echo $var1
echo $var2


How to see the PATH inside a shell without opening a shell

Using the --command flag looked like a solution, but it doesn't work.
Inside the following shell:
nix shell github:nixos/nixpkgs/nixpkgs-unstable#hello
the PATH contains a directory with a hello executable.
I've tried this:
nix shell github:nixos/nixpkgs/nixpkgs-unstable#hello --command echo $PATH
I can't see the hello executable
My eyes are not the problem.
diff <( echo $PATH ) <( nix shell github:nixos/nixpkgs/nixpkgs-unstable#hello --command echo $PATH)
I see no difference, which means the printed path does not contain hello.
Why?
The printed path does not contain hello because if your starting PATH was /nix/var/nix/profiles/default/bin:/run/current-system/sw/bin, then you just ran:
nix shell 'github:nixos/nixpkgs/nixpkgs-unstable#hello' --command \
echo /nix/var/nix/profiles/default/bin:/run/current-system/sw/bin
That is to say, you passed your original path as an argument to the nix shell command, instead of passing it a reference to a variable for it to expand later.
The easiest way to accomplish what you're looking for is:
nix shell 'github:nixos/nixpkgs/nixpkgs-unstable#hello' --command \
sh -c 'echo "$PATH"'
The single quotes prevent your shell from expanding $PATH before a copy of sh invoked by nix is started.
Of course, if you really don't want to start any kind of child shell, then you can run a non-shell tool to print environment variables:
nix shell 'github:nixos/nixpkgs/nixpkgs-unstable#hello' --command \
env | grep '^PATH='

Export of a variable from a bash script

I've got some variables which are set with a kubernetes command, so I thought it best to put these in a bash script. I've managed to do that and called it, and I can see that the variables get created, but once the bash script exits they are no longer assigned.
Within the Jenkinsfile script I have
steps {
sh '''
./bin/kube.sh
echo "Kube2 = ${SCRET}"
.....
and within the kube.sh file I have
#!/bin/bash
export SCRET=`kubectl -n keycloak get secret auser -o yaml | grep password | awk '{print $2}'`
echo "Kube2 = ${SCRET}"
I get the following results
+ ./bin/kube.sh
Kube1 = XXXXXXXX
+ echo 'SCRET = XXXXXXXX'
Kube2 =
Why does it get unset again? What am I missing?
Variables set in a subshell evaporate with that shell, and are not exported to the parent.
To set variables in the current environment using a script, you must source the code into the current context.
$: cat x
foo=bar
$: ./x && echo $foo # runs in a subshell - foo ends with ./x
$: . x && echo $foo # runs in current shell - foo is set
bar

Is it possible to send all output of the sh DSL command in the Jenkins pipeline to a file?

I'm trying to de-clutter my Jenkins output. Thanks to "Is it possible to capture the stdout from the sh DSL command in the pipeline", I know I can send the output of each sh command to a file. However, the commands themselves will still be written to the Jenkins output instead of the file. For example:
sh '''echo "Hello World!"
./helloworld
./seeyoulater
'''
As is, this results in the Jenkins output looking like this:
echo "Hello World!"
Hello World!
./helloworld
<helloworld output, possibly many lines>
./seeyoulater
<seeyoulater output, possibly many lines>
However, if I send the output to a file, I get Jenkins output like this:
echo "Hello World!" > output.log
./helloworld >> output.log
./seeyoulater >> output.log
and output.log looking like this:
Hello World!
<helloworld output>
<seeyoulater output>
This leads to my Jenkins output being less cluttered, but output.log ends up not having any separators between the script outputs. I suppose I could have echo <command> right before each command, but that just means my Jenkins output gets more cluttered again.
Is there any way to send the entire output of the sh DSL command to a file? Basically something like sh '''<commands here>''' > output.log is what I'm looking for.
I wasn't able to find a solution, but I did find a workaround. As mentioned in the sh command documentation, the default is to run using the -xe flags. The -x flag is why the commands are shown in the Jenkins output. The remedy is to add set +x:
sh '''set +x
echo "Hello World!" > output.log
./helloworld >> output.log
./seeyoulater >> output.log
'''
The set +x shows up in the Jenkins output, but the rest of the commands do not. From there, it's just a matter of adding enough echo statements in there to make output.log sufficiently readable.
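Another option is exec redirection: redirect the script's own stdout and stderr once, and everything after that point lands in the file, with whatever separators you choose (the step names below are invented):

```shell
#!/bin/sh
set +x                      # keep the commands out of the Jenkins log
exec > output.log 2>&1      # everything below goes to output.log
echo "=== helloworld ==="
echo "Hello World!"
echo "=== seeyoulater ==="
echo "see you later"
```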

Iterate in RUN command in Dockerfile

I have a line that looks something like:
RUN for i in `x y z`; do echo "$i"; done
...with the intention of printing each of the three items
But it raises /bin/sh: 1: x: not found
Any idea what I'm doing wrong?
It looks like you're using backticks. What's inside backticks gets executed, and the backticked text is replaced by that command's output, which is why the shell tries to run a command named x. You don't want quotes here either; an unquoted list gives the loop one word per iteration.
Try getting rid of the backticks like so:
RUN for i in x y z; do echo "$i"; done
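To see the difference outside Docker (plain sh, with invented words): a bare list iterates word by word, while backticks would first try to execute their contents:

```shell
#!/bin/sh
# One iteration per unquoted word: prints x, y, z on separate lines.
for i in x y z; do echo "$i"; done
# With backticks, `x y z` would first run a command named x and fail with
# "x: not found" -- the error from the question.
```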
I would suggest an alternative solution.
Instead of having the loop inside the Dockerfile, take a step back and implement it in an independent bash script.
That means you would have a loop.sh like the following:
#!/bin/bash
for i in $(seq 1 5); do echo "$i"; done
And in your Dockerfile, you will need to do:
COPY loop.sh loop.sh
RUN ./loop.sh
The approach above requires one extra step and costs one extra layer.
However, when you are going to do more complicated things, I would recommend putting them all into the script; all the operations inside it will only cost one layer.
To me this approach is cleaner and more maintainable.
Please also have a read at this one:
https://hackernoon.com/tips-to-reduce-docker-image-sizes-876095da3b34
For a more maintainable Dockerfile, my preference is to use multiple lines with comments in RUN instructions especially if chaining multiple operations with &&
RUN sh -x \
#
# execute a for loop
#
&& for i in x \
y \
z; \
do \
echo "$i"; \
done \
\
#
# and tell builder to have a great day
#
&& echo "Have a great day!"
To run a Docker container perpetually with a simple inline script:
docker run -d <container id> \
sh -c "while :; do echo 'just looping here... nothing special'; sleep 1; done"
We can do it like this:
RUN for i in x \y \z; do echo "$i" "hi"; done
The output of the above command will be:
x hi
y hi
z hi
Mind the spaces when writing for i in x \y \z; (each backslash simply escapes the following letter, so the loop still sees x, y, z).

Word splitting in bash with input from a file

I'm having some trouble getting bash to play nicely with parsing words off the command line. I find it easiest to give an example, so without further ado, here we go.
This is the script test.sh:
#!/bin/bash
echo "inside test with $# arguments"
if [[ $# -eq 0 ]]
then
data=cat data.txt
echo ./test $data
./test $data
else
for arg in "$@"
do
echo "Arg is \"$arg\""
done
fi
And here is the file data.txt:
"abc 123" 1 2 3 "how are you"
The desired output of
$ test.sh
is
inside test with 0 arguments
./test "abc 123" 1 2 3 "how are you"
inside test with 5 arguments
Arg is "abc 123"
Arg is "1"
Arg is "2"
Arg is "3"
Arg is "how are you"
But instead, I'm getting
inside test with 0 arguments
./test "abc 123" 1 2 3 "how are you"
inside test with 8 arguments
Arg is ""abc"
Arg is "123""
Arg is "1"
Arg is "2"
Arg is "3"
Arg is ""how"
Arg is "are"
Arg is "you""
The really annoying thing is that if I execute the command which is dumped from line 7 of test.sh, I do get the desired output (sans the first two lines of course).
So in essence, is there any way to get bash to parse words if given input which has been read from a file?
You can use eval for this:
eval ./test "$data"
You must be careful to use eval only when you can trust the contents of the file. To demonstrate why, add ; pwd at the end of the line in your data file. When you run your script, the current directory will be printed. Imagine if it were something destructive.
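A minimal sketch of what eval changes here (the data string is invented): plain word splitting treats the quote characters inside $data as literal text, while eval makes the shell re-parse them as quoting:

```shell
#!/bin/sh
data='"abc 123" 1 2 3'
set -- $data          # quotes stay literal: 5 words, the first is "abc
echo "plain: $#"
eval set -- $data     # re-parsed: "abc 123" becomes one word, 4 in total
echo "eval: $#"
```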
It might be better if you can choose a delimiter other than a space between fields in your data file. For example, if you use tabs, you could do:
while IFS=$'\t' read -r -a array
do
for item in "${array[@]}"
do
echo "Item is \"$item\""
done
done < data.txt
You wouldn't need to quote fields that contain spaces.
This is a correction to what I presume was a typo in your question:
data=$(cat data.txt)
No need to call the script twice.
If you find there are no arguments, you can use set to change them to something else, e.g.:
#!/bin/bash
if [ $# -eq 0 ]
then
echo "inside test with $# arguments"
eval set -- $(<data.txt)
fi
echo "inside test with $# arguments"
for arg in "$@"
do
echo "Arg is \"$arg\""
done
Yes, simply replace
./test $data
with
eval ./test $data
