Shell script unknown operand alpine - docker

I am running into a problem where my shell script doesn't work if the variable is not set or is set to an empty string.
For this I am using the alpine image
docker run -dt alpine
docker exec -it <container> sh
Here is the problematic code:
x=""
sh -c "if [ "$x" != "required" ]; then sed; fi"
When x is not set, I get the error:
sh: required: unknown operand
This only seems to be a problem with an empty string: if I set x="lkajsdfasl", it works just fine.
It only breaks down when the variable is empty or unset.
Due to the way my docker-compose is set up, I can only use sh, and I have to use sh -c.

It's a quoting issue. Because the outer string is double-quoted, your current shell expands "$x" before sh -c ever runs; with x empty, the inner shell is handed if [  != required ]; ... and its [ builtin reports required as an unknown operand. Use single quotes around the command so the expansion happens inside the inner shell instead:
sh -c 'if [ "$x" != "required" ]; then sed; fi'
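To see what is happening, here is a quick sketch you can paste into the container's sh (echo is used instead of the original sed so the effect is visible):
x=""
# Double quotes: the outer shell expands $x (empty) and the embedded double
# quotes close the string early, so the inner sh is handed: if [  != required ]; ...
sh -c "if [ "$x" != "required" ]; then echo matched; fi"
# sh: required: unknown operand
# Single quotes: the string reaches the inner sh untouched; "$x" is expanded
# there and stays a quoted (possibly empty) word, so the test is well formed.
sh -c 'if [ "$x" != "required" ]; then echo matched; fi'
# prints: matched
# (export x if the inner shell should see a non-empty value set in the outer shell)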

Build a docker container for a "custom" program [duplicate]

I am new to the docker world. I have to invoke a shell script that takes command line arguments through a docker container.
Ex: My shell script looks like:
#!/bin/bash
echo $1
Dockerfile looks like this:
FROM ubuntu:14.04
COPY ./file.sh /
CMD /bin/bash file.sh
I am not sure how to pass the arguments while running the container
with this script in file.sh
#!/bin/bash
echo Your container args are: "$@"
and this Dockerfile
FROM ubuntu:14.04
COPY ./file.sh /
ENTRYPOINT ["/file.sh"]
you should be able to:
% docker build -t test .
% docker run test hello world
Your container args are: hello world
Use the same file.sh
#!/bin/bash
echo $1
Build the image using the existing Dockerfile:
docker build -t test .
Run the image with arguments abc or xyz or something else.
docker run -ti --rm test /file.sh abc
docker run -ti --rm test /file.sh xyz
There are a few things interacting here:
docker run your_image arg1 arg2 will replace the value of CMD with arg1 arg2. That's a full replacement of the CMD, not appending more values to it. This is why you often see docker run some_image /bin/bash to run a bash shell in the container.
When you have both an ENTRYPOINT and a CMD value defined, docker starts the container by concatenating the two and running that concatenated command. So if you define your entrypoint to be file.sh, you can now run the container with additional args that will be passed as args to file.sh.
Entrypoints and Commands in docker have two syntaxes, a string syntax that will launch a shell, and a json syntax that will perform an exec. The shell is useful to handle things like IO redirection, chaining multiple commands together (with things like &&), variable substitution, etc. However, that shell gets in the way with signal handling (if you've ever seen a 10 second delay to stop a container, this is often the cause) and with concatenating an entrypoint and command together. If you define your entrypoint as a string, it would run /bin/sh -c "file.sh", which alone is fine. But if you have a command defined as a string too, you'll see something like /bin/sh -c "file.sh" /bin/sh -c "arg1 arg2" as the command being launched inside your container, not so good. See the table in the Docker docs linked below for more on how these two options interact.
The shell's -c option only takes a single argument (the command string). Everything after that string is passed as $0, $1, $2, etc. to that command string, but it does not reach an embedded shell script unless you explicitly pass the args along. I.e. /bin/sh -c "file.sh $1 $2" "sh" "arg1" "arg2" would work, but /bin/sh -c "file.sh" "arg1" "arg2" would not, since file.sh would be called with no args.
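A quick way to see that numbering, independent of any image:
sh -c 'echo "0=$0 1=$1 2=$2"' argA argB argC
# prints: 0=argA 1=argB 2=argC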
Putting that all together, the common design is:
FROM ubuntu:14.04
COPY ./file.sh /
RUN chmod 755 /file.sh
# Note the json syntax on this next line is strict, double quotes, and any syntax
# error will result in a shell being used to run the line.
ENTRYPOINT ["file.sh"]
And you then run that with:
docker run your_image arg1 arg2
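Assuming file.sh is the one shown earlier that echoes "Your container args are: $@", that run should print:
Your container args are: arg1 arg2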
There's a fair bit more detail on this at:
https://docs.docker.com/engine/reference/run/#cmd-default-command-or-options
https://docs.docker.com/engine/reference/builder/#exec-form-entrypoint-example
With Docker, the proper way to pass this sort of information is through environment variables.
So with the same Dockerfile, change the script to
#!/bin/bash
echo $FOO
After building, use the following docker command:
docker run -e FOO="hello world!" test
What I have is a script file that actually runs things. This script file might be relatively complicated. Let's call it "run_container". This script takes arguments from the command line:
run_container p1 p2 p3
A simple run_container might be:
#!/bin/bash
echo "argc = ${#*}"
echo "argv = ${*}"
What I want is, after "dockering" this, to be able to start the container with the parameters on the docker command line, like this:
docker run image_name p1 p2 p3
and have the run_container script be run with p1 p2 p3 as the parameters.
This is my solution:
Dockerfile:
FROM docker.io/ubuntu
ADD run_container /
ENTRYPOINT ["/bin/bash", "-c", "/run_container \"$#\"", "--"]
If you want to run it at build time:
CMD /bin/bash /file.sh arg1
If you want to run it at run time:
ENTRYPOINT ["/bin/bash"]
CMD ["/file.sh", "arg1"]
Then in the host shell
docker build -t test .
docker run -i -t test
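With the original file.sh that echoes $1, the run-time variant starts the container as /bin/bash /file.sh arg1 and prints arg1. The CMD part can still be overridden on the command line, for example:
docker run -i -t test /file.sh other_arg
# prints: other_arg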
I wanted to use the string version of ENTRYPOINT so I could use the interactive shell.
FROM docker.io/ubuntu
...
ENTRYPOINT python -m server "$@"
And then the command to run (note the --):
docker run -it server -- --my_server_flag
The way this works is that the string version of ENTRYPOINT runs a shell with the command specified as the value of the -c flag. Arguments passed to the shell after -- are provided as arguments to the command where "$@" is located. See the table here: https://tldp.org/LDP/abs/html/options.html
(Credit to @jkh and @BMitch answers for helping me understand what's happening.)
Another option...
To make this work:
docker run -d --rm $IMG_NAME "bash:command1&&command2&&command3"
put this in the Dockerfile:
ENTRYPOINT ["/entrypoint.sh"]
and this in entrypoint.sh:
#!/bin/sh
entrypoint_params=$1
printf "==>[entrypoint.sh] %s\n" "entrypoint_params is $entrypoint_params"
PARAM1=$(echo "$entrypoint_params" | cut -d':' -f1) # first field; must be 'bash', checked below
PARAM2=$(echo "$entrypoint_params" | cut -d':' -f2) # the real commands, separated by &&
printf "==>[entrypoint.sh] %s\n" "PARAM1=$PARAM1"
printf "==>[entrypoint.sh] %s\n" "PARAM2=$PARAM2"
if [ "$PARAM1" = "bash" ]; then
    printf "==>[entrypoint.sh] %s\n" "about to run the $PARAM2 command"
    # tr turns every '&' into a newline, so 'cmd1&&cmd2' becomes one command per
    # line (the blank lines in between are read as empty commands and do nothing)
    echo "$PARAM2" | tr '&&' '\n' | while read cmd; do
        $cmd
    done
fi
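For example (a sketch that assumes the image was built with this entrypoint.sh copied to /entrypoint.sh, and that drops -d so the output is printed directly):
docker run --rm $IMG_NAME "bash:echo hello&&echo world"
# ==>[entrypoint.sh] entrypoint_params is bash:echo hello&&echo world
# ==>[entrypoint.sh] PARAM1=bash
# ==>[entrypoint.sh] PARAM2=echo hello&&echo world
# ==>[entrypoint.sh] about to run the echo hello&&echo world command
# hello
# world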

Quotes in scripts result in “unterminated quoted string”

I'm trying to use Composer scripts with quotes to pass an environment variable to the command that will be run with Docker.
I use sh -c 'A=b [command]' in order to run a command with an environment variable.
Here is a minimal example:
{
    "scripts": {
        "docker-run": "docker run --tty composer:2",
        "docker-version": "@docker-run composer --version",
        "docker-version2": "@docker-run sh -c 'CONSTANT=6.2.x-dev composer --version'"
    }
}
When I run it, the script docker-version works as expected:
$ composer run-script docker-version
> docker run --tty composer:2 'composer' '--version'
Composer version 2.3.10 2022-07-13 15:48:23
But the script docker-version2 fails. The single quotes are escaped, which breaks the command:
$ composer run-script docker-version2
> docker run --tty composer:2 'sh' '-c' ''\''CONSTANT=6.2.x-dev' 'composer' '--version'\'''
composer: line 0: syntax error: unterminated quoted string
Script docker run --tty composer:2 handling the docker-run event returned with error code 2
Script @docker-run sh -c 'CONSTANT=6.2.x-dev composer --version' was called via docker-version2
You can set environment variables with the env command:
@docker-run env CONSTANT=6.2.x-dev composer --version
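Applied to the composer.json from the question, the scripts section would then look roughly like this (only docker-version2 changes):
{
    "scripts": {
        "docker-run": "docker run --tty composer:2",
        "docker-version": "@docker-run composer --version",
        "docker-version2": "@docker-run env CONSTANT=6.2.x-dev composer --version"
    }
}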

docker exec error with sh command returns "Unterminated quoted string"

If I run two commands:
export TMPCMD='sh -c "if [ `uname -m` = aarch64 ]; then echo 0; fi"'
docker exec container sh -c "..."
It produces an error:
[: 1: [: Syntax error: Unterminated quoted string
How might I fix this?
So far, whatever I have tried, I could not find a way to execute it in the form docker exec container sh -c "...".
Alternatively, you can create a shell script file, for example Stackoverflow.sh:
Dockerfile
FROM nginx:latest
COPY Stackoverflow.sh bin/Stackoverflow.sh
RUN chmod u+x bin/Stackoverflow.sh
RUN bin/Stackoverflow.sh
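The answer never shows Stackoverflow.sh itself; a minimal hypothetical reconstruction, based only on the output shown below, might be:
#!/bin/bash
# sketch of Stackoverflow.sh, inferred from the output below
echo "Logging-Container has started."
echo "Check uname -m command."
if [ "$(uname -m)" = "aarch64" ]; then echo 0; fi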
docker exec -it <container-ID> bash
root@<container-ID>:/bin# ./Stackoverflow.sh
#=>Logging-Container has started.
#=>Check uname -m command.
#=>0
Hope this helps you.

Access environment variable value in docker ENTRYPOINT (exec) from second parameter (with custom entrypoint script as first parameter)

I want to access the value of one of the environment variables in my Dockerfile and pass it as the first argument to the main script in the docker ENTRYPOINT.
I came across this SO link, which shows two ways to do it: one with the exec form and one with the shell form.
The exec form worked fine to echo the environment variable with ["sh", "-c", "echo $VARIABLE"], but when I tried it with my custom entrypoint script, ENTRYPOINT ["/bin/customentrypoint.sh", "$VARIABLE"], it was not able to get the value of the variable and instead just passed the literal string $VARIABLE.
So I went with the shell form and called ENTRYPOINT /bin/customentrypoint "$VARIABLE", which gets the value of $VARIABLE fine, but it seems to restrict the number of command line arguments: $# is only 1 even after passing other command line arguments from docker run. Can someone please tell me whether I am doing something wrong, or whether I should tackle this in a different way? Thanks in advance.
My Dockerfile looks similar to this:
#!/usr/bin/env bash
...
ENV VARIABLE NO
...
RUN echo "#!/bin/bash" > /bin/customentrypoint.sh
RUN echo "if [ "\"\$1\"" = 'YES' ] ; then ; python ${LOCATION}/main.py" \"\$#\" "; else ; echo Please select -e VARIABLE=YES ; fi" >> /bin/customentrypoint.sh
RUN chmod +x /bin/customentrypoint.sh
RUN ln -s -T /bin/customentrypoint.sh /bin/customentrypoint
WORKDIR ${LOCATION}
ENTRYPOINT /bin/customentrypoint "$VARIABLE" # - works fine but limits no of command line arguments
# ENTRYPOINT ["bin/customentrypoint", "$VARIABLE"] # not able to get value of $VARIABLE instead taking as constant.
The command I am using is:
docker run --rm -v $PWD:/mnt -e VARIABLE=VALUE docker_image:tag entrypoint -d /mnt/tmp -i /mnt/input_file
The environment for CMD is interpreted slightly differently depending on how you write the arguments. If you pass the CMD as a string (not inside an array), it gets launched as a shell instead of exec. See https://docs.docker.com/engine/reference/builder/#cmd.
What you can try, if you want to use the array (exec) form, is:
ENTRYPOINT ["/bin/sh", "-c", "echo ${VARIABLE}"]

Access files written in docker volumes from the host

I have a docker container writing logfiles to a named volume.
From the host I want to analyze the logfiles and search for given log messages. But when I access the folder that 'docker inspect VOLUMNAME' reports, I see strange behavior that I do not understand.
E.g. the following command gives only empty lines as output:
user@docker-host-01:~/docker-server-env/otaya-designdb$ sudo bash -c "for logfile in /var/lib/docker/volumes/design-db-logs/_data/*/*; do echo ${logfile}; done"
user@docker-host-01:~/docker-server-env/otaya-designdb$
What could be the reason?
Your local shell expands the variable reference inside the double quotes before the loop ever runs. Change the double quotes to single quotes.
That is, when you run
sudo bash -c "for ... ; do echo ${logfile}; done"
first your local shell replaces the variable reference with whatever your local environment has set for $logfile, probably nothing
sudo bash -c 'for ...; do echo ; done'
and then it runs that command. If you change this to single quotes initially
sudo bash -c 'for ... ; do echo ${logfile}; done'
it will avoid this expansion.
You can see this just by putting the word echo at the front of the command: the shell will do its expansion, and then echo will print out the command that would have run.
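Concretely, the command from the question becomes:
sudo bash -c 'for logfile in /var/lib/docker/volumes/design-db-logs/_data/*/*; do echo ${logfile}; done'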
