Run parallel for loop to run k6 in POSIX shell - Docker

I need to run a parallel for loop in POSIX shell (not bash), but it is currently not executing in Docker: after adding & to the loop body, that statement is skipped.
Reference question with details: Shell script not running as bash using Dockerfile
Code snippet from reference question:
if [ -z "$GWC_PC_ID" ]
then
  ENV_NAME=$ENV_NAME RUN_OPTIONS=$SCENARIO_NAME-$ENV_NAME$OPTIONS_VARIANT k6 run $K6_RUN_OPTIONS "$SCENARIO/index.js"
else
  for pcId in $(printf '%s' "$GWC_PC_ID" | tr , ' '); do
    ENV_NAME=$ENV_NAME RUN_OPTIONS=$SCENARIO_NAME-$ENV_NAME$OPTIONS_VARIANT GWC_PC_ID=$pcId k6 run $K6_RUN_OPTIONS "$SCENARIO/index.js" &
  done
fi
Here if I give empty input it runs fine, but for non-empty input it does not execute the statement inside the for loop. Note that if I remove the &, then it executes the statement inside the for loop, but I need to run those statements in parallel.

The issue was that the script was not waiting for the background processes to finish; it exited before the forked jobs could run. To wait for the forked jobs I needed an array of process IDs, and since arrays are not supported in POSIX sh, I installed bash in my Docker image.
To do that I modified Dockerfile as follows:
FROM grafana/k6:0.40.0
COPY ./src/lib /lib
COPY ./src/scenarios /scenarios
COPY ./src/k6-run-all.sh /k6-run-all.sh
WORKDIR /
ENTRYPOINT []
USER root
RUN apk add --no-cache bash
CMD ["bash", "-c", "./k6-run-all.sh"]
Additionally I added the following to my script to wait for processes to finish:
if [ -z "$GWC_PC_ID" ]
then
  ENV_NAME=$ENV_NAME RUN_OPTIONS=$SCENARIO_NAME-$ENV_NAME$OPTIONS_VARIANT k6 run $K6_RUN_OPTIONS "$SCENARIO/index.js"
else
  i=0
  for pcId in $(printf '%s' "$GWC_PC_ID" | tr , ' '); do
    ENV_NAME=$ENV_NAME RUN_OPTIONS=$SCENARIO_NAME-$ENV_NAME$OPTIONS_VARIANT GWC_PC_ID=$pcId k6 run $K6_RUN_OPTIONS "$SCENARIO/index.js" &
    pids[${i}]=$!
    ((i=i+1))
  done
fi
# wait for all pids
for pid in ${pids[*]}; do
  echo "Waiting for" $pid
  wait $pid
done
I also added #!/bin/bash to the top of the script file.
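As an aside, installing bash is not strictly necessary for this: in POSIX sh, a bare wait with no arguments blocks until all background children have exited, so no array of PIDs is needed. A minimal sketch of that variant, reusing the variables above:
for pcId in $(printf '%s' "$GWC_PC_ID" | tr , ' '); do
  ENV_NAME=$ENV_NAME RUN_OPTIONS=$SCENARIO_NAME-$ENV_NAME$OPTIONS_VARIANT GWC_PC_ID=$pcId k6 run $K6_RUN_OPTIONS "$SCENARIO/index.js" &
done
# in POSIX sh, a bare wait blocks until every background job has finished
wait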

Related

Build a docker container for a "custom" program [duplicate]

I am new to the docker world. I have to invoke a shell script that takes command line arguments through a docker container.
Ex: My shell script looks like:
#!/bin/bash
echo $1
Dockerfile looks like this:
FROM ubuntu:14.04
COPY ./file.sh /
CMD /bin/bash file.sh
I am not sure how to pass the arguments while running the container
With this script in file.sh
#!/bin/bash
echo Your container args are: "$@"
and this Dockerfile
FROM ubuntu:14.04
COPY ./file.sh /
ENTRYPOINT ["/file.sh"]
you should be able to:
% docker build -t test .
% docker run test hello world
Your container args are: hello world
Use the same file.sh
#!/bin/bash
echo $1
Build the image using the existing Dockerfile:
docker build -t test .
Run the image with arguments abc or xyz or something else.
docker run -ti --rm test /file.sh abc
docker run -ti --rm test /file.sh xyz
There are a few things interacting here:
docker run your_image arg1 arg2 will replace the value of CMD with arg1 arg2. That's a full replacement of the CMD, not appending more values to it. This is why you often see docker run some_image /bin/bash to run a bash shell in the container.
When you have both an ENTRYPOINT and a CMD value defined, docker starts the container by concatenating the two and running that concatenated command. So if you define your entrypoint to be file.sh, you can now run the container with additional args that will be passed as args to file.sh.
Entrypoints and commands in Docker have two syntaxes: a string syntax that launches a shell, and a JSON syntax that performs an exec. The shell is useful for things like IO redirection, chaining multiple commands together (with things like &&), variable substitution, etc. However, that shell gets in the way of signal handling (if you've ever seen a 10 second delay when stopping a container, this is often the cause) and of concatenating an entrypoint and command together. If you define your entrypoint as a string, it runs /bin/sh -c "file.sh", which alone is fine. But if you have a command defined as a string too, you'll see something like /bin/sh -c "file.sh" /bin/sh -c "arg1 arg2" as the command launched inside your container, which is not so good. See the table here for more on how these two options interact.
The shell -c option only takes a single argument. Everything after that is assigned to the positional parameters of that command string, starting at $0, but it does not reach an embedded shell script unless you explicitly pass the args along. I.e. /bin/sh -c 'file.sh "$@"' sh "arg1" "arg2" would work, but /bin/sh -c "file.sh" "arg1" "arg2" would not, since file.sh would be called with no args.
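A quick way to see where the operands land (an illustrative one-liner, not from the original answer):
# operands after the -c string fill $0, then $1, $2, ...
/bin/sh -c 'echo "0=$0 1=$1 2=$2"' arg0 arg1 arg2
# prints: 0=arg0 1=arg1 2=arg2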
Putting that all together, the common design is:
FROM ubuntu:14.04
COPY ./file.sh /
RUN chmod 755 /file.sh
# Note the json syntax on this next line is strict, double quotes, and any syntax
# error will result in a shell being used to run the line.
ENTRYPOINT ["file.sh"]
And you then run that with:
docker run your_image arg1 arg2
There's a fair bit more detail on this at:
https://docs.docker.com/engine/reference/run/#cmd-default-command-or-options
https://docs.docker.com/engine/reference/builder/#exec-form-entrypoint-example
With Docker, the proper way to pass this sort of information is through environment variables.
So with the same Dockerfile, change the script to
#!/bin/bash
echo $FOO
After building, use the following docker command:
docker run -e FOO="hello world!" test
What I have is a script file that actually runs things. This script file might be relatively complicated. Let's call it "run_container". This script takes arguments from the command line:
run_container p1 p2 p3
A simple run_container might be:
#!/bin/bash
echo "argc = ${#*}"
echo "argv = ${*}"
What I want to do is, after "dockering" this I would like to be able to startup this container with the parameters on the docker command line like this:
docker run image_name p1 p2 p3
and have the run_container script be run with p1 p2 p3 as the parameters.
This is my solution:
Dockerfile:
FROM docker.io/ubuntu
ADD run_container /
ENTRYPOINT ["/bin/bash", "-c", "/run_container \"$#\"", "--"]
If you want the argument fixed at build time (baked into the image):
CMD /bin/bash /file.sh arg1
if you want to supply or override it at run time:
ENTRYPOINT ["/bin/bash"]
CMD ["/file.sh", "arg1"]
Then in the host shell
docker build -t test .
docker run -i -t test
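Since docker run arguments replace only the CMD and are appended to the ENTRYPOINT, this split also lets you override the argument at run time, e.g.:
docker run -i -t test /file.sh otherArg
# the container then runs: /bin/bash /file.sh otherArg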
I wanted to use the string version of ENTRYPOINT so I could use the interactive shell.
FROM docker.io/ubuntu
...
ENTRYPOINT python -m server "$@"
And then the command to run (note the --):
docker run -it server -- --my_server_flag
The way this works is that the string version of ENTRYPOINT runs a shell with the command specified as the value of the -c flag. Arguments passed to the shell after -- are provided as arguments to the command where "$#" is located. See the table here: https://tldp.org/LDP/abs/html/options.html
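The same mechanics can be reproduced with a plain shell, outside Docker (illustrative only):
# "--" fills the $0 slot; everything after it lands in "$@"
sh -c 'echo "$@"' -- --my_server_flag
# prints: --my_server_flag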
(Credit to @jkh's and @BMitch's answers for helping me understand what's happening.)
Another option...
To make this work:
docker run -d --rm $IMG_NAME "bash:command1&&command2&&command3"
put this in the Dockerfile:
ENTRYPOINT ["/entrypoint.sh"]
and this in entrypoint.sh:
#!/bin/sh
entrypoint_params=$1
printf "==>[entrypoint.sh] %s\n" "entry_point_param is $entrypoint_params"
PARAM1=$(echo "$entrypoint_params" | cut -d':' -f1) # field 1: must be 'bash'; it is tested below
PARAM2=$(echo "$entrypoint_params" | cut -d':' -f2) # field 2: the real commands, separated by &&
printf "==>[entrypoint.sh] %s\n" "PARAM1=$PARAM1"
printf "==>[entrypoint.sh] %s\n" "PARAM2=$PARAM2"
if [ "$PARAM1" = "bash" ]; then
  printf "==>[entrypoint.sh] %s\n" "about to run the $PARAM2 command"
  # note: tr maps single characters, so each '&' becomes a newline (the blank lines are harmless no-ops)
  echo "$PARAM2" | tr '&&' '\n' | while read cmd; do
    $cmd
  done
fi

Docker entrypoint script not sourcing file

I have an entrypoint script with docker which is getting executed. However, it just doesn't run the source command to source a file full of env values.
Here's the relevant section from the Dockerfile:
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
CMD ["-production"]
I have tried 2 version of entrypoint script. Neither of them are working.
VERSION 1
#!/bin/bash
cat >> /etc/bash.bashrc <<EOF
if [[ -f "/usr/local/etc/${SERVICE_NAME}/${SERVICE_NAME}.env" ]]
then
echo "${SERVICE_NAME}.env found ..."
set -a
source "/usr/local/etc/${SERVICE_NAME}/${SERVICE_NAME}.env"
set +a
fi
EOF
echo "INFO: Starting ${SERVICE_NAME} application, environment:"
exec -a $SERVICE_NAME node .
VERSION 2
ENV_FILE=/usr/local/etc/${SERVICE_NAME}/${SERVICE_NAME}.env
if [[ -f "$ENV_FILE" ]]; then
echo "INFO: Loading environment variables from file: ${ENV_FILE}"
set -a
source $ENV_FILE
set +a
fi
echo "INFO: Starting ${SERVICE_NAME} application..."
exec -a $SERVICE_NAME node .
Version 2 above prints to the log that it has found the file; however, the source command simply isn't loading the contents of the file into the environment. I check whether the contents have been loaded by running the env command.
I've been trying a few things for 3 days now with no progress. Please can someone help me? Please note I am new to Docker, which is making things quite difficult.
I think your second version is almost there.
Normally Docker doesn't read or use shell dotfiles at all. This isn't anything particular to Docker, just that you're not running an "interactive" or "login" shell at any point in the sequence. In your first form you write out a .bashrc file but then exec node, and nothing there ever re-reads the dotfile.
You mention in the question that you use the env command to check the environment. If this is via docker exec, that launches a new process inside the container, but it's not a child of the entrypoint script, so any setup that happens there won't be visible to docker exec. This usually isn't a problem.
I can suggest a couple of cleanups that might make it a little easier to see the effects of this. The biggest is to split out the node invocation from the entrypoint script. If you have both an ENTRYPOINT and a CMD then Docker passes the CMD as arguments to the ENTRYPOINT; if you change the entrypoint script to end with exec "$@" then it will run whatever it got passed.
#!/bin/sh
# (trying to avoid bash-specific constructs)
# Read the environment file
ENV_FILE="/usr/local/etc/${SERVICE_NAME}/${SERVICE_NAME}.env"
if [ -f "$ENV_FILE" ]; then
  . "$ENV_FILE"
fi
# Run the main container command
exec "$@"
And then in the Dockerfile, put the node invocation as the main command
ENTRYPOINT ["./entrypoint.sh"] # must be JSON-array syntax
CMD ["node", "."] # could be shell-command syntax
The important thing with this is that it's easy to override the command but leave the entrypoint intact. So if you run
docker run --rm your-image env
that will launch a temporary container, passing env as the command instead of "node .". It will go through the steps in the entrypoint script, including setting up the environment, but then print out the environment and exit immediately. That lets you observe the changes.

Infinite while loop in sh does not work as expected on Docker startup

I have sh code (DashBoardImport.sh) like the below. It checks an API response in an infinite loop in order to import a Kibana dashboard; if it gets a success response, it breaks the loop:
#!/bin/sh
# use while loop to check if kibana is running
while true
do
response=$(curl -X POST elk:5601/api/saved_objects/_import -H "kbn-xsrf: true" --form file=@/etc/elasticsearch/CityCountDashBoard.ndjson | grep -oE "^\{\"success")
#curl -X GET elk:9200/git-demo-topic | grep -oE "^\{\"git" > /dev/null
#match=$?
echo $response
if [ '{"success' = $response ]
then
echo "Running import dashboard.."
#curl -X POST elk:5601/api/saved_objects/_import -H "kbn-xsrf: true" --form file=@/etc/elasticsearch/CityCountDashBoard.ndjson
break
else
echo "Kibana is not running yet"
sleep 5
fi
done
I run DashBoardImport.sh via docker file:
ADD ./CityCountDashBoard.ndjson /etc/elasticsearch/CityCountDashBoard.ndjson
ADD ./DashBoardImport.sh /etc/elasticsearch/DashBoardImport.sh
#ENTRYPOINT /etc/elasticsearch/DashBoardImport.sh &
USER root
RUN chmod +x /etc/elasticsearch/DashBoardImport.sh
#RUN /etc/elasticsearch/DashBoardImport.sh &
RUN nohup bash -c "/etc/elasticsearch/DashBoardImport.sh" >/dev/null 2>&1 &
I tried many options, as you can see commented out. The script works perfectly when I run it manually in the Docker container: I kill the Kibana service, run the code, and after I start Kibana again the code successfully works as expected and imports the dashboard. But it does not work when it starts automatically in the container.
Do you have any idea?
Thanks a lot in advance :)
A RUN step executes in a temporary container until the command returns and then docker captures the changes to the filesystem as a new layer in your image. Nothing else remains, no environment variables, running processes, etc, only the filesystem changes.
So when you RUN nohup ... & that process immediately returns since it's in the background (nohup ... & explicitly does that), and so the container exits, killing any processes that were running in the container, and captures the filesystem changes made, if any, to your image.
If you want something to run when you start the container, add it to your ENTRYPOINT or CMD.
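For example (a sketch only: it assumes the base image's original entrypoint is /usr/local/bin/docker-entrypoint.sh, which you should verify with docker inspect), the RUN nohup line could be replaced by a small wrapper script set as the ENTRYPOINT:
#!/bin/sh
# hypothetical wrapper-entrypoint.sh: start the import loop in the background,
# then hand control back to the image's original entrypoint
/etc/elasticsearch/DashBoardImport.sh &
exec /usr/local/bin/docker-entrypoint.sh "$@"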

How can we run flyway/migrations script inside Cassandra Dockerfile?

My Dockerfile:
FROM cassandra:4.0
MAINTAINER me
EXPOSE 9042
I want to run something like the following when the Cassandra image is fetched and the superuser is created inside the container:
create keyspace IF NOT EXISTS XYZ WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };
I have also tried adding a shell script, but it never connects to Cassandra. My modified Dockerfile is:
FROM cassandra:4.0
MAINTAINER me
ADD entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod 755 /usr/local/bin/entrypoint.sh
RUN mkdir scripts
COPY alter.cql scripts/
RUN chmod 755 scripts/alter.cql
EXPOSE 9042
CMD ["entrypoint.sh"]
My entrypoint looks like this
#!/bin/bash
export CQLVERSION=${CQLVERSION:-"4.0"}
export CQLSH_HOST=${CQLSH_HOST:-"localhost"}
export CQLSH_PORT=${CQLSH_PORT:-"9042"}
cqlsh=( cqlsh --cqlversion ${CQLVERSION} )
# test connection to cassandra
echo "Checking connection to cassandra..."
for i in {1..30}; do
  if "${cqlsh[@]}" -e "show host;" 2> /dev/null; then
    break
  fi
  echo "Can't establish connection, will retry again in $i seconds"
  sleep $i
done
if [ "$i" = 30 ]; then
echo >&2 "Failed to connect to cassandra at ${CQLSH_HOST}:${CQLSH_PORT}"
exit 1
fi
# iterate over the cql files in /scripts folder and execute each one
for file in /scripts/*.cql; do
  [ -e "$file" ] || continue
  echo "Executing $file..."
  "${cqlsh[@]}" -f "$file"
done
echo "Done."
exit 0
This never connects to my Cassandra. Any ideas? Please help. Thanks.
Your problem is that you don't call the original entrypoint to start Cassandra - you overwrote it with your own code, which just runs cqlsh without ever starting Cassandra.
You need to modify your code to start Cassandra using the original entrypoint script (source) that is installed as /usr/local/bin/docker-entrypoint.sh, then execute your script, and then wait for the termination signal (you can't just exit from your script, because that would terminate the container).
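A sketch of that pattern (hedged: error handling is omitted, and the connection-retry and script loops are the ones from the question's entrypoint):
#!/bin/bash
# start Cassandra in the background via the image's original entrypoint
/usr/local/bin/docker-entrypoint.sh cassandra -f &
cassandra_pid=$!
# ... run the wait-for-cqlsh retry loop and the /scripts/*.cql loop here ...
# then block on the Cassandra process instead of exiting
wait "$cassandra_pid"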

Access files written in docker volumes from the host

I have a docker container writing logfiles to a named volume.
From the host I want to analyze the logfiles and search for given log messages. But when I access the folder which 'docker inspect VOLUMENAME' gives, I get strange behavior which I do not understand.
e.g. the following command gives only empty lines as output:
user@docker-host-01:~/docker-server-env/otaya-designdb$ sudo bash -c "for logfile in /var/lib/docker/volumes/design-db-logs/_data/*/*; do echo ${logfile}; done"
user@docker-host-01:~/docker-server-env/otaya-designdb$
What could be the reason?
Your local shell is expanding the variable expansion inside the double quotes before the loop happens. Change the double quotes to single quotes.
That is, when you run
sudo bash -c "for ... ; do echo ${logfile}; done"
first your local shell replaces the variable reference with whatever your local environment has set for $logfile, probably nothing
sudo bash -c 'for ...; do echo ; done'
and then it runs that command. If you change this to single quotes initially
sudo bash -c 'for ... ; do echo ${logfile}; done'
it will avoid this expansion.
You can see this just by putting the word echo at the front of the command: the shell will do its expansion, and then echo will print out the command that would have run.
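For instance (illustrative; the variable name is a placeholder):
logfile=/tmp/example.log
sudo bash -c "echo ${logfile}"   # your shell expands it first: prints /tmp/example.log
sudo bash -c 'echo ${logfile}'   # the inner shell expands it: prints an empty line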
