This is the normal way of doing it in a shell script:
starttime=$(date '+%d/%m/%Y %H:%M:%S')
#echo $starttime
# sleep for 5 seconds
sleep 5
# end time
endtime=$(date '+%d/%m/%Y %H:%M:%S')
#echo $endtime
STARTTIME=$(date -d "${starttime}" +%s)
ENDTIME=$(date -d "${endtime}" +%s)
RUNTIME=$((ENDTIME-STARTTIME))
echo "Seconds ${RUNTIME} in sec"
I want to do the same thing in a Dockerfile: get the timestamps before and after execution of a command.
Could someone please help with this?
It is exactly the same. A RUN command runs an ordinary Bourne shell command line (wrapping it in sh -c). If you have this much scripting involved you might consider writing it into a shell script, COPYing the script into your image, then RUNning it.
If this is just for temporary diagnostics, and you don't need to calculate the time in seconds, you can just run date as is without the rest of the scripting.
RUN date; make; date # except this won't actually stop on failure
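If you do want a failed make to stop the build but still print both timestamps, one small variation (a sketch, not from the original answer) is to capture make's exit status and re-raise it after the second date:
RUN date; make; rc=$?; date; exit $rc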
If you were especially motivated you could take the script from the question, make it take a command as an argument, and write a script around it
#!/bin/sh
starttime=$(date '+%d/%m/%Y %H:%M:%S')
sh -c "$#"
rc=$?
endtime=$(date '+%d/%m/%Y %H:%M:%S')
...
exit "$rc"
Then in your Dockerfile you can use the SHELL directive to make this script run your RUN commands. You will rarely see RUN commands using JSON arrays, but note that that form bypasses the SHELL and therefore your script.
# must be executable and have a correct #!/bin/sh line
COPY timeit.sh /usr/local/bin
SHELL ["/usr/local/bin/timeit.sh"]
RUN make
RUN ["/bin/echo", "this will not be timed"]
I am creating a Dockerfile that needs to source a script before a shell is run.
ENTRYPOINT ["/bin/bash", "-rcfile","<(echo '. ./mydir/scripttosource.sh')"]
However, the script isn't sourced as expected.
Combining these parameters on a command line (normal Linux instance, outside of any Docker container), it works properly, for example:
$ /bin/bash -rcfile <(echo '. ./mydir/scripttosource.sh')
So I took a look at what was actually used by the container when it was run.
$ docker ps --format "table {{.ID}} \t {{.Names}} \t {{.Command}}" --no-trunc
CONTAINER ID NAMES COMMAND
70a5f846787075bd9bd55432dc17366268c33c1ab06fb36b23a50f5c3aef19bb happy_cray "/bin/bash -rcfile '<(echo '. ./mydir/scripttosource.sh')'"
Besides the fact that it properly identified the emotional state of Cray computers, Docker seems to be sneaking in undesired single quotes into the third parameter to ENTRYPOINT.
'<(echo '. ./mydir/scripttosource.sh')'
Thus the command actually being executed is:
$ /bin/bash -rcfile '<(echo '. ./mydir/scripttosource.sh')'
Which doesn't work...
Now I realize there are more ways to skin this cat, and I could make this work a different way, but I am curious about the insertion of single quotes into the third argument to ENTRYPOINT. Is there a way to avoid this?
Thank you,
At a super low level, the Unix execve(2) function launches a process by taking a sequence of words, where the first word is the actual command to run and the remaining words are its arguments. When you run a command interactively, the shell breaks it into words, usually at spaces, and then calls an exec-type function to run it. The shell also does other processing like replacing $VARIABLE references or the bash-specific <(subprocess) construct; all of these are at layers above simply "run a process".
The Dockerfile ENTRYPOINT (and also CMD, and less frequently RUN) has two forms. You're using the JSON-array exec form. If you do this, you're telling Docker that you want to run the main container command with exactly these three literal strings as arguments. In particular the <(...) string is passed as a literal argument to bash --rcfile, and nothing actually executes it.
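As a rough illustration of the difference (some-command and $VARIABLE are hypothetical names, not anything from your setup):
# shell form: Docker wraps the string in ["/bin/sh", "-c", ...], so a shell
# expands $VARIABLE, <(...), and similar constructs before the command runs
ENTRYPOINT some-command $VARIABLE
# exec form: the strings are handed to execve() verbatim; "$VARIABLE" is
# passed as a literal argument and nothing ever expands it
ENTRYPOINT ["some-command", "$VARIABLE"]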
The obvious answer here is to use the string-syntax shell form instead
ENTRYPOINT /bin/bash -rcfile <(echo '. ./mydir/scripttosource.sh')
Docker wraps this in an invocation of sh -c (or the Dockerfile SHELL). That causes a shell to preprocess the command string, break it into words, and execute it. Assuming the SHELL is bash and not a pure POSIX shell, this will handle the substitution.
However, there are some downsides to this, most notably that the sh -c invocation "eats" all of the arguments that might be passed in the CMD. If you want your main container process to be anything other than an interactive shell, this won't work.
This brings you to the point of trying to find simpler alternatives to doing this. One specific observation is that the substitution here isn't doing anything; <(echo something) will always produce the fixed string something and you can do it without the substitution. If you can avoid the substitution then you don't need the shell either:
ENTRYPOINT ["/bin/bash", "--rcfile", "./mydir/scripttosource.sh"]
Another sensible approach here is to use an entrypoint wrapper script. This uses the ENTRYPOINT to run a shell script that does whatever initialization is needed, then exec "$@" to run the main container command. In particular, if you use the shell . command to set environment variables (equivalent to the bash-specific source) those will "stick" for the main container process.
#!/bin/sh
# entrypoint.sh
# read the file that sets variables
. ./mydir/scripttosource.sh
# run the main container command
exec "$#"
# Dockerfile
COPY entrypoint.sh ./ # may be part of some other COPY
ENTRYPOINT ["./entrypoint.sh"] # must be JSON-array syntax
CMD ???
This should have the same net effect. If you get a debugging shell with docker run --rm -it your-image bash, it will run under the entrypoint wrapper and see the environment variables. You can do other setup in the wrapper script if required. This particular setup also doesn't use any bash-specific options, and might run better under minimal Alpine-based images.
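For example, to spot-check the setup (SOME_VAR here is a hypothetical stand-in for whatever scripttosource.sh actually sets):
docker run --rm -it your-image bash
# inside the container shell, the wrapper has already sourced the script
echo "$SOME_VAR"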
Insertion of single quotes can be avoided by using escape characters in the third argument to ENTRYPOINT.
ENTRYPOINT ["/bin/bash", "-rcfile","$(echo '. ./mydir/scripttosource.sh')"]
I have an entrypoint script with Docker which is getting executed. However, it just doesn't run the source command to source a file full of env values.
Here's the relevant section from the Dockerfile:
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
CMD ["-production"]
I have tried 2 version of entrypoint script. Neither of them are working.
VERSION 1
#!/bin/bash
cat >> /etc/bash.bashrc <<EOF
if [[ -f "/usr/local/etc/${SERVICE_NAME}/${SERVICE_NAME}.env" ]]
then
echo "${SERVICE_NAME}.env found ..."
set -a
source "/usr/local/etc/${SERVICE_NAME}/${SERVICE_NAME}.env"
set +a
fi
EOF
echo "INFO: Starting ${SERVICE_NAME} application, environment:"
exec -a $SERVICE_NAME node .
VERSION 2
ENV_FILE=/usr/local/etc/${SERVICE_NAME}/${SERVICE_NAME}.env
if [[ -f "$ENV_FILE" ]]; then
echo "INFO: Loading environment variables from file: ${ENV_FILE}"
set -a
source $ENV_FILE
set +a
fi
echo "INFO: Starting ${SERVICE_NAME} application..."
exec -a $SERVICE_NAME node .
Version 2 above prints to the log that it has found the file; however, the source command simply isn't loading the contents of the file into the environment. I check whether the contents have been loaded by running the env command.
I've been trying a few things for 3 days now with no progress. Can someone please help me? Please note I am new to Docker, which is making things quite difficult.
I think your second version is almost there.
Normally Docker doesn't read or use shell dotfiles at all. This isn't anything particular to Docker, just that you're not running an "interactive" or "login" shell at any point in the sequence. In your first form you write out a .bashrc file but then exec node, and nothing there ever re-reads the dotfile.
You mention in the question that you use the env command to check the environment. If this is via docker exec, that launches a new process inside the container, but it's not a child of the entrypoint script, so any setup that happens there won't be visible to docker exec. This usually isn't a problem.
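For example (my-container is a hypothetical name), this:
docker exec my-container env
starts a brand-new process inside the container; it is not a child of the entrypoint script, so any variables the script exported will not appear in its output.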
I can suggest a couple of cleanups that might make it a little easier to see the effects of this. The biggest is to split out the node invocation from the entrypoint script. If you have both an ENTRYPOINT and a CMD then Docker passes the CMD as arguments to the ENTRYPOINT; if you change the entrypoint script to end with exec "$#" then it will run whatever it got passed.
#!/bin/sh
# (trying to avoid bash-specific constructs)
# Read the environment file
ENV_FILE="/usr/local/etc/${SERVICE_NAME}/${SERVICE_NAME}.env"
if [ -f "$ENV_FILE" ]; then
  . "$ENV_FILE"
fi
# Run the main container command
exec "$#"
And then in the Dockerfile, put the node invocation as the main command
ENTRYPOINT ["./entrypoint.sh"] # must be JSON-array syntax
CMD ["node", "."] # could be shell-command syntax
The important thing with this is that it's easy to override the command but leave the entrypoint intact. So if you run
docker run --rm your-image env
that will launch a temporary container, but passing env as the command instead of node .. That will go through the steps in the entrypoint script, including setting up the environment, but then print out the environment and exit immediately. That will let you observe the changes.
I have a Docker image that schedules a cron job at a frequency defined when building the image, using the lines below.
COPY myjobtime /etc/cron.d/myjobtime
RUN chmod 0644 /etc/cron.d/myjobtime &&\
crontab /etc/cron.d/myjobtime
CMD cron
I have the cron entry in the file myjobtime.
*/10 * * * * /usr/local/bin/sh /app/myscript.py
I would like to be able to pass the cron schedule during the runtime. Meaning, if someone wants to modify the cron schedule to a different frequency, they should be able to do that while running the container and passing an environment variable file with the new cron schedule in it. Can this be done?
The important detail is that you need to create and install the crontab file when the container starts up. I find an entrypoint wrapper script to be a useful pattern for this: set the image's ENTRYPOINT to be a shell script that does whatever first-time setup is required, then have it exec "$#" to run the image's CMD.
If your image is ultimately based on a Linux distribution based on the GNU toolset, then envsubst is a really helpful program here. It reads in a text file, expands environment variable references, and writes out the result. I'll assume you have this available; on Alpine-based images you can do similar tricks with sed(1) (though escaping around the cron schedule will become tricky).
This makes the entrypoint wrapper script something like:
#!/bin/sh
# entrypoint.sh
# Set a default schedule, if the user didn't provide one
if [ -z "$CRON_SCHEDULE" ]; then
export CRON_SCHEDULE='*/10 * * * *'
fi
# Run substitutions on the template file and inject the crontab
envsubst < /app/myjobtime.cron.tmpl | crontab
# Run the main container command
exec "$#"
Since the template isn't a "normal" crontab, it can't go in the "normal" crontab directory; putting it in the application directory is fine. That file has an environment variable reference where the schedule would go
# myjobtime.cron.tmpl
${CRON_SCHEDULE} /app/myscript.py
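On an Alpine-based image without envsubst, a rough sed-based stand-in for the envsubst line above might look like this (a sketch; | is used as the sed delimiter because schedules like */10 * * * * contain slashes, and unusual characters in the schedule can still break the substitution):
# replace the literal ${CRON_SCHEDULE} token in the template by hand
sed "s|\${CRON_SCHEDULE}|$CRON_SCHEDULE|g" /app/myjobtime.cron.tmpl | crontab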
In your image, set the wrapper script to be the ENTRYPOINT, make sure the template file is in the right place, and leave the CMD unchanged.
# (assuming there's not a broad `COPY . .`)
COPY myjobtime.cron.tmpl .
COPY entrypoint.sh .
ENTRYPOINT ["/app/entrypoint.sh"] # must be JSON-array syntax
CMD cron # unchanged
This should allow you to override the cron schedule.
docker run -d --name hourly myappcron
docker run -d --name daily -e 'CRON_SCHEDULE=0 0 * * *' myappcron
Since the entrypoint wrapper script runs whatever command was provided, and you can override the command pretty easily, this also lets you double-check that the right schedule got set.
docker run --rm -e 'CRON_SCHEDULE=0 0 * * *' myappcron \
crontab -l # runs instead of the cron daemon
I tried searching the Docker documentation; however, I cannot find anything that directly relates to running a backup on the down command. Additionally, I see you can add your own command script in the yml on up, so I was hoping there might be something similar for down?
You need to make your own entrypoint script that will create an exit hook. You can see more details on the steps of building a custom image with a custom entrypoint in this SO answer.
In your case, the entrypoint will look like this:
#!/bin/bash
set -e
execute_on_finish() {
echo "Execute on finish"
}
trap execute_on_finish EXIT
echo "CALLING ENTRYPOINT WITH CMD: $#"
exec /old_entrypoint.sh "$@" &
daemon_pid=$!
wait $daemon_pid
execute_on_finish
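For this to work, the image also has to put the wrapper in front of the original entrypoint. A rough Dockerfile fragment might look like the lines below (the /docker-entrypoint.sh path is hypothetical; it is whatever the base image's original entrypoint script happens to be):
# keep the base image's original entrypoint under a new name (path depends on the base image)
RUN mv /docker-entrypoint.sh /old_entrypoint.sh
# install the wrapper and make it the entrypoint
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]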
Note
Since the backup process is a long operation, and Docker will kill the container if the process doesn't shut down within 10s, you will need to pass the -t option to the stop command so it doesn't kill the container before the hook finishes. See more details here
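For example (120 seconds is just an illustration; pick whatever your backup actually needs):
# give the container extra time to run the exit hook before SIGKILL
docker stop -t 120 my-container
# or, with compose
docker-compose down -t 120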
I have base image with Jboss copied on it. Jboss is started with a script and takes around 2 minutes.
In my Dockerfile I have created a command.
CMD start_deploy.sh && tail -F server.log
I do a tail to keep the container alive otherwise "docker-compose up" exits when script finishes and container stops.
The problem is that when I do "docker-compose up" through Jenkins, the build doesn't finish because of tail, and I can't start the next build.
If I do "docker-compose up -d" then the next build starts too early and starts executing tests against the container, which hasn't started yet.
Is there a way to return from docker-compose up only when the server has started completely?
Whenever you have chained (&&) or piped (|) commands, it is easier to either:
wrap them in a script and use that script in your CMD directive:
CMD myscript
or wrap them in an sh -c command:
sh -c 'start_deploy.sh && tail -F server.log'
(but that last one depends on the ENTRYPOINT of the image; a default ENTRYPOINT should allow this CMD to work)
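For completeness, a minimal sketch of what that wrapper script (the myscript in the CMD above) could look like, assuming start_deploy.sh is on the PATH:
#!/bin/sh
# start the deployment, then keep the container in the foreground by
# following the log; exec lets tail receive signals directly
start_deploy.sh && exec tail -F server.log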