I'm using docker-compose. I need to insert some behavior into a container's startup lifecycle. (Creating a custom image is overkill and I want to avoid that.)
Ordinarily I'd override the entrypoint (run my commands then run the original entrypoint), but for this image I cannot as it performs important work.
The order I want:
entrypoint (I can't override this as it must run before my stuff)
the stuff I want to run
the original command (php-fpm)
So I tried this:
command: >
  my_extra_command_1
  my_extra_command_2
  php-fpm
and this:
command: my_extra_command_1; my_extra_command_2; php-fpm
and this:
command: ["my_extra_command_1", "my_extra_command_2", "php-fpm"]
None of these work because the container stops after my first command. It doesn't run all the commands.
What is the correct syntax?
BTW the image's Dockerfile is defined using exec form as follows:
ENTRYPOINT ["/entrypoint.sh"]
CMD ["php-fpm"]
and the entrypoint script ends with exec "$@".
You need to arrange for a shell to run your command. Shell-form commands in a Dockerfile are automatically wrapped in sh -c, but Docker Compose won't do that for you.
command: sh -c 'my_extra_command_1; my_extra_command_2; php-fpm'
If you use the list syntax, where you break the command into words yourself, the entire argument to -c is still a single word:
command:
  - sh
  - -c
  - my_extra_command_1; my_extra_command_2; php-fpm
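Putting that back into your docker-compose.yml, a minimal sketch might look like this (the service and image names are assumptions; since entrypoint: isn't overridden, the image's /entrypoint.sh still runs first and ends by exec'ing this sh -c chain):
services:
  app:
    image: your-php-fpm-image   # assumption: whatever image you already use
    # no entrypoint: override, so /entrypoint.sh still runs before the command
    command: sh -c 'my_extra_command_1; my_extra_command_2; exec php-fpm'
Writing exec php-fpm as the last step makes php-fpm replace the wrapper shell, so it receives signals (e.g. from docker stop) directly.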
Related
Trying to run a pod based on an image with this Dockerfile:
...
ENTRYPOINT [ "./mybashscript", ";", "flask" ]
CMD [ "run" ]
I was expecting the full command to be ./mybashscript; flask run.
However, in this example, the pod / container executes ./mybashscript but not flask.
I also tried a couple of variations like:
...
ENTRYPOINT [ "/bin/bash", "-c", "./mybashscript && flask" ]
CMD [ "run" ]
Now, flask gets executed but run is ignored.
PS: I am trying to understand why this doesn't work and I am aware that I can fit all into the entrypoint or shove everything inside the bash script, but that is not the point.
In both cases you show here, you use the JSON-array exec form for ENTRYPOINT and CMD. This means no shell is run, except in the second case where you run it explicitly; the two parts are simply combined into a single command.
The first construct runs the script ./mybashscript, which must be executable and have a valid "shebang" line (probably #!/bin/bash). The script is passed three arguments, which you can see in the shell variables $1, $2, and $3: a semicolon ;, flask, and run.
The second construct runs /bin/bash -c './mybashscript && flask' run. bash -c takes a single argument, which is ./mybashscript && flask; the remaining argument run becomes a positional parameter, and the bash -c command would see it as $0.
The arbitrary split of ENTRYPOINT and CMD you show doesn't really make sense. The only really important difference between the two is that it is easier to change CMD when you run the container, for example by putting it after the image name in a docker run command. It makes sense to put all of the command in the command part, or none of it, but not really to put half of the command in one part and half in another.
My first pass here would be to write:
# no ENTRYPOINT
CMD ./mybashscript && flask run
Since this is the bare-string shell form, Docker inserts a sh -c wrapper for you, so the && has its usual Bourne-shell meaning.
This setup looks like you're trying to run an initialization script before the main container command. There's a reasonably standard pattern of using an ENTRYPOINT for this. Since it gets passed the CMD as parameters, the script can end with exec "$@" to run the CMD (potentially as overridden in the docker run command). The entrypoint script could look like
#!/bin/sh
# entrypoint.sh
./mybashscript
exec "$#"
(If you wrote mybashscript, you could also end it with the exec "$@" line, and use that script as the entrypoint.)
In the Dockerfile, set this wrapper script as the ENTRYPOINT, and then whatever the main command is as the CMD.
ENTRYPOINT ["./entrypoint.sh"] # must be a JSON array
CMD ["flask", "run"] # can be either form
If you provide an alternate command, it replaces CMD, and so the exec "$#" line will run that command instead of what's in the Dockerfile, but the ENTRYPOINT wrapper still runs.
# See the environment the wrapper sets up
docker run --rm your-image env
# Double-check the data directory setup
docker run --rm -v $PWD/data:/data your-image ls -l /data
If you really want to use the sh -c form and the split ENTRYPOINT, then the command inside sh -c has to read "$@" to find its positional parameters (the CMD), and you need to know that the first word after the -c string becomes $0, not $1. The form you show would be functional if you wrote
# not really recommended but it would work
ENTRYPOINT ["/bin/sh", "-c", "./mybashscript && flask \"$#\"", "flask"]
CMD ["run"]
How can I run a command against a container and tell docker not to run the entry point? e.g.
docker-compose run foo bash
The above will still run the entrypoint of the foo container. How can I prevent that temporarily, without modifying the Dockerfile?
docker-compose run --entrypoint=bash foo bash
It'll run a nested bash, a bit useless, but you'll have your prompt.
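If you want that override available without retyping it, Compose also accepts an entrypoint: key per service, for example in a docker-compose.override.yml (a sketch; foo is the service name from the question):
services:
  foo:
    # replaces the image's ENTRYPOINT for local debugging only
    entrypoint: ["bash"]
Note that Compose applies docker-compose.override.yml by default, so this also changes what docker-compose up does for that service.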
If you control the image, consider moving the entire default command line into the CMD instruction. Docker just concatenates the ENTRYPOINT and CMD together when it runs a container, so nothing is lost by doing this yourself at build time.
# Bad: prevents operators from running any non-Python command
ENTRYPOINT ["python"]
CMD ["myapp.py"]
# Better: allows overriding command at runtime
CMD ["python", "myapp.py"]
This is technically "modifying the Dockerfile" but it won't change the default operation of your container: if you don't specify entrypoint: or command: in the docker-compose.yml then it will run the exact same command, but it also allows running things like debug shells in the way you're trying to.
I tend to reserve ENTRYPOINT for two cases. There's a common pattern of using an ENTRYPOINT to do some first-time setup (e.g., running database migrations) and then exec "$@" to run whatever was passed as CMD. This preserves the semantics of CMD (your docker-compose run foo bash will still work, but migrations will happen first). If I'm building a FROM scratch or other "distroless" image where it's actually impossible to run other commands (there isn't a /bin/sh at all), then making the single thing in the image be the ENTRYPOINT makes sense.
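A minimal sketch of that first pattern (the migration command is just a placeholder for whatever setup you need):
#!/bin/sh
# entrypoint.sh: one-time setup, then hand off to whatever CMD was given
./manage.py migrate
exec "$@"
and in the Dockerfile:
ENTRYPOINT ["./entrypoint.sh"]
CMD ["python", "myapp.py"]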
I have a Dockerfile as follows.
ENV SPRING_ENV="local"
ENV APP_OPTS "-Xmx8144m"
RUN echo "/usr/lib/jvm/java-1.8-openjdk/bin/java ${APP_OPTS} -Djava.security.egd=file:/dev/./urandom -jar /apps/demo/demo-fe.jar --spring.config.location=file:///apps/demo/conf/ump.properties -Dspring.profiles.active=${SPRING_ENV} &" > /apps/demo/entrypoint.sh
RUN chmod +x /apps/demo/entrypoint.sh
When I run the dockerfile, I see a file 'entrypoint.sh' with the java command that I specified in the Dockerfile.
But I want to change the java max memory depending on the environment. So I am running like this.
docker run -it <image_id> sh -e "APP_OPTS=-Xmx9144m" -e "SPRING_ENV=dev"
But when I run it and check entrypoint.sh, I don't see the environment variables replaced. Am I missing something?
Does it replace only on the fly when I actually run the container?
You need to escape the $ in ${APP_OPTS} (i.e., change it to \${APP_OPTS}) -- during docker build, the variable is replaced with the "current" environment variable, which is whatever is in your env output (otherwise null). Calling docker run ... -e "APP_OPTS=-Xmx9144m" won't do anything at this point, because ${APP_OPTS} was already replaced when the image was built.
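With that escape in place, the RUN line from the question would look like this (everything else unchanged):
RUN echo "/usr/lib/jvm/java-1.8-openjdk/bin/java \${APP_OPTS} -Djava.security.egd=file:/dev/./urandom -jar /apps/demo/demo-fe.jar --spring.config.location=file:///apps/demo/conf/ump.properties -Dspring.profiles.active=\${SPRING_ENV} &" > /apps/demo/entrypoint.sh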
Otherwise, you could save entrypoint.sh as a file in the same folder as your Dockerfile instead of having the Dockerfile create it, and use COPY to put it where you want it. That way, the ${APP_OPTS} reference won't get replaced during docker build.
The Dockerfile (and its RUN commands) is only executed when you build the image. SPRING_ENV and APP_OPTS are evaluated only once, during the build.
When you run the image, the --env=KEY=VALUE pairs are passed to the shell (!) that runs the process defined in the ENTRYPOINT or CMD (which you need but do not have).
You're missing a FROM ... statement near the top of the Dockerfile too.
You will need to define an ENTRYPOINT (the shell form is recommended here) that invokes the Java runtime, picks up the environment variables, and runs your code, perhaps like this (I have not tried it):
FROM ???
ENV SPRING_ENV="local"
ENV APP_OPTS "-Xmx8144m"
ENTRYPOINT /usr/lib/jvm/java-1.8-openjdk/bin/java ${APP_OPTS} -Djava.security.egd=file:/dev/./urandom -jar /apps/demo/demo-fe.jar --spring.config.location=file:///apps/demo/conf/ump.properties -Dspring.profiles.active=${SPRING_ENV}
Example:
FROM busybox
ENV DOG=Freddie
ENTRYPOINT echo ${DOG}
Then:
docker build --tag=58208029 --file=./Dockerfile .
docker run -it 58208029:latest
Freddie
docker run -it --env=DOG=Henry 58208029:latest
Henry
HTH!
The entrypoint.sh is being written when you build the image, so that RUN statement won't be executed again when you run the container. So the entrypoint.sh file itself will not be updated.
Another issue is that when you do the docker run, the -e options need to be before the image name and command:
docker run -it -e "APP_OPTS=-Xmx9144m" -e "SPRING_ENV=dev" <image_id> sh
Otherwise those are just passed as arguments to the entrypoint/command.
Also, in your Dockerfile, you probably want single quotes around the string you echo into the entrypoint script, so the values aren't interpolated at build time.
RUN echo '/usr/lib/jvm/java-1.8-openjdk/bin/java ${APP_OPTS} -Djava.security.egd=file:/dev/./urandom -jar /apps/demo/demo-fe.jar --spring.config.location=file:///apps/demo/conf/ump.properties -Dspring.profiles.active=${SPRING_ENV} &' > /apps/demo/entrypoint.sh
Then, when you run the container, the entrypoint script will read the variable values from the environment at run time, since it's the shell running the script that expands them.
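A sketch of the piece still missing from the question's Dockerfile, assuming the ${...} references in the generated script stay literal (via the single quotes above, or the \$ escape from the other answer):
# run the generated script through a shell at container start, so ${APP_OPTS}
# and ${SPRING_ENV} are expanded from the runtime environment
ENTRYPOINT ["sh", "/apps/demo/entrypoint.sh"]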
I want to include a cron task in a MariaDB container, based on the latest image mariadb, but I'm stuck with this.
I tried many things without success because I can't launch both MariaDB and Cron.
Here is my actual dockerfile:
FROM mariadb:10.3
# DB settings
ENV MYSQL_DATABASE=beurre \
MYSQL_ROOT_PASSWORD=beurette
COPY ./data /docker-entrypoint-initdb.d
COPY ./keys/keys.enc home/mdb/
COPY ./config/encryption.cnf /etc/mysql/conf.d/encryption.cnf
# Installations
RUN apt-get update && apt-get -y install python cron
# Cron
RUN touch /etc/cron.d/bp-cron
RUN printf '* * * * * root echo "Hello world" >> /var/log/cron.log 2>&1\n#' >> /etc/cron.d/bp-cron
RUN touch /var/log/cron.log
RUN chmod 0644 /etc/cron.d/bp-cron
RUN cron
With these settings, the database starts correctly, but cron is not started. To make it work, I have to go into the container and run the cron command myself, and then everything works perfectly.
So I'm looking for a way to launch both the db and cron from my Dockerfile used in my docker-compose.
If this is not possible, maybe there is another way to run scheduled tasks? The goal is to execute a script against the database.
Elaborating on @k0pernikus's comment, I would recommend using a separate container that runs cron. The cron jobs in that container can then work with your mysql database.
Here's how I would approach it:
1. Create a Cron Docker Container
You can set up a cron container fairly simply. Here's an example Dockerfile that should do the job:
FROM alpine
COPY ./crontab /etc/crontab
RUN crontab /etc/crontab
RUN touch /var/log/cron.log
CMD crond -f
Just put your crontab into a crontab file next to that Dockerfile and you should have a working cron container.
An example crontab file:
* * * * * mysql -h mysql --execute "INSERT INTO database.table VALUES 'v';"
2. Add the cron container to your docker-compose.yml as a service
Make sure you add your cron container to the docker-compose.yml, and put it in the same network as your mysql service:
networks:
  my_network:

services:
  mysql:
    image: mariadb
    networks:
      - my_network

  cron:
    image: my_cron
    depends_on:
      - mysql
    build:
      context: ./path/to/my/cron-docker-folder
    networks:
      - my_network
I recommend the solution provided by fjc. Treat this answer as nice-to-know background on why your approach is not working.
Docker's RUN instructions are only executed during the build, not on container startup.
It also has CMD (and ENTRYPOINT) instructions that define what runs when the container starts.
Since you are using mariadb, its ENTRYPOINT and CMD are:
ENTRYPOINT ["docker-entrypoint.sh"]
EXPOSE 3306
CMD ["mysqld"]
(You can find the links to the actual Dockerfiles on Docker Hub.)
This tells docker to run:
docker-entrypoint.sh mysqld
on startup.
You'd have to override its docker-entrypoint.sh to allow for the startup of the cron job as well.
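A sketch of such an override, assuming the cron package from your Dockerfile and the entrypoint path used by the official mariadb image; the wrapper name custom-entrypoint.sh is mine:
#!/bin/bash
# custom-entrypoint.sh: start the cron daemon, then hand off to the image's own entrypoint
cron
exec docker-entrypoint.sh "$@"
and in your Dockerfile:
COPY custom-entrypoint.sh /usr/local/bin/custom-entrypoint.sh
RUN chmod +x /usr/local/bin/custom-entrypoint.sh
ENTRYPOINT ["custom-entrypoint.sh"]
CMD ["mysqld"]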
See the relevant part of the Dockerfile reference for the CMD instruction:
CMD

The CMD instruction has three forms:

CMD ["executable","param1","param2"] (exec form, this is the preferred form)
CMD ["param1","param2"] (as default parameters to ENTRYPOINT)
CMD command param1 param2 (shell form)

There can only be one CMD instruction in a Dockerfile. If you list more than one CMD then only the last CMD will take effect.

The main purpose of a CMD is to provide defaults for an executing container. These defaults can include an executable, or they can omit the executable, in which case you must specify an ENTRYPOINT instruction as well.

Note: If CMD is used to provide default arguments for the ENTRYPOINT instruction, both the CMD and ENTRYPOINT instructions should be specified with the JSON array format.

Note: The exec form is parsed as a JSON array, which means that you must use double-quotes (") around words, not single-quotes (').

Note: Unlike the shell form, the exec form does not invoke a command shell. This means that normal shell processing does not happen. For example, CMD [ "echo", "$HOME" ] will not do variable substitution on $HOME. If you want shell processing then either use the shell form or execute a shell directly, for example: CMD [ "sh", "-c", "echo $HOME" ]. When using the exec form and executing a shell directly, as in the case for the shell form, it is the shell that is doing the environment variable expansion, not docker.

When used in the shell or exec formats, the CMD instruction sets the command to be executed when running the image.

If you use the shell form of the CMD, then the <command> will execute in /bin/sh -c:

FROM ubuntu
CMD echo "This is a test." | wc -

If you want to run your <command> without a shell then you must express the command as a JSON array and give the full path to the executable. This array form is the preferred format of CMD. Any additional parameters must be individually expressed as strings in the array:

FROM ubuntu
CMD ["/usr/bin/wc","--help"]

If you would like your container to run the same executable every time, then you should consider using ENTRYPOINT in combination with CMD. See ENTRYPOINT.

If the user specifies arguments to docker run then they will override the default specified in CMD.

Note: Don't confuse RUN with CMD. RUN actually runs a command and commits the result; CMD does not execute anything at build time, but specifies the intended command for the image.
I am trying to figure out how to get the CMD command in a Dockerfile to run a script on startup for docker run. I know that using the RUN command makes the image run that script while building the image, but I want it to run the script every time I run a new container from that image. The script is just a simple script that outputs the current date/time to a file.
Here is the dockerfile that works if I use RUN
# Pull base image
FROM alpine:latest
# gcr.io/dev-ihm-analytics-platform/practice_docker:ulta
WORKDIR /root/
RUN apk --update upgrade && apk add bash
ADD ./script.sh ./
RUN ./script.sh
Here is the same dockerfile that doesn't work with CMD
# Pull base image
FROM alpine:latest
# gcr.io/dev-ihm-analytics-platform/practice_docker:ulta
WORKDIR /root/
RUN apk --update upgrade && apk add bash
ADD ./script.sh ./
CMD ["./script.sh"]
I have tried all sorts of things after the CMD command, like ["/script.sh"], ["bash script.sh"], ["bash", "./script.sh"], and bash script.sh, but I always get an error and I don't know what I am doing wrong. All I want is to
docker run -it name_of_container bash
and then find that the script has executed, by seeing that there is an output file with the run information in the container once I am inside.
There are three basic ways to do this:
You can RUN ./script.sh. It will happen once, at docker build time, and be baked into your image.
You can CMD ./script.sh. It will happen once, and be the single command the container runs. If you provide some alternate command (docker run ... bash, for instance), that runs instead of this CMD.
You can write a custom entrypoint script that does this first-time setup, then runs the CMD or whatever got passed on the command line. The main container process is the entrypoint, and it gets passed the command as arguments. This script (and whatever it does inside) will get run on every startup. This script can look something like
#!/bin/sh
./script.sh
exec "$#"
It needs to be separately COPYed into the image, and then you'd set something like ENTRYPOINT ["./entrypoint.sh"].
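Wired into your existing Dockerfile, that could look something like this (a sketch; entrypoint.sh is assumed to sit next to script.sh, and the default CMD is just an example):
# Pull base image
FROM alpine:latest
WORKDIR /root/
RUN apk --update upgrade && apk add bash
# copy both the setup script and the wrapper shown above
COPY ./script.sh ./entrypoint.sh ./
RUN chmod +x ./script.sh ./entrypoint.sh
ENTRYPOINT ["./entrypoint.sh"]
# default command; docker run -it name_of_container bash replaces this but keeps the wrapper
CMD ["sh"]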
(Given the problem as you’ve actually described it — you have a shell script and you want to run it and inspect the file output in an interactive shell — I’d just run it at your local command prompt and not involve Docker at all. This avoids all of these sequencing and filesystem mapping issues.)
There are multiple ways to achieve what you want, but your first attempt, with the RUN ./script.sh line, is probably the best.
The CMD and ENTRYPOINT commands are overridable on the command-line as flags to the container run command. So, if you want to ensure that this is run every time you start the container, then it shouldn't be part of the CMD or ENTRYPOINT commands.
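For example (illustrative; the image name comes from your docker run line):
# the trailing arguments replace CMD
docker run --rm name_of_container ./script.sh
# --entrypoint replaces the ENTRYPOINT
docker run --rm -it --entrypoint /bin/sh name_of_container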
Well, I am using the CMD command to start my Java applications, and when the container is inside the WORKDIR I execute the following:
CMD ["/usr/bin/java", "-jar", "-Dspring.profiles.active=default", "/app.jar"]
Have you tried removing the "." in the CMD command so it looks like this:
CMD ["/script.sh"]
There might be a different syntax when using RUN or CMD.