How can I run a command against a container and tell docker not to run the entry point? e.g.
docker-compose run foo bash
The above runs the entry point of the foo service. How can I prevent that temporarily, without modifying the Dockerfile?
docker-compose run --entrypoint=bash foo bash
This runs a nested bash, which is a bit redundant, but you'll have your prompt.
If you control the image, consider moving the entire default command line into the CMD instruction. Docker concatenates the ENTRYPOINT and CMD together when running a container, so you can do this yourself at build time.
# Bad: prevents operators from running any non-Python command
ENTRYPOINT ["python"]
CMD ["myapp.py"]
# Better: allows overriding command at runtime
CMD ["python", "myapp.py"]
This is technically "modifying the Dockerfile," but it won't change the default operation of your container: if you don't specify entrypoint: or command: in the docker-compose.yml, it runs the exact same command as before, while still allowing things like the debug shell you're trying to run.
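For completeness, the Compose-side overrides look something like this (a sketch; the service name foo and the image name are assumptions):

```yaml
services:
  foo:
    image: myapp
    # Either of these overrides the image's built-in defaults:
    # entrypoint: /entrypoint.sh
    # command: ["python", "myapp.py"]
```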
I tend to reserve ENTRYPOINT for two cases. There's a common pattern of using an ENTRYPOINT to do some first-time setup (e.g., running database migrations) and then exec "$@" to run whatever was passed as CMD. This preserves the semantics of CMD (your docker-compose run bash will still work, but the migrations will happen first). If I'm building a FROM scratch or other "distroless" image where it's actually impossible to run other commands (there isn't a /bin/sh at all), then making the single thing in the image the ENTRYPOINT makes sense.
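That first-time-setup pattern looks roughly like this (a sketch: the setup step is a placeholder, and this script would be set as the image's ENTRYPOINT with the real command in CMD):

```shell
#!/bin/sh
# docker-entrypoint.sh -- sketch of the setup-then-exec wrapper pattern
set -e

run_setup() {
    # Placeholder: substitute your real one-time setup, e.g. migrations
    echo "running first-time setup"
}

run_setup
# Replace this shell with whatever was passed as CMD (or overridden
# on the command line), so it becomes PID 1 and receives signals
exec "$@"
```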
I'm stuck trying to achieve the objective described in the title. I've tried various options, the last of which is from this article. Currently my Dockerfile is as follows:
FROM ubuntu:18.04
EXPOSE 8081
CMD cd /var/www/html/components
CMD "bash myscript start" "-D" "FOREGROUND"
#ENTRYPOINT ["bash", "myscript", "start"]
Neither the CMD..."FOREGROUND" line nor the commented-out ENTRYPOINT line works. However, when I open an interactive shell into the container, cd into the /var/.../components folder, and run the exact same command, the script works.
What do I need to change?
Add your .sh file to the image, then run it with CMD. Here's a snippet:
ADD ./configure.and.run.myapp.sh /tmp/
RUN chmod +x /tmp/configure.and.run.myapp.sh
...
CMD ["sh", "-c", "/tmp/configure.and.run.myapp.sh"]
And here is my full dockerfile, have a look.
I see three problems with the Dockerfile you've shown.
There are multiple CMDs. A Docker container only runs one command (and then exits); if you have multiple CMD directives then only the last one has an effect. If you want to change directories, use the WORKDIR directive instead.
Nothing is COPYd into the image. Unless you explicitly COPY your script into the image, it won't be there when you go to run it.
The CMD has too many quotes. In particular, the quotes around "bash myscript start" make it into a single shell word, and so the system looks for an executable program named exactly that, including spaces as part of the filename.
You should be able to correct this to something more like:
FROM ubuntu:18.04
# Instead of `CMD cd`; a short path like /app is very common
WORKDIR /var/www/html/components
# Make sure the application is part of the image
COPY ./ ./
EXPOSE 8081
# If the script is executable and begins with #!/bin/sh then
# you don't need to explicitly say "bash"; you probably do need
# the path if it's not in /usr/local/bin or similar
CMD ./myscript start -D FOREGROUND
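The quoting problem from the original CMD is easy to demonstrate in any shell; in this sketch, count_args is a made-up helper that reports how many words the program would actually receive:

```shell
# count_args reports how many arguments it was given
count_args() { echo $#; }

# As in the original CMD: the quotes glue three words into one argument,
# so the system looks for a program literally named "bash myscript start"
count_args "bash myscript start" "-D" "FOREGROUND"   # prints 3

# Unquoted, the shell sees five separate words
count_args bash myscript start -D FOREGROUND         # prints 5
```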
(I tend to avoid ENTRYPOINT here, for two main reasons. It's easier to docker run --rm -it your-image bash to get a debugging shell or run other one-off commands without an ENTRYPOINT, especially if the command requires arguments. There's also a useful pattern of using ENTRYPOINT to do first-time setup before running the CMD and this is a little easier to set up if CMD is already the main container command.)
I have a Dockerfile that I use to build the same image but for slightly different purposes. Most of the time I want it to just be an "environment" without a specific entrypoint so that the user just specifies that on the Docker run line:
docker run --rm -it --name ${CONTAINER} ${IMAGE} any_command parameters
But for some applications I want users to download the container and run it without having to set a command.
docker build -t ${IMAGE}:demo (--entrypoint ./demo.sh) <== would be nice to have
Yes, I can have a different Dockerfile for that, or append an entrypoint to the basic Dockerfile during builds, or various other mickey-mouse hacks, but those are all just one more thing that can go wrong, adding complexity, and are workarounds for the essential requirement.
Any ideas? staged builds?
The Dockerfile CMD directive sets the default command. So if your Dockerfile ends with
CMD default_command
then you can run the image in multiple ways
docker run "$IMAGE"
# runs default_command
docker run "$IMAGE" any_command parameters
# runs any_command instead
A container must run a command; you can't "just run a container" with no process in it.
You do not want ENTRYPOINT here since its syntax is noticeably harder to work with at the command line. Your any_command would be passed as arguments to the entrypoint process, rather than replacing the built-in default.
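The difference in how the run-line command is treated can be simulated in plain shell, without Docker (all names here are made up for illustration):

```shell
# Default command baked into the image
image_cmd="default_command"
# What the user typed after the image name
run_line="any_command parameters"

# CMD only: the run-line REPLACES the default (falls back if empty)
echo "CMD only:   ${run_line:-$image_cmd}"

# ENTRYPOINT set: the run-line is APPENDED as arguments
entrypoint="entry_process"
echo "ENTRYPOINT: $entrypoint $run_line"
```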
I am trying to run tests in Docker as part of my build process. What I'd like to do is start the Docker container, ignore the normal entry point, run a test command, and immediately exit with the test status.
Something like:
results=`docker run my_image --entrypoint python -m unittest discover`
When I try this I get: entrypoint requires the handler name to be the first argument
which I believe is specific to the image I am building from (AWS Lambda).
So far I'm only seeing options to either A) start the container and issue an arbitrary command, or B) keep a second Dockerfile just for testing.
Is it possible to run a docker image with an arbitrary command (ignoring the default entrypoint) where after the command is executed the container is killed?
Ideally you should restructure your application to avoid needing to override the entrypoint.
Remember that, when you run an image, the ENTRYPOINT and CMD are combined to form a single command. If you'll frequently be replacing this (combined) command string, it's best to put the whole command into CMD. If you have an ENTRYPOINT at all, it should be a wrapper that runs the command passed to it as arguments (in a shell script, with exec "$@").
# Optional entrypoint -- MUST be JSON-array syntax, and MUST `exec "$#"`
# ENTRYPOINT ["/entrypoint.sh"]
CMD python ... whatever you had before
Once you've done this, you can easily override the command part on the docker run command line:
docker run my_image python -m unittest discover
(There are two other ENTRYPOINT patterns I've seen. One is a "container as command" pattern, where the entire command line is in ENTRYPOINT, and the command part is used to take additional arguments; this supports a docker run imagename --extra-args pattern. If you really need this pattern, see below to override the whole thing. The second arbitrarily splits ENTRYPOINT ["python"], CMD ["script.py"], but there's no particular reason to do this; just combine them into CMD.)
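The "container as command" pattern mentioned above would look something like this (a sketch; mytool and its config path are hypothetical):

```dockerfile
# The whole tool invocation lives in ENTRYPOINT...
ENTRYPOINT ["mytool", "--config", "/etc/mytool.conf"]
# ...and CMD only supplies default extra arguments, so
#   docker run imagename --extra-args
# runs: mytool --config /etc/mytool.conf --extra-args
CMD ["--help"]
```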
If you can't refactor your image's Dockerfile, then you need to override the entrypoint with the --entrypoint option. This option only takes a single command word, though, and it's treated as a Docker option, so it needs to come before the image name. That leads to this awkward construction (split into multiple lines for readability):
docker run \
--entrypoint python \
my_image \
-m unittest discover
Also consider the possibility of using a non-Docker host virtual environment for routine tasks like running your service's unit tests.
I'm trying to set up a Dockerfile for Keycloak, and I want to run some commands once my container has started.
The reason is that once the server is started, I want to add some custom configuration each time the container is run. I've tried using the RUN instruction, but since my container hasn't started yet at build time, it causes the whole Dockerfile build to fail.
I thought that to run a command after the container has started I could use CMD, but when I try even CMD ["echo", "hi"] or CMD ["sh", "echo", "hi"], I get the error "invalid option echo".
Is there a way to get commands to run once a container is running and if so how?
The way to define what your container does when you start it is to specify either CMD or ENTRYPOINT; those commands are executed when you use docker run. RUN, by contrast, performs tasks during the build phase, so depending on what you want to do it may or may not be appropriate.
Try CMD sh -c 'echo hi' or CMD ["sh", "-c", "echo hi"]
The exec (list style) format is preferred but the shell format is also acceptable.
Also, keep in mind that the Dockerfile is used only for the build process. Containers are generally designed to be stateless. You shouldn't have to rebuild every time you change something in your application config.
I have built my docker image using openjdk.
# config Dockerfile
FROM openjdk:8
COPY . /usr/src/myapp
WORKDIR /usr/src/myapp
# build image
docker build -t shantanuo/dbt .
It is working as expected using this command...
docker run -p 8081:8080 -it shantanuo/dbt
Once I log-in, I have to run this command...
sh bin/startup.sh
My question: is it possible to add the startup command to the Dockerfile? I tried adding this line to my Dockerfile.
CMD ["sh", "bin/startup.sh"]
But after building the image, I cannot use the -d parameter to start the container.
You can use an ENTRYPOINT to run the startup script. In the entrypoint you can run your custom script and then run catalina.sh.
Example:
ENTRYPOINT bin/startup.sh && catalina.sh run
This will run your startup script and then start your tomcat server, and it won't exit the container.
This is addressed in the documentation here: https://docs.docker.com/config/containers/multi-service_container/
If one of your processes depends on the main process, then start your helper process FIRST with a script like wait-for-it, then start the main process SECOND and remove the fg %1 line.
#!/bin/bash
# turn on bash's job control
set -m
# Start the primary process and put it in the background
./my_main_process &
# Start the helper process
./my_helper_process
# the my_helper_process might need to know how to wait on the
# primary process to start before it does its work and returns
# now we bring the primary process back into the foreground
# and leave it there
fg %1
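A script like that would then be wired into the image along these lines (a sketch; the filename is an assumption):

```dockerfile
COPY my_wrapper_script.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/my_wrapper_script.sh
CMD ["my_wrapper_script.sh"]
```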
A Docker container must have a dedicated task, and it is important that this task/startup script does not terminate. When it does, the task is finished, and as far as Docker is concerned the container is done.
It makes no sense to start a container that contains only the JDK; you have to put your application in it.
I think it would help if you posted exactly what you are trying to do.
The Docker reference is always a good place to look at: https://docs.docker.com/engine/reference/builder/#entrypoint