Right way to use ENTRYPOINT to enable container start and stop - docker

I have a custom image built using a Dockerfile. A fresh run works fine; however, when I stop the container and start it again, it doesn't start and remains in the Exit 0 state.
The image is composed of apache2 and a bunch of PHP modules for a Symfony web application.
This is how the Dockerfile ends:
RUN a2enmod rewrite
CMD service apache2 restart
ENTRYPOINT ["/usr/sbin/apache2ctl"]
CMD ["-D", "FOREGROUND"]
EXPOSE 80
I see containers commonly using a docker-entrypoint.sh script, but I'm unsure of what goes into it and the role it plays.

The entrypoint shouldn't have anything to do with your container not restarting. Your problem is most likely elsewhere and you need to look at the logs from the container to debug. The output of docker diff ... may also help to see what has changed in the container filesystem.
If an ENTRYPOINT isn't defined, docker runs the CMD by default. If an ENTRYPOINT is defined, anything in CMD becomes a command-line argument to the entrypoint. So in your example above, the container starts (or restarts) with /usr/sbin/apache2ctl -D FOREGROUND. Anything you append after the container name in the docker run command overrides the value of CMD, and you can override the ENTRYPOINT itself with docker run --entrypoint ....
See Docker's documentation on the entrypoint option for more details.
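Since you asked about docker-entrypoint.sh: the usual pattern is a small script that does one-time setup and then execs whatever CMD (or docker run arguments) it was handed, so that process becomes PID 1. A minimal sketch (the file name and paths are just the common convention, not anything your image already has):
#!/bin/sh
# docker-entrypoint.sh: run one-time setup, then hand off to CMD.
set -e
# ... any config templating, permission fixes, migrations go here ...
# exec replaces this shell, so the CMD becomes PID 1 and receives signals.
exec "$@"
Wired into a Dockerfile, it might look like:
COPY docker-entrypoint.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]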

Related

Launching gunicorn instances in a docker image, using docker run

In my Dockerfile, for a Flask app, I have a set of commands that work as planned.
The last line of my dockerfile is currently:
ENTRYPOINT [ "/bin/bash", "-c" ]
I need to launch some gunicorn instances for this image.
So, I run the following commands in the terminal, outside the image.
$ docker run -itd --name running_name -p 5000:5000 image_name bash
If I run without bash, the container just exits automatically after a few seconds...
$ docker container exec -it running_name /bin/bash -c bash
Now that I'm in, I launch the gunicorn instances and exit the container. Because I used exec, the instances are still running.
Is there a way to launch the gunicorn instances from docker run, without having to enter into the container?
I've tried ENTRYPOINT [ "gunicorn", "--bind", "0.0.0.0:5000" ] but the container still exits automatically.
I've also tried substituting the last line for CMD gunicorn --bind 0.0.0.0:5000 and then doing docker run -d --name run_name -p 5000:5000 image_name.
The container still exits automatically.
Edit: To reflect the possible answer below, here are my updated attempts and some extra information.
The following files are all at the same level of the directory structure.
In the api_docker.py file, I have:
from flask import Flask
from flask_restful import Api

app = Flask(__name__)
api = Api(app)
api.add_resource(<some_code>)
In the gunicorn.conf.py file, I have:
worker_class = "gevent"
workers = 2
timeout = 90
bind = "0.0.0.0:5000"
wsgi_app = "wsgi:app"
errorlog = "logging/error.log"
capture_output = True
loglevel = "debug"
daemon = True
enable_stdio_inheritance = True
preload = True
I've also tried removing the bind and wsgi_app lines from this file.
In the Dockerfile:
<some_code>
CMD ["gunicorn", "--conf", "gunicorn.conf.py", "--bind", "0.0.0.0:5000", "api_docker:app"]
I build successfully, and then I do:
docker run -d --name name_run -p 5000:5000 name_image
You need to give gunicorn a module to actually run, e.g. app:main for an app.py file exposing a main WSGI callable. Do this as the CMD, not the ENTRYPOINT, or from docker run, unless you plan on providing further gunicorn-related arguments when you actually run the image (run arguments or the CMD are appended to the ENTRYPOINT).
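For example, if the Flask object is named app inside api_docker.py (as in the edit above), a sketch of that CMD would be:
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "api_docker:app"]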
Or, you could use an existing image that already has these details for you - e.g. https://github.com/tiangolo/meinheld-gunicorn-flask-docker
To solve this issue, I did the following:
Removed the options daemon, enable_stdio_inheritance, and preload from the conf file.
I also increased the timeout and graceful timeout parameters to 120.
Gunicorn will look for a conf file and use the parameter values defined therein, unless they are overridden on the CLI. Therefore, I just ran CMD ["gunicorn"].
I think the most important change was that of point 1, namely setting daemon back to false (the default). I would guess that as a daemon, gunicorn forks into the background, so the process Docker is monitoring exits and the container stops with it.
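Putting that together, a sketch of the trimmed gunicorn.conf.py (the module and callable names are assumed from the files above):
# gunicorn.conf.py: daemon, enable_stdio_inheritance and preload removed
worker_class = "gevent"
workers = 2
timeout = 120
graceful_timeout = 120
bind = "0.0.0.0:5000"
wsgi_app = "api_docker:app"
errorlog = "-"   # "-" sends logs to stderr, so docker logs can show them
loglevel = "debug"
and in the Dockerfile simply:
CMD ["gunicorn"]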

Docker on Windows can mount a folder for an nginx container but not for Ubuntu

I am building an image from this Dockerfile for NGINX:
FROM nginx
COPY html /usr/share/nginx/html
I then run the container using this command
docker run -v /C/nginx/html:/usr/share/nginx/html -p 8081:80 -d --name cntr-mynginx mynginx:abc
This works and I am able to mount the folder and the changes made in the html folder on the host can be seen when within the container file system. The edits made on the container filesystem under the /usr/share/nginx/html folder are visible on the host as well.
Why does the same not work when I use an Ubuntu base? This is the Dockerfile for the Ubuntu container I am trying to spin up.
FROM ubuntu:18.04
COPY html /home
I used this command to run it
docker run -v /C/ubuntu-only/html:/home -p 8083:8080 --name cntr-ubuntu img-ubuntu:abc
The command above runs and when I do a docker ps -a, I see that the container stopped as soon as it started.
I removed the copy of the html and made the Ubuntu image even smaller by keeping just the first line FROM ubuntu:18.04, and even then I get the same result: the container exited almost as soon as it started. Any idea why this works for NGINX but not for Ubuntu, and what do I need to do to make it work?
The issue you are experiencing does not have to do with mounting a directory into your container.
The command above runs and when I do a docker ps -a, I see that the container stopped as soon as it started.
The container is exiting due to the fact that there is no process being specified for it to run.
In the NGINX case, you can see that a CMD instruction is set at the end of the Dockerfile.
CMD ["nginx", "-g", "daemon off;"]
This starts NGINX as a foreground process, and prevents the container from exiting immediately.
The Ubuntu Dockerfile is different in that it specifies bash as the command the container will run at start.
CMD ["/bin/bash"]
Because bash is not attached to an interactive terminal here (the container was started without -it), it exits immediately, and the container exits with it.
Try augmenting your docker run command to include a process that stays in the foreground, like sleep.
docker run -v /C/ubuntu-only/html:/home -p 8083:8080 --name cntr-ubuntu img-ubuntu:abc sleep 9000
If you run docker exec -it cntr-ubuntu /bin/bash you should find yourself inside the container and verify that the mounted directory is present.
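If you'd rather not pass a command at docker run time, a sketch of baking a foreground process into the image itself (sleep is just a stand-in for a real service):
FROM ubuntu:18.04
COPY html /home
# Keep PID 1 alive so the container doesn't exit immediately.
CMD ["sleep", "infinity"]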

How to create a Dockerfile so that container can run without an immediate exit

Official Docker images like MySQL can be run like this:
docker run -d --name mysql_test mysql/mysql-server:8.0.13
And it can run indefinitely in the background.
I want to create an image which does the same, specifically a Flask development server (just for testing), but my container exits immediately. My Dockerfile is like this:
FROM debian:buster
ENV TERM xterm
RUN XXXX # some apt-get and Python installation stuffs
ENTRYPOINT [ "flask", "run", "--host", "0.0.0.0:5000" ]
EXPOSE 80
EXPOSE 5000
USER myuser
WORKDIR /home/myuser
However, it exited immediately as soon as it was run. I also tried "bash" as an entrypoint just to make sure it isn't a Flask configuration issue, and it also exited.
How do I make it so that it runs as THE process in the container?
EDIT
OK, someone posted below (but later deleted) that the command to test is tail -f /dev/null, and with it the container does run indefinitely. I still don't understand why bash doesn't work as a process that doesn't exit (or does it?). But my Flask configuration is probably off.
EDIT 2
I see that running without the -d flag prints out stdout (and stderr), so I can diagnose the problem.
Let's clear things up.
In general, a container exits as soon as its entrypoint process finishes.
In your case, without being a Python expert, this ENTRYPOINT [ "flask", "run", "--host", "0.0.0.0:5000" ] should be enough to keep the container alive. But I guess you have some configuration error, and due to that error the container exited before the flask command could keep running. You can validate this by running docker ps -a and inspecting the exit code (possibly 1).
Let's now discuss the questions in your edits.
The key part of your misunderstanding derives from the -d flag.
You are right to think that setting bash as the entrypoint would be enough to keep the container alive, but you need to attach to that shell.
When running in detached mode (-d), the container will execute the bash command, but as soon as no one is attached to that shell, it will exit. In addition, using this flag prevents you from viewing the container logs live (however, you may use docker logs container_id to debug), which is very useful when you are in the early phase of setting things up. So I recommend using this flag only when you are sure that everything works as intended.
To attach to bash shell and keep container alive, you should use the -it flag so that the bash shell will be attached to the current shell invoking the docker run command.
-t : Allocate a pseudo-tty
-i : Keep STDIN open even if not attached
Please also consult official documentation about foreground vs background mode.
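For what it's worth, the configuration error guessed at above may simply be the --host value: flask run takes the host and the port as separate options, so "0.0.0.0:5000" is not a valid host to bind to. A sketch of the corrected line (assuming FLASK_APP is set to your application module):
ENTRYPOINT [ "flask", "run", "--host", "0.0.0.0", "--port", "5000" ]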
The answer to your edit is: when you do docker run <container> bash, it will literally run bash and exit 0, because the command (bash) completed successfully. Here bash isn't acting as an interactive shell; it's just a command that runs and returns.
If you ran docker run -it <container> tail -f /dev/null and then docker exec -it <container> /bin/bash, you'd drop into the shell, because that's the command you ran.
Your Dockerfile doesn't have a command to run in the background that is persistent. In mysql's case, it runs mysqld, which starts a server as PID 1.
When PID 1 exits, the container stops.
Your entrypoint is most likely failing to start, or starting and then exiting because of how your command is run.
I would try changing your entrypoint to a shell script that starts your process in the foreground, so you can see where it fails.

docker exits immediately after launching apache and neo4j

I have a script /init that launches apache and neo4j. This script is already in the image ubuntu:14. The following is the content of /init:
service apache2 start
service neo4j start
From this image, I am creating another image with the following dockerfile
FROM ubuntu:v14
EXPOSE 80 80
ENTRYPOINT ["/init"]
When I run the command docker run -d ubuntu:v15, the container starts and then exits. As far as I understood, the -d option runs the container in the background. Also, the script /init launches two daemons. Why does the container exit?
In fact, I think your first problem is the missing #! in the init file; if you did not add something like #!/bin/bash at the start, the container will complain like this:
shubuntu1#shubuntu1:~$ docker logs priceless_tu
standard_init_linux.go:207: exec user process caused "exec format error"
But even if you fix the above problem, you still can't keep your container running, for the same reason other folks gave: PID 1 should always be there. In your case, after the service xxx start commands finish, PID 1 exits, which also results in the container exiting.
So, to conquer this problem you should make sure one command never exits. A minimal workable example for your reference:
Dockerfile:
FROM ubuntu:14.04
RUN apt-get update && \
apt-get install -y apache2
COPY init /
RUN chmod +x /init
EXPOSE 80
ENTRYPOINT ["/init"]
init:
#!/bin/bash
# you can add other service start here
# e.g. service neo4j start as you like if you have installed it already
# next will make apache run in foreground, so PID1 not exit.
/usr/sbin/apache2ctl -DFOREGROUND
When your Dockerfile specifies an ENTRYPOINT, the lifetime of the container is exactly the length of whatever its process is. Generally the behavior of service ... start is to start the service as a background process and then return immediately; so your /init script runs the two service commands and completes, and now that the entrypoint process is completed, the container exits.
Generally accepted best practice is to run only one process in a container. That's especially true when one of the processes is a database. In your case there are standard Docker Hub Apache httpd and neo4j images, so I'd start by using an orchestration tool like Docker Compose to run those two containers side-by-side.
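As a sketch of that approach (the image tags and published ports here are assumptions; adjust them to your setup), a docker-compose.yml could look like:
version: "3"
services:
  web:
    image: httpd:2.4        # official Apache httpd image
    ports:
      - "80:80"
  neo4j:
    image: neo4j:3.5        # official neo4j image
    ports:
      - "7474:7474"         # HTTP browser interface
      - "7687:7687"         # Bolt protocol
Then docker-compose up -d starts both containers, each with its own long-lived foreground process.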

Can not run docker container?

I use docker build -t iot . to build an image.
My Dockerfile is:
FROM centos
USER root
ADD jdk1.8.0_101.tar.gz /root
COPY run.sh /etc
RUN chmod 755 /etc/run.sh
CMD "/etc/run.sh"
my run.sh is:
#!/bin/bash
echo "aaaa"
I use docker run -itd iot to run a container, but I find my container does not keep running.
What should I do?
Your image builds and runs correctly. You just need to remove the -d flag (detached) from docker run, or the docker command will return immediately and run your container in the background, so you never see its output. You can see that it actually exited with code zero according to the status column in docker ps -a.
You can corroborate this by running docker logs d63a (which is your container id). You should see aaaa.
Your description is inaccurate. When you docker run the container, it starts normally, prints aaaa, and then exits.
So I guess what you are really asking is "why can't my container keep running, like a daemon process?" This is because you're executing a shell script, which is a one-shot task. Modify the CMD line in your Dockerfile to CMD "bash"; since you run with -it, your container will then not exit.
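Alternatively, if the goal is a container that prints its message and then stays up, a sketch of a run.sh that keeps a foreground process as PID 1 (the tail trick is a placeholder for a real long-running service):
#!/bin/bash
echo "aaaa"
# exec replaces the shell; tail never exits, so neither does the container.
exec tail -f /dev/null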
