Launching gunicorn instances in a docker image, using docker run - docker

In my dockerfile, for a Flask app, I have a set of commands that work as planned.
The last line of my dockerfile is currently:
ENTRYPOINT [ "/bin/bash", "-c" ]
I need to launch some gunicorn instances for this image.
So, I run the following commands in the terminal, outside the image.
$ docker run -itd --name running_name -p 5000:5000 image_name bash
If I run without bash, the container just exits automatically after a few seconds...
$ docker container exec -it running_name /bin/bash -c bash
Now that I'm in, I launch the gunicorn instances and exit the container. Because of exec, the instances keep running.
Is there a way to launch the gunicorn instances from docker run, without having to enter into the container?
I've tried ENTRYPOINT [ "gunicorn", "--bind", "0.0.0.0:5000" ], but the container still exits automatically.
I've also tried substituting the last line with CMD gunicorn --bind 0.0.0.0:5000 and then running docker run -d --name run_name -p 5000:5000 image_name.
The container still exits automatically.
Edit: To reflect the possible answer below, here are my updated attempts and some extra information.
The following files are all at the same level of the directory structure.
In the api_docker.py file, I have:
app = Flask(__name__)
api = Api(app)
api.add_resource(<some_code>)
In the gunicorn.conf.py file, I have:
worker_class = "gevent"
workers = 2
timeout = 90
bind = "0.0.0.0:5000"
wsgi_app = "wsgi:app"
errorlog = "logging/error.log"
capture_output = True
loglevel = "debug"
daemon = True
enable_stdio_inheritance = True
preload = True
I've also tried removing the bind and wsgi_app rows from this file.
In the dockerfile:
<some_code>
CMD ["gunicorn", "--config", "gunicorn.conf.py", "--bind", "0.0.0.0:5000", "api_docker:app"]
I build successfully, and then I do:
docker run -d --name name_run -p 5000:5000 name_image

You need to give gunicorn a module to actually run, e.g. app:main for an app.py file with a main function. You should do this in the CMD, not the ENTRYPOINT, or from docker run, unless you plan on providing further gunicorn-related arguments when you actually run the image (run arguments or the CMD are appended to the ENTRYPOINT).
Or, you could use an existing image that already has these details for you - e.g. https://github.com/tiangolo/meinheld-gunicorn-flask-docker
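For example, assuming the Flask app object is named app inside the api_docker.py file from the question (adjust the module and object names to your project), the end of the Dockerfile might look like this sketch:

```dockerfile
# Sketch: run gunicorn in the foreground as the container's main process.
# "api_docker:app" assumes the Flask object is called "app" in api_docker.py.
EXPOSE 5000
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "api_docker:app"]
```

Because gunicorn then runs as the container's main process (not daemonized), the container stays alive as long as gunicorn does.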

To solve this issue, I did the following:
Removed the options daemon, enable_stdio_inheritance, and preload from the conf file.
I also increased the timeout and graceful timeout parameters to 120.
Gunicorn will look for a conf file and use the parameter values defined there, unless they are overridden on the CLI. Therefore, I just ran CMD ["gunicorn"].
I think the most important change was point 1, namely letting daemon fall back to False (the default). I'm not sure why, but I would guess that as a daemon process, gunicorn detaches from the container's main process, so docker no longer sees a running foreground process and just exits.
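For reference, the adjusted gunicorn.conf.py would then look roughly like this. This is a sketch based on the values above; the wsgi_app module name api_docker:app is inferred from the question and may differ in your project:

```python
# gunicorn.conf.py -- adjusted so gunicorn stays in the foreground.
# daemon, enable_stdio_inheritance and preload are deliberately left out,
# so they fall back to their defaults (daemon defaults to False).
worker_class = "gevent"
workers = 2
timeout = 120
graceful_timeout = 120
bind = "0.0.0.0:5000"
wsgi_app = "api_docker:app"
errorlog = "logging/error.log"
capture_output = True
loglevel = "debug"
```

With this file next to the app, CMD ["gunicorn"] alone is enough, since gunicorn picks up gunicorn.conf.py from the working directory by default.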

Related

Docker on windows can mount a folder for nginx container but not for ubuntu

I am building an image from this docker file for NGinx
FROM nginx
COPY html /usr/share/nginx/html
I then run the container using this command
docker run -v /C/nginx/html:/usr/share/nginx/html -p 8081:80 -d --name cntr-mynginx mynginx:abc
This works and I am able to mount the folder and the changes made in the html folder on the host can be seen when within the container file system. The edits made on the container filesystem under the /usr/share/nginx/html folder are visible on the host as well.
Why does the same not work when I use an Ubuntu base? This is the docker file for the Ubuntu container I am trying to spin up.
FROM ubuntu:18.04
COPY html /home
I used this command to run it
docker run -v /C/ubuntu-only/html:/home -p 8083:8080 --name cntr-ubuntu img-ubuntu:abc
The command above runs and when I do a docker ps -a, I see that the container stopped as soon as it started.
I removed the copy of the html and made the ubuntu image even smaller by keeping just the first line, FROM ubuntu:18.04, and even then I get the same result: the container exits almost as soon as it starts. Any idea why this works for NGINX but not for Ubuntu, and what do I need to do to make it work?
The issue you are experiencing does not have to do with mounting a directory into your container.
The command above runs and when I do a docker ps -a, I see that the container stopped as soon as it started.
The container is exiting due to the fact that there is no process being specified for it to run.
In the NGINX case, you can see that a CMD instruction is set at the end of the Dockerfile.
CMD ["nginx", "-g", "daemon off;"]
This starts NGINX as a foreground process, and prevents the container from exiting immediately.
The Ubuntu Dockerfile is different in that it specifies bash as the command the container will run at start.
CMD ["/bin/bash"]
Because bash has no terminal attached here and nothing to read, it exits immediately, and the container exits with it.
Try augmenting your docker run command to include a process that stays in the foreground, like sleep.
docker run -v /C/ubuntu-only/html:/home -p 8083:8080 --name cntr-ubuntu img-ubuntu:abc sleep 9000
If you run docker exec -it cntr-ubuntu /bin/bash you should find yourself inside the container and verify that the mounted directory is present.
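If you want the container to stay up without passing a command at run time, you could instead bake a long-running foreground process into the image itself. A minimal sketch (sleep here is just a stand-in for a real service):

```dockerfile
FROM ubuntu:18.04
COPY html /home
# Keep a foreground process running so the container doesn't exit immediately.
CMD ["sleep", "infinity"]
```

In practice you would replace sleep with the actual service the container is meant to run, just as the NGINX image runs nginx in the foreground.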

Docker bind-mount not working as expected within AWS EC2 Instance

I have created the following Dockerfile to run a spring-boot app: myapp within an EC2 instance.
# Use an official java runtime as a parent image
FROM openjdk:8-jre-alpine
# Add a user to run our application so that it doesn't need to run as root
RUN adduser -D -s /bin/sh myapp
# Set the current working directory to /home/myapp
WORKDIR /home/myapp
#copy the app to be deployed in the container
ADD target/myapp.jar myapp.jar
#create a file entrypoint-dos.sh and put the project entrypoint.sh content in it
ADD entrypoint.sh entrypoint-dos.sh
#Get rid of windows characters and put the result in a new entrypoint.sh in the container
RUN sed -e 's/\r$//' entrypoint-dos.sh > entrypoint.sh
#set the file as an executable and set myapp as the owner
RUN chmod 755 entrypoint.sh && chown myapp:myapp entrypoint.sh
#set the user to use when running the image to myapp
USER myapp
# Make port 9010 available to the world outside this container
EXPOSE 9010
ENTRYPOINT ["./entrypoint.sh"]
Because I need to access myapp's logs from the EC2 host machine, I want to bind-mount a folder into the logs folder sitting within the "myapp" container here: /home/myapp/logs
This is the command that I use to run the image in the ec2 console:
docker run -p 8090:9010 --name myapp myapp:latest -v home/ec2-user/myapp:/home/myapp/logs
The container starts without any issues, but the mount is not achieved as noticed in the following docker inspect extract:
...
"Mounts": [],
...
I have tried the followings actions but ended up with the same result:
--mount type=bind instead of -v
use volumes instead of bind-mount
I have even tried the --privileged option
In the Dockerfile: I tried to use the USER root instead of myapp
I believe this has nothing to do with the ec2 machine but with my container, since running other containers with bind-mounts on the same host works like a charm.
I am pretty sure I am messing something up in my Dockerfile.
But what am I doing wrong in that Dockerfile?
or
What am I missing out on?
Here you have the entrypoint.sh if needed:
#!/bin/sh
echo "The app is starting ..."
exec java ${JAVA_OPTS} -Djava.security.egd=file:/dev/./urandom -jar -Dspring.profiles.active=${SPRING_ACTIVE_PROFILES} "${HOME}/myapp.jar" "$@"
I think the issue might be the order of the options on the command line. Docker expects the last two arguments to be the image id/name and (optionally) a command/args to run as pid 1.
https://docs.docker.com/engine/reference/run/
The basic docker run command takes this form:
$ docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]
You have the mount option (-v in the example you provided) after the image name (myapp:latest). I'm not sure, but perhaps the -v ... is being interpreted as arguments to be passed to your entrypoint script (and ignored), so docker run isn't seeing it as a mount option.
Also, the source of the mount here (home/ec2-user/myapp) doesn't start with a leading forward slash (/), which, I believe, will make it relative to where the docker run command is executed from. You should make sure the source path starts with a forward slash (i.e. /home/ec2-user/myapp) so that you're sure it will always mount the directory you expect. I.e. -v /home/ec2-user...
Have you tried this order:
docker run -p 8090:9010 --name myapp -v /home/ec2-user/myapp:/home/myapp/logs myapp:latest

How to create a Dockerfile so that container can run without an immediate exit

Official Docker images like MySQL can be run like this:
docker run -d --name mysql_test mysql/mysql-server:8.0.13
And it can run indefinitely in the background.
I want to try to create an image which does the same, specifically a Flask development server (just for testing), but my container exits immediately. My Dockerfile is like this:
FROM debian:buster
ENV TERM xterm
RUN XXXX # some apt-get and Python installation stuffs
ENTRYPOINT [ "flask", "run", "--host", "0.0.0.0:5000" ]
EXPOSE 80
EXPOSE 5000
USER myuser
WORKDIR /home/myuser
However, it exits immediately as soon as it is run. I also tried "bash" as an entry point, just to make sure it isn't a Flask configuration issue, and it also exited.
How do I make it so that it runs as THE process in the container?
EDIT
OK, someone posted below (but later deleted) that the command to test is tail -f /dev/null, and with it the container does run indefinitely. I still don't understand why bash doesn't work as a process that doesn't exit (does it?). But my flask configuration is probably off.
EDIT 2
I see that running without the -d flag prints out the stdout (or stderr), so I can diagnose the problem.
Let's clear things up.
In general, a container exits as soon as its entrypoint process exits.
In your case, without being a python expert, this ENTRYPOINT [ "flask", "run", "--host", "0.0.0.0:5000" ] should be enough to keep the container alive. But I guess you have some configuration error, and due to that error the container exits before the flask command can run. You can validate this by running docker ps -a and inspecting the exit code (possibly 1).
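One likely candidate, assuming the ENTRYPOINT shown in the question: flask run takes the host and port as separate options, so "0.0.0.0:5000" is not a valid value for --host. A corrected sketch:

```dockerfile
# Sketch: host and port are passed to "flask run" as separate options.
ENTRYPOINT ["flask", "run", "--host", "0.0.0.0", "--port", "5000"]
```

Running without -d, as noted in Edit 2, will print flask's error output and confirm whether this is the failure.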
Let's now discuss the questions in your edits.
The key part of your misunderstanding derives from the -d flag.
You are right to think that setting bash as the entrypoint would be enough to keep the container alive, but you need to attach to that shell.
When running in detached mode (-d), the container will execute the bash command, but as soon as no one is attached to that shell, it will exit. In addition, using this flag prevents you from viewing the container logs live (however, you may use docker logs container_id to debug), which is very useful when you are in an early phase of setting things up. So I recommend using this flag only when you are sure that everything works as intended.
To attach to the bash shell and keep the container alive, you should use the -it flags so that the bash shell is attached to the current shell invoking the docker run command.
-t : Allocate a pseudo-tty
-i : Keep STDIN open even if not attached
Please also consult official documentation about foreground vs background mode.
The answer to your edit is: when you do docker run <container> bash, it will literally run bash, which exits immediately with status 0 because there is no terminal attached to keep it alive. Here bash is just a command like any other.
If you ran docker run -it <container> tail -f /dev/null and then docker exec -it <container> /bin/bash, you'd drop into the shell, because that's the command you ran.
Your Dockerfile doesn't have a persistent command to run; in mysql's case, it runs mysqld, which starts a server as PID 1.
When PID 1 exits, the container stops.
Your entrypoint is most likely failing to start, or starting and then exiting because of how your command runs.
I would try changing your entrypoint to a long-running foreground command, such as the tail -f /dev/null mentioned above, to confirm the container itself stays up.

Right way to use ENTRYPOINT to enable container start and stop

I have a custom image built using a Dockerfile. A fresh run works fine; however, when I stop the container and start it again, it doesn't start and remains in the Exit 0 state.
The image is composed of apache2 and a bunch of php modules for a symfony web application.
This is how the Dockerfile ends:
RUN a2enmod rewrite
CMD service apache2 restart
ENTRYPOINT ["/usr/sbin/apache2ctl"]
CMD ["-D", "FOREGROUND"]
EXPOSE 80
I see containers commonly using docker-entrypoint.sh but unsure of what goes in and the role it plays.
The entrypoint shouldn't have anything to do with your container not restarting. Your problem is most likely elsewhere and you need to look at the logs from the container to debug. The output of docker diff ... may also help to see what has changed in the container filesystem.
If an ENTRYPOINT isn't defined, docker runs the CMD by default. If an ENTRYPOINT is defined, anything in CMD becomes a cli argument to the entrypoint script. So in your above example, it will start (or restart) the container with /usr/sbin/apache2ctl -D FOREGROUND. Anything you append after the container name in the docker run command will override the value of CMD. And you can override the value of the ENTRYPOINT with docker run --entrypoint ....
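To illustrate how the pieces combine (the image name myimage below is hypothetical):

```shell
# ENTRYPOINT + CMD from the Dockerfile above:
docker run myimage                    # runs: /usr/sbin/apache2ctl -D FOREGROUND
# Arguments after the image name replace CMD:
docker run myimage -t                 # runs: /usr/sbin/apache2ctl -t (config test)
# --entrypoint replaces the ENTRYPOINT itself:
docker run --entrypoint /bin/bash -it myimage
```

Note that the earlier CMD service apache2 restart line in the Dockerfile is simply discarded, since only the last CMD takes effect.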
See Docker's documentation on the entrypoint option for more details.

Docker passing environment variable

Here is my Dockerfile; it has a default ENV, which I would like to override upon container deployment if specified.
ENV domain example.com
CMD ["cd","/etc/httpd/conf.d/"]
CMD [ "cp", "VirtualHost", "${domain}" ]
However, when passing the ENV using the -e option:
docker run -it -e domain="test.com" container_id
I'm able to log in to the container, and echo $domain displays the ENV that was passed; however, the copy command didn't copy the file.
Any ideas on what possibly I'm doing wrong?
Thanks
You can't have two CMD lines; the second one simply overrides the first. But in your case, I think you want to use the WORKDIR instruction to set the directory, not CMD, e.g.:
WORKDIR /etc/httpd/conf.d/
This should set the current directory for all following instructions and container start-up.
BTW I'm not sure how you're logging into this container - when you run it, the cp command will fire and the container will exit once it has completed. If you override the CMD to get a shell (for example with docker run -it mycontainer bash) the cp command will never be executed.
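One more detail worth noting (a sketch, and an aside to the answer above): the exec form of CMD (the JSON-array form) does not perform environment variable substitution, so ${domain} would never be expanded. Using the shell form lets the variable be resolved at run time:

```dockerfile
ENV domain example.com
WORKDIR /etc/httpd/conf.d/
# Shell form: run through /bin/sh -c, so ${domain} is expanded at runtime,
# including any value supplied with "docker run -e domain=...".
CMD cp VirtualHost "${domain}"
```

The equivalent exec form would be CMD ["sh", "-c", "cp VirtualHost \"${domain}\""].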