NGINX Docker container won't stay running? - docker

I've Googled and looked through several answers on SO, but nothing I'm trying seems to work.
I have a Dockerfile which downloads PHP 7 (cli+fpm) and installs NGINX as a final step, with this command in an attempt to keep the container running:
RUN apt-get update && apt-get -y install nginx
CMD ["service","nginx", "start", "-g", "daemon off;"]
What am I not understanding about containers? I previously used the PHP binary itself as the web server; the final command would fire up the built-in server, the container stayed running, and everything worked great.
NGINX exits with code 0?
Thoughts?

Try
CMD ["nginx", "-g", "daemon off;"]
When you run service nginx start, the command is responsible only for starting the service. After starting nginx, the command finishes its job successfully, with exit code 0.
As a result the container exits, since its main process exited.
You can see this by running
docker logs container_name
The logs will end with the following line:
Starting nginx: nginx.
Instead, if you run the proposed command, it will initiate the nginx process in the foreground, without exiting.
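For reference, a minimal Dockerfile along these lines might look like the sketch below (the base image and package names are illustrative and may differ for your distribution):

```dockerfile
FROM debian:bookworm-slim

# install nginx plus PHP CLI and FPM (package names may vary by release)
RUN apt-get update && \
    apt-get install -y nginx php-cli php-fpm && \
    rm -rf /var/lib/apt/lists/*

# run nginx in the foreground so it stays PID 1 and keeps the container alive
CMD ["nginx", "-g", "daemon off;"]
```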

Related

Trying to run nginx image with php-fpm

I'm just learning Docker and have a problem with running nginx together with php. This is my Dockerfile
FROM nginx
RUN apt-get update -y
RUN apt-get install php7.4-fpm -y
ADD start.sh /
RUN chmod +x /start.sh
CMD ["/start.sh"]
start.sh content:
#!/bin/bash
service php7.4-fpm start
nginx -g 'daemon off;'
If I omit the last line CMD ["/start.sh"], accessing files from the host machine works, but PHP files aren't processed because php7.4-fpm is not running. But when I add this line, nginx stops serving any files, though I can confirm that nginx and php are running inside the container with docker exec nginx-custom service nginx status and docker exec nginx-custom service php7.4-fpm status. The nginx error log is empty.
This is the CMD of the original nginx image, which I thought was the only thing that gets overwritten? I guess I have some basic problems in understanding how Docker works at this point.
CMD ["nginx", "-g", "daemon off;"]

Docker on Windows can mount a folder for an nginx container but not for Ubuntu

I am building an image from this Dockerfile for NGINX:
FROM nginx
COPY html /usr/share/nginx/html
I then run the container using this command:
docker run -v /C/nginx/html:/usr/share/nginx/html -p 8081:80 -d --name cntr-mynginx mynginx:abc
This works: I am able to mount the folder, and changes made in the html folder on the host can be seen from within the container filesystem. Edits made on the container filesystem under /usr/share/nginx/html are visible on the host as well.
Why does the same not work when I use an Ubuntu base? This is the Dockerfile for the Ubuntu container I am trying to spin up.
FROM ubuntu:18.04
COPY html /home
I used this command to run it
docker run -v /C/ubuntu-only/html:/home -p 8083:8080 --name cntr-ubuntu img-ubuntu:abc
The command above runs, and when I do a docker ps -a, I see that the container stopped as soon as it started.
I removed the copy of the html and made the Ubuntu image even smaller by keeping just the first line, FROM ubuntu:18.04, and even then I get the same result: the container exited almost as soon as it started. Any idea why this works for NGINX but not for Ubuntu, and what do I need to do to make it work?
The issue you are experiencing does not have to do with mounting a directory into your container.
The command above runs and when I do a docker ps -a, I see that the container stopped as soon as it started.
The container is exiting due to the fact that there is no process being specified for it to run.
In the NGINX case, you can see that a CMD instruction is set at the end of the Dockerfile.
CMD ["nginx", "-g", "daemon off;"]
This starts NGINX as a foreground process, and prevents the container from exiting immediately.
The Ubuntu Dockerfile is different in that it specifies bash as the command the container will run at start.
CMD ["/bin/bash"]
Because the container is not run interactively (there is no -it flag, so bash has no terminal attached), bash exits immediately, and the container exits with it.
Try augmenting your docker run command to include a process that stays in the foreground, like sleep.
docker run -v /C/ubuntu-only/html:/home -p 8083:8080 --name cntr-ubuntu img-ubuntu:abc sleep 9000
If you run docker exec -it cntr-ubuntu /bin/bash you should find yourself inside the container and verify that the mounted directory is present.
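The underlying rule can be sketched outside Docker, too: a shell standing in for PID 1 exits as soon as its command returns, even if that command left children running in the background (the sleep durations here are arbitrary):

```shell
# Backgrounding a command returns immediately: a PID-1 process that only
# does this exits right away, taking the container with it.
start=$(date +%s)
sh -c 'sleep 5 &'
end=$(date +%s)
echo "background start returned after $((end - start))s"

# A foreground command holds PID 1 (and thus the container) open until it
# finishes: this is what 'nginx -g "daemon off;"' or 'sleep 9000' do.
sh -c 'sleep 1'
echo "foreground command returned only after it finished"
```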

docker exit immediately after launching apache and neo4j

I have a script /init that launches apache and neo4j. This script is already in the image ubuntu:14. The following is the content of /init:
service apache2 start
service neo4j start
From this image, I am creating another image with the following dockerfile
FROM ubuntu:v14
EXPOSE 80 80
ENTRYPOINT ["/init"]
When I run the command docker run -d ubuntu:v15, the container starts and then exits. As far as I understood, the -d option runs the container in the background. Also, the script /init launches two daemons. Why does the container exit?
In fact, I think your first problem is the missing #! (shebang) in the init file: if you did not add something like #!/bin/bash at the start, the container will complain like this:
shubuntu1#shubuntu1:~$ docker logs priceless_tu
standard_init_linux.go:207: exec user process caused "exec format error"
But even if you fix the above problem, you still can't keep your container running. The reason is the same as the other folks said: PID 1 should always be there; in your case, after service xxx start finishes, PID 1 exits, which also results in the container exiting.
So, to solve this problem, you should have one command that never exits. A minimal workable example for your reference:
Dockerfile:
FROM ubuntu:14.04
RUN apt-get update && \
apt-get install -y apache2
COPY init /
RUN chmod +x /init
EXPOSE 80
ENTRYPOINT ["/init"]
init:
#!/bin/bash
# you can add other service start here
# e.g. service neo4j start as you like if you have installed it already
# next will make apache run in foreground, so PID1 not exit.
/usr/sbin/apache2ctl -DFOREGROUND
When your Dockerfile specifies an ENTRYPOINT, the lifetime of the container is exactly the length of whatever its process is. Generally the behavior of service ... start is to start the service as a background process and then return immediately; so your /init script runs the two service commands and completes, and now that the entrypoint process is completed, the container exits.
Generally accepted best practice is to run only one process in a container. That's especially true when one of the processes is a database. In your case there are standard Docker Hub Apache httpd and neo4j images, so I'd start by using an orchestration tool like Docker Compose to run those two containers side-by-side.
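As a sketch of that approach (the image tags and port mappings below are illustrative placeholders, not values from the question), a Compose file might look like:

```yaml
# docker-compose.yml: run Apache httpd and Neo4j as separate containers,
# each with its image's own foreground process as PID 1
services:
  web:
    image: httpd:2.4
    ports:
      - "80:80"
  neo4j:
    image: neo4j:4.4
    ports:
      - "7474:7474"   # HTTP browser interface
      - "7687:7687"   # Bolt protocol
```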

Right way to use ENTRYPOINT to enable container start and stop

I have a custom image built using a Dockerfile. A fresh run works fine; however, when I stop the container and start it again, it doesn't start and remains in the state Exit 0.
The image is composed of apache2 and bunch of php modules for symfony web application.
This is how the Dockerfile ends:
RUN a2enmod rewrite
CMD service apache2 restart
ENTRYPOINT ["/usr/sbin/apache2ctl"]
CMD ["-D", "FOREGROUND"]
EXPOSE 80
I see containers commonly using a docker-entrypoint.sh script, but I'm unsure of what goes in it and the role it plays.
The entrypoint shouldn't have anything to do with your container not restarting. Your problem is most likely elsewhere and you need to look at the logs from the container to debug. The output of docker diff ... may also help to see what has changed in the container filesystem.
If an ENTRYPOINT isn't defined, docker runs the CMD by default. If an ENTRYPOINT is defined, anything in CMD becomes a command-line argument to the entrypoint. So in your example above, the container starts (or restarts) with /usr/sbin/apache2ctl -D FOREGROUND. Anything you append after the container name in the docker run command overrides the value of CMD, and you can override the ENTRYPOINT itself with docker run --entrypoint ....
See Docker's documentation on the entrypoint option for more details.
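As for what goes in a docker-entrypoint.sh: a typical pattern is one-time setup followed by exec "$@", which hands off to whatever CMD supplies so that process becomes PID 1 and receives signals. A minimal sketch, with the setup step as a placeholder and simulated outside Docker:

```shell
# create a minimal entrypoint script (illustrative)
cat > /tmp/docker-entrypoint.sh <<'EOF'
#!/bin/sh
set -e
# one-time setup would go here, e.g. templating config files
echo "entrypoint: setup done"
# replace this shell with the CMD so it runs as PID 1
exec "$@"
EOF
chmod +x /tmp/docker-entrypoint.sh

# simulate ENTRYPOINT ["/docker-entrypoint.sh"] + CMD ["echo", "hello from CMD"]
/tmp/docker-entrypoint.sh echo "hello from CMD"
```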

What's wrong with this dockerfile

What's wrong with my dockerfile?
The Dockerfile is in the root folder of my repo, and so is the dist folder.
FROM nginx
# copy folder
COPY dist /usr/share/nginx/html
EXPOSE 8080
CMD ["nginx"]
I build the image:
docker build -f Dockerfile.nginx -t localhost:5000/test/image:${version} .
The image is there after running docker images.
It looks so simple but when I try to run the image as a container:
docker run -d -p 80:8080 localhost:5000/test/image:15
545445f961f4ec22becc0688146f3c73a41504d65467020a3e572d136354e179
But: Exited (0) About a minute ago
docker logs shows nothing.
Default nginx behaviour is to run as a daemon. To prevent this, run nginx with the daemon off directive:
CMD ["nginx", "-g", "daemon off;"]
By default, Nginx will fork into the background, and since the original foreground process has terminated, the Docker container will stop immediately. You can have a look at how the original image's Dockerfile handles this:
CMD ["nginx", "-g", "daemon off;"]
The flag -g "daemon off;" causes Nginx to not fork, but continue running in the foreground, instead. And since you're already extending the official nginx image, you can drop your CMD line altogether, as it will be inherited from the base image, anyway.
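Putting that together, the Dockerfile could shrink to something like the following sketch (note also that the stock nginx image listens on port 80, so EXPOSE 8080 is likely not what you want):

```dockerfile
FROM nginx
# copy the built site into the image's web root
COPY dist /usr/share/nginx/html
# no CMD needed: CMD ["nginx", "-g", "daemon off;"] is inherited from the base
# image; it listens on port 80, so run with e.g. -p 8080:80
```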
