Run uwsgi after the official nginx container starts? - docker

I just wrote a customized Dockerfile, based on the official nginx image, that includes CMD ["uwsgi", "--ini", "uwsgi.ini"].
And I see there's a CMD ["nginx", "-g", "daemon off;"] at the end of the Dockerfile of this official nginx image.
That should mean nginx starts when the container starts.
So the CMD ["uwsgi", "--ini", "uwsgi.ini"] in my Dockerfile will override it, and thus the container will exit immediately.
How can I avoid overriding it and make both nginx and uwsgi work?
I've googled a lot, but none of the solutions I found are based on the official nginx image.
Obviously I could run another container just for uwsgi and connect it to the nginx container (i.e. the container run from the official nginx image), but I think that's troublesome and unnecessary.
The official nginx image is here.

You can use ENTRYPOINT or CMD to run multiple processes inside a container by feeding it a shell script/wrapper. You should try to refrain from this, though, since it isn't a best practice: a single container should be responsible for managing a single process.
However, there is a workaround by which you can manage multiple processes inside a container, i.e. using a shell script wrapper or a supervisor.
It's in the official docs -
https://docs.docker.com/config/containers/multi-service_container/
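For example, a minimal wrapper script in the spirit of those docs (a sketch only; the uwsgi.ini path is taken from the question, and the file name wrapper.sh is an assumption):

#!/bin/bash
# Start nginx in the background; 'daemon off;' keeps it a child of this shell.
nginx -g 'daemon off;' &
# Start uwsgi in the background as well.
uwsgi --ini uwsgi.ini &
# Exit as soon as either process exits, so the container stops on failure.
wait -n
exit $?

You would then COPY wrapper.sh into the image, make it executable, and point CMD at it.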

First, it is not the Docker philosophy to run two processes in one container.
This is a commonly accepted principle, both officially and throughout the community.
So you'd be better off building a stack, with an nginx container and your application container.
Provided you really want or need to do this your way, you can chain several commands within the CMD instruction if you use the shell form... but you can also use a script here.
Remember that the script will be executed from within your container, so think from a container point of view, not from a host one!
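For instance, a shell-form CMD could chain the two (a sketch; note that a bare nginx daemonizes by default, so uwsgi then runs in the foreground and becomes the process that keeps the container alive):

CMD nginx && uwsgi --ini uwsgi.ini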

Related

How to understand Container states

I am trying to understand the life cycle of a container. I downloaded the alpine image and built containers using the "docker container run" command; all of those containers ran and are now in "Exited" status. When using the "docker container start" command, some of the containers stay in Up (running) status and some exit immediately. Any thoughts on why the behavior around statuses differs? One difference I observed is that the containers staying in Up status have a file structure modified with respect to the base image.
I hope I was able to put the scenario in proper context. Help me understand the concept.
The long sequence is as follows:
You docker create a container with its various settings. Some settings may be inherited from the underlying image. It is in a "created" status; its filesystem exists but nothing is running.
You docker start the container. If the container has an entrypoint (Dockerfile ENTRYPOINT directive, docker create --entrypoint option) then that entrypoint is run, taking the command as arguments; otherwise the command (Dockerfile CMD directive, any options after the docker create image name) is run directly. This process gets process ID 1 in the container and the rights and responsibilities that go along with that. The container is in "running" status.
The main process exits, or an administrator explicitly docker stops it. The container is in "exited" status.
Optionally you can restart a stopped container (IME this is unusual though); go to step 2.
You docker rm the stopped container. Anything in the container filesystem is permanently lost, and it no longer shows up in docker ps -a or anywhere else.
Typically you'd use docker run to combine these steps together. docker run on its own does the first two steps together (creates a container and then starts it). If you docker run --rm it does everything listed above.
(All of these commands are identical to the docker container ... commands, but I'm used to the slightly shorter form.)
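For example, the same life cycle driven by hand (the container name and command here are arbitrary):

docker create --name demo alpine sleep 30   # "created": filesystem exists, nothing runs
docker start demo                           # "running": sleep is process ID 1
docker stop demo                            # "exited" (or just wait 30 seconds)
docker rm demo                              # removed: filesystem gone, absent from docker ps -a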
The key point here is that there is some main process that the container runs. Typically this is some sort of daemon or server process, and generally specified in the image's Dockerfile. If you, for example, docker run ... nginx, then its Dockerfile ends with
CMD ["nginx", "-g", "daemon off;"]
and that becomes the main container process.
In early exploration it's pretty common to just run some base distribution image (docker run --rm -it alpine), but that's not really interesting: the end of the life cycle sequence is removing the container, and once you do that, everything in the container is lost. In standard use you'd want to use a Dockerfile to build a custom image, and there's a pretty good Docker tutorial on the subject.
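A first custom image can be as small as this sketch (the CMD here is a stand-in for whatever daemon or server your container should actually run):

FROM alpine
CMD ["sleep", "infinity"]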

Deploying Spring Boot App in Docker container without auto starting Tomcat

I was under the impression that including the line
CMD ["catalina.sh", "run"]
in the Dockerfile is what triggers the Tomcat server to start.
I've removed that line, but it still starts the server on deployment. I basically want to run catalina.sh run and set CATALINA_OPTS in a Kubernetes deployment to handle this stuff, but Tomcat still starts automatically when I deploy to a container.
A Docker image usually has an entry point or a command already.
If you create your custom image based on another image from a registry, which is a common case, you can override the base image's entry point by specifying the ENTRYPOINT or the CMD directive.
If you omit the entry point in your Dockerfile, it will be inherited from the base image.
Consider reading Dockerfile: ENTRYPOINT vs CMD article to have a better understanding of how it works.
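As a sketch of the mechanics (the base image tag and the idle command are assumptions): a child image that overrides CMD replaces the inherited CMD ["catalina.sh", "run"], so nothing starts Tomcat until you run it explicitly.

FROM tomcat:9
# Replace the inherited CMD so Tomcat does not start automatically.
CMD ["tail", "-f", "/dev/null"]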

nginx and custom Jar in a docker container

I am trying to have a custom Java application and nginx run in the same Docker container. nginx acts as a reverse proxy here and forwards requests to the Java application, so:
outside world -> { nginx -> application } (docker).
How do I set this up?
First, I would separate the proxy from the Java executable, as @jonrsharpe suggested. Just use an official nginx image for that, in another container.
Then writing the Dockerfile is pretty straightforward (see the sketch after this list):
choose a base image (the official Java image is probably your best choice)
install dependencies and copy the artefact into the container
expose the port(s) your jar is going to use
have the ENTRYPOINT execute your jar with the relevant options, or use something like supervisord
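A minimal sketch of such a Dockerfile (the base image tag, jar name, and port are assumptions):

FROM eclipse-temurin:17-jre
# Copy the built artefact into the container.
COPY target/app.jar /app.jar
# Document the port the jar listens on.
EXPOSE 8080
# Run the jar as the container's main process.
ENTRYPOINT ["java", "-jar", "/app.jar"]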
EDIT:
If you need to package both applications in the same container, as you mention, using supervisord as the entrypoint is almost mandatory.
Docker containers exit once the process with PID 1 dies/exits. You cannot have both java and nginx as PID 1, so you risk having a working proxy with no jar running, or a jar running without a proxy.
Here is where supervisord comes in handy: you can add both applications to it and have the container exit as soon as one of the apps terminates.
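A supervisord.conf sketch (the program names and jar path are assumptions; nodaemon=true keeps supervisord in the foreground as PID 1):

[supervisord]
nodaemon=true

[program:nginx]
command=nginx -g "daemon off;"

[program:app]
command=java -jar /app.jar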

Start service using systemctl inside docker container

In my Dockerfile I am trying to install multiple services and want to have them all start up automatically when I launch the container.
One of the services is mysql, and when I launch the container I don't see the mysql service starting up. When I try to start it manually, I get the error:
Failed to get D-Bus connection: Operation not permitted
Dockerfile:
FROM centos:7
RUN yum -y install mariadb mariadb-server
COPY start.sh start.sh
CMD ["/bin/bash", "start.sh"]
My start.sh file:
service mariadb start
Docker build:
docker build --tag="pbellamk/mariadb" .
Docker run:
docker run -it -d --privileged=true pbellamk/mariadb bash
I have checked the centos:systemd image and that doesn't help either. How do I launch the container with the services started, using systemctl/service commands?
When you do docker run with bash as the command, the init system (e.g. systemd) doesn't get started (nor does your start script, since the command you pass overrides the CMD in the Dockerfile). Try changing the command you use to /sbin/init, start the container in daemon mode with -d, and then look around in a shell using docker exec -it <container id> sh.
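Concretely, with the image from the question, that would look something like:

docker run -d --privileged=true pbellamk/mariadb /sbin/init
docker exec -it <container id> sh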
Docker is designed around the idea of a single service/process per container. Although it definitely supports running multiple processes in a container, and in no way stops you from doing that, you will eventually run into areas where multiple services in a container don't quite map to what Docker or external tools expect. Things like scaling of services, or using Docker Swarm across hosts, only support the concept of one service per container.
Docker Compose allows you to compose multiple containers into a single definition, which means you can use more of the standard, prebuilt containers (httpd, mariadb) rather than building your own. Compose definitions map to Docker Swarm services fairly easily. Also look at Kubernetes and Marathon/Mesos for managing groups of containers as a service.
Process management in Docker
It's possible to run systemd in a container, but it requires --privileged access to the host and the /sys/fs/cgroup volume mounted, so it may not be the best fit for most use cases.
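The usual invocation looks something like this (a sketch; the read-only cgroup mount is the commonly documented form):

docker run -d --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro pbellamk/mariadb /usr/sbin/init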
The s6-overlay project provides a more docker friendly process management system using s6.
It's fairly rare you actually need ssh access into a container, but if that's a hard requirement then you are going to be stuck building your own containers and using a process manager.
You can avoid running a systemd daemon inside a docker container altogether. You can even avoid writing a special start.sh script - that is another benefit of using the docker-systemctl-replacement script.
Its systemctl.py can parse the normal *.service files to know how to start and stop services. You can register it as the CMD of an image, in which case it will look for all the systemctl-enabled services - those will be started and stopped in the correct order.
The current test suite includes test cases for the LAMP stack, including CentOS, so it should run fine specifically in your setup.
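A Dockerfile sketch of that approach (the script location and exact usage are assumptions; check the project's README for the canonical form):

FROM centos:7
RUN yum -y install mariadb-server
# The replacement script stands in for the real systemctl.
COPY systemctl.py /usr/bin/systemctl
RUN systemctl enable mariadb
# As CMD, it starts all enabled services and keeps the container alive.
CMD ["/usr/bin/systemctl"]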
I found this project:
https://github.com/defn/docker-systemd
which can be used to create an image based on the stock ubuntu image but with systemd and multiuser mode.
My use case is the first one mentioned in its README. I use it to test the installer script of my application, which is installed as a systemd service. The installer creates a systemd service, then enables and starts it. I need CI tests for the installer. The test should build the installer, install the application on an Ubuntu container, and connect to the service from outside.
Without systemd the installer would fail, and it would be much more difficult to write the test with Vagrant. So, there are valid use cases for systemd in Docker.

Health Check command for docker(1.12) container (Not in Dockerfile!)

Docker Version 1.12,
I got a Dockerfile from here:
FROM nginx:latest
RUN touch /marker
ADD ./check_running.sh /check_running.sh
RUN chmod +x /check_running.sh
HEALTHCHECK --interval=5s --timeout=3s CMD ./check_running.sh
I'm able to roll updates and health checks with the check_running.sh shell script. Here, the check_running.sh script is copied into the image, so the launched container has it.
Now, my question: is there any way to health check from outside the container, with the script also located outside?
I'm expecting a health check command to report on the container's performance (depending on what we wrote in the script); if the container is not performing well, it should roll back to the previous version (a kind of process that monitors the containers and, if one is not healthy, rolls it back to the previous version).
Thanks
is there any way to health check from outside the container, with the script also located outside?
a kind of process that monitors the containers and, if one is not healthy, rolls it back to the previous version
You have several options:
From outside, you run a process inside the container to check its health with docker exec. This could be any sequence of shell commands. If you want to keep your scripts outside of the container, you might use something like cat script.sh | docker exec -i container sh -s (note -i without -t, since stdin is a pipe rather than a terminal).
You check the container's health from outside the container, e.g. by looking for a process that should be running inside it (try setting a security profile and using ps -Zax, or try looking for children of the daemon), or you give each container a specific user ID with --user 12345 and then look for that, or you e.g. connect to its services. You'd have to make sure you're looking at the right container. You can access a container's filesystem below /var/lib/docker/devicemapper/mnt/<hash>/rootfs.
You run a HEALTHCHECK inside the container and check its health with docker inspect --format='{{json .State.Health.Status}}' <containername>, combined with e.g. a line in the Dockerfile:
HEALTHCHECK CMD wget -q -s http://some.host to check that the container has internet access.
I'd recommend option 3, because it's likely to be more compatible with other tools in the future.
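The monitoring side of that could be as simple as a loop on the host (a sketch; the rollback action itself is whatever your deployment tooling provides):

while true; do
  status=$(docker inspect --format='{{.State.Health.Status}}' <containername>)
  if [ "$status" != "healthy" ]; then
    echo "container unhealthy - trigger rollback here"
  fi
  sleep 5
done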
Just got a comment from a blog! He referred to the HEALTHCHECK section of the Docker documentation. There are health check options for the docker run command that override the Dockerfile defaults. I have not checked yet, but it seems like a good way to get what I want. Will check and update the answer!
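For reference, those overrides look like this (the flags exist as of Docker 1.12; the check command here is a placeholder):

docker run -d --health-cmd='curl -fsS http://localhost/ || exit 1' --health-interval=5s --health-timeout=3s --health-retries=3 nginx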
The docker inspect command lets you view the output of commands that succeed or fail:
docker inspect --format='{{json .State.Health}}' your-container-name
That's not available with the Dockerfile HEALTHCHECK option; all checks run inside the container. To me, this is a good thing, since it avoids potentially untrusted code running directly on the host, and it allows you to include the dependencies for the health check inside your container.
If you need to monitor your container from outside, you'll need to use another tool or monitoring application, there are quite a few of them out there.
You can view the results of the health check by running docker inspect on a container.
Another approach, depending on your application, would be to expose a /healthz endpoint that the healthcheck also probes; this way it can be queried externally or internally as needed.
