Docker container exits after executing entrypoint - docker

FROM openjdk:8
LABEL maintainer="test"
EXPOSE 8080
ADD test-demo.jar assembly.jar
ENTRYPOINT ["java","-jar","assembly.jar"]
The container I start with this Dockerfile exits soon after it starts. Please advise what to do to keep it running.

You need to make sure that java -jar assembly.jar stays active as a foreground process; if that main process exits, the Docker container exits with it.
You should wrap the java -jar call in its own script, which gives you various ways to keep that script alive, as described here.
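A minimal sketch of such a wrapper (the file name entrypoint.sh and the paths are assumptions; adjust them to your image):
#!/bin/sh
# entrypoint.sh (hypothetical): exec replaces the shell, so the JVM runs
# in the foreground as PID 1 and the container stays up while it runs.
exec java -jar /assembly.jar
The Dockerfile then copies the script and points the entrypoint at it:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]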


How to write entrypoint scripts on Windows

I was asked to build an image for Python programs. For example, if we create 3 Python programs, build an image for them, and run that image, a container will be created, execute, and exit, and for the second program another container will be created.
That's what usually happens. But here I was told that a single container should be created for all the programs, that it should stay in the running state continuously, and that if we give a program name in the run command it should execute that program and not the other two, starting and stopping based on the commands I give.
For this to happen I was given a hint/suggestion that if I create an entrypoint script and copy it in the Dockerfile, it will work. But unfortunately, when I researched it on the internet, the entrypoint scripts I found are for Linux, and I'm using Windows here.
So, first, to explain why the container exits after you run it: containers are not like VMs. Docker (or the container runtime you choose) checks what is running in the container. This "what is running" is defined by the ENTRYPOINT in your Dockerfile. If you don't have an entrypoint, there's nothing running and Docker stops the container. Or it might be the case that something ran and the container stopped after it finished executing.
Now, the Windows Server base images don't have an entrypoint. If you just ask to run the container, it will start and stop immediately. That is a problem for background services like web servers, for example IIS. To solve that, Microsoft created a service called Service Monitor. If you look at the Dockerfile of the IIS image that Microsoft produces, you'll notice that the entrypoint is Service Monitor, which in turn checks the status of the IIS service. If IIS is running, Service Monitor will continue to run and thus the container keeps running indefinitely. (Here's the Dockerfile: https://github.com/Microsoft/iis-docker/blob/main/windowsservercore-ltsc2022/Dockerfile)
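For reference, the entrypoint line in that IIS Dockerfile looks roughly like this (check the linked file for the exact form):
ENTRYPOINT ["C:\\ServiceMonitor.exe", "w3svc"]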
Now, for your case, what you need is a job in your Python container. Look at the description in the link provided by Mihai: https://hub.docker.com/_/python
This is their example Dockerfile:
FROM python:3
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD [ "python", "./your-daemon-or-script.py" ]
Note the last line. It's not an entrypoint, which means the Python app will run and exit, which will stop the container. If you need the container to run indefinitely, either you leverage something like Service Monitor (but then you need to build a service that runs in the background) or you create your own logic to keep something running, for example an infinite loop.
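As a rough sketch of the second option (assuming the Linux-based python image; the image and program names below are made up), you can keep the main process alive with a long-running no-op and then execute individual programs on demand:
FROM python:3
WORKDIR /usr/src/app
COPY . .
# Keep the main process alive so the container stays in the running state
CMD ["sleep", "infinity"]
Then run the container once and pick the program per invocation:
docker run -d --name pyapps my-python-image
docker exec pyapps python ./program1.py
docker exec pyapps python ./program2.py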
Does that help?

How to understand Container states

I am trying to understand the life cycle of a container. I downloaded the alpine image and built containers using the "docker container run" command; all of those containers ran and are in "Exited" status. When using the "docker container start" command, some of the containers stay in Up status (running) and some exit immediately. Any thoughts on why the behavior around statuses differs? One difference I observed is that the containers staying in Up status have a modified file structure compared to the base image.
Hope I was able to put the scenario in proper context. Help me understand the concept.
The long sequence is as follows:
You docker create a container with its various settings. Some settings may be inherited from the underlying image. It is in a "created" status; its filesystem exists but nothing is running.
You docker start the container. If the container has an entrypoint (Dockerfile ENTRYPOINT directive, docker create --entrypoint option) then that entrypoint is run, taking the command as arguments; otherwise the command (Dockerfile CMD directive, any options after the docker create image name) is run directly. This process gets process ID 1 in the container and the rights and responsibilities that go along with that. The container is in "running" status.
The main process exits, or an administrator explicitly docker stops it. The container is in "exited" status.
Optionally you can restart a stopped container (IME this is unusual though); go to step 2.
You docker rm the stopped container. Anything in the container filesystem is permanently lost, and it no longer shows up in docker ps -a or anywhere else.
Typically you'd use docker run to combine these steps together. docker run on its own does the first two steps together (creates a container and then starts it). If you docker run --rm it does everything listed above.
(All of these commands are identical to the docker container ... commands, but I'm used to the slightly shorter form.)
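To make the sequence concrete, here is a hypothetical walk-through with the short-form commands (the container names are made up):
docker create --name web nginx         # created: filesystem exists, nothing running
docker start web                       # running: entrypoint/command becomes PID 1
docker stop web                        # exited: main process has been stopped
docker start web                       # optionally restart the stopped container
docker stop web
docker rm web                          # removed: filesystem and history are gone
docker run --rm -d --name web2 nginx   # create + start in one step; --rm removes it on exit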
The key point here is that there is some main process that the container runs. Typically this is some sort of daemon or server process, and generally specified in the image's Dockerfile. If you, for example, docker run ... nginx, then its Dockerfile ends with
CMD ["nginx", "-g", "daemon off;"]
and that becomes the main container process.
In early exploration it's pretty common to just run some base distribution image (docker run --rm -it alpine) but that's not really interesting: the end of the lifecycle sequence is removing the container and once you do that everything in the container is lost. In standard use you'd want to use a Dockerfile to build a custom image, and there's a pretty good Docker tutorial on the subject.

Run docker with just copied file inside

I'm building an image that just contains a copied file, with the following Dockerfile:
FROM alpine:3.8
COPY ./somefile /srv/somefile
When I try to docker run the image, it exits immediately; right after docker run I have:
Exited (0) 1 second ago.
I tried adding CMD ["/bin/sh"] or ENTRYPOINT ["/bin/sh"] but it doesn't change anything.
Is it possible to have such container with just copied file and make it up and running, until I stop it?
So there is really no problem: you have succeeded in running your container with the file inside. But since you didn't give your container any additional job to do, its run finished after 1 second.
First, you should get acquainted with what counts as a "running" container in Docker terminology. A container is "running" as long as its main process (PID 1) is running. When that process exits, the container stops. When you want your container to remain running (as a service, for example), you need to keep its main process active.
Second, what is your main process? It is the process launched when the container starts. It is built from the ENTRYPOINT and CMD directives (with some rules). These directives often have default values in the Dockerfile, but you can override them. If you just run docker run <image>, the defaults are used, but if you provide arguments after <image>, they override CMD.
So, for alpine you can simply run a shell, like docker run -it alpine sh. And until you exit the shell, your container is running.
And the last point: the -it flags connect STDIN and STDOUT/STDERR to your console. So your sh process, which is the main process, stays alive until you close the console.
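If you want the container to stay up without keeping a console attached, a common trick (a sketch, not part of the original answer) is to give it a long-running no-op as the main process:
FROM alpine:3.8
COPY ./somefile /srv/somefile
# tail never exits, so the main process (and the container) keeps running
CMD ["tail", "-f", "/dev/null"]
Run it detached with docker run -d and stop it later with docker stop.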

run uwsgi after official nginx container start?

I just wrote a customized Dockerfile including CMD ["uwsgi", "--ini", "uwsgi.ini"], based on the official nginx image.
And I see there's a CMD ["nginx", "-g", "daemon off;"] at the end of the Dockerfile of this official nginx image.
That means nginx is started when the container starts.
So my CMD ["uwsgi", "--ini", "uwsgi.ini"] in my Dockerfile will override it, and thus the container will immediately exit.
How can I avoid overriding it and make both nginx and uwsgi work?
I've googled a lot, but none of the solutions I found are based on the official nginx image.
Obviously I could run another container just for uwsgi and connect it to the nginx container (i.e. the container run from the official nginx image), but I think that's troublesome and unnecessary.
The official nginx image is here.
You can use ENTRYPOINT or CMD to run multiple processes inside a container by pointing them at a shell script/wrapper. You should try to refrain from this, since it isn't a best practice: a single container should be responsible for managing a single process.
However, there is a workaround by which you can manage multiple processes inside a container, i.e. by using a shell script wrapper or a supervisor.
It's there in official docs -
https://docs.docker.com/config/containers/multi-service_container/
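A minimal sketch of that wrapper approach for this case (the script name start.sh and the uwsgi.ini location are assumptions):
#!/bin/sh
# start.sh (hypothetical): launch uwsgi in the background, then keep nginx
# in the foreground as the main process; if nginx exits, the container exits.
uwsgi --ini uwsgi.ini &
exec nginx -g 'daemon off;'
And in the Dockerfile based on the nginx image:
COPY start.sh /start.sh
RUN chmod +x /start.sh
CMD ["/start.sh"]
Note that with this simple wrapper only nginx is tracked; if uwsgi crashes, nothing restarts it, which is why the linked docs recommend a process supervisor for anything beyond simple cases.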
First, it is not Docker philosophy to run two processes in one container.
This is a commonly accepted principle, both officially and throughout the community.
So you'd rather build a stack, with both an nginx container and your application container.
Provided you really want or need to do this your way, you can chain several commands within the CMD instruction if you specify the command shell first... but you can also use a script here.
Remember that the script will be executed from within your container, so think from a container POV, and not from a host one!
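For example, a sketch of the shell-first CMD form the answer mentions (paths are placeholders):
# sh -c runs both commands; nginx stays in the foreground as the main process
CMD ["sh", "-c", "uwsgi --ini uwsgi.ini & exec nginx -g 'daemon off;'"]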

Docker dealing with processes that don't end?

I have a docker container that has services running on multiple ports.
When I try to start one of these processes mid-way through my Dockerfile it causes the build process to stall indefinitely.
RUN /opt/webhook/webhook-linux-amd64/webhook -hooks /opt/webhook/hooks.json -verbose
So the program is running as it should, but the build never moves on.
I've tried adding & to the end of the command to tell bash to run the next step in parallel but this causes the service to not be running in the final image. I also tried redirecting the output of the program to /dev/null.
How can I get around this?
You have a misconception here. The commands in the Dockerfile are executed to build a Docker image, before any container is run from it. One type of command in the Dockerfile is RUN, which allows you to run an arbitrary shell command whose actions influence the image under creation in some sense.
Therefore, the build process waits until the command terminates.
It seems you want to start the service when a container is started from the image. To do so, use the CMD instruction instead. It tells Docker what is supposed to be executed when a container starts.
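For instance, a sketch of the change in your Dockerfile, reusing the paths from your RUN line:
# Starts the webhook service when a container is run, not during the build
CMD ["/opt/webhook/webhook-linux-amd64/webhook", "-hooks", "/opt/webhook/hooks.json", "-verbose"]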
