How to write entrypoint scripts on Windows - Docker

I was asked to build an image for Python programs. For example, if we create 3 Python programs and build an image for them, then running that image creates a container that executes one program and exits, and running a second program creates yet another container.
That's what usually happens. But here I was told that a single container should be created for all the programs, that it should stay in the running state continuously, and that if we pass a program name in the run command it should execute that program and not the other two, starting and stopping based on the commands I give.
As a hint, I was told that if I create an entrypoint script and copy it in via the Dockerfile, it will work. Unfortunately, when I researched this on the internet, the entrypoint scripts I found are for Linux, but I'm using Windows here.

So, first, to explain why the container exits after you run it: containers are not like VMs. Docker (or whichever container runtime you choose) checks what is running in the container. That "what is running" is defined by the ENTRYPOINT (or CMD) in your Dockerfile. If there is no entrypoint, nothing is running and Docker stops the container. Or it might be the case that something ran to completion and the container stopped after it finished.
Now, the Windows Server base images don't have an entrypoint. If you just run the container, it will start and stop immediately. That is a problem for background services like web servers, IIS for example. To solve that, Microsoft created a service called Service Monitor. If you look at the Dockerfile of the IIS image that Microsoft produces, you'll notice that the entrypoint is Service Monitor, which in turn checks the status of the IIS service. As long as IIS is running, Service Monitor keeps running, and thus the container keeps running indefinitely. (Here's the Dockerfile: https://github.com/Microsoft/iis-docker/blob/main/windowsservercore-ltsc2022/Dockerfile)
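The relevant line in that Dockerfile is the entrypoint, which looks roughly like this (check the link for the exact form), with w3svc being the IIS service that Service Monitor watches:
ENTRYPOINT ["C:\\ServiceMonitor.exe", "w3svc"]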
Now, for your case, what you need is a long-running process in your Python container. Look at the description in the link provided by Mihai: https://hub.docker.com/_/python
This is their example Dockerfile:
FROM python:3
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD [ "python", "./your-daemon-or-script.py" ]
Note the last line. Whether you use CMD or ENTRYPOINT there, the Python script will run, exit, and the container will stop with it. If you need the container to run indefinitely, either you leverage something like Service Monitor (but then you need a service running in the background) or you create your own logic to keep something running, for example an infinite loop. With the container kept alive, you can start each program in it on demand (see the sketch below).
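Here is a minimal sketch of that second approach; the file names prog1.py, prog2.py, prog3.py and the tags pybox/mypybox are placeholders for your own programs and names. The entrypoint is just a keep-alive loop written in Python, so it behaves the same in Windows and Linux containers:
entrypoint.py:
import time

# keep-alive loop: does no work, but keeps the container's main process running
while True:
    time.sleep(60)
Dockerfile:
FROM python:3
# assumption: pick a Windows variant of the python image if you run Windows containers
WORKDIR /usr/src/app
COPY prog1.py prog2.py prog3.py entrypoint.py ./
ENTRYPOINT ["python", "entrypoint.py"]
With the container started in the background, each program runs inside that same container via docker exec, and the container starts and stops on your commands:
docker build -t pybox .
docker run -d --name mypybox pybox
docker exec mypybox python prog1.py
docker exec mypybox python prog2.py
docker stop mypybox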
Does that help?

Related

Console application in docker not working

I am trying to learn docker practically. To start with I have created a simple .net core 3.1 console application. This application simply writes a message in a text file in a specific location. I have created a docker image from it and then docker container from the image. When I run the docker container, it runs and stops successfully.
The docker file:
FROM mcr.microsoft.com/dotnet/aspnet:3.1
COPY bin/Release/netcoreapp3.1/publish App/
WORKDIR /App
ENTRYPOINT ["dotnet", "ConsoleApp1.dll"]
I also checked the logs using command "docker logs container_id". But it returns nothing.
Am I missing anything?
Docker runs a process inside a container; when that process ends, the container stops too. As the process in your container only writes something and exits, the container exits and stops as well.
Also, the text file is written in the container's file system, so you will not be able to see it on your host unless you use a volume. Try printing the string to standard output instead, so it shows up in docker logs.
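For example, something like this (a sketch; consoleapp1, the host path and /App/out are placeholders for your own image tag and the directory the app writes to):
docker run --rm -v C:\docker-out:/App/out consoleapp1
If the app writes to standard output instead, docker logs container_id will show the message.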

Cloud Run error: Container failed to start. Running a background task without exposing a PORT or URL

I am facing the issue
(gcloud.run.deploy) Cloud Run error: Container failed to start. Failed
to start and then listen on the port defined by the PORT environment
variable. Logs for this revision might contain more information.
There are a few posts with this error, but I couldn't find my particular case.
I am running a background task, nothing to expose; it connects to Firebase, processes some data and stores it back. I wanted this process to run in a container on Cloud Run, so I containerized it. It runs perfectly locally, but when I upload it to Cloud Run it fails with the above error.
I tried to expose 8080 in the Dockerfile and a few more things, but if you try to connect to the container there is no server running to connect to. It is a batch task.
Can anyone tell me if it is possible at all to run this type of task on Cloud Run? I do not know how to solve the issue. I wouldn't believe Google requires a server running in the container to allow it. I saw some posts where devs pull nginx into the image so they can expose the port, but that would be totally unnecessary in my case.
Thanks for your advice
UPDATE
Cloud Logging: the error simply says there was a failure to start the container, which is funny because the container starts and even shows some logs as if it were working, but then it stops.
Built on Mac: yes.
The Dockerfile is pretty simple:
FROM openjdk:11
ENV NOTIFIER_HOME /opt/app/
ENV NOTIFIER_LOGS /opt/notifications/logs/
RUN mkdir -p $NOTIFIER_HOME
RUN mkdir -p $NOTIFIER_LOGS
RUN apt update
#RUN apt install curl
COPY docker/* $NOTIFIER_HOME
EXPOSE 8080
ENV TMP_OPTS -Djava.io.tmpdir=/tmp
ENV LOG4j_OPTS -Dlog4j.configurationFile=$NOTIFIER_HOME/logback.xml
ENV NOTIFIER_OPTS $TMP_OPTS $LOG4j_OPTS
ENV JAVA_GC_OPTS -Xms1g -Xmx1g
WORKDIR $NOTIFIER_HOME
ENTRYPOINT ["sh", "-c", "/opt/app/entrypoint.sh"]
You can't run background jobs on Cloud Run. Wrap it in a webserver, as proposed by MBHA, if the process takes less than 1 hour.
Otherwise, you can use GKE Autopilot to run your container for a while; you pay only while your container runs, and the first cluster is free. Give it a try!
As a hack, you can also run your container in Cloud Build, or in Vertex AI custom container training.
I've run into a similar issue with building a custom image on a Mac and deploying it to Cloud Run. In my case, it turned out to be the Docker platform causing the problem. The way I isolated this was by building the same image in Cloud Shell, and that would work perfectly fine in Cloud Run.
Now, if you need to build it locally on a Mac, go ahead and test it by changing the Docker platform:
export DOCKER_DEFAULT_PLATFORM=linux/amd64
docker build -t mytag:myver .
Once the image has been built, you can inspect the architecture:
docker image inspect mytag:myver | grep -i Architecture
Then deploy it to Cloud Run.
The explanation is in your question:
I am running a background task, nothing to expose
A Cloud Run application, so your container, must be listening for incoming HTTP requests, as stated in the Container runtime contract. That's why in all Cloud Run examples (Java in your case) Spring Boot is used with @RestController. Another explanation can be found in this answer.
Update:
So the solution is either to
add a webserver to your code and wrap it with Spring Boot and controller logic (see the sketch after this list), or
use Cloud Functions rather than Cloud Run, get rid of the Dockerfile, and at the same time have simpler code and less configuration
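As a rough illustration of the first option, here is a minimal wrapper sketched in Python for brevity (the same idea applies to the Java service, e.g. with Spring Boot): an HTTP server listens on the port from the PORT environment variable, and each request triggers one run of the batch task. run_batch_task is a placeholder for the real work:
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

def run_batch_task():
    # placeholder: connect to Firebase, process the data, store it back
    pass

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        run_batch_task()
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"done")

if __name__ == "__main__":
    # Cloud Run injects the port to listen on via the PORT environment variable
    port = int(os.environ.get("PORT", "8080"))
    HTTPServer(("", port), Handler).serve_forever()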

Container manager keeps terminating container on signal 9

I am trying to play with Google Cloud Run; I have the same service working fine in App Engine Flex. Any thoughts on what could be the issue?
Somehow it shows that the service is healthy.
This means the infrastructure (container manager) scales down the number of instances when traffic drops.
It's safe to ignore.
For others who find this question because their container didn't start the first time they deployed it: it's important to note that your service needs to listen on the port defined by the PORT environment variable.
It appears that Cloud Run dynamically maps your container to a port at invocation, and the service that you're running needs to (dynamically) use this port to serve its content.
For reference, here's how I got the base Apache Docker image to work with Cloud Run to host a static site built via Node:
FROM node:lts AS build
COPY . .
RUN npm install
RUN npm run build
FROM httpd:latest
ENV PORT=80
RUN sed -i 's/80/${PORT}/g' /usr/local/apache2/conf/httpd.conf
COPY --from=build ./dist/ /usr/local/apache2/htdocs/
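To sanity-check the image locally before deploying, you can simulate Cloud Run's injected port (mysite is a placeholder tag):
docker build -t mysite .
docker run --rm -e PORT=8080 -p 8080:8080 mysite
Then http://localhost:8080 should serve the static site.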
For me, it was because billing was disabled.
Make sure billing is enabled on your GCP project:
https://console.cloud.google.com/billing

Docker container exits after executing entrypoint

FROM openjdk:8
LABEL maintainer="test"
EXPOSE 8080
ADD test-demo.jar assembly.jar
ENTRYPOINT ["java","-jar","assembly.jar"]
The container I start with this Dockerfile exits soon after it starts. Please advise what to do to keep it running.
You need to make sure java -jar assembly.jar stays active as a foreground process; otherwise the main process exits, and with it the Docker container itself.
You should wrap the java -jar call in its own script, which allows various ways to keep that script alive, as described here.
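A minimal sketch of such a wrapper (assuming the jar ends up at /assembly.jar, as in the Dockerfile above, and that the script is copied into the image and set as the ENTRYPOINT):
#!/bin/sh
# run the application in the foreground
java -jar /assembly.jar
# if the application exits (or forks itself into the background),
# keep the container's main process alive, e.g. for debugging
tail -f /dev/null
Note that tail -f /dev/null is only a stopgap to keep the container up; the real fix is usually to make the Java application itself block in the foreground.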

Understanding Dockerfile CMD/ENTRYPOINT

I'm new to Docker. Trying to build a small image with Transmission.
Here is my Dockerfile:
#base image
FROM alpine:latest
#install Transmission
RUN apk update
RUN apk add transmission-daemon
#expose port
EXPOSE 9091
#start app
CMD ["/usr/bin/transmission-daemon"]
Then I start container:
docker run transmission
and it immediately quits. I expected it to stay running, since transmission-daemon should keep running.
I tried ENTRYPOINT also, but the result is the same. However, the following version works as expected:
ENTRYPOINT ["/usr/bin/transmission-daemon"]
CMD ["-h"]
It runs, shows the Transmission help and quits.
What I am missing about how Docker runs apps inside containers?
Docker keeps a container running as long as the process which the container starts is active. If your container starts a daemon when it runs, then the daemon start script is the process Docker watches. When that completes, the container exits - because Docker isn't watching the background process the script spawns.
Typically your CMD or ENTRYPOINT will run the interactive process rather than the daemonized version, and you let Docker take care of putting the container in the background with docker run -d. (The actual difference between CMD and ENTRYPOINT is about giving users flexibility to run containers from your image in different ways).
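For transmission-daemon specifically, that means running it in the foreground. A sketch (check transmission-daemon --help on your version for the exact flag):
CMD ["/usr/bin/transmission-daemon", "--foreground"]
Then docker run -d transmission keeps it running in the background, and docker logs shows its output.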
It's worth checking the Docker Hub if you're looking at running an established app in a container. There are a bunch of Transmission images on Docker Hub which you can use directly, or check out their Dockerfiles to see how the image is built.
