Container manager keeps terminating the container on signal 9 - google-cloud-run

I am trying out Google Cloud Run with a service that works fine on App Engine Flex. Any thoughts on what could be the issue?
Somehow it shows that the service is healthy.

This means the infrastructure (the container manager) scales down the number of instances when traffic drops.
It's safe to ignore.

For others who find this question because their container didn't start the first time they deployed it: it's important to note that your service needs to listen on the port given by the PORT environment variable.
It appears that Cloud Run maps your container to a port dynamically at invocation, and the service you're running needs to read this variable and serve its content on that port.
For reference, here's how I got the base Apache Docker image to work with Cloud Run to host a static site built via Node:
# Build stage: install dependencies and build the static site
FROM node:lts AS build
COPY . .
RUN npm install
RUN npm run build

# Serve stage: Apache httpd hosting the built assets
FROM httpd:latest
# Default port; Cloud Run overrides PORT at runtime
ENV PORT=80
# The single quotes write a literal ${PORT} into httpd.conf; Apache 2.4
# expands it from the environment at startup, so httpd listens on
# whichever port Cloud Run assigns
RUN sed -i 's/80/${PORT}/g' /usr/local/apache2/conf/httpd.conf
COPY --from=build ./dist/ /usr/local/apache2/htdocs/

For me, it was because billing was disabled.
Make sure billing is enabled on your GCP project:
https://console.cloud.google.com/billing

Related

How to write entrypoint scripts on Windows

I was asked to build an image for Python programs. For example, if we create 3 Python programs and build an image for them, running that image creates a container that executes one program and exits, and for the second program another container is created.
That's what usually happens. But here I was told that a single container should be created for all the programs, that it should stay in the running state continuously, and that if we give a program name in the run command it should execute that program (not the other two), starting and stopping based on the commands I give.
To achieve this I was given a hint: if I create an entrypoint script and copy it in the Dockerfile, it will work. Unfortunately, when I researched it on the internet, the entrypoint scripts I found were for Linux, but I'm using Windows here.
So, first, to explain why the container exits after you run it: containers are not like VMs. Docker (or whichever container runtime you choose) checks what is running in the container. This "what is running" is defined by the ENTRYPOINT in your Dockerfile. If you don't have an entrypoint, there's nothing running and Docker stops the container. Or it might be the case that something ran and the container stopped after it finished executing.
Now, the Windows Server base images don't have an entrypoint. If you just ask to run the container, it will start and stop immediately. That is a problem for background services like web servers, for example IIS. To solve that, Microsoft created a service called Service Monitor. If you look at the Dockerfile of the IIS image that Microsoft produces, you'll notice that the entrypoint is Service Monitor, which in turn checks the status of the IIS service. If IIS is running, Service Monitor keeps running and thus the container keeps running indefinitely. (Here's the Dockerfile: https://github.com/Microsoft/iis-docker/blob/main/windowsservercore-ltsc2022/Dockerfile)
Now, for your case, what you need is a job in your Python container. Look at the description in the link provided by Mihai: https://hub.docker.com/_/python
This is their example Dockerfile:
FROM python:3
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD [ "python", "./your-daemon-or-script.py" ]
Note the last line. It's a CMD rather than an ENTRYPOINT; the Python app will run and exit, which stops the container. If you need the container to run indefinitely, either you leverage something like Service Monitor (but then you need a service running in the background) or you create your own logic to keep something running, for example an infinite loop.
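As a minimal sketch of that last suggestion (file and program names here are hypothetical, not from the question), the entrypoint can simply idle so the container stays up, and each program is then run on demand with docker exec:
# entrypoint.py - hypothetical keep-alive entrypoint for the always-running container
import time

if __name__ == "__main__":
    # Nothing to do here; the loop only keeps PID 1 alive so the
    # container stays in the running state until it is stopped.
    while True:
        time.sleep(60)
With ENTRYPOINT ["python", "entrypoint.py"] in the Dockerfile, something like docker exec <container> python program1.py executes one of the three programs inside the already-running container, and docker stop ends the container itself.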
Does that help?

Stop currently running container process without killing the container

I'm developing an app with a few people and we decided to fully dockerize the dev environment (for easier deployment and collaboration). I did it by writing a very short Dockerfile.dev for each service, such as this one:
FROM node:18
COPY ./frontend/dev_autostart.sh .
Each service has its own dev_autostart.sh script that waits for the volume to be mounted and then, depending on the service, installs dependencies and runs it in dev mode (e.g. npm i && npm run dev).
My issue with this setup is that installing dependencies is harder: to do that I'd need to stop the container and install dependencies on my machine, or change the Docker entrypoint to something like tail -F anything.
My ideal workflow would be:
docker attach $containerName
Somehow stop the current process without killing the container
Do stuff
Rerun the process manually (cargo run / npm run dev)
Is there any way to achieve that?
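One way to sketch that idea (a hypothetical supervisor, written in Python to keep the examples here in one language; a shell script works just as well) is to make the container entrypoint a small wrapper that runs the dev command as a child process and restarts it when the container receives a signal, so the process can be stopped and rerun without the container dying:
# dev_supervisor.py - hypothetical sketch: restart the dev command on SIGHUP
import signal
import subprocess

DEV_CMD = ["npm", "run", "dev"]   # assumed dev command; adjust per service
child = None
restart = False

def on_sighup(signum, frame):
    # `docker kill --signal=HUP <container>` asks for a restart of the child only
    global restart
    restart = True
    if child is not None:
        child.terminate()

signal.signal(signal.SIGHUP, on_sighup)

while True:
    restart = False
    child = subprocess.Popen(DEV_CMD)   # run the dev command in the foreground
    child.wait()
    if not restart:
        break   # the dev process exited on its own; let the container stop
Sending SIGHUP from the host (docker kill --signal=HUP <container>) stops and relaunches just the dev process, while docker stop still shuts the whole container down.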

Cloud Run error: Container failed to start. Running a background task without exposing a PORT or URL

I am facing this issue:
(gcloud.run.deploy) Cloud Run error: Container failed to start. Failed
to start and then listen on the port defined by the PORT environment
variable. Logs for this revision might contain more information.
There are a few posts with this error, but I couldn't find my particular case.
I am running a background task with nothing to expose: it connects to Firebase, processes some data and stores it back. I wanted this process to run in a container on Cloud Run, so I containerized it. It runs perfectly locally, but when I upload it to Cloud Run it fails with the above error.
I tried exposing 8080 in the Dockerfile and a few more things, but if you try to connect to the container there is no server running to connect to. It is a batch task.
Can anyone tell me if it is possible at all to run this type of task on Cloud Run? I don't know how to solve the issue. I can't believe Google requires a server running in the container to allow it; I saw some posts where devs pull nginx into the image so they can expose a port, but that would be totally unnecessary in my case.
Thanks for your advice.
UPDATE
Cloud Logging: the error simply says the container failed to start, which is funny because the container does start and even shows some logs as if it were working, but then it stops.
Built on Mac: yes.
The Dockerfile is pretty simple:
FROM openjdk:11
ENV NOTIFIER_HOME /opt/app/
ENV NOTIFIER_LOGS /opt/notifications/logs/
RUN mkdir -p $NOTIFIER_HOME
RUN mkdir -p $NOTIFIER_LOGS
RUN apt update
#RUN apt install curl
COPY docker/* $NOTIFIER_HOME
EXPOSE 8080
ENV TMP_OPTS -Djava.io.tmpdir=/tmp
ENV LOG4j_OPTS -Dlog4j.configurationFile=$NOTIFIER_HOME/logback.xml
ENV NOTIFIER_OPTS $TMP_OPTS $LOG4j_OPTS
ENV JAVA_GC_OPTS -Xms1g -Xmx1g
WORKDIR $NOTIFIER_HOME
ENTRYPOINT ["sh", "-c", "/opt/app/entrypoint.sh"]
You can't run background jobs on Cloud Run. Wrap the task in a webserver, as proposed by MBHA, if the process takes less than 1 hour.
Otherwise you can use GKE Autopilot to run your container for a while; you pay only while your container runs, and the first cluster is free. Give it a try!
As a hack you can also run your container in Cloud Build, or in Vertex AI custom container training.
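A minimal sketch of that wrap-it-in-a-webserver idea, shown here in Python for brevity (the asker's service is Java, where Spring Boot with a @RestController plays the same role): the process listens on $PORT as Cloud Run requires and only performs the batch work when the endpoint is called.
# batch_wrapper.py - hypothetical sketch: trigger the batch task over HTTP
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

def run_batch_task():
    # Placeholder for the real work: read from Firebase, process, write back.
    pass

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        run_batch_task()                 # runs synchronously within the request
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"done\n")

if __name__ == "__main__":
    port = int(os.environ.get("PORT", "8080"))   # Cloud Run injects PORT
    HTTPServer(("", port), Handler).serve_forever()
Cloud Scheduler can then call the endpoint on a schedule; the request has to finish within the Cloud Run request timeout, which is the 1 hour limit mentioned above.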
I've run into a similar issue when building a custom image on a Mac and deploying it to Cloud Run. In my case, it turned out to be the Docker platform causing the problem. The way I isolated this was by building the same image in Cloud Shell, which would then work perfectly fine in Cloud Run.
Now, if you need to build locally on a Mac, go ahead and test it by changing the Docker platform:
export DOCKER_DEFAULT_PLATFORM=linux/amd64
docker build -t mytag:myver .
Once the image has been built, you can inspect the architecture:
docker image inspect mytag:myver | grep -i Architecture
Then deploy it to Cloud Run.
The explanation is in your question:
I am running a background task, nothing to expose
A Cloud Run application, so your container, must be listening for incoming HTTP requests, as stated in the Container runtime contract. That's why all Cloud Run examples (Java in your case) use Spring Boot with a @RestController. Another explanation can be found in this answer.
Update:
So the solution is either to:
add a webserver to your code and wrap it with Spring Boot and controller logic, or
use Cloud Functions rather than Cloud Run, get rid of the Dockerfile, and at the same time have simpler code and less configuration.

Docker Autoreload with CompileDaemon

I am working on improving my development environment using Docker and Go, but I am struggling to get auto-reloading working in my containers when a file changes. I am on Windows running Docker Desktop version 18.09.1, if that matters.
I am using CompileDaemon for reloading and my Dockerfile is defined as follows:
FROM golang:1.11-alpine
RUN apk add --no-cache ca-certificates git
RUN go get github.com/githubnemo/CompileDaemon
WORKDIR /go/src/github.com/testrepo/app
COPY . .
EXPOSE 8080
ENTRYPOINT CompileDaemon -log-prefix=false -directory="." -build="go build /go/src/github.com/testrepo/app/cmd/api/main.go" -command="/go/src/github.com/testrepo/app/main"
My project structure is as follows:
app
  api
    main.go
In my docker-compose file I have the correct volumes set, and the files are being updated in my container when I make changes locally.
The application is also started correctly using CompileDaemon on its first load; it's just never restarted on file changes.
On first load I see...
Running build command!
Build ok.
Restarting the given command.
Then any changes I make do not result in a restart, even though I can connect to the container and see that the changes are reflected in the expected files.
Ensure that you have volumes mounted for the services you are using; that is what makes hot reloading work inside a Docker container.
See the full explanation
The proper way of installing CompileDaemon is:
RUN go install -mod=mod github.com/githubnemo/CompileDaemon
Reference: https://github.com/githubnemo/CompileDaemon/issues/45#issuecomment-1218054581

New to Docker - how to essentially make a cloneable setup?

My goal is to use Docker to create a mail setup running postfix + dovecot, fully configured and ready to go (on Ubuntu 14.04), so I could easily deploy on several servers. As far as I understand Docker, the process to do this is:
Spin up a new container (docker run -it ubuntu bash).
Install and configure postfix and dovecot.
If I need to shut down and take a break, I can exit the shell and return to the container via docker start <id> followed by docker attach <id>.
(here's where things get fuzzy for me)
At this point, is it better to export the image to a file, import it on another server, and run it? How do I make sure the container will automatically start postfix, dovecot, and the other services when it runs? I also don't quite understand the difference between using a Dockerfile to automate the installation versus just installing everything manually and exporting the image.
Configure multiple Docker images using Dockerfiles.
Each Docker container should run only one service, so one container for postfix, one for dovecot, etc. Your running containers can communicate with each other.
Build those images
Push those images to a registry so that you can easily pull them on different servers and have the same setup.
Pull those images on your different servers.
You can pass ENV variables when you start a container to configure it.
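As a small illustration of that point (generic Python, purely hypothetical; a real mail image would typically template its Postfix/Dovecot config files from these variables in an entrypoint script):
# config_from_env.py - hypothetical sketch: settings passed via `docker run -e ...`
import os

def load_config():
    # Every value has a default so the container still starts without any -e flags
    return {
        "hostname": os.environ.get("MAIL_HOSTNAME", "mail.example.com"),
        "max_message_size": int(os.environ.get("MAX_MESSAGE_SIZE", "10485760")),
        "tls_enabled": os.environ.get("TLS_ENABLED", "true").lower() == "true",
    }

if __name__ == "__main__":
    print(load_config())
Starting the container with docker run -e MAIL_HOSTNAME=mx1.example.org ... would then override the defaults per server.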
You should not install something directly inside a running container.
That defeats the purpose of having a reproducible setup with Docker.
Your step #2 should instead be a RUN entry inside a Dockerfile, which is then used with docker build to create an image.
That image can then be used to start and stop containers as needed.
See the Dockerfile RUN entry documentation; this is usually used with apt-get install to install the needed components.
The ENTRYPOINT in the Dockerfile should be set to start your services.
In general it is recommended to have just one process in each image.
