I'm developing an app with a few people and decided to fully dockerize the dev environment (for easier deployment and collaboration). I did this by writing a very short Dockerfile.dev for each service, such as this one:
FROM node:18
COPY ./frontend/dev_autostart.sh .
Each service has its own dev_autostart.sh script that waits for the volume to be mounted, then, depending on the service, installs dependencies and runs it in dev mode (e.g. 'npm i && npm run dev').
My issue with this setup is that installing dependencies is harder: to do that I'd need to stop the container and install dependencies using my machine, or change the Docker entrypoint to something like 'tail -F anything'.
My ideal workflow would be:
1. 'docker attach $containerName'
2. Somehow stop the current process without killing the container
3. Do stuff
4. Rerun the process manually ('cargo run .' / 'npm run dev')
Is there any way to achieve that?
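A sketch of one way to get most of this workflow: have dev_autostart.sh run the dev command under a small restart loop, so killing the child process (e.g. from a shell opened with 'docker exec') drops back into the loop instead of killing PID 1. The run_until_stopped helper and the /tmp/stop_dev sentinel file are made-up names for this example, not part of any tool:

```shell
#!/bin/sh
# Sketch of a dev_autostart.sh body that keeps PID 1 alive while the dev
# command can be stopped and restarted. run_until_stopped and the
# /tmp/stop_dev sentinel file are made-up names for this example.
run_until_stopped() {
  rm -f /tmp/stop_dev
  # Re-run the given command until the sentinel file appears; killing the
  # child process just makes the loop start it again.
  while [ ! -f /tmp/stop_dev ]; do
    "$@" || true
    sleep 1
  done
}

# Inside the container this would be called as, for example:
# run_until_stopped npm run dev
# and stopped from another shell with: touch /tmp/stop_dev
```

With something like this in place, 'docker exec -it $containerName sh' gives a shell for installing dependencies while the loop keeps the container alive.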
Related
I was asked to build an image for Python programs. For example, if we create 3 Python programs and build a single image for them, then running that image creates a container that executes one program and exits, and for the second program another container is created.
That's what usually happens. But here I was told that a single container should be created for all the programs, that it should stay in the running state continuously, and that if we give a program name in the run command it should execute that program (not the other two), starting and stopping based on the commands I give.
To make this happen I was given a hint: if I create an entrypoint script and copy it in the Dockerfile, it will work. Unfortunately, when I researched this on the internet, the entrypoint scripts I found were for Linux, but I'm using Windows here.
So, first, to explain why the container exits after you run it: containers are not like VMs. Docker (or whichever container runtime you choose) checks what is running in the container. This "what is running" is defined by the ENTRYPOINT in your Dockerfile. If you don't have an entrypoint, nothing is running and Docker stops the container. Or it might be that something ran and the container stopped after it finished.
Now, the Windows Server base images don't have an entrypoint. If you just ask to run the container, it will start and stop immediately. That is a problem for background services like web servers, for example IIS. To solve that, Microsoft created a service called Service Monitor. If you look at the Dockerfile of the IIS image that Microsoft produces, you'll notice that the entrypoint is Service Monitor, which in turn checks the status of the IIS service. If IIS is running, Service Monitor will continue to run and thus the container keeps running indefinitely. (Here's the Dockerfile: https://github.com/Microsoft/iis-docker/blob/main/windowsservercore-ltsc2022/Dockerfile)
Now, for your case, what you need is a job in your Python container. Look at the description in the link provided by Mihai: https://hub.docker.com/_/python
This is their example Dockerfile:
FROM python:3
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD [ "python", "./your-daemon-or-script.py" ]
Note the last line. It's a CMD, not an entrypoint, which means the Python app will run and exit, and that will stop the container. If you need the container to run indefinitely, either you leverage something like Service Monitor (but then you need to build a background service) or you create your own logic to keep something running, for example an infinite loop.
Does that help?
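To illustrate the "one container, pick a program by name" idea (for Linux-based images; a Windows container would need a PowerShell or batch equivalent), an entrypoint script can dispatch on its first argument. prog1..prog3 and the /app path are made-up names, and the echo stands in for handing off to Python:

```shell
#!/bin/sh
# Sketch of an entrypoint that runs one of several bundled programs chosen by
# the first argument, e.g. `docker run myimage prog1`. prog1..prog3 and /app
# are made-up names; in a real entrypoint the echo line would be
# `exec python "/app/$1.py"`.
dispatch() {
  case "$1" in
    prog1|prog2|prog3) echo "would run: python /app/$1.py" ;;
    *) echo "unknown program: $1" >&2; return 1 ;;
  esac
}

# The actual ENTRYPOINT script would end with: dispatch "$@"
```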
I am facing this issue:
(gcloud.run.deploy) Cloud Run error: Container failed to start. Failed
to start and then listen on the port defined by the PORT environment
variable. Logs for this revision might contain more information.
There are a few posts with this error, but I couldn't find my particular case.
I am running a background task with nothing to expose: it connects to Firebase, processes some data, and stores it back. I wanted this process to run in a container on Cloud Run, so I containerized it. It runs perfectly locally, but when I upload it to Cloud Run it fails with the above error.
I tried exposing 8080 in the Dockerfile and a few other things, but if you try to connect to the container there is no server running to connect to. It is a batch task.
Can anyone tell me if it is possible at all to run this type of task on Cloud Run? I don't know how to solve the issue. I can't believe Google requires a server running in the container to allow it; I saw some posts with devs pulling nginx into the image just so they can expose the port, but that would be totally unnecessary in my case.
Thanks for your advice
UPDATE
Cloud Logging: the error simply says there was a failure to start the container, which is funny because the container does start and even shows some logs as if it were working, but then it stops.
Built on Mac: yes.
The Dockerfile is pretty simple:
FROM openjdk:11
ENV NOTIFIER_HOME /opt/app/
ENV NOTIFIER_LOGS /opt/notifications/logs/
RUN mkdir -p $NOTIFIER_HOME
RUN mkdir -p $NOTIFIER_LOGS
RUN apt update
#RUN apt install curl
COPY docker/* $NOTIFIER_HOME
EXPOSE 8080
ENV TMP_OPTS -Djava.io.tmpdir=/tmp
ENV LOG4j_OPTS -Dlog4j.configurationFile=$NOTIFIER_HOME/logback.xml
ENV NOTIFIER_OPTS $TMP_OPTS $LOG4j_OPTS
ENV JAVA_GC_OPTS -Xms1g -Xmx1g
WORKDIR $NOTIFIER_HOME
ENTRYPOINT ["sh", "-c", "/opt/app/entrypoint.sh"]
You can't run background jobs on Cloud Run. Wrap the task in a webserver, as proposed by MBHA, if the process takes less than 1h.
Otherwise you can use GKE Autopilot to run your container for a while: you pay only while your container runs, and the first cluster is free. You can give it a try!
As a hack you can also run your container in Cloud Build, or in Vertex AI custom container training.
I've run into a similar issue with building a custom image on Mac and deploying it to Cloud Run. In my case, it turned out to be the Docker platform causing the problem. The way I isolated this was by building the same image in Cloud Shell, and that one would work perfectly fine in Cloud Run.
Now, if you need to build it locally on Mac, go ahead and test it by changing the Docker platform:
export DOCKER_DEFAULT_PLATFORM=linux/amd64
docker build -t mytag:myver .
Once the image has been built, you can inspect the architecture:
docker image inspect mytag:myver | grep -i Architecture
Then deploy it to Cloud Run.
The explanation is in your question:
I am running a background task, nothing to expose
A Cloud Run application, so your container, must be listening for incoming HTTP requests, as stated in the Container runtime contract. That's why all Cloud Run examples, Java in your case, use Spring Boot with @RestController. Another explanation can be found in this answer.
Update:
So the solution is either to:
- add a webserver to your code, wrapping it with Spring Boot and controller logic, or
- use Cloud Functions rather than Cloud Run, getting rid of the Dockerfile and at the same time ending up with simpler code and less configuration.
I am trying to play with Google Cloud Run. I have the same service working fine in App Engine Flex. Any thoughts on what could be the issue?
Somehow it shows that the service is healthy.
This means the infrastructure (container manager) scales down the number of instances when traffic drops.
It's safe to ignore.
For others who find this question when their container didn't start the first time they deployed it: it's important to note that your service needs to listen on the port given by the environment variable PORT.
It appears that Cloud Run dynamically maps your container to a port at invocation, and the service you're running needs to (dynamically) use this port to serve its content.
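As a tiny sketch of what "dynamically use this" means in practice, a startup script can read PORT from the environment with a local fallback (resolve_port is just an illustrative helper name):

```shell
#!/bin/sh
# Sketch: read the port Cloud Run injects via $PORT, defaulting to 8080 for
# local runs. resolve_port is just an illustrative helper name.
resolve_port() {
  echo "${PORT:-8080}"
}

# A server launch would then look something like:
# exec ./my-server --listen "0.0.0.0:$(resolve_port)"   # my-server is hypothetical
```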
For reference, here's how I got the base Apache Docker image to work with Cloud Run to host a static site built via Node:
FROM node:lts AS build
COPY . .
RUN npm install
RUN npm run build
FROM httpd:latest
ENV PORT=80
RUN sed -i 's/80/${PORT}/g' /usr/local/apache2/conf/httpd.conf
COPY --from=build ./dist/ /usr/local/apache2/htdocs/
For me, it was because billing was disabled.
Make sure billing is enabled on your GCP project:
https://console.cloud.google.com/billing
I want to know how to automate my npm project better with Docker.
I'm using webpack with a Vue.js project. When I run 'npm run build' I get an output folder ./dist; this is fine. If I then build a Docker image via 'docker build -t projectname .' and run this container, everything works perfectly.
This is my Dockerfile (found here)
FROM httpd:2.4
COPY ./dist /usr/local/apache2/htdocs/
But it would be nice if I could just build the Docker image and not have to build the project manually via 'npm run build' first. Do you understand my problem?
What could be possible solutions?
If you're doing all of your work ('npm run build' and the rest) outside of the container and have infrequent changes, you could use a simple shell script to wrap the two commands.
If you're doing more frequent iterative development, you might consider using a task runner (Grunt, maybe?) as a container service (or running it locally).
If you want to do the task running/building inside of Docker, you might look at docker-compose. The exact details of how to set this up would require more detail about your workflow, but docker-compose makes it relatively easy to define and link multiple services in a single file, and to start and stop them with a simple set of commands.
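Such a wrapper shell script could be as small as this sketch (the projectname tag is taken from the question; the steps are wrapped in a function only so they read as one unit):

```shell
#!/bin/sh
# Sketch of a build.sh wrapping the two manual steps from the question; the
# && stops before the image build if the npm build fails, so a stale ./dist
# never ends up in the image. `projectname` is the tag from the question.
build_all() {
  npm run build &&
  docker build -t projectname .
}

# Run it with: build_all
```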
I'm working on a Docker container built on Phusion's baseimage which needs to have a number of services only started on demand. I'd like these services to remain as runit services, I'd just like them to not automatically start on boot.
As seen in their documentation, you can easily add a service by creating a folder in /etc/service with the name of your service, e.g. /etc/service/jboss. Next, you must create (and chmod +x) a file in that service directory called run, which will execute the startup of your service.
How can I do this and ensure that the service will not start on boot? The goal is still to be able to do sv start jboss, but to not have it start on boot.
Add your services to /etc/sv/<SERVICE_NAME>/ and add the run executable just like you are doing now. When you are ready to run the service, simply symlink it to /etc/service and runit will pick it up and start running it automatically.
Here's a short (non-optimized) Dockerfile that shows a disabled service and an enabled service. The enabled service will start at Docker run. The disabled service will not start until it is symlinked to /etc/service, at which time runit will start it within five seconds.
FROM phusion/baseimage
RUN mkdir /etc/sv/disabled_service
ADD disabled_service.sh /etc/sv/disabled_service/run
RUN chmod 700 /etc/sv/disabled_service/run
RUN mkdir /etc/sv/enabled_service
ADD enabled_service.sh /etc/sv/enabled_service/run
RUN chmod 700 /etc/sv/enabled_service/run
RUN ln -s /etc/sv/enabled_service /etc/service/enabled_service
CMD ["/sbin/my_init"]
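To then enable the disabled service at runtime, as the answer describes, the symlink can be created from inside the running container. A small sketch (enable_service is an illustrative wrapper name; with the default paths it is simply 'ln -s /etc/sv/NAME /etc/service/NAME'):

```shell
#!/bin/sh
# Sketch: enable a prepared runit service on demand by symlinking it into the
# scan directory. enable_service is an illustrative name; with the default
# paths this is just `ln -s /etc/sv/NAME /etc/service/NAME`.
enable_service() {
  name="$1"
  sv_dir="${2:-/etc/sv}"
  scan_dir="${3:-/etc/service}"
  ln -s "$sv_dir/$name" "$scan_dir/$name"
}

# Inside the container: enable_service disabled_service
# runit notices the new link within about five seconds and starts the service;
# `sv start disabled_service` forces an immediate start.
```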
With phusion/baseimage:0.9.17 (not sure in which version it was introduced) you can bake RUN touch /etc/service/jboss/down into your Dockerfile. It prevents runit from starting the service on boot, and you're still able to sv start jboss later.
I'm looking at exactly the same problem (when running Cassandra in a container) and I haven't found a clean answer. Here are the two hacky ways I've come up with:
- Have an early runlevel script that moves a file in and out of run depending on whether you want something to start at boot.
- (Mis)use one of the service control commands for runit to actually start your service, and use a dummy run command to bypass the automatic start.
Both methods are clearly less than ideal, but they've worked for some purposes.