Container exits after script is run in Cloud Run

I have a Dockerfile that runs a Python script to process a dataset and store it in GCS. I've uploaded the image to GCR, and I can run it with Cloud Run. It runs fine, except that when the script finishes executing, the container exits.
This causes the service to fail.
How do I ensure that when the container is scaled down to zero it doesn't report that the service failed?
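For what it's worth, a Cloud Run service expects a container that keeps listening for HTTP requests, while a container that runs to completion and exits fits Cloud Run jobs instead. A minimal sketch of the jobs route, with made-up project and job names:

    # Hypothetical: deploy the one-shot processing image as a Cloud Run job,
    # which is designed for containers that run to completion and exit
    gcloud run jobs create process-dataset \
        --image gcr.io/my-project/process-dataset \
        --region us-central1

    # execute the job on demand
    gcloud run jobs execute process-dataset --region us-central1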

Related

How can I keep a Docker container running so that it awaits commands?

I have a problem with my Docker container.
My backend image stops running after it has finished installing dependencies.
While my app is running there will be some queries to the backend, but because the container has stopped, these fail.
My Dockerfile for the backend:
Is there a command I need to add so it runs in the background and listens for queries?
I tried using node as a base image, but then it doesn't have R.
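For illustration, a minimal sketch of a Dockerfile that keeps an R backend running and listening (the plumber API, file names, and port are assumptions, not from the original question):

    # Hypothetical example: an R base image serving a plumber API, so the
    # container has a long-running foreground process
    FROM rocker/r-ver:4.3.1
    RUN R -e "install.packages('plumber')"
    COPY api.R /app/api.R
    EXPOSE 8000
    # CMD must start a process that never exits; the container stops when it ends
    CMD ["R", "-e", "plumber::plumb('/app/api.R')$run(host='0.0.0.0', port=8000)"]

The key point is that the container lives only as long as its CMD process, so that process must be the server itself, not the dependency installation.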

Does running docker-compose restart re-run init script

I have a Docker container with an init script, CMD ["init_server.sh"],
which is orchestrated by docker-compose.
Does running docker-compose restart re-run the init script,
or will only docker-compose down followed by docker-compose up
trigger the script to run again?
I imagine whatever the answer is will apply to docker restart as well.
Am I correct?
A Docker container only runs one process, defined by the "entrypoint" and "command" settings (typically from a Dockerfile, you can override them in a docker-compose.yml). Whatever that process does, it will do every time the container starts.
In terms of Docker commands, the Compose commands you show aren't different from their underlying plain-Docker variants. restart is just stop followed by start, so it will re-run the main container process in its existing container with the existing (possibly modified) container filesystem. If you do a docker rm in between these (or docker-compose down) the process starts in a clean container based on the image.
It's typical for an initialization script to check if the initialization it requires has already been done. For things like the standard Docker Hub database images, this works by checking if the data directory is totally empty; initialization only happens on the very first startup. An init script that runs something like database migrations will generally keep track of which migrations have already been done and won't repeat work.
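As a sketch of that pattern (the marker file, setup step, and server path are assumptions):

    #!/bin/sh
    # Hypothetical init_server.sh: do one-time setup only on first start
    set -e
    if [ ! -f /data/.initialized ]; then
        echo "first start: running one-time initialization"
        /opt/app/setup.sh            # assumed one-time setup step
        touch /data/.initialized     # marker persists across restarts of this container
    fi
    # hand off to the long-running server process
    exec /opt/app/server

On docker-compose restart the marker file is still there, so the setup is skipped; after docker-compose down and up the container filesystem is fresh and the setup runs again (unless /data is a persistent volume).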

How to automatically extract build artifacts from a docker container keeping ownership of the calling process?

I'm writing some automated build scripts which use a docker container as a build environment.
One thing that's been bugging me is finding a way to extract the build artifacts from the container while retaining the user ownership of the calling process.
Usually this is automatic; when a process creates a file, the file is owned by the user running the process. But when a process invokes a Docker container, the container runs as a different user (often root). I see no simple way for the container to run as the same user as the calling process. So if I map a local directory when invoking Docker (docker run --volume $(pwd)/target:/target), then when the build script in the image writes its files, they turn up in the host's build directory owned by root.
The other alternative I can see is to run the container, wait for it to complete, then use docker cp to extract the build artifacts. The trouble with this is that I don't see a way to run a container to completion and then get the ID of the recently created container.
Is there a common way to automatically / programmatically extract build artifacts from a docker container keeping the ownership of the calling process?
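Two common approaches, sketched with assumed image and path names: run the build as the calling user, or capture the container ID from docker create and copy the artifacts out afterwards.

    # Approach 1: run the container as the calling user, so files written
    # to the mounted volume are owned by that user on the host
    docker run --rm \
        --user "$(id -u):$(id -g)" \
        --volume "$(pwd)/target:/target" \
        my-build-image    # hypothetical image name

    # Approach 2: docker create prints the container ID, which solves the
    # "get the ID of the recently created container" problem
    id=$(docker create my-build-image)
    docker start --attach "$id"
    docker cp "$id:/target/." ./target/
    docker rm "$id"

Note that docker cp creates the copied files on the host with the UID:GID of the user who invoked it, which is exactly the ownership behaviour being asked for.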

How can I run script automatically after Docker container startup without altering main process of container

I have a Docker container which runs a web service. After the container process is started, I need to run a single command. How can I do this automatically, either by using Docker Compose or Docker?
I'm looking for a solution that does not require me to substitute the original container process with a Bash script that runs sleep infinity etc. Is this even possible?
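One common pattern, sketched with assumed service and script names, is to start the stack detached and then exec the one-off command inside the already-running container, leaving the main process untouched:

    # start the service in the background
    docker compose up -d web
    # run the extra command alongside the existing main process
    docker compose exec web /app/post-start.sh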

Docker: dealing with processes that don't end?

I have a docker container that has services running on multiple ports.
When I try to start one of these processes mid-way through my Dockerfile it causes the build process to stall indefinitely.
RUN /opt/webhook/webhook-linux-amd64/webhook -hooks /opt/webhook/hooks.json -verbose
So the program runs as it should, but the build never moves on.
I've tried adding & to the end of the command to tell Bash to run the next step in parallel, but this causes the service not to be running in the final image. I also tried redirecting the output of the program to /dev/null.
How can I get around this?
You have a misconception here. The commands in the Dockerfile are executed to build the Docker image, before any container is run from it. One type of command in the Dockerfile is RUN, which lets you run an arbitrary shell command whose effects influence the image under creation.
Therefore, the build process waits until the command terminates.
It seems you want to start the service when a container is started from the image. To do so, use the CMD instruction instead. It tells Docker what is supposed to be executed when a container is started.
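A minimal sketch of that change, reusing the webhook command from the question:

    # RUN steps must terminate; use them only for installation and setup
    # ...
    # CMD starts the long-running service when a container is launched
    CMD ["/opt/webhook/webhook-linux-amd64/webhook", "-hooks", "/opt/webhook/hooks.json", "-verbose"]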
