Is there an easy way to automatically run a script whenever I (re)start a container?

I have built a Docker image, copied a script into it, and it executes automatically when I run the image, thanks to this Dockerfile instruction:
ENTRYPOINT ["/path/to/script/my_script.sh"]
(I had to make the script executable with chmod in a RUN instruction to actually get it to run.)
Now, I'm quite new to Docker, so I'm not sure if what I want to do is even good practice:
My basic idea is that I would rather not have to create a new container every time I want to run this script, but instead find a way to re-execute it whenever I (re)start the same container.
So, instead of having to type docker run my_image, accomplishing the same via docker (re)start container_from_image.
Is there an easy way to do this, and does it even make sense from a resource parsimony perspective?

docker run is fairly cheap, and the typical Docker model is generally that you always start from a "clean slate" and set things up from there. A Docker container doesn't have the same set of pre-start/post-start/... hooks that, for instance, a systemd job does; there is only the ENTRYPOINT/CMD mechanism. The way you have things now is normal.
Also remember that you need to delete and recreate containers for a variety of routine changes, with the most important long-term being that you have to delete a container to change the underlying image (because the installed software or the base Linux distribution has a critical bug you need a fix for). I feel like a workflow built around docker build/run/stop/rm is the "most Dockery" and fits well with the immutable-infrastructure pattern. Repeated docker stop/start as a workflow feels like you're trying to keep this specific container alive, and in most cases that shouldn't matter.
From a technical point of view you can think of the container environment and its filesystem, and the main process inside the container. docker run is actually docker create plus docker start. I've never noticed the "create" half of this taking substantial time, but if you're doing something like starting a JVM or loading a large dataset on startup, the "start" half will be slow whether or not it's coupled with creating a new container.
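To make that create/start split concrete, here is a minimal sketch (my_image and demo are placeholder names):
docker create --name demo my_image   # the "create" half: allocates the container and its filesystem
docker start -a demo                 # the "start" half: runs the ENTRYPOINT, attaching to its output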

For the chmod issue you can do something like this:
COPY my_script.sh /path/to/script/my_script.sh
RUN chmod +x /path/to/script/my_script.sh
For the rerun-script issue:
The ENTRYPOINT specifies a command that will always be executed when the container starts.
That holds for both
docker run my_image
and
docker start container_from_image
So whenever your container starts, your ENTRYPOINT command will be executed.
You can refer to this for more detail.
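A quick sketch of that behavior, using the names from the question:
docker run --name container_from_image my_image   # first run: the ENTRYPOINT executes
docker stop container_from_image
docker start -a container_from_image              # same container restarts: the ENTRYPOINT runs again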

Related

Docker: how to disable resetting data after restart

I'm new to Docker. I ran:
docker pull *docker from dockerhub*
docker run *image*
sudo apt-get install nano
And when I restart this image, nano is not installed.
Is it possible to turn off resetting data in a docker container?
The container filesystem is intrinsically temporary. If you docker run -d image twice, the two copies will each start from a fresh copy of the container filesystem and not share anything. There is no option to disable this.
Correspondingly, it is usually a mistake to install software in an interactive shell in a container, since that installation will be lost as soon as the container exits. It's usually unnecessary to install interactive editors like nano or vim, again since they can't make permanent changes. It's better to install your application and only the specific supporting programs it needs in a Dockerfile.
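For example, a minimal sketch of installing what the application needs at build time (the package and file names here are hypothetical):
FROM debian:bookworm-slim
RUN apt-get update \
 && apt-get install -y --no-install-recommends python3 \
 && rm -rf /var/lib/apt/lists/*
COPY app.py /app/app.py
CMD ["python3", "/app/app.py"]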
(There is a Docker command, docker commit, that can create a new image from a container, but this is pretty much never considered a best practice. It's hard to specify things like the command the resulting image should run, a chain of images made this way will grow over time, and it's all but impossible to take security updates from the original image. You may also run afoul of licensing or corporate source-tracking requirements with this approach.)

Mandatory command or entrypoint in docker-compose

We just started moving our app to containers so I am very new to container world.
In our container image we only have the base Linux image with some RPMs installed and some scripts copied into the container. We were thinking that we would not have any command/entrypoint in the image itself. When the container comes up, our deployment job will then run a script inside the container to bring up the services (jetty/hbase/..). That is, container bringup and services bringup are two different steps in the deployment job.
This was working until I was bringing up the container using the docker run/podman run command.
Now, we thought of moving to the docker-compose way. However, when I say "docker-compose up", it complains: "Error: No command specified on command line or as CMD or ENTRYPOINT in this image". That is, while starting a container using the run command it's OK not to have any CMD or ENTRYPOINT, but while starting a container using docker-compose it's mandatory to provide one. Why is that so?
In order to get past that error, we tried putting some simple CMD in the compose file, say /bin/bash. However, with this approach the container exits immediately. I found many Stack Overflow links explaining why this happens, e.g. Why docker container exits immediately. Only if I put CMD as tail -f /dev/null in the compose file does the container stay up (see the sketch below).
Can you please help clarify what the right thing to do is? As mentioned, our requirement is that we want to bring up the container without any services, and then bring up the services separately. Hence we don't have any use case for CMD/ENTRYPOINT.
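A minimal compose sketch of that keep-alive workaround (the service and image names are hypothetical):
services:
  app:
    image: my_base_image
    command: ["tail", "-f", "/dev/null"]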
Container images should be the thing that you deploy, not a thing that you deploy code into; it is considered good practice to have immutable infrastructure (containers, VMs, etc.).
Your build process should probably (!?) generate container images. A container image is hashed (SHA-256) to uniquely identify it.
Whenever your sources change, you should consider generating a new container image. It is a good idea to label container images so that you can tie a specific image (not image name but tagged version) to a specific build so that you can always determine which e.g. commit resulted in which image version.
Corollary: it is considered bad practice to change container images.
One reason for preferring immutable infrastructure is that you will have reproducible deployments. If you have issues in a container version, you know you didn't change it and you know what build produced it and you know what source was used ...
There are other best practices for containers, including that they should contain no state, etc. It's old but seems comprehensive: 10 things to avoid in containers; and there are many analogies to The Twelve-Factor App.
(Too!?) often, containers use CMD to start their process but, in my experience, it is better to use ENTRYPOINT. Both can be overridden, but CMD is trivially overridden by any arguments passed to docker run, while ENTRYPOINT requires an explicit --entrypoint flag. In essence, if you use CMD, your users must remember to repeat your process's command whenever they want to pass it command-line args, whereas ENTRYPOINT containers act more like running a regular old binary.
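A minimal sketch of the difference (the image name demo is a placeholder):
Dockerfile:
FROM alpine:3.19
ENTRYPOINT ["echo"]
CMD ["hello"]
Build and run:
docker build -t demo .
docker run demo                                    # prints "hello" (ENTRYPOINT plus default CMD)
docker run demo goodbye                            # arguments replace only CMD: prints "goodbye"
docker run --entrypoint cat demo /etc/os-release   # replacing the ENTRYPOINT takes an explicit flag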

Is there a point in Docker start?

So, is there a point in the command "start"? Like in "docker start -i alpineContainer".
If I do this, I can't really do anything with the alpine inside the container; I would have to do a run and create another container with the "-it" flags and "sh" after (or "/bin/bash", I don't remember correctly right now).
Is that how it will go most of the time? Delete and rebuild containers, and use "-it" if you want to do stuff in them? Or does it depend more on the Dockerfile and how you define the CMD?
I'm new to Docker in general and trying to understand the basics of how to use it. Thanks for the help.
Running docker run/exec with -it means you run the docker container and attach an interactive terminal to it.
Note that you can also run docker applications without attaching to them, and they will still run in the background.
Docker allows you to run a program (which can be bash, but does not have to be) in an isolated environment.
For example, try running the jenkins docker image: https://hub.docker.com/_/jenkins.
This will create a container without your having to attach to it, and you will still be able to use it.
You can also attach to an existing, running container by using docker exec -it [container_name] bash.
You can also use docker logs to peek at the stdout of a certain docker container, without actually attaching to its shell interactively.
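A short sketch of that detached workflow (the container name web is a placeholder):
docker run -d --name web nginx    # run the container in the background
docker logs web                   # peek at its stdout without attaching
docker exec -it web sh            # attach an interactive shell for debugging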
You almost never use docker start. It's only possible to use it in two unusual circumstances:
If you've created a container with docker create, then docker start will run the process you named there. (But it's much more common to use docker run to do both things together.)
If you've stopped a container with docker stop, docker start will run its process again. (But typically you'll want to docker rm the container once you've stopped it.)
Your question and other comments hint at using an interactive shell in an unmodified Alpine container. Neither is a typical practice. Usually you'll take some complete application and its dependencies and package it into an image, and docker run will run that complete packaged application. Tutorials like Docker's Build and run your image go through this workflow in reasonable detail.
My general day-to-day workflow involves building and testing a program outside of Docker. Once I believe it works, then I run docker build and docker run, and docker rm the container once I'm done. I rarely run docker exec: it is a useful debugging tool but not the standard way to interact with a process. docker start isn't something I really ever run.

Dockerfile entrypoint

I'm trying to customize the docker image presented in the following repository
https://github.com/erkules/codership-images
I created a cron job in the Dockerfile and tried to run it with CMD, knowing the Dockerfile for the erkules image has an ENTRYPOINT ["/entrypoint.sh"]. It didn't work.
I tried to create a separate cron-entrypoint.sh, add it in the Dockerfile, and test something like ENTRYPOINT ["/entrypoint.sh", "/cron-entrypoint.sh"], but I also got an error.
I tried to add the cron job to the entrypoint.sh of the erkules image. When I put it at the beginning, the container runs the cron job but doesn't execute the rest of entrypoint.sh; when I put the cron script at the end of entrypoint.sh, the cron job doesn't run but everything above it in entrypoint.sh gets executed.
How can I run both what's in the entrypoint.sh of the erkules image and my cron job at the same time through the Dockerfile?
You need to send the cron command to the background, so either use & or remove the -f (-f means: stay in foreground mode, don't daemonize).
So, in your entrypoint.sh:
#!/bin/bash
# -f keeps cron itself in the foreground; the trailing & runs it
# as a background job so the rest of the script can continue
cron -f &

# the other entrypoint commands here
Edit: I totally agree with @BMitch about the way you should handle multiple processes inside the same container, which is something that's generally not recommended.
See examples here: https://docs.docker.com/engine/admin/multi-service_container/
The first thing to look at is whether you need multiple applications running in the same container. Ideally, the container would only run a single application. You may be able to run multiple containers for different apps and connect them together with the same networks or share a volume to achieve your goals.
Assuming your design requires multiple apps in the same container, you can launch some in the background and run the last in the foreground. However, I would lean towards using a tool that manages multiple processes. Two tools I can think of off the top of my head are supervisord and foreman in go. The advantage of using something like supervisord is that it will handle signals to shutdown the applications cleanly and if one process dies, you can configure it to automatically restart that app or consider the container failed and abend.
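For instance, a minimal supervisord sketch (the package names, paths, and program entries here are assumptions for illustration, not taken from the original image):
Dockerfile:
FROM debian:bookworm-slim
RUN apt-get update \
 && apt-get install -y --no-install-recommends supervisor cron \
 && rm -rf /var/lib/apt/lists/*
COPY supervisord.conf /etc/supervisor/conf.d/app.conf
CMD ["supervisord", "-n"]
supervisord.conf:
[program:cron]
command=cron -f
autorestart=true

[program:main]
command=/entrypoint.sh
autorestart=true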

Is it possible to use a "blank" docker container without any install on it?

I'm new to Docker, and I think I've understood that Docker is an application virtualization tool (as opposed to OS virtualization). I understand from this image that Docker provides a very blank environment with a given file structure and executes on the host kernel. What we need to do is put our application and its dependencies (with no OS) into it to get a very light, portable container for our app.
But it seems there is a dark side of Docker: each Dockerfile begins with a "FROM <image>".
I saw this and this, but I'm not sure I understand. It sounds like Docker is close to a kind of simplified OS virtualizer.
I was interested in the advantage of image size. But if we have to install an OS in each image, my "portable" application will quickly become quite heavy.
Is there really no way to use a "blank" image?
You can start with FROM scratch which is an empty filesystem.
Please see the section on Creating a Base Image if you'd like to spin up your own minimal root file system.
You might be surprised how many dependencies your application actually has on the root file system, and in the end, it is usually more efficient to use one of the standard root file systems in your FROM statement, as Charles Duffy commented above.
empty/Dockerfile:
FROM scratch
WORKDIR /
Build and check the size:
docker build empty/ -t empty
docker images | grep empty
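The usual way to get something useful out of FROM scratch is to copy in a statically linked binary. A sketch, assuming hello.go is a small Go program with no cgo dependencies (the file and stage names are hypothetical):
FROM golang:1.22 AS build
WORKDIR /src
COPY hello.go .
RUN CGO_ENABLED=0 go build -o /hello hello.go

FROM scratch
COPY --from=build /hello /hello
ENTRYPOINT ["/hello"]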
This may be a bit too late, but I just had a use case where I needed to create a bare-bones container that I could launch as part of a multi-container docker-compose setup and then get into via /bin/bash. Keep in mind, a docker container must run a foreground process, and the container exists only for as long as that process is running. So I created this container with just Python in it and copied in a two-line Python script that just makes it sleep. Here's what I did.
1. Create the python script wait_service.py with the following code:
import time
time.sleep(1000)
2. Create the Dockerfile with just the following lines:
FROM python:2.7
RUN mkdir -p /test
WORKDIR /test
COPY wait_service.py /test/
CMD python wait_service.py
3. Build and run the container. Using the container ID, I could then get inside it. Please adjust the sleep time based on how long you want to keep this container alive.
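A lighter-weight variant of the same idea, with no Python needed (a sketch; it assumes an image whose sleep supports "infinity", such as a Debian-based one):
FROM debian:bookworm-slim
CMD ["sleep", "infinity"]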
Your application has to have some underlying OS userland; without one, there is no way for it to start.
I think the most basic one in the Docker index is busybox, so a FROM busybox will give you a very minimal setup.
Docker also does a lot of caching for each of its layers, so every docker container that uses FROM centos:centos7 at the top will share one single copy of the minimal centos7 image.
The base images are very minimalistic, so there is nothing to worry about.
