Run docker with just a copied file inside

I'm building an image that just contains a copied file, with the following Dockerfile:
FROM alpine:3.8
COPY ./somefile /srv/somefile
When I try to docker run the image, it exits immediately; right after docker run I see:
Exited (0) 1 second ago.
I tried adding CMD ["/bin/sh"] or ENTRYPOINT ["/bin/sh"] but it doesn't change anything.
Is it possible to have such a container with just a copied file and keep it up and running until I stop it?

So there is really no problem: you have succeeded in running your container with the file inside. But since you didn't give the container any additional job to do, its main process finished after about a second.
First, you should get acquainted with what counts as a "running" container in Docker terminology. A container is "running" as long as its main process (PID 1) is running. Process exits => container stops. When you want your container to remain running (as a service, for example), you need to keep your main process active.
Second, what is your main process? It is the process launched when the container starts, combined from the ENTRYPOINT and CMD directives (with some rules). These directives are often given default values in the Dockerfile, but you can override them. If you just run docker run <image>, the default values are used; any arguments you provide after <image> override CMD.
So, for alpine you can simply run shell like docker run -it alpine sh. And until you exit the shell, your container is running.
Finally, the -it flags connect your console to the container: -i keeps STDIN open and -t allocates a pseudo-TTY. So your sh process, which is the main process, stays alive until you close the console.
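A minimal sketch of those options, assuming the image from the question was built under the hypothetical tag myimage:

```shell
# Option 1: run an interactive shell as the main process;
# the container stays up until you exit the shell.
docker run -it myimage sh

# Option 2: give the container a long-lived no-op main process
# so it keeps running in the background.
docker run -d --name keepalive myimage tail -f /dev/null

# Then inspect the copied file inside the running container:
docker exec keepalive ls -l /srv/somefile

# Stop and clean up when done:
docker rm -f keepalive
```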

Related

How to handle a container that exits immediately after startup in Docker?

As everyone knows, we can use docker start [dockerID] to start a stopped container.
But what if the container exits immediately after startup? What should I do?
For example, I have a MySQL container that runs without any problems. But then the system went down. The next time I started this container, it told me a file was corrupted, so the container exited immediately.
Now I want to delete this file, but the container cannot be started, so I can't get inside it to delete the file. What should I do?
And if I want to open a bash shell in a container in this state, what should I do?
Delete the container and launch a new one.
docker rm dockerID
docker run --name dockerID ... mysql:5.7
Containers are generally treated as disposable; there are times you're required to delete and recreate a container (to change some networking or environment options; to upgrade to a newer version of the underlying image). The flip side of this is that containers' state is generally stored outside the container filesystem itself (you probably have a docker run -v or Docker Compose volumes: option) so it will survive deleting and recreating the container. I almost never use docker start.
Creating a new container gets you around the limitations of docker start:
If the container exits immediately but you don't know why, docker run it (or docker-compose up it) without the -d option, so it prints its logs to the console.
If you want to run a different command (like an interactive shell) as the main container command, you can do it the same as with any other container:
docker run --rm -it -v ...:/var/lib/mysql mysql:5.6 sh
docker-compose run db sh
If the actual problem can be fixed with an environment variable or other setting, you can add that to the startup-time configuration, since you're already recreating the container anyway.
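Putting those points together, a hypothetical debugging session for the broken MySQL container might look like this (the container name mysql-db and volume name mysql-data are assumptions):

```shell
# 1. Read the logs of the exited container to see why it died:
docker logs mysql-db

# 2. Recreate it with the same data volume, but run a shell instead
#    of mysqld, so you can delete the offending file:
docker run --rm -it -v mysql-data:/var/lib/mysql mysql:5.7 sh

# 3. Once fixed, delete the broken container and start a fresh one:
docker rm mysql-db
docker run -d --name mysql-db -v mysql-data:/var/lib/mysql mysql:5.7
```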

How can I run script automatically after Docker container startup without altering main process of container

I have a Docker container which runs a web service. After the container process is started, I need to run a single command. How can I do this automatically, either by using Docker Compose or Docker?
I'm looking for a solution that does not require me to substitute the original container process with a Bash script that runs sleep infinity etc. Is this even possible?

How to understand Container states

I am trying to understand the life cycle of a container. I downloaded the alpine image and built containers using the "docker container run" command; all of those containers ran and ended up in "Exited" status. When using the "docker container start" command, some of the containers stay in Up status (running) and some exit immediately. Any thoughts on why the behavior around statuses differs? One difference I observed: the containers staying in Up status have file-structure modifications relative to the base image.
Hope I was able to put the scenario in proper context. Help me understand the concept.
The long sequence is as follows:
1. You docker create a container with its various settings. Some settings may be inherited from the underlying image. It is in "created" status; its filesystem exists but nothing is running.
2. You docker start the container. If the container has an entrypoint (Dockerfile ENTRYPOINT directive, docker create --entrypoint option) then that entrypoint is run, taking the command as arguments; otherwise the command (Dockerfile CMD directive, any options after the docker create image name) is run directly. This process gets process ID 1 in the container and the rights and responsibilities that go along with that. The container is in "running" status.
3. The main process exits, or an administrator explicitly docker stops it. The container is in "exited" status.
4. Optionally you can restart a stopped container (IME this is unusual though); go to step 2.
5. You docker rm the stopped container. Anything in the container filesystem is permanently lost, and it no longer shows up in docker ps -a or anywhere else.
Typically you'd use docker run to combine these steps together. docker run on its own does the first two steps together (creates a container and then starts it). If you docker run --rm it does everything listed above.
(All of these commands are identical to the docker container ... commands, but I'm used to the slightly shorter form.)
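The long sequence above can be walked through by hand; this sketch uses a hypothetical container named demo:

```shell
docker create --name demo alpine sleep 5   # status: Created
docker start demo                          # status: Up (sleep is PID 1)
docker wait demo                           # blocks until PID 1 exits, prints its exit code
docker ps -a --filter name=demo            # status: Exited (0)
docker start demo                          # optionally back to Up for another 5 seconds
docker rm -f demo                          # container and its filesystem are gone

# docker run collapses create+start; adding --rm also does the removal:
docker run --rm alpine sleep 5
```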
The key point here is that there is some main process that the container runs. Typically this is some sort of daemon or server process, and generally specified in the image's Dockerfile. If you, for example, docker run ... nginx, then its Dockerfile ends with
CMD ["nginx", "-g", "daemon off;"]
and that becomes the main container process.
In early exploration it's pretty common to just run some base distribution image (docker run --rm -it alpine) but that's not really interesting: the end of the lifecycle sequence is removing the container and once you do that everything in the container is lost. In standard use you'd want to use a Dockerfile to build a custom image, and there's a pretty good Docker tutorial on the subject.
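Tying this back to the lifecycle, a custom image whose main process outlives startup might look like this (the CMD choice is just one illustrative option):

```dockerfile
FROM alpine:3.8
COPY ./somefile /srv/somefile
# Give the container a long-running main process so it stays in
# "running" status until explicitly stopped:
CMD ["tail", "-f", "/dev/null"]
```

Build it with docker build -t myimage . and start it with docker run -d myimage (the tag myimage is hypothetical).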

Difference between Docker Build and Docker Run

If I want to run a python script in my container what is the point in having the RUN command, if I can pass in an argument at build along with running the script?
Each time I run the container I want x.py to be run on an ENV variable passed in during the build stage.
If I were to use Swarm, and the only goal was to run the x.py script, Swarm would only be building nodes rather than building and eventually running, since the CMD and ENTRYPOINT instructions only take effect at run time.
Am I missing something?
The docker build command creates an immutable image. The docker run command creates a container that uses the image as a base filesystem, and other metadata from the image is used as defaults to run that image.
Each RUN line in a Dockerfile adds a layer to the image filesystem in docker. Docker actually performs that task in a temporary container, hence the selection of the confusing "run" term. The only thing preserved from a RUN command is the filesystem change; running processes, changes to environment variables, and shell settings like the current working directory are all lost when the temporary container is cleaned up at the completion of the RUN command.
The ENTRYPOINT and CMD values are used to specify the default command to run when the container is started. When both are defined, the value of the entrypoint is run with the value of the cmd appended as command-line arguments. The value of CMD is easily overridden at the end of the docker run command line, so by using both you can get easy-to-reconfigure containers that run the same command with different user input parameters.
If the command you are trying to run needs to be performed every time the container starts, rather than being stored in the immutable image, then you need to perform that command in your ENTRYPOINT or CMD. This will add to the container startup time, so if the result of that command can be stored as a filesystem change and cached for all future containers being run, you want to make that setting in a RUN line.
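A sketch of the split, assuming a hypothetical x.py from the question:

```dockerfile
FROM python:3.9
WORKDIR /app
COPY x.py .
# RUN executes once, at build time; only its filesystem changes
# persist into the image:
RUN pip install requests

# ENTRYPOINT and CMD execute at container start; CMD supplies
# overridable default arguments:
ENTRYPOINT ["python", "x.py"]
CMD ["--mode", "default"]
```

With this layout, docker run myimage executes python x.py --mode default, while docker run myimage --mode other replaces only the CMD portion (myimage is an assumed tag).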

Understanding Dockerfile CMD/ENTRYPOINT

I'm new to Docker. Trying to build small image with Transmission.
Here is my Dockerfile:
#base image
FROM alpine:latest
#install Transmission
RUN apk update
RUN apk add transmission-daemon
#expose port
EXPOSE 9091
#start app
CMD ["/usr/bin/transmission-daemon"]
Then I start container:
docker run transmission
and it immediately quits. I expected it to stay running, as transmission-daemon should keep running.
I tried ENTRYPOINT also, but the result is the same. However, the next version works as expected:
ENTRYPOINT ["/usr/bin/transmission-daemon"]
CMD ["-h"]
It runs, shows the Transmission help, and quits.
What I am missing about how Docker runs apps inside containers?
Docker keeps a container running as long as the process which the container starts is active. If your container starts a daemon when it runs, then the daemon start script is the process Docker watches. When that completes, the container exits - because Docker isn't watching the background process the script spawns.
Typically your CMD or ENTRYPOINT will run the interactive process rather than the daemonized version, and you let Docker take care of putting the container in the background with docker run -d. (The actual difference between CMD and ENTRYPOINT is about giving users flexibility to run containers from your image in different ways).
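For Transmission that means running the daemon in the foreground; it has a flag for this (check transmission-daemon --help to confirm the exact spelling on your build):

```dockerfile
FROM alpine:latest
RUN apk update && apk add transmission-daemon
EXPOSE 9091
# -f / --foreground keeps the daemon as the container's PID 1,
# so Docker sees the container as still running:
CMD ["/usr/bin/transmission-daemon", "-f"]
```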
It's worth checking the Docker Hub if you're looking at running an established app in a container. There are a bunch of Transmission images on Docker Hub which you can use directly, or check out their Dockerfiles to see how the image is built.
