Here is my simple Dockerfile:
FROM java:8
EXPOSE 4000
Now when I run it using the following command:
sudo docker run --name hello dockerfile
and then do docker ps -a, it shows the status as exited. I just want to keep this container up and running so I can ssh into it, transfer files, and so on. It looks like containers are mainly used to run servers; am I correct?
You can at least keep your container up with something like docker run -d hello sleep infinity, but as René M said, you should give your Dockerfile something to do via CMD or ENTRYPOINT; see the docs:
https://docs.docker.com/engine/reference/builder/#cmd
and
https://docs.docker.com/engine/reference/builder/#entrypoint
That is really simple.
Your container is not running anything that lasts long: it starts, has nothing to do, and stops.
What you can do is:
Run the container in interactive mode with an attached tty. This way your console enters the container after it starts, and the running tty gives the container something to do, which prevents it from stopping. You can then work inside the container, for example installing an application. Work done this way would normally be lost after stopping the container, but you can run docker commit on the container, which makes your changes persistent (see the sketch below).
docker run -i -t --name hello dockerfile
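For example, after installing something during that interactive session, a quick sketch of making it persistent (the image name hello-snapshot is just an example):
docker commit hello hello-snapshot    # save the container's filesystem as a new image
docker run -i -t hello-snapshot /bin/bash    # containers from this image include your changes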
Enhance your Dockerfile with something useful, like copying an application into the container, and provide a CMD command to run when the container starts.
After this, the container will last as long as your CMD command runs. If the command is a server or daemon application, the container will run indefinitely and will only stop when you stop it.
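For the second option, a minimal Dockerfile sketch building on the question's image (the jar path is an assumption; point it at your own application):
FROM java:8
EXPOSE 4000
# hypothetical build artifact; copy your own application here
COPY target/app.jar /app/app.jar
# the container lives exactly as long as this foreground process runs
CMD ["java", "-jar", "/app/app.jar"]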
Related
I'm fairly new to Docker. I have a long Dockerfile that I inherited from a previous developer; it has many errors and I'm trying to get it back to a working state. I commented out most of the file except for just the first line:
FROM ubuntu:14.04
I did the following:
docker build -t pm . to build the image - this works because I can see the image when I execute docker images
docker run <image-id> returns without error or any message. Now I'm expecting the container to be created from the image and started, but when I do docker ps -a it shows the container has exited:
CONTAINER ID   IMAGE          COMMAND       CREATED              STATUS                          PORTS   NAMES
b05f9727f516   f216cfb59484   "/bin/bash"   About a minute ago   Exited (0) About a minute ago           lucid_shirley
Not sure why I can't get a running container and why it keeps stopping after the docker run command.
Executing docker logs <container_id> displays nothing; it just returns without any output.
Your Docker image doesn't actually do anything; a container stops when it finishes its job. Since there is no foreground process running here, the container starts and then immediately stops.
To confirm that your container has no other issues, try putting the code below into a docker-compose.yml (in the same folder as the Dockerfile) and run docker-compose up; you will then see your container running without exiting.
version: '3'
services:
  my-service:
    build: .
    tty: true
Please have a look at the official Docker tutorial; it will guide you through how to work with Docker.
Try
docker run -it <image> /bin/bash
to run a shell inside the container.
That won't do much for you, but that'll show you what is happening: as soon as you exit the shell, it will exit the container too.
Your container basically doesn't do anything: it has an Ubuntu image but no ENTRYPOINT or CMD instruction to run something.
Containers are ephemeral when run: they execute a single command and exit when that command finishes.
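A short transcript sketch of that behavior, using the pm image from the question (the container prompt is illustrative):
$ docker run -it pm /bin/bash
root@b05f9727f516:/# exit    # leaving the shell ends the container's only process...
$ docker ps                  # ...so the container no longer appears as running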
Docker containers can be categorized in the following way:
Task-based: when the container starts, it begins processing; once it completes the process, it exits.
Background containers: they keep running and wait for requests.
Since you did not provide your Dockerfile, I assume it contains only one statement:
FROM ubuntu:14.04
Your build statement creates an image with the name pm.
Now you run
docker run pm
It will start the container, which stops immediately since you did not provide any entry point.
Now try this
In one command prompt or terminal, run:
docker run -it pm /bin/bash
Open another terminal or command prompt.
docker ps (now you will see there is one running container).
If you want to see a container that runs continuously, use the following image (this is just an example):
docker run -d -p 8099:80 nginx
The line above runs a container from the Nginx image, and when you open http://localhost:8099 in your browser, you can see the response.
Docker containers are closely tied to the process they run. That process is specified by the CMD part of the Dockerfile and runs as PID 1. If you kill it, your container is killed. If you don't have one, your container will stop instantly. In your case, you have to override your CMD. You can do that with a simple docker run -it ubuntu:18.04 bash; -it is mandatory since it allows stdin to be attached to your container.
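You can see this PID 1 behavior for yourself; a quick sketch:
$ docker run -it ubuntu:18.04 bash
root@<container-id>:/# echo $$    # the shell's own PID inside the container
1
root@<container-id>:/# exit       # PID 1 exits, so the container stops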
Have fun with docker.
Each instruction in a Dockerfile creates a layer of the image and performs some task. Your Dockerfile only loads Ubuntu, which completes within a fraction of a second when you run the container, and then exits since its process has finished. So if you want your container to run all the time, there must be a foreground process running inside it.
As a test, if you run docker run <imageid> echo hi and it returns the output, your container is fine.
I was wondering if anyone knew if it is possible to know when a docker CMD is done executing?
I initially tried putting an ENTRYPOINT command after the CMD, but it runs immediately when you run the Docker container.
Also, if this can only be done with docker-compose, that would be fine as well; is there a way to know when the command: is finished?
The container stops and exits once the CMD has finished running.
You can use:
$ docker wait [container name/id]
to wait on a container to stop. If the container is already stopped, this command will return immediately. Otherwise, it'll wait until the container finishes its work, or is otherwise stopped.
From https://docs.docker.com/engine/reference/commandline/wait/
Block until one or more containers stop, then print their exit codes
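A small usage sketch (the image name my-job is hypothetical):
docker run -d --name job my-job    # start the container detached
status=$(docker wait job)          # blocks until the CMD finishes and the container exits
echo "CMD finished with exit code $status"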
The Docker CMD will not be done until the container is stopped or killed. The CMD instruction is a way to start the main process that will run inside the container, and this process keeps running until the container is stopped or killed.
Inside the Dockerfile, it doesn't matter where you put the CMD or ENTRYPOINT instruction. When you include both an ENTRYPOINT and a CMD instruction in the same Dockerfile, the CMD will be appended to the ENTRYPOINT command as arguments.
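A minimal sketch of that append behavior (echo stands in for a real entrypoint):
ENTRYPOINT ["echo"]
CMD ["hello"]
# docker run image         -> runs `echo hello` (CMD supplies default arguments)
# docker run image world   -> runs `echo world` (command-line arguments replace CMD)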
The first two answers are indeed correct but don't directly answer your question. Once the CMD finishes, the container exits, but it still exists on your host until it's removed.
Assuming you started the docker run or docker-compose up with a -d so that they run in the background (detached):
docker container ps only shows running containers, so once yours has exited, only docker container ps -a will show it.
docker-compose ps shows the status of all compose services, including stopped containers.
If command "docker run ubuntu bash" the container won't last.
but if I command "docker run -it ubuntu bash"
the container will make a pseudo-tty and keep this container alive.
my question is
is there any way I can make a Dockerfile for building an image based on ubuntu/centos then I just need to command "docker run my-image" and
the container will last.
apologize for my poor english, I don't know if my question is clear enough.
thanks for any response
There are three ways to run containers:
task containers - do one thing and then exit, like docker run ubuntu ls /
interactive containers - open a connection to the container with -it, like docker run -it ubuntu bash
background containers - keep a container running detached in the background with -d, like docker run -d ubuntu:14.04 ping localhost
Docker keeps the container running as long as there is an active process in the container. The first container exits when the ls command completes. The second container will exit when you exit the bash session. The third container will stay running as long as the ping process keeps running (note that ping has been removed from the recent Ubuntu images, which is why the example specifies 14.04).
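To make docker run my-image alone keep a container alive, a hedged Dockerfile sketch (sleep infinity works on current Ubuntu images):
FROM ubuntu:16.04
# a long-lived foreground process as PID 1 keeps the container running
CMD ["sleep", "infinity"]
Build and run it detached:
docker build -t my-image .
docker run -d my-image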
In practice, to start a container I do:
docker run a8asd8f9asdf0
If that's the case, what does
docker start
do?
In the manual it says
Start one or more stopped containers
This is a very important question and the answer is very simple, but fundamental:
Run: create a new container from an image, and execute the container. You can create N clones of the same image. The command is:
docker run IMAGE_ID and not docker run CONTAINER_ID
Start: Launch a container previously stopped. For example, if you had stopped a database with the command docker stop CONTAINER_ID, you can relaunch the same container with the command docker start CONTAINER_ID, and the data and settings will be the same.
run runs an image
start starts a container.
The docker run doc does mention:
The docker run command first creates a writeable container layer over the specified image, and then starts it using the specified command.
That is, docker run is equivalent to the API /containers/create then /containers/(id)/start.
You do not run an existing container; you docker exec into it (since Docker 1.3).
You can restart an exited container.
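Putting it together, a lifecycle sketch (redis is just an example image):
docker run -d --name db redis    # create a new container from the image and start it
docker stop db                   # stop it; the container still exists
docker start db                  # restart the same container, data and settings intact
docker exec -it db sh            # execute a command inside the running container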
Explanation with an example:
Consider that you have a game (ISO) image on your computer.
When you run it (mount the image as a virtual drive), a virtual drive is created with all the game contents, and the game installation file is launched automatically. [Running your Docker image: creating a container and then starting it.]
But when you stop it (similar to docker stop), the virtual drive still exists, but all its processes have stopped. [The container exists until it is deleted.]
And when you start it again (similar to docker start), the game files resume execution from the virtual drive. [Starting the existing container.]
In this example, the game image is your Docker image and the virtual drive is your container.
The run command creates a container from the image and then starts the root process in that container. Running it with the --rm flag saves you the trouble of removing the useless dead container afterward and lets you ignore the existence of docker start and docker rm altogether.
The run command does a few different things:
docker run --name dname image_name bash -c "whoami"
Creates a container from the image. At this point the container has an ID, may have a name if one was given, and will show up in docker ps.
Starts/executes the root process of the container. In the code above, that would execute bash -c "whoami". If one runs docker run --name dname image_name without a command to execute, the container goes into the stopped state immediately.
Once the root process is finished, the container is stopped. At this point it is pretty much useless: one cannot execute anything anymore or resurrect the container. There are basically two ways out of the stopped state: remove the container, or create a checkpoint (i.e. an image) out of the stopped container to run something else. One has to run docker rm before launching another container under the same name.
How do you remove the container automatically once it has stopped? Add the --rm flag to the run command:
docker run --rm --name dname image_name bash -c "whoami"
How do you execute multiple commands in a single container? By preventing the root process from dying. This can be done by running some do-nothing command at start with the -d (detach) flag and then using docker exec to run the actual commands:
docker run --rm -d --name dname image_name tail -f /dev/null
docker exec dname bash -c "whoami"
docker exec dname bash -c "echo 'Nnice'"
Why do we need docker stop then? To stop the lingering container that we launched in the previous snippet with the never-ending command tail -f /dev/null.
daniele3004's answer is already pretty good.
Just a quick and dirty formula for people like me who mix up run and start from time to time:
docker run [...] = docker create [...] + docker start [...] (with a docker pull [...] first if the image is missing locally)
It would have been wiser to name the command "new" instead of "run".
Run creates a container instance of an existing (or downloadable) image and starts it.
I've seen a bunch of tutorials that seem to do the same thing I'm trying to do, but for some reason my Docker containers exit. Basically, I'm setting up a web server and a few daemons inside a Docker container. I do the final parts of this through a bash script called run-all.sh that I run through CMD in my Dockerfile. run-all.sh looks like this:
service supervisor start
service nginx start
And I start it inside of my Dockerfile as follows:
CMD ["sh", "/root/credentialize_and_run.sh"]
I can see that the services all start up correctly when I run things manually (i.e. getting onto the image with -i -t /bin/bash), and everything looks like it runs correctly when I run the image, but it exits once it finishes starting up my processes. I'd like the processes to run indefinitely, and as far as I understand, the container has to keep running for this to happen. Nevertheless, when I run docker ps -a, I see:
➜ docker_test docker ps -a
CONTAINER ID   IMAGE                      COMMAND                 CREATED         STATUS                     PORTS   NAMES
c7706edc4189   some_name/some_repo:blah   "sh /root/run-all.sh"   8 minutes ago   Exited (0) 8 minutes ago           grave_jones
What gives? Why is it exiting? I know I could just put a while loop at the end of my bash script to keep it up, but what's the right way to keep it from exiting?
If you are using a Dockerfile, try:
ENTRYPOINT ["tail", "-f", "/dev/null"]
(Obviously this is for dev purposes only; you shouldn't need to keep a container alive unless it's running a process, e.g. nginx.)
I just had the same problem and found out that if you run your container with the -t and -d flags, it keeps running.
docker run -td <image>
Here is what the flags do (according to docker run --help):
-d, --detach=false Run container in background and print container ID
-t, --tty=false Allocate a pseudo-TTY
The most important one is the -t flag. -d just lets you run the container in the background.
This is not really how you should design your Docker containers.
When designing a Docker container, you're supposed to build it such that there is only one process running (i.e. you should have one container for Nginx, and one for supervisord or the app it's running); additionally, that process should run in the foreground.
The container will "exit" when the process itself exits (in your case, that process is your bash script).
However, if you really need (or want) to run multiple services in your Docker container, consider starting from "Docker Base Image", which uses runit as a pseudo-init process; runit stays in the foreground while your other processes (such as Nginx and Supervisor) do their thing.
They have substantial docs, so you should be able to achieve what you're trying to do reasonably easily.
You can also run plain cat without any arguments, as mentioned by @Sa'ad, to simply keep the container alive [actually doing nothing but waiting for user input] (Jenkins' Docker plugin does the same thing).
The reason it exits is that the shell script runs first as PID 1, and when it completes, PID 1 is gone; Docker only runs while PID 1 is alive.
You can use supervisor to do everything; when run with the -n flag, it is told not to daemonize, so it stays as the first process:
CMD ["/usr/bin/supervisord", "-n"]
And your supervisord.conf:
[supervisord]
nodaemon=true
[program:startup]
priority=1
command=/root/credentialize_and_run.sh
stdout_logfile=/var/log/supervisor/%(program_name)s.log
stderr_logfile=/var/log/supervisor/%(program_name)s.log
autorestart=false
startsecs=0
[program:nginx]
priority=10
command=nginx -g "daemon off;"
stdout_logfile=/var/log/supervisor/nginx.log
stderr_logfile=/var/log/supervisor/nginx.log
autorestart=true
Then you can have as many other processes as you want and supervisor will handle the restarting of them if needed.
That way you could use supervisord in cases where you might need nginx and php5-fpm and it doesn't make much sense to have them apart.
Motivation:
There is nothing wrong with running multiple processes inside a Docker container. If one likes to use Docker as a lightweight VM, so be it. Others like to split their applications into microservices. Me thinks: a LAMP stack in one container? Just great.
The answer:
Stick with a good base image like the phusion base image. There may be others; please comment.
And this is yet another plea for supervisor, because the phusion base image provides supervisor besides some other things like cron and locale setup: stuff you like to have set up when running such a lightweight VM. For what it's worth, it also provides ssh connections into the container.
The phusion image itself will just start and keep running if you issue this basic docker run statement:
moin@stretchDEV:~$ docker run -d phusion/baseimage
521e8a12f6ff844fb142d0e2587ed33cdc82b70aa64cce07ed6c0226d857b367
moin@stretchDEV:~$ docker ps
CONTAINER ID   IMAGE               COMMAND           CREATED          STATUS
521e8a12f6ff   phusion/baseimage   "/sbin/my_init"   12 seconds ago   Up 11 seconds
Or dead simple:
If a base image is not for you: for a quick CMD to keep the container running, I would suggest something like this for bash:
CMD exec /bin/bash -c "trap : TERM INT; sleep infinity & wait"
Or this for busybox:
CMD exec /bin/sh -c "trap : TERM INT; (while true; do sleep 1000; done) & wait"
This is nice, because it will exit immediately on a docker stop.
Just plain sleep or cat will take a few seconds before the container is forcefully killed by docker.
Updates
As response to Charles Desbiens concerning running multiple processes in one container:
This is an opinion, and the docs point in this direction. A quote: "It's ok to have multiple processes, but to get the most benefit out of Docker, avoid one container being responsible for multiple aspects of your overall application." It is certainly much more powerful to divide your complex service into multiple containers. But there are situations where going the one-container route can be beneficial, especially for appliances. The GitLab Docker image is my favourite example of a multi-process container: it makes deployment of this complex system easy, there is no way to misconfigure it, and GitLab retains all control over their appliance. Win-win.
Make sure that you add daemon off; to your nginx.conf, or run it with CMD ["nginx", "-g", "daemon off;"] as per the official nginx image.
Then use the following to run supervisor as a service and nginx as the foreground process, which prevents the container from exiting:
service supervisor start && nginx
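In Dockerfile form, a sketch of that combination (shell form, assuming daemon off; is not already set in nginx.conf):
# start supervisor in the background, then keep nginx in the foreground as the main process
CMD service supervisor start && nginx -g "daemon off;"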
In some cases you will need to have more than one process in your container, so forcing the container to have exactly one process won't work and can create more problems in deployment.
So you need to understand the trade-offs and make your decision accordingly.
Since Docker Engine v1.25 there is an option called init.
Docker Compose supports it as of file format version 3.7.
So my current CMD, when running a container that should run indefinitely, is:
CMD ["sleep", "infinity"]
and then run it using:
docker build -t app .
docker run --rm --init app
cf.:
rm docs and init docs
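For Compose, a minimal sketch of the same option (the service name app is an example):
version: '3.7'
services:
  app:
    build: .
    init: true    # run an init process as PID 1 inside the container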
Capture the PID of the nginx process in a variable (for example $NGINX_PID) and at the end of the entrypoint file do:
wait $NGINX_PID
That way, your container runs as long as nginx is alive; when nginx stops, the container stops as well.
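A hedged sketch of such an entrypoint script (the filename and setup are assumptions):
#!/bin/sh
# start nginx in foreground mode, backgrounded from the script's point of view
nginx -g "daemon off;" &
NGINX_PID=$!
# block here: the entrypoint (PID 1) lives exactly as long as nginx does
wait $NGINX_PID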
Along with having something along the lines of ENTRYPOINT ["tail", "-f", "/dev/null"] in your Dockerfile, you should also run the container with the -td option. This is particularly useful when the container runs on a remote machine. Think of it as having ssh'ed into a remote machine that has the image and started the container: when you exit the ssh session, the container will be killed unless it was started with the -td option. A sample command for running your image would be: docker run -td <any other additional options> <image name>
This holds true for Docker version 20.10.2.
There are some cases during development when there is no service yet, but you want to simulate one and keep the container alive.
It is very easy to write a bash placeholder that simulates a running service:
while true; do
  sleep 100
done
You can replace this with something more serious as development progresses.
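Wired into the image, a sketch (placeholder.sh holding the loop above; the filename is just an example):
COPY placeholder.sh /placeholder.sh
CMD ["/bin/bash", "/placeholder.sh"]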
How about using the supervise form of service, if available?
service YOUR_SERVICE supervise
Once supervise is successfully running, it will not exit unless it is killed or specifically asked to exit.
This saves having to create a supervisord.conf.