So on my server, I run my docker container as a daemon process via:
docker run -p 80:80 -td example
It seems to work fine - for now!
But what if I restart my server, or my docker container crashes? Then it no longer works.
What is the best/conventional/standard way to keep my docker container running?
Thanks!
What you are looking for is an orchestrator. An orchestrator will manage your container life cycle for you. You might want to try the Docker orchestrator, Swarm. You can also check Kubernetes or Mesos.
You could also use Docker Compose to make things easier.
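As a rough sketch of the Compose route (the service name web is arbitrary; the image and port mapping mirror the question), a docker-compose.yml could look like this:
version: "3"
services:
  web:
    image: example
    ports:
      - "80:80"
    restart: always
Then docker-compose up -d starts it in the background, and restart: always gives you the same behaviour as the --restart flag described in the next answer.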
As #wassim-dif pointed out, you might want to use an orchestrator.
If you just want your docker container to restart automatically in case of failure, and when you restart your server, then you need to run it with the --restart flag, for example:
docker run -p 80:80 -td --restart=always example
This way, your container will restart automatically.
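For reference, the --restart flag takes a few other policies besides always; for example, with the same image:
docker run -p 80:80 -td --restart=unless-stopped example
docker run -p 80:80 -td --restart=on-failure:5 example
unless-stopped also survives reboots but stays down if you stop the container yourself, while on-failure:5 only restarts after a non-zero exit and gives up after five attempts.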
Related
I am reading a docker run command where it maps /var/run/docker.sock
like:
docker run -it --net=host --rm -v /var/run/docker.sock:/var/run/docker.sock theimage /bin/bash
Why would the container need access to the socket? (This article says it is a very bad idea.)
What would be one case where the container needs access to the socket?
It is only necessary when the container itself needs to invoke the Docker daemon, for example in order to create and run an inner container.
For example, in my CI chain, Jenkins builds a docker image in which the build and test process runs. Inside it we need to create an image to test and then submit it to K8s. In that situation, when Jenkins builds the pipeline container, it passes it the docker socket so that the container can create other containers using the host server's docker daemon.
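As a rough sketch of that setup (my-build-image is a placeholder for a build image that has the docker CLI installed, and the source to build is assumed to be available inside it), mounting the socket lets the inner docker build go through the host daemon:
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock my-build-image docker build -t myapp:test .
The build command runs inside the container, but the resulting myapp:test image ends up in the host daemon's image store, next to images built directly on the host.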
I have an app that launches a docker container and automates a few of the routines.
Now I have dockerized this app, and it is not able to talk to the other containers over localhost. I tried setting
--network host
when launching the container, and now I am not able to access the containerized webapp over localhost:<port>.
Any pointers?
localhost won't work here. Suppose you are running a VM and try to talk to your host or to other VMs running on your machine. If you call localhost from one of the VMs, it refers to that VM only, not to your host. So you can't talk from one VM to another by calling localhost. Docker works the same way with regard to localhost. You have two options:
Use a network
Create a network and add all the containers to that network. This is the approach Docker now recommends.
docker network create <your-network-name>
docker run --network <your-network-name> --name <container-name1> <image>
docker run --network <your-network-name> --name <container-name2> <image>
Then use the container name (container-name1) to reach that service from the other service (container-name2).
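For example, assuming <container-name1> runs a web server on port 80 and curl is available in the other container's image, you can check the connectivity with:
docker exec <container-name2> curl http://<container-name1>:80
Docker's embedded DNS resolves the container name to its IP address on that user-defined network.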
Use --link option
Or you could use the --link option, which is a legacy feature of Docker. The Docker docs say that unless you have a specific reason to use it, you should not use --link anymore.
docker run --name <container1> <image>
docker run --name <container2> --link <container1> <image>
Then container2 can talk to container1 by name (legacy links are one-directional). You could use this container name in places like the DB host, etc.
Did you try creating a common bridge network and attaching your containers to the same network?
Create the network:
docker network create networkname
Then, in the docker run command, add the switch --network=networkname.
I figured it out later, after going through a lot of other documents.
Step 1: Install Docker inside the container. I added the following line to my Dockerfile:
RUN curl -sSL https://get.docker.com/ | sh
Step 2: Provide the volume mapping in the docker run command:
-v /var/run/docker.sock:/var/run/docker.sock
Now the host's docker commands are accessible from within my current container, and without changing --network for the current container I am able to access the other containers over localhost.
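A quick sanity check once the container is running (replace <container> with your container's name or ID):
docker exec <container> docker ps
If the socket is mounted correctly, this lists the containers running on the host, not just the current one.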
On my host machine, I have installed Docker. Then I pulled a Jenkins image.
I want to run that image as a daemon service, like the other services on my host machine that start automatically after every reboot. And how can I keep the Jenkins port fixed (like 8080) in my Docker setup?
docker run -d --restart always -p 8080:8080 -p 50000:50000 -v /your/home:/var/jenkins_home jenkins
-d: runs the container in the background.
--restart always: makes the container always restart (unless manually stopped), so it will start automatically at boot.
The rest of the arguments are from the jenkins image documentation, you may need to adapt your port mapping and volume path.
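Once the container is up, you can check it and, on a first run, read the initial admin password that the jenkins image writes into the Jenkins home (replace <container_id> with the ID shown by docker ps):
docker ps
docker exec <container_id> cat /var/jenkins_home/secrets/initialAdminPassword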
I have some docker containers running in my docker environment (on a CentOS VM) which need Docker inside, so I mount /var/run/docker.sock inside the containers.
Now I'm creating /etc/default/docker in which I put
DOCKER_OPTS="-H tcp://xx.xx.xx.xx:2376"
But now my question is: which IP is xx.xx.xx.xx? Is it the IP of the host or the IP of a container? Also, is this the safest way to let a docker container use the socket (i.e. Docker in Docker)?
Running Docker within Docker is not trivial, and you should have a good reason for doing it.
The last time I did that, I was using dind (Docker in Docker) and had to mount the socket (/var/run/docker.sock) in combination with the --privileged flag. However, things might have changed now (see https://github.com/docker/docker/pull/15596) and it should be possible to run it without the socket mount:
docker run --privileged -d docker:dind
So be sure to check out this comprehensive guide at https://hub.docker.com/_/docker/
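As a quick smoke test of that approach (the container name dind-test is arbitrary; on current images the client inside the container defaults to the inner daemon's local socket):
docker run --privileged -d --name dind-test docker:dind
docker exec dind-test docker ps
The second command talks to the daemon running inside dind-test, so it shows that daemon's (initially empty) container list rather than the host's.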
Working with Docker in Docker can be tricky. I would recommend using the official Docker image with the dind tag. You shouldn't need to specify DOCKER_HOST in the options, as it will be correctly configured. For example, running:
docker run -ti --name docker -v /var/run/docker.sock:/var/run/docker.sock --privileged docker:dind sh
Will drop you into a shell inside the container. Then if you run docker ps you should see a list of the containers running on the host machine. Note that the --privileged flag is required in this case, as we are accessing the Docker daemon outside the container.
Hope this helps!
Dylan
Edit
Drop the --privileged flag from the above command due to the security issues highlighted by Alexander in the comments. You can also drop the dind tag, as it's not required.
Is there any way to stop a docker container which was started with --restart=always, like the following:
sudo docker run -it --restart=always <image_id>
Here's the mighty eagle that docker has recently included. :D
You can update the docker container.
Use sudo docker update --restart=no <container_id> to update the --restart flag of the container.
Now you can stop the container.
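If several containers were started with --restart=always, the same trick works in bulk (note that this touches every running container):
sudo docker update --restart=no $(sudo docker ps -q)
sudo docker stop $(sudo docker ps -q)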
You should be able to just use docker stop and then docker rm to make sure the container doesn't restart when the daemon restarts.
Your question is also an issue on the Docker GitHub, and someone has made some comments about how to solve it here.
I'm not sure if it's intended behavior to restart a stopped container on daemon restart... but for sure docker rm would be all that is needed, no need to remove the image.
If you use docker stop or docker kill, you're manually stopping the container, so it will not restart. You can run some tests of the restart policies: restart the docker daemon, reboot your server, or use a CMD inside a container that simply exits...
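A quick way to check which restart policy a container actually has:
docker inspect -f '{{ .HostConfig.RestartPolicy.Name }}' <container_id>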
See this answer for more details:
https://serverfault.com/a/884823/381420
TL;DR
Also check Docker Swarm to see whether any stacks are spinning up containers there. Simply run docker stack ls followed by docker stack rm <stack_name>.
Longer version
This is not exactly an answer to your question, but I had a very similar issue where containers kept spinning up even after I ran docker update --restart=no <container_id>, docker stop <container_id> and docker rm <container_id>. These were some old containers, so I had no clue how I had created them.
After some Googling, I realized that it was a docker swarm stack that kept spinning up the containers. By running docker stack ls followed by docker stack rm <stack_name>, I was able to stop the auto spin-up of the containers and thus remove them entirely.