I currently have 8 containers running across 4 different host machines in my Docker swarm setup.
ubuntu@ip-192-168-1-8:~$ docker service ls
ID NAME MODE REPLICAS IMAGE
yyyytxxxxx8 mycontainers global 8/8 myapplications:latest
Running a ps -a on the service yields the following.
ubuntu@ip-192-168-1-8:~$ docker service ps -a yy
NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
mycontainers1 myapplications:latest ip-192-168-1-5 Running Running 3 hours ago
\_ mycontainers2 myapplications:latest ip-192-168-1-6 Running Running 3 hours ago
\_ mycontainers3 myapplications:latest ip-192-168-1-7 Running Running 3 hours ago
\_ mycontainers4 myapplications:latest ip-192-168-1-8 Running Running 3 hours ago
\_ mycontainers1 myapplications:latest ip-192-168-1-5 Running Running 3 hours ago
\_ mycontainers2 myapplications:latest ip-192-168-1-6 Running Running 3 hours ago
\_ mycontainers3 myapplications:latest ip-192-168-1-7 Running Running 3 hours ago
\_ mycontainers4 myapplications:latest ip-192-168-1-8 Running Running 3 hours ago
My question is: how can I restart all of the containers using the service ID? I don't want to manually log into every node and restart each container.
In the latest stable version of Docker, 1.12.x, it is only possible to restart the containers by updating the service configuration. In Docker 1.13.0, which will be released soon, you can restart the containers even if the service settings have not changed, by specifying the --force flag. If you do not mind using the 1.13.0 RC4, you can do that now.
$ docker service update --force mycontainers
Update: Docker 1.13.0 has been released.
https://github.com/docker/docker/releases/tag/v1.13.0
Pre-Docker 1.13, I found that scaling the service down to 0, waiting for the containers to shut down, and then scaling it back up to the previous level works.
docker service scale mycontainers=0
# wait
docker service scale mycontainers=8
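To know when it is safe to scale back up, you can, for example, watch the task list (using the service name from the example above) until every task has shut down:
# repeat until no task is left in the Running state
docker service ps mycontainers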
If you update the existing service, Swarm will recreate all of its containers. For example, you can simply update a property of the service to achieve a restart.
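One way to do that (just a sketch; the environment variable name here is arbitrary and exists only to change the service spec) is to add a throwaway environment variable, which makes Swarm replace every container:
# FORCE_RESTART has no meaning to the application; changing the spec forces new containers
docker service update --env-add FORCE_RESTART=1 mycontainers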
I am new to Docker and just getting started. I pulled a basic Ubuntu image, started a few containers from it, and then stopped them. When I run the command to list all the Docker containers (even the stopped ones), I get output like this:
> docker container ls -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
099c42011f24 ubuntu:latest "/bin/bash" 6 seconds ago Exited (0) 6 seconds ago sleepy_mccarthy
dde61c10d522 ubuntu:latest "/bin/bash" 8 seconds ago Exited (0) 7 seconds ago determined_rosalind
cd1a6fa35741 ubuntu:latest "/bin/bash" 9 seconds ago Exited (0) 8 seconds ago unruffled_lichterman
ff926b6eba23 ubuntu:latest "/bin/bash" 10 seconds ago Exited (0) 10 seconds ago cool_rosalind
8bd50c2c4729 ubuntu:latest "/bin/bash" 12 seconds ago Exited (0) 11 seconds ago cranky_darwin
My question is: is there a reason why Docker does not delete stopped containers by default?
The examples you've provided show that you're using an Ubuntu container just to run bash. While this is a fairly common pattern while learning Docker, it's not what Docker is used for in production scenarios, which is what Docker cares about and optimizes for.
Docker is used to deploy an application within a container with a given configuration.
Say you spin up a database container to hold information about your application. Then your Docker host restarts for some reason. If stopped containers disappeared by default, that database, and any data stored only inside it, would be gone. That would be a disaster.
It's therefore much safer for Docker to assume that you want to keep your containers, images, volumes, and so on, unless you explicitly ask for them to be removed and opt in to that when you start them, for example with docker run --rm <image>.
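For instance, a quick way to see that opt-in behavior with the same Ubuntu image from the question:
# the container is removed automatically as soon as the command exits
docker run --rm ubuntu echo hello
# so it never shows up here
docker container ls -a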
In my opinion, there are reasons for this. Consider the following scenario:
I build my image and start a container in a production environment. For some reason I stop the current container, make some changes to the image, and run another instance, so a new container with a new name is running.
Then I see that the new container does not work as expected. Because I still have the old container, I can start the old one again and stop the new one, so clients will not face any issues.
But what if containers were automatically deleted if they were stopped?
Simple answer: I would have lost my clients (and possibly even my job) :) and one more person would have joined the unemployed :D
As @msanford mentioned, Docker assumes you want to keep your data, volumes, etc., so you'll probably re-use them when needed.
Since Docker is used to deploy and run applications (even something as simple as WordPress with MySQL, though the setup differs from installing them on shared hosting), it's usually not used just to run bash.
Of course, it's fine to run things like bash or sh while taking your first steps with Docker, to look at the contents of a container.
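If you later decide you do want to clear out your stopped containers, one option (a standard Docker command, not something specific to this answer) is:
# removes all stopped containers after asking for confirmation
docker container prune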
I have a dockerized application. When I run it through docker-compose up, it runs fine and its image appears in docker images. But when I then try to start a Minikube cluster with --vm-driver=none, the cluster reports an error and does not start. However, when I quit my Docker application and start the Minikube cluster again, it starts successfully. But then I can't find the image of the application I just ran. Instead I find images like the ones below:
k8s.gcr.io/coredns 1.2.2 367cdc8433a4 5 weeks ago 39.2MB
k8s.gcr.io/kubernetes-dashboard-amd64 v1.10.0 0dab2435c100 5 weeks ago 122MB
k8s.gcr.io/kube-proxy-amd64 v1.10.0 bfc21aadc7d3 6 months ago 97MB
k8s.gcr.io/kube-controller-manager-amd64 v1.10.0 ad86dbed1555 6 months ago 148MB
k8s.gcr.io/kube-apiserver-amd64 v1.10.0 af20925d51a3 6 months ago 225MB
k8s.gcr.io/kube-scheduler-amd64 v1.10.0 704ba848e69a 6 months ago 50.4MB
Is this expected behavior? If so, what is the reason?
minikube start --vm-driver=none
Update: I am working in an Ubuntu VM.
From the Minikube documentation:
minikube was designed to run Kubernetes within a dedicated VM, and assumes that it has complete control over the machine it is executing on. With the none driver, minikube and Kubernetes run in an environment with very limited isolation, which could result in:
Decreased security
Decreased reliability
Data loss
It is not expected behavior for Minikube to delete your Docker images. I tried to reproduce your issue: I had a few Docker images on my Ubuntu VM.
$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
nginx latest e445ab08b2be 13 days ago 126MB
busybox latest db8ee88ad75f 2 weeks ago 1.22MB
perl latest bbac4a88d400 3 weeks ago 889MB
alpine latest b7b28af77ffe 3 weeks ago 5.58MB
Then I tried to run Minikube.
$ sudo minikube start --vm-driver=none
😄 minikube v1.2.0 on linux (amd64)
💡 Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
...
⌛ Verifying: apiserver proxy etcd scheduler controller dns
🏄 Done! kubectl is now configured to use "minikube"
I still had all the Docker images, and Minikube was working as expected.
$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-5c98db65d4-4vd2q 1/1 Running 8 21d
coredns-5c98db65d4-xjx22 1/1 Running 8 21d
etcd-minikube 1/1 Running 5 21d
...
$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
nginx latest e445ab08b2be 13 days ago 126MB
busybox latest db8ee88ad75f 2 weeks ago 1.22MB
perl latest bbac4a88d400 3 weeks ago 889MB
alpine latest b7b28af77ffe 3 weeks ago 5.58MB
After exiting Minikube, I still had all the Docker images.
As you mentioned in the original thread, you used minikube start --vm-driver=none. If you run minikube start without sudo, you will receive an error like:
$ minikube start --vm-driver=none
😄 minikube v1.2.0 on linux (amd64)
💣 Unable to load config: open /home/$user/.minikube/profiles/minikube/config.json: permission denied
or, if you want to stop Minikube without sudo:
$ minikube stop
💣 Unable to stop VM: open /home/$user/.minikube/machines/minikube/config.json: permission denied
😿 Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉 https://github.com/kubernetes/minikube/issues/new
Please try using sudo with the minikube commands.
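For example (the same commands as above, just prefixed with sudo):
sudo minikube start --vm-driver=none
sudo minikube stop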
Let me know if that helped. If not, please provide your error message.
I tried to start a Docker service in swarm mode, but I am not able to connect to port 8080.
~ $ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
3tdzofpn6qo5 vigilant_wescoff replicated 0/1 shantanu/abc:latest *:8080->8080/tcp
~ $ docker service ps 3tdzofpn6qo5
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
iki0t3x1oqmz vigilant_wescoff.1 shantanuo/abc:latest ip-172-31-4-142.ap-south-1.compute.internal Ready Ready 1 second ago
z88nyixy7u10 \_ vigilant_wescoff.1 shantanu/abc:latest ip-172-31-4-142.ap-south-1.compute.internal Shutdown Complete 5 minutes ago
zf4fac2a4dlh \_ vigilant_wescoff.1 shantanu/abc:latest ip-172-31-4-142.ap-south-1.compute.internal Shutdown Complete 11 minutes ago
zzqj4lldmxox \_ vigilant_wescoff.1 shantanu/abc:latest ip-172-31-6-134.ap-south-1.compute.internal Shutdown Complete 14 minutes ago
z8eknet7oirq \_ vigilant_wescoff.1 shantanu/abc:latest ip-172-31-20-50.ap-south-1.compute.internal Shutdown Complete 17 minutes ago
I used Docker for AWS (Community Edition):
https://docs.docker.com/docker-for-aws/#docker-community-edition-ce-for-aws
But I guess that should not make any difference and the container should work. I have tested the image using the docker run command and it works as expected.
In case of swarm mode, how do I know what exactly is going wrong?
You can use docker events on managers to see what the orchestrator is doing (but you can't see the history).
You can use docker events on workers to see what containers/networks/volumes etc. are doing (but you can't see the history).
You can look at docker service logs to see current and past container logs.
You can use docker container inspect to see the exit (error) code of the stopped containers in that service task list (see the sketch below).
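For example, against the setup from the question (the service name vigilant_wescoff is taken from the docker service ls output above; the container ID is a placeholder you would fill in):
# on a manager node: watch what the orchestrator does from now on
docker events
# current and past logs for all tasks of the service
docker service logs vigilant_wescoff
# on the node where a task ran: find the stopped container and check its exit code
docker container ls -a --filter name=vigilant_wescoff
docker container inspect --format '{{.State.ExitCode}}' <container id>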
I'm toying around with Docker Swarm. I've deployed a service and performed a couple of updates to see how it works. I'm observing that Docker is keeping the old images around for the service.
How do I clean those up?
root@picday-manager:~# docker service ps picday
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
bk6pw0t8vw4r picday.1 ischyrus/picday:latest picday-manager Running Running 6 minutes ago
ocbcjpnc71e6 \_ picday.1 ischyrus/picday:latest picday-manager Shutdown Shutdown 6 minutes ago
lcqqhbp8d99q \_ picday.1 ischyrus/picday:latest picday-manager Shutdown Shutdown 11 minutes ago
db7mco0d4uk0 picday.2 ischyrus/picday:latest picday-manager Running Running 6 minutes ago
z43p0lcdicx4 \_ picday.2 ischyrus/picday:latest picday-manager Shutdown Shutdown 6 minutes ago
These are containers, not images. In Docker, there's a rather significant difference between the two (an image is the definition used to create a container). Inside a swarm service, these containers are referred to as tasks. To adjust how many stopped tasks Docker keeps by default, you can change the global threshold with:
docker swarm update --task-history-limit 1
The default value for this is 5.
To remove individual containers, you can remove the container from the host where it's running with:
docker container ls -a | grep picday
docker container rm <container id>
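If you would rather remove all of the stopped picday containers on a host in one go, a one-liner along these lines should work (a sketch, assuming the containers are named after the service as in the output above):
# collect the IDs of exited containers whose name contains "picday" and remove them
docker container rm $(docker container ls -aq --filter name=picday --filter status=exited)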
Does anyone use docker service create with a command, like docker run -it ubuntu bash?
e.g. docker service create --name test redis bash
I want to run a temporary container for debugging in a production environment, in swarm mode, on the same network.
This is my result:
user#ubuntu ~/$ docker service ps test
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR
bmig9qd9tihw7q1kff2bn42ab test.1 redis ubuntu Ready Ready 3 seconds ago
9t4za9r4gb03az3af13akpklv \_ test.1 redis ubuntu Shutdown Complete 4 seconds ago
1php2be7ilp7psulwp31b3ib4 \_ test.1 redis ubuntu Shutdown Complete 10 seconds ago
drwyjdggd13n1emb66oqchmuv \_ test.1 redis ubuntu Shutdown Complete 15 seconds ago
b1zb5ja058ni0b4c0etcnsltk \_ test.1 redis ubuntu Shutdown Complete 21 seconds ago
When you create a service that starts Bash, it will immediately stop because it runs in detached mode.
You can see the same behavior if you run docker run -d ubuntu bash.
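To compare the two behaviors locally, and to sketch one common workaround for keeping a debug task alive in a service (this workaround is not from the answer above; sleep infinity is available in Debian-based images such as redis, and the network name and container ID are placeholders):
# exits immediately: bash has nothing attached to it in detached mode
docker run -d ubuntu bash
# stays up: interactive terminal attached
docker run -it ubuntu bash
# for a service: run a long-lived no-op command, then exec into the task's container on its node
docker service create --name test --network <your-network> redis sleep infinity
docker exec -it <container id of the test task> bash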