I am trying to create a docker container in a swarm. I am expecting to see the service when I execute "docker service ls", and to see a container running when I execute "docker ps". I see the service but not the container.
[root@docker01-staging dcater]# docker service create --name dbcservice alpine ping 127.0.0.1
lm2b7g3kbnbn11m33y15bplqf
overall progress: 1 out of 1 tasks
1/1: running [==================================================>]
verify: Service converged
[root@docker01-staging dcater]# docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
maad961bcum4 dbcservice replicated 1/1 alpine:latest
[root@docker01-staging dcater]# docker ps --filter name=dbcservice
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Any idea what I am missing?
I figured out the answer (roughly). I'm not sure I have the terminology right, but docker01-staging is the manager node. I checked docker02-staging, and that's actually where the container is running:
[root@docker02-staging dcater]# docker ps --filter name=dbcservice
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3f30b6fa3d40 alpine:latest "ping 127.0.0.1" 56 minutes ago Up 56 minutes dbcservice.1.fke9ljd8brpwzhklzqy0agt1r
docker ps is a docker-level command: it talks to the docker daemon running on the same node where docker ps is run. In the context of Docker Swarm, docker service is a swarm-level command that queries the swarm state. Thus docker ps must be executed on each node in the swarm to see all of the running containers.
There is also docker node ps, a swarm-level command that shows the containers (tasks) running on a given swarm node, addressed by its node name. Use docker node ls to show the swarm node names.
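For example, a quick sketch using the node names from the question (docker node commands must be run on a manager; docker02-staging is where the task landed):
docker node ls                    # list the swarm nodes and their names
docker node ps docker02-staging   # list the tasks scheduled on that node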
Related
I'm using docker 1.13.1 on CentOS 7. I have created a swarm having a leader and two workers. Here are the nodes:
[root@inf-jenkins02-prd ~]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
jfyycwch6l1rdarc9j7hd69dg inf-jenkins04-prd Ready Active
jy182rao4rnm3vn1uhm2ghslt inf-jenkins03-prd Ready Active
xuc8l7ra249y7e9s7u778g46l * inf-jenkins02-prd Ready Active Leader
Now, I want to see the details of each node:
[root@inf-jenkins02-prd ~]# docker node ps inf-jenkins02-prd
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
[root@inf-jenkins02-prd ~]#
The command is run on the leader, of course, but nothing is displayed. This seems like a major inconsistency, as there are no running containers:
[root@inf-jenkins02-prd ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
[root@inf-jenkins02-prd ~]#
and also:
[root@inf-jenkins02-prd ~]# docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
[root@inf-jenkins02-prd ~]#
I created the cluster with Ansible, but I don't think that detail is relevant. Does anyone know what might be wrong here?
The commands you are using to see the state of the nodes are not the ones you should be using. To get some details on your nodes, you can try things like:
docker node inspect
or
docker system info
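For instance, a minimal sketch using the node name from the question (--pretty prints a human-readable summary):
docker node inspect inf-jenkins02-prd --pretty
docker system info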
The commands you were using are suitable for when you want to list the "services" (from the docker/swarm perspective) that are currently running.
Just for the sake of testing, you could run a container detached, for example
docker container run -d --name test nginx
and then execute your
docker container ls
Hope this helped!
I am very new to Docker and have just started venturing into this. From reading online, I came across the following Docker commands: docker run and docker service. As I understand it, docker run spins up a new container. However, I am not clear on what docker service does. Does it spin up containers in a swarm?
Can anyone explain this in simple terms?
The docker run command creates and starts a container on the local docker host.
A docker "service" is one or more containers with the same configuration running under docker's swarm mode. It's similar to docker run in that you spin up a container. The difference is that you now have orchestration. That orchestration restarts your container if it stops, finds the appropriate node to run the container on based on your constraints, scale your service up or down, allows you to use the mesh networking and a VIP to discover your service, and perform rolling updates to minimize the risk of an outage during a change to your running application.
Docker Run vs Docker service
docker run:
we can create any number of containers, each from a different image.
docker service:
we can create a number of containers from the same image with a single command.
SYNTAX:
docker service create --name service-name --network network-name --replicas number-of-containers image-name
EXAMPLE:
docker service create --name service1 --network swarm-net --replicas 5 redis
When do we use a docker service create command and when do we use a docker run command?
In short: docker service is mostly used when you have configured a manager node with Docker swarm mode, so that containers run in a distributed environment and can be easily managed.
Docker run: The docker run command first creates a writeable container layer over the specified image, and then starts it using the specified command.
That is, docker run is equivalent to the API /containers/create then /containers/(id)/start
source: https://docs.docker.com/engine/reference/commandline/run/#parent-command
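For illustration only, roughly the same sequence expressed against the Engine API over the local Unix socket (assuming the nginx image is already present locally; <id> stands for the ID returned by the create call):
curl --unix-socket /var/run/docker.sock -X POST -H "Content-Type: application/json" -d '{"Image": "nginx"}' http://localhost/containers/create
curl --unix-socket /var/run/docker.sock -X POST http://localhost/containers/<id>/start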
Docker service:
Docker service will be the image for a microservice within the context of some larger application. Examples of services might include an HTTP server, a database, or any other type of executable program that you wish to run in a distributed environment.
When you create a service, you specify which container image to use and which commands to execute inside running containers. You also define options for the service including:
the port where the swarm will make the service available outside the swarm
an overlay network for the service to connect to other services in the swarm
CPU and memory limits and reservations
a rolling update policy
the number of replicas of the image to run in the swarm
source: https://docs.docker.com/engine/swarm/how-swarm-mode-works/services/#services-tasks-and-containers
The docker run command is used to create a standalone container.
The docker service create command is used to create instances (called tasks) of a service that run in a cluster (called a swarm) of computers (called nodes). Those tasks are containers, of course, but not standalone containers. In a sense, a service acts as a template for instantiating tasks.
For example
docker service create --name MY_SERVICE_NAME --replicas 3 IMAGE:TAG
creates 3 tasks of the MY_SERVICE_NAME service, which is based on the IMAGE:TAG image.
More information can be found here
Docker run will start a single container.
With docker service you manage a group of containers (from the same image). You can scale them (start multiple containers) or update them.
You may want to read "docker service is the new docker run"
According to these slides, "docker service create" is like an "evolved" docker run. You need to create a "service" if you want to deploy a container to Docker Swarm.
Docker services are like "blueprints" for containers. You can e.g. define a simple worker as a service, and then scale that service to 20 containers to go through a queue really quickly. Afterwards you scale that service down to 3 containers again. Also, via Swarm these containers could be deployed to different nodes of your swarm.
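A sketch of that scale-up/scale-down flow (the service name worker and the image my-worker-image are hypothetical):
docker service create --name worker --replicas 3 my-worker-image   # my-worker-image is a placeholder
docker service scale worker=20                                      # burst through the queue
docker service scale worker=3                                       # settle back down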
But yeah, I also recommend reading the documentation, just like @Tristan suggested.
You can use docker in two ways.
Standalone mode
When you use standalone mode, you have the docker daemon installed on only one machine. There you have the ability to create/destroy/run a single container or multiple containers on that single machine.
So when you run docker run, the docker CLI sends an API request to the dockerd daemon to run the specified container.
So what you do with the docker run command only affects the single node/machine/host where you run the command. If you add a volume or network with the container, those resources will only be available on the single node where you ran the docker run command.
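A small illustration of that locality (the volume and container names are just examples):
docker volume create app-data                                 # the volume exists only on this host's daemon
docker run -d --name standalone-app -v app-data:/data nginx   # container and volume both live on this single node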
Swarm mode (or cluster mode)
When you want or need to take advantage of cluster computing features like high availability, fault tolerance, and horizontal scalability, you can use swarm mode. With swarm mode, you can have multiple nodes/machines/hosts in your cluster and distribute your workload throughout the cluster. You can even initiate swarm mode on a single-node cluster and add more nodes later.
Example
You can recreate the scenario for free here.
Suppose at this moment we have only one node, called node-01.dc.local, where we have run the following commands:
####### Initiating swarm mode ########
$ docker swarm init --advertise-addr eth0
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-21mxdqipe5lvzyiunpbrjk1mnzaxrlksnu0scw7l5xvri4rtjn-590dyij6z342uyxthletg7fu6 192.168.0.8:2377
####### create a standalone container #######
[node1] (local) root@192.168.0.8 ~
$ docker run -d --name app1 nginx
####### creating a service #######
[node1] (local) root@192.168.0.8 ~
$ docker service create --name app2 nginx
After a while, when you feel that you need to scale your workload, you add another machine named node-02.dc.local, and you want to scale and distribute your service to the newly added node.
So we run the following command on the node-02.dc.local node:
####### Join the second machine/node/host in the cluster #######
[node2] (local) root@192.168.0.7 ~
$ docker swarm join --token SWMTKN-1-21mxdqipe5lvzyiunpbrjk1mnzaxrlksnu0scw7l5xvri4rtjn-590dyij6z342uyxthletg7fu6 192.168.0.8:2377
This node joined a swarm as a worker.
Now, from the first node, I run the following to scale up the service.
####### Listing services #######
[node1] (local) root@192.168.0.8 ~
$ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
syn9jo2t4jcn app2 replicated 1/1 nginx:latest
####### Scaling app2 from a single container to 10 containers #######
[node1] (local) root@192.168.0.8 ~
$ docker service update --replicas 10 app2
app2
overall progress: 10 out of 10 tasks
1/10: running [==================================================>]
2/10: running [==================================================>]
3/10: running [==================================================>]
4/10: running [==================================================>]
5/10: running [==================================================>]
6/10: running [==================================================>]
7/10: running [==================================================>]
8/10: running [==================================================>]
9/10: running [==================================================>]
10/10: running [==================================================>]
verify: Service converged
[node1] (local) root@192.168.0.8 ~
####### Verifying that app2's workload is distributed to both of the nodes #######
$ docker service ps app2
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
z12bzz5sop6i app2.1 nginx:latest node1 Running Running 15 minutes ago
8a78pqxg38cb app2.2 nginx:latest node2 Running Running 15 seconds ago
rcc0l0x09li0 app2.3 nginx:latest node2 Running Running 15 seconds ago
os19nddrn05m app2.4 nginx:latest node1 Running Running 22 seconds ago
d30cyg5vznhz app2.5 nginx:latest node1 Running Running 22 seconds ago
o7sb1v63pny6 app2.6 nginx:latest node2 Running Running 15 seconds ago
iblxdrleaxry app2.7 nginx:latest node1 Running Running 22 seconds ago
7kg6esguyt4h app2.8 nginx:latest node2 Running Running 15 seconds ago
k2fbxhh4wwym app2.9 nginx:latest node1 Running Running 22 seconds ago
2dncdz2fypgz app2.10 nginx:latest node2 Running Running 15 seconds ago
But if you need to scale app1, you can't, because you created that container in standalone mode.
I have a docker swarm cluster consisting of one manager and one worker node. Then I configured a client on my laptop (TLS and DOCKER_HOST) to get access to this cluster.
When I run docker ps I see only containers from the worker node (and not even all of the worker node's containers!).
For example, from my client:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a129d9402aeb progrium/consul "/bin/start -rejoi..." 2 weeks ago Up 22 hours IP:8300-8302->8300-8302/tcp, IP:8400->8400/tcp, IP:8301-8302->8301-8302/udp, 53/tcp, 53/udp, IP:8500->8500/tcp, IP:8600->8600/udp hadoop1103/consul-agt2-hadoop
And when I run docker ps on the worker node:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4fec7fbf0b00 swarm "/swarm join --advert" 16 hours ago Up 16 hours 2375/tcp join
a129d9402aeb progrium/consul "/bin/start -rejoin -" 2 weeks ago Up 22 hours 0.0.0.0:8300-8302->8300-8302/tcp, 0.0.0.0:8400->8400/tcp, 0.0.0.0:8301-8302->8301-8302/udp, 53/tcp, 53/udp, 0.0.0.0:8500->8500/tcp, 0.0.0.0:8600->8600/udp consul-agt2-hadoop
So, two questions: why doesn't docker ps show containers from the manager machine, and why doesn't it show all containers from the worker node?
Classic swarm (run as a container) by default hides the swarm management containers from docker ps output. You can show these containers with a docker ps -a command instead.
This behavior may be documented elsewhere, but the one location I've seen the behavior documented is in the api differences docs:
GET "/containers/json"
Containers started from the swarm official image are hidden by default, use all=1 to display them.
The all=1 API parameter is the equivalent of the docker ps -a CLI flag.
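For example, queried with curl against the classic swarm manager's API endpoint (host and port are placeholders; with TLS you would also pass --cacert/--cert/--key):
curl "http://<swarm-manager>:<port>/containers/json"          # swarm management containers are hidden
curl "http://<swarm-manager>:<port>/containers/json?all=1"    # equivalent of docker ps -a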
I have a setup which runs my Docker container like this.
run-docker.sh
docker build -t wordpress-gcloud .
container=$(docker run -d wordpress-gcloud)
ipOfContainer=$(docker inspect "$container" | jq -r '.[0].NetworkSettings.IPAddress')
But now I have setup a Docker Swarm (1 manager + 2 workers).
How should I convert the above bash script to run the container on the swarm?
Typically, you can access your Swarm cluster via the Swarm API, which is similar to the Docker API. To access the Swarm API, you can use the -H parameter with docker commands. For example, if you have a swarm manager running on your local machine, and the port number is 3376, then you can get your swarm cluster info with:
docker -H 127.0.0.1:3376 info
You can also inspect the swarm cluster containers by:
docker -H 127.0.0.1:3376 inspect <container ID>
More details about communicating with a Swarm cluster can be found here: https://docs.docker.com/swarm/install-manual/#/step-6-communicate-with-the-swarm
But in your case, I think the docker build command could be a problem. In my understanding, Swarm will pick an arbitrary node from your cluster to execute the docker build process, so if the Dockerfile does not exist on the node where docker build ends up running, you will get an error. My suggestion is to build your image in one fixed place, push the image to an image registry, and then pull and run the image wherever you want.
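A sketch of that build-push-run workflow (the registry address, tag, and service options are placeholders):
# build and push from any machine that has the Dockerfile
docker build -t registry.example.com/wordpress-gcloud:latest .
docker push registry.example.com/wordpress-gcloud:latest
# then, against the swarm manager, create the service from the pushed image
docker service create --name wordpress --replicas 2 -p 80:80 registry.example.com/wordpress-gcloud:latest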