Docker: Difference between `docker run` and `docker service`

I am very new to Docker and have just started venturing into it. While reading online, I came across two Docker commands: `docker run` and `docker service`. As I understand it, with `docker run` we spin up a new container. However, I am not clear on what `docker service` does. Does it spin up containers in a Swarm?
Can anyone help me understand this in simple terms?

The `docker run` command creates and starts a container on the local Docker host.
A Docker "service" is one or more containers with the same configuration running under Docker's swarm mode. It's similar to `docker run` in that you spin up a container. The difference is that you now have orchestration. That orchestration restarts your container if it stops, finds the appropriate node to run the container on based on your constraints, scales your service up or down, lets you use the mesh networking and a VIP to discover your service, and performs rolling updates to minimize the risk of an outage during a change to your running application.

Docker run vs. docker service
docker run:
We can create any number of containers, each from a different image.
docker service:
We can create a number of containers from the same image in a single command line.
SYNTAX:
docker service create --name service-name --network network-name --replicas number-of-containers image-name
EXAMPLE:
docker service create --name service1 --network swarm-net --replicas 5 redis
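Once the service is created, you can inspect and rescale it, for example:
docker service ls                  # list services and their replica counts
docker service ps service1         # show the tasks (containers) backing service1
docker service scale service1=10   # change the service from 5 replicas to 10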

Related

Docker container cannot reach other services for a few seconds

I have a Docker swarm node running a set of Docker services connected by an overlay network. When needed, I dynamically add another Docker node via Terraform. It will be a separate EC2 instance set up and connected as a worker node to the existing swarm.
I'll run a container from my manager, and the running container needs to talk to the existing services on the manager node, e.g., connecting to the Postgres service and running a few queries:
docker -H <node ip> run --network <overlay network where services are running> <some image> <command>
The script running in the container fails with a "Name or service not known" error. I tried to ping manually by bashing into the container, and the ping succeeds after some 4 or 5 seconds. I have tried this hundreds of times and I always get the same issue. It also doesn't matter when the node joined the swarm; every time I run the above command, I face the same issue.
Also, I don't have control over what script is run in the container, so I cannot add retries.
One more thing: sometimes some services can be reached immediately. For example, Postgres will fail, but another service exposing REST endpoints can be reached. But it's not always the case.
I was able to reproduce this issue with a bunch of test services.
Steps to reproduce the issue:
1. Create a Docker swarm and add another machine as a worker node to the swarm.
2. Create an overlay network on node 1:
docker network create -d overlay --attachable platform
3. Create services on node 1:
for i in {1..25}; do
  docker service create --network platform -p :80 --name "service-${i}" \
    dockerbogo/docker-nginx-hello-world
done
4. Create a task from node 1 to be run on node 2:
docker -H 10.128.0.3:2376 run --rm --network platform centos ping service-1
Docker daemon logs: https://pastebin.com/65Lihp8v
Any help?

Control docker swarm from within a running container

I have a few micro-services deployed as a stack on Docker swarm, with each micro-service running in a separate container.
How do I issue commands to the swarm from within one of the services running inside a container on the swarm manager host? E.g., running the `docker service update` command from within a container to update one of the services in the swarm.
I read somewhere that it can be done by bind mounting the Docker socket using:
-v /var/run/docker.sock:/var/run/docker.sock
But this does not work for me. I get a "docker: not found" error when trying to run a docker command from within the container.
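Worth noting: bind mounting the socket only exposes the daemon's API inside the container; the `docker` CLI binary itself still has to be present in the image, which is why the command is not found. A minimal sketch, assuming the official `docker` image (which ships the CLI) and a container placed on a manager node:
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker docker service ls   # the inner docker CLI talks to the host daemon via the mounted socket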

dockerized app needs to interact with other dockers over localhost

I have an app that launches a Docker container and automates a few of the routines.
Now I have dockerized this app, and it is not able to talk to other containers over localhost. I tried setting
--network host
when launching the container, and now I am not able to access the containerized webapp over localhost:<port>.
Any pointers?
localhost won't work. Suppose you are running a VM and try to talk to your host or other VMs running on your machine. If you call localhost from one of the VMs, it's localhost for that VM only, not for your host. So you won't be able to talk from one VM to another by calling localhost. Docker works the same way with regard to localhost. You have two options:
Use a network
If you are using a network, create a network and add all the containers to that network. This is the new way suggested by Docker.
docker network create <your-network-name>
docker run --network <your-network-name> --name <container-name1> <image>
docker run --network <your-network-name> --name <container-name2> <image>
Then use the container name (container-name1) to talk to that service from the other service (container-name2).
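For instance, a quick check that name resolution works (assuming the image includes ping; on user-defined networks Docker's embedded DNS resolves container names):
docker exec <container-name2> ping <container-name1>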
Use the --link option
Alternatively, you could use the --link option, which is a legacy feature of Docker. The Docker docs say that unless you have a specific reason to use it, don't use --link anymore.
docker run --name <container1> <image>
docker run --name <container2> --link <container1> <image>
You can then reach container1 from container2 by name (note that --link is one-directional unless you link both ways). You can use these container names in places like the DB host, etc.
Did you try creating a common bridge network and attaching your containers to the same network?
Create the network:
docker network create networkname
and then in the docker run command add the switch --network=networkname
I figured it out later after going over a lot of other documents.
Step 1: Install docker inside the container. I added the following line to my Dockerfile:
RUN curl -sSL https://get.docker.com/ | sh
Step 2: Provide the volume mapping in the docker run command:
-v /var/run/docker.sock:/var/run/docker.sock
Now the host's docker commands are accessible from within my current container, and without changing --network for the current container, I'm able to access other containers over localhost.
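Put together, the launch command looks something like this (`<my-image>` is a placeholder for the image built from the Dockerfile above):
docker run -v /var/run/docker.sock:/var/run/docker.sock <my-image>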

How to set docker back to default configuration

I have installed Docker Compose and used it a little, then decided I did not need it. Now when I create containers by hand, they are assigned a network with an IP address, gateway, and other settings. When I inspect older containers from before I installed Docker Compose, they do not have these network settings.
I have tried uninstalling Docker Compose and reinstalling Docker, which did not work. Is there anything I can do? The reason I am asking is that I can't link containers together, because every new container is assigned an IP address and other network settings.
Docker always does that; it has nothing to do with Compose. Compose doesn't modify your Docker installation in any way; it purely connects to the daemon to run commands under the hood.
By linking containers together, I'm assuming you mean just so they can communicate with each other? --link has been deprecated for some time now in favor of docker network .... Try the following:
$ docker network create test-net
$ docker run -d --name c1 --net test-net alpine:3.3 sleep 20000
$ docker run -it --name c2 --net test-net alpine:3.3 ping c1

Docker Swarm - Map ports and Scaling

I am currently using Docker Engine 1.11, and I am investigating whether it is possible for me to move to Docker 1.12 and use Swarm. I am currently using Docker to run 50+ Bamboo agents, all of which need to have a port mapped to a port on the server. For instance, each Docker container needs to have port 4000 available, so when I do docker run, I do:
docker run -p 10000:4000 myimg
docker run -p 10001:4000 myimg
docker run -p 10002:4000 myimg
docker run -p 10003:4000 myimg
In Docker Swarm, from what I understand, I would run the following command to scale my service to 50 containers:
docker service scale helloworld=50
But if I did this, then they would all be trying to map to the same port. How can I accomplish this? Is it possible?
No, you can't.
One key function that docker service provides is that a single published port can map to multiple containers (service discovery).
Another is that when a container fails, swarm will start a new one (self-healing).
I know nothing about Bamboo, so I can't tell you if there's a way to run the Bamboo service with swarm mode.
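To illustrate the single-port behaviour described above (the service name here is a placeholder): with swarm's routing mesh, one published port is load balanced across all replicas, so per-replica host ports like 10000-10003 are not needed:
docker service create --name agents --replicas 50 -p 4000:4000 myimg
# requests to <any swarm node>:4000 are distributed across the 50 replicas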
