I have a serious question about Docker Swarm. I have created these docker-machine VMs:
manager1
worker1
worker2
I joined all the workers to the manager and created a service like this:
docker service create --replicas 3 -p 80:80 --name web nginx
I changed the index.html in the service's container on manager1.
When I open a URL like http://192.168.99.100 it shows the index.html that I changed, but the remaining 2 nodes show the default nginx page.
What is the concept of Swarm? Is it used only for handling service failures?
How can I set up centralized data storage in Docker Swarm?
There are a few approaches to ensuring the same app and data are available on all nodes.
Your code (like nginx with your web app) should be built into an image, sent to a registry, and pulled to a Swarm service. That way the same app/code/site is in all the containers of that service.
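For example, a minimal sketch (the registry address and image name myregistry.example.com/myweb are placeholders for illustration): bake your changed index.html into an image with a two-line Dockerfile, push it, and point the service at that image instead of plain nginx.
# Dockerfile:
#   FROM nginx:latest
#   COPY index.html /usr/share/nginx/html/index.html
docker build -t myregistry.example.com/myweb:1.0 .
docker push myregistry.example.com/myweb:1.0
docker service create --replicas 3 -p 80:80 --name web myregistry.example.com/myweb:1.0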
If you need persistent data, like a database, then you should use a plugin volume driver that lets you store the unique data on shared storage.
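A hedged sketch, assuming a shared-storage volume plugin is installed on every node (the plugin name rexray/ebs and the volume name dbdata below are placeholders):
docker service create --name db \
  --mount type=volume,source=dbdata,destination=/var/lib/mysql,volume-driver=rexray/ebs \
  mysql:5.7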
Related
After I create a Docker service using the command below:
docker service create --name db --network era-networkkk -p 3306:3306 --mount type=bind,source=$(pwd)/data/mysql,destination=/var/lib/mysql schema
and when I check the services using
docker service ls
it shows the name as db
but when I use the command
docker ps
the container name has some randomly generated characters appended after the service name.
How can I solve this problem?
I think that behaviour is absolutely intended. What if your swarm is configured to start multiple containers of the same image on a single swarm node? These containers can't all have the same name, so a suffix is appended to the container names to avoid name collisions. Why would you want to influence the container names? Normally when working with clusters you work at the service level instead of the container level.
I think the reason for this is that when you create a service you don't necessarily care what the container names are. You would usually create a service when Docker is in swarm mode. With swarm mode you set up a cluster of nodes; I guess you only have one node for dev purposes. With more nodes, the service creates as many containers as you specify with the --replicas option and spreads them across the cluster. Any requests to your application are then load balanced across those containers via the service.
Have a look at the Docker documentation at https://docs.docker.com/engine/swarm/how-swarm-mode-works/services/; it may help to clarify how all of this works.
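If you want to stay at the service level instead of chasing generated container names, you can use commands like these (reusing the db service name from the question):
docker service ps db              # lists the tasks (db.1, db.2, ...) and the nodes they run on
docker ps --filter "name=db"      # the underlying containers are named db.<slot>.<task-id>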
I am having trouble working out the best way to add a new container to an existing cluster while all containers run in Docker.
Assume I have a Docker swarm, and whenever a container stops/fails for some reason, the swarm brings up a new container, which is expected to add itself to the cluster.
How can I make any container be able to add itself to a cluster?
For example, if I want to create a RabbitMQ HA cluster, I need to create a master and then create slaves. Assuming every instance of RabbitMQ (master or slave) is a container, and one of them fails, we have 2 options:
1) slave container has failed.
2) master container has failed.
Usually, a service that can run as a cluster also has the ability to elect a new leader as master. So, assuming this scenario works seamlessly without any intervention, how would a new container added to the swarm (using Docker swarm) be able to add itself to the cluster?
The problem here is that the new container is not created with new arguments every time; the container is always created as it was first deployed. That means I can't just change its command line arguments, and this is a cloud, so I can't hard code an IP to use.
Something here is missing.
Maybe declaring a "Service" at the "Docker Swarm" level will actually give the new container the ability to add itself to the cluster without really knowing anything about the other machines in the cluster...
There are quite a few options for scaling out containers with Swarm. It can range from being as simple as passing in the information via a container environment variable to something as extensive as service discovery.
Here are a few options:
Pass in IP as container environment variable. e.g. docker run -td -e HOST_IP=$(ifconfig wlan0 | awk '/t addr:/{gsub(/.*:/,"",$2);print$2}') somecontainer:latest
This would set the container's internal environment variable HOST_IP to the IP of the machine it was started on.
Service Discovery. Querying a known point of entry to determine information about any required services, such as IP, port, etc.
This is the most common type of scale-out option. You can read more about it in the official Docker docs. The high-level overview is that you set up a service like Consul on the masters, which your services query to find the information of other relevant services. Example: a web server requires a DB. The DB would add itself to Consul; the web server would start up and query Consul for the database's IP and port.
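A minimal sketch of that lookup, assuming Consul's HTTP API is reachable at consul:8500 and the database registered itself under the name db (both names are illustrative):
# ask Consul where the "db" service lives; the response includes its address and port
curl http://consul:8500/v1/catalog/service/db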
Network Overlay. Creating a network in swarm for your services to communicate with each other.
Example:
$ docker network create -d overlay mynet
$ docker service create --name frontend --replicas 5 -p 80:80/tcp --network mynet mywebapp
$ docker service create --name redis --network mynet redis:latest
This allows the web app to communicate with redis by placing them on the same network.
Lastly, in your example above it would be best to deploy it as 2 separate services which you scale individually, e.g. deploy one MASTER service and one SLAVE service. Then you would scale each depending on the number you needed, e.g. to scale to 3 slaves you would run docker service scale <SERVICE-ID>=<NUMBER-OF-TASKS>, which would start the additional slaves. In this scenario, if one of the scaled slaves fails, swarm starts a new one to bring the number of tasks back to 3.
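A rough sketch of that layout (the service names and the rabbitmq:3 image are illustrative; a real RabbitMQ cluster still needs its own join/clustering configuration):
$ docker service create --network mynet --name rabbit-master rabbitmq:3
$ docker service create --network mynet --name rabbit-slave rabbitmq:3
$ docker service scale rabbit-slave=3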
https://docs.docker.com/engine/reference/builder/#healthcheck
Dockerfiles support a HEALTHCHECK instruction that adds a health check to your image.
For example:
HEALTHCHECK --interval=30s --timeout=5s CMD ./anyscript.sh
(./anyscript.sh stands for any command you want to add; it must exit with 0 on success or 1 on failure.)
HEALTHCHECK checks the exit code of that command (0 or 1) and reports the container status as:
1. healthy
2. unhealthy
3. starting (while the first checks run)
Docker Swarm automatically restarts unhealthy containers in the swarm cluster.
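To see the health status that Swarm acts on, you can inspect a container directly (the container name mycontainer is a placeholder):
docker ps                                                  # the STATUS column shows (healthy) or (unhealthy)
docker inspect --format '{{.State.Health.Status}}' mycontainer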
I want to deploy an etcd cluster with 3 nodes using Docker.
So I use a discovery URL for that purpose.
The problem I'm having is that when I delete an etcd container and start a new one, it cannot rejoin the cluster.
Docker log says:
member "XXX" has previously registered with discovery service token (https://discovery.etcd.io/yyyy)
But etcd could not find valid cluster configuration in the given data dir (/data).
I am using volumes for the folders /data and /waldir.
I am also using --net=host so it always uses the same host IP.
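For reference, the container is started roughly like this (the image, member name, and discovery token below are placeholders for illustration, not the exact command):
docker run --net=host \
  -v $(pwd)/data:/data -v $(pwd)/waldir:/waldir \
  quay.io/coreos/etcd etcd \
  --name member1 --data-dir /data --wal-dir /waldir \
  --discovery https://discovery.etcd.io/<token>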
But why can't the new container rejoin the cluster?
Where is the cluster information saved inside the container?
Help will be appreciated.
Thanks.
I have 3 projects that are deployed on different hosts. Every project has its own RabbitMQ container. But I need to create a cluster from these 3 hosts, using the same vhost but different user/login pairs.
I tried Swarm and overlay networks, but Swarm is aimed at running solo containers and it doesn't work with Compose. I also tried docker-compose bundle, but that does not work as expected :(
I assumed that it would work something like this:
1) On the manager node I create an overlay network.
2) In every compose file I extend the networks config for the rabbitmq container with my overlay network.
3) Everything works as expected and I don't publish the rabbitmq port to the Internet.
Any idea, how can I do this?
Your approach is right, but Docker Compose doesn't work with Swarm Mode at the moment. Compose just runs docker commands, so you could script up what you want instead. For each project you'd have a script like this:
docker network create -d overlay app1-net
docker service create --network app1-net --name rabbit-app1 rabbitmq:3
docker service create --network app1-net --name app1 your-app-1-image
...
When you run all three scripts on the manager, you'll have three networks, each network will have its own RabbitMQ service (just 1 container by default, use --replicas to run more than one). Within the network other services can reach the message queue by the DNS name rabbit-appX. You don't need to publish any ports, so Rabbit is not accessible outside of the Docker network.
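So within a project, the app just uses an ordinary AMQP URL pointing at the service name; a sketch assuming default credentials and an app that reads a RABBITMQ_URL variable (that variable name is an assumption about your app, not a Docker setting):
docker service create --network app1-net --name app1 \
  -e RABBITMQ_URL=amqp://guest:guest@rabbit-app1:5672/ \
  your-app-1-image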
After having set up my swarm with 3 hosts (manager1, worker1, worker2), I created an overlay network:
docker network create --driver=overlay testNet
Then I created a service based on couchbase (for example; any other image exposing a non-stateless web UI has the same issue):
docker service create --name db --network=testNet --publish 8091:8091 couchbase
If I try to access the web UI on port 8091, everything works fine, until I start scaling the service to 2 (or more).
docker service scale db=2
At this point, the swarm load balancer keeps redirecting requests between the 2 containers, rendering the web UI unusable.
Is there a way to solve this?
I do not personally recommend using Docker Swarm in conjunction with Couchbase Server as it does have an impedance mismatch. It would be more sensible to specifically create a Couchbase container on each server which requires it (and only one per server to avoid single points of failure!) and then to forward the appropriate ports on that server.
In spite of this, the answer in this case would be to access each node/container individually. You can do this by accessing the container via its private IP. You can identify this IP by grabbing the container ID using docker ps and then looking for the IP in docker inspect <container id>.
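For example, a quick way to pull just the IP out of the inspect output (the container ID is a placeholder):
docker ps                                       # find the couchbase container's ID
docker inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' <container id>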