I am fairly new to Docker and Docker Swarm.
Recently I got a request from a business client who has 3 servers running cPanel/WHM with some WordPress sites installed on each server.
He wants to replicate each server 3 times for high availability, for a total of 9 nodes in the network.
My question is: what happens if, for example, a new post is added to one of the WordPress sites? How can I make that change propagate to the other nodes that are replicas of the one where the change was made?
My assumption is to deploy cPanel in a container with all data in volumes, use Docker Swarm to replicate it, and finally use additional software to sync files between containers. But I am sure there must be a better, more professional, and more straightforward approach.
Can anyone advise me?
Thanks
I work for a company where we are developing a web application of about 20 microservices between front end and back end. The company wants to deploy the containers on its local infrastructure, which is based on VMware. Knowing that we expect a maximum of 40-50 concurrently connected users, how do you suggest we deploy the containers? In which environment? We looked at using VMware's container functionality, but to do that we would have to change some network configuration of all the active VMs in production, and the person in charge is not comfortable doing that.
Security-wise, it is good to have your server on a separate virtual machine. In this case you retain snapshot and migration functionality along with host isolation.
Inside the guest virtual machine you can use Docker containers. They allow you to deploy and maintain your application with relatively little effort. As the platform I'd use Ubuntu Server or RHEL. On Ubuntu it is better to use the latest Docker repository, so it will include the containerd management daemon.
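For reference, a minimal sketch of enabling Docker's own apt repository on Ubuntu (this follows Docker's documented install steps; adjust for your release):

```
# Add Docker's official GPG key and apt repository
sudo apt-get update
sudo apt-get install -y ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
  sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# Install the engine together with the containerd runtime
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io
```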
It is hard to give more accurate instructions without knowing your network topology, but you might consider routing so you do not need to change your network configuration.
This question illustrates the theoretical differences between docker run and docker service.
What I don't understand is when would one need to use the exact same container replicated multiple times (as per the Docker documentation example)?
There, they run the same web app replicated 5 times.
Is deployment on Kubernetes (for example) a potential use case, where the developer does not want to centralize the app on one host in order to make it more resilient, hence the 5 replicas?
To understand, can someone please explain, with an example use case, where docker service is useful?
Swarm is an orchestrator just like Kubernetes. docker service deploys services to Swarm just as you deploy your services to Kubernetes using kubectl.
Swarm is essentially a built-in, primitive orchestrator. One possible case for replicas is running a proxy that directs requests to the proper containers. You could expose multiple machines and have one take the place of another in case it fails, or cover any other high-availability case you can think of.
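As a concrete sketch (the service name and image are just examples), deploying and inspecting a replicated service looks like this:

```
# Turn the current node into a swarm manager
docker swarm init

# Run 5 replicas behind swarm's routing mesh: port 8080 is published
# on every node and load-balanced across the replicas
docker service create --name web --replicas 5 --publish 8080:80 nginx

# See which nodes the individual tasks landed on
docker service ps web
```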
Your question could be rephrased as "What's the difference between running a single container and running containers in a cluster?", which would be another question altogether, but that rephrasing might help illustrate what docker service does.
If you want to scale your application, you can run multiple instances of it (horizontal scaling) or you beef up the machine(s) that it runs on (vertical scaling). For the first, you would have to put a load balancer in front of your application so that the traffic is evenly distributed between the different instances. The idea is that those instances run on different hosts, so if one goes down, your application is still up. Some controlling instance (a Kubernetes service, for example) will notice that one of your instances has gone south and won't direct any more traffic to it. Nowadays, with all the cloud stuff going on, this is typically the way to go.
You don't need Kubernetes for such a setup, but you're right, this would be a typical use case for it. At least if you run your application in a Docker container.
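For comparison, a hedged Kubernetes sketch (resource names are illustrative): a Deployment supplies the replicas and reschedules failed ones, and a Service load-balances across them:

```
# Run the app, then scale it to 5 replicas spread across the cluster
kubectl create deployment web --image=nginx
kubectl scale deployment web --replicas=5

# Expose a Service that distributes traffic across the healthy pods
kubectl expose deployment web --port=80
```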
One use case is running on a Docker Swarm cluster consisting of n nodes. You can run replicas of your application on the swarm cluster with a load balancer/reverse proxy in front to balance the load. If any one of the nodes goes down, the application can still run.
But the main use case for running multiple instances is scalability. Suppose you know that one instance of your app can serve 10,000 users at a time (assume bank authentication).
If you want your application to serve 50K users, just run 5 replicas (using docker service create).
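In Swarm terms that is a single command per step (service and image names are illustrative):

```
# Start small, then scale out when you need to serve more users
docker service create --name auth --replicas 1 my-auth-image   # hypothetical image
docker service scale auth=5
```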
I am building a web app using Docker Swarm.
The manager machine will host the database and the load balancer.
Next I have two pieces of software: a Tornado server, which acts as a middle layer between the user and a Node server. They should always be deployed together, and one Tornado server should always talk to exactly one Node server.
I want containers to be as isolated as possible (in order to preserve scalability), but how do I ensure that kind of communication?
Right now my approach is to build two separate images, one for Tornado and one for Node, and then create a multi-stage build that combines them into a single container. I do not feel this is optimal, as I have to run two start commands in CMD.
What is the preferable solution? Can you force Docker to couple images (e.g. without specifying IPs)?
There is a links feature in Docker Compose files: https://docs.docker.com/compose/compose-file/#links. But Docker recently marked it as deprecated and suggests using user-defined networks: https://docs.docker.com/network/.
P.S.: Also pay attention to these notes:
- If you define both links and networks, services with links between them must share at least one network in common to communicate.
- This option is ignored when deploying a stack in swarm mode with a (version 3) Compose file.
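As a minimal sketch of the user-defined-network approach for the Tornado/Node pairing (service and image names are hypothetical): both services join one overlay network and reach each other by service name through Docker's built-in DNS, so no IPs need to be specified:

```
# Write a stack file in which both services share one overlay network
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  tornado:
    image: my-tornado-image    # hypothetical
    networks:
      - app-net
  node:
    image: my-node-image       # hypothetical
    networks:
      - app-net
networks:
  app-net:
    driver: overlay
EOF

# Deploy the stack to the swarm; "tornado" can now connect to "node"
# by name (e.g. http://node:3000, port hypothetical) with no hard-coded IPs
docker stack deploy -c docker-compose.yml myapp
```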
I've been working with Docker containers. What I've done is launch 5 containers running the same application; I use HAProxy to redirect requests to them, I added a volume to preserve data, and I set the restart policy to Always.
It works (so far this is my load-balancing approach), but sometimes I need another container to join the pool when there are more requests, or maybe at first I don't need all 5 containers.
This is provided by the Swarm Mode addition in Docker 1.12. It includes orchestration that lets you not only scale your service up or down, but recover from an outage by automatically rescheduling the jobs to run on other nodes.
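A short sketch of what that looks like day-to-day (service and node names are illustrative):

```
# Scale the pool up when traffic grows, down when it doesn't
docker service scale app=8

# Drain a node for maintenance; swarm reschedules its tasks on other nodes
docker node update --availability drain node-2
```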
If you don't want to use Docker 1.12 (yet!), you can also use a service-discovery tool like Consul, register your containers in it, and use a tool like Consul Template to regenerate your load-balancer configuration accordingly.
I gave a talk about this 6 months ago. You can find the code and the configuration I used during my demo here: https://github.com/bargenson/dockerdemo
I installed an 8-node Kubernetes cluster (1 master + 7 minions), but I faced a networking problem among the minions.
I installed my cluster according to this step-by-step Fedora manual, so I use Fedora 20 with its testing repository to get kubernetes binaries.
After installing, I wanted to try the guestbook example, but it seems to me there is a problem with the inter-container networking.
Although the containers/pods are in the running state and I can reach my 3 frontend containers (via browser) and the Redis containers as well (via netcat), a frontend that is not on the same host as the Redis master cannot reach it. The frontend's PHP throws a network exception.
Can anybody help me understand why the containers cannot reach each other across hosts?
I hope I have described my setup accurately enough; thanks in advance.
The Fedora guide you followed will only get you running on a single machine. It avoids the issues around setting up networking across nodes.
For kubernetes to work, the following network setup must be satisfied (a quick check is sketched after the list):
- Every container should be able to talk to every other container, even across nodes. This also means that the bridge IP ranges for those containers must not overlap.
- Code running on any node that isn't in a container should be able to reach every container (and vice versa), even across nodes.
- It is not necessary (but useful) if computers on the network that aren't part of the cluster can reach the containers directly.
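A quick way to verify the first two requirements is to try reaching a container on one node from another node (container IDs and IPs below are placeholders):

```
# On node A: look up the IP of a container (e.g. the redis master)
docker inspect --format '{{ .NetworkSettings.IPAddress }}' <redis-container-id>

# On node B: ping that IP from inside a frontend container;
# if this fails, cross-node container networking is not set up
docker exec <frontend-container-id> ping -c 3 <redis-container-ip>
```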
There are a lot of ways to achieve this -- for instance, the setup for Vagrant creates GRE tunnels between each node. On GCE we use features of the platform to do the routing. If you are on physical machines on a switch, you can probably just do a big layer-2 network with bridges. A bulletproof way to get started (but perhaps not the most performant, depending on your setup) is to use something like flannel, as sketched below.
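For example, with flannel the overlay is configured through a single key in etcd that every flanneld reads (the subnet below is only an example range):

```
# Publish flannel's network config to etcd (run once, anywhere in the cluster)
etcdctl set /coreos.com/network/config '{ "Network": "10.244.0.0/16" }'

# On each node: start flanneld, then restart docker so its bridge
# is re-created inside the flannel-assigned subnet
systemctl restart flanneld
systemctl restart docker
```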
We are working on making this easier to set up (without a mess of shell scripts) and are thinking of building in something like flannel so that there is a reasonable default.