Docker on several computers

For a study I deployed a cloud architecture on my computer using Docker (Nginx for the load balancing and some Apache servers to run a simple PHP application).
I wanted to know if it is possible to use several computers to deploy my containers in order to increase the power available.
(I'm using a MacBook Pro with Yosemite. I've installed boot2docker with VirtualBox.)

Disclosure: I was a maintainer on Swarm Legacy and Swarm mode
Edit: This answer mentions Docker Swarm Legacy, the first version of Docker Swarm. Since then a new version called Swarm mode was directly included in the docker engine and behaves a bit differently in terms of topology and features even though the big ideas remain.
Yes, you can deploy Docker on multiple machines and manage them together as a single pool of resources. There are several solutions you can use to orchestrate your containers across multiple machines with Docker.
You can use Docker Swarm, Kubernetes, Mesos/Marathon, or Fleet (there might be others, as this is a fast-moving area). There are also commercial solutions like Amazon ECS.
In the case of Swarm, it uses the Docker remote API to communicate with distant Docker daemons and schedules containers according to the load or some extra constraints (the other systems are similar, with more or fewer features). Here is an example of a small Swarm deployment.
                        Docker CLI
                             +
                             |
                             |
                             | 4000 (or else)
                             | server
                    +--------v---------+
                    |                  |
          +--------->  Swarm Manager   <---------+
          |         |                  |         |
          |         +--------^---------+         |
          |                  |                   |
          |                  |                   |
          |                  |                   |
          |                  |                   |
          | client           | client            | client
          | 2376             | 2376              | 2376
          |                  |                   |
+---------v-------+ +--------v--------+ +--------v--------+
|                 | |                 | |                 |
|   Swarm Agent   | |   Swarm Agent   | |   Swarm Agent   |
|     Docker      | |     Docker      | |     Docker      |
|     Daemon      | |     Daemon      | |     Daemon      |
+-----------------+ +-----------------+ +-----------------+
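As a rough sketch of how such a pool is assembled with the newer Swarm mode mentioned in the edit (the IP address and the token placeholder are illustrative, not taken from the original setup):

```shell
# on the machine chosen as manager (IP is a placeholder)
docker swarm init --advertise-addr 192.168.99.100

# 'init' prints a join command with a token; run it on each worker machine
docker swarm join --token <worker-token> 192.168.99.100:2377

# back on the manager: list every node in the pool
docker node ls

# schedule 3 replicas of an Apache container across the cluster
docker service create --name web --replicas 3 --publish 80:80 httpd
```

With Swarm Legacy, the equivalent setup ran a separate manager process talking to each daemon's remote API, as in the diagram above.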
Choosing one of those systems is basically a choice between:
Cluster deployment simplicity and maintenance
Flexibility of the scheduler
Completeness of the API
Support for running VMs
Higher abstraction for groups of containers: Pods
Networking model (Bridge/Host or Overlay or Flat network)
Compatibility with the Docker remote API
It depends mostly on the use case and what kinds of workload you are running. For more details on the differences between those systems, see this answer.

That sounds like clustering, which is what Docker Swarm does (see its GitHub repo).
It turns a pool of Docker hosts into a single, virtual host.
See for example issue 247: "How are replication control and load balancing being taken care of?"

Related

Not able to add a Node in VerneMQ Cluster

I have Ubuntu 20.04.4.
I am new to VerneMQ and I was trying to set up a 3-node cluster.
I successfully got a cluster of 2 nodes, but when I try to join the 3rd node it shows done. However, when I type the command sudo vmq-admin cluster show, the output is
+-------------------------+---------+
| Node                    | Running |
+-------------------------+---------+
| VerneMQ#192.168.41.17   | true    |
+-------------------------+---------+
| VerneMQTR#192.168.41.20 | true    |
+-------------------------+---------+
It only shows 2 nodes, but when I check the cluster status in the web GUI,
it should show a cluster size of 3, as there are 3 nodes.
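For reference, the standard VerneMQ commands for joining a node to a cluster and checking membership look like this (the discovery node name below is a placeholder based on the post, not a verified value):

```shell
# on the node that should join, point it at any existing cluster member
sudo vmq-admin cluster join discovery-node=VerneMQ@192.168.41.17

# verify membership from any node
sudo vmq-admin cluster show
```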

Use multiple dockers with same name

I am working on building a high-availability setup using keepalived, where each server will have its own set of containers that get handled appropriately depending on whether it is in BACKUP or MASTER. However, for testing purposes, I don't have 2 boxes available that I can turn on and off for this. So, is there a good (preferably lightweight) way to set up multiple containers with the same name on the same machine?
Essentially, I would like it to look something like this:
Physical Server A
--------------------------------------------
| Virtual Server A                         |
| ---------------------------------------- |
| | keepalived - htmld - accessd - mysql | |
| ---------------------------------------- |
|                    ^                     |
|                    |                     |
|                    v                     |
| Virtual Server B                         |
| ---------------------------------------- |
| | keepalived - htmld - accessd - mysql | |
| ---------------------------------------- |
--------------------------------------------
Thanks
You cannot have multiple containers with the exact same name, but you can use docker-compose files in separate directories to get containers with the same service names (with some differences that I explain below).
You can read more about this in the Docker Compose docs; the explanation below walks through it.
Let us suppose yours:
Physical Server A
--------------------------------------------
| Virtual Server A                         |
| ---------------------------------------- |
| | keepalived - htmld - accessd - mysql | |
| ---------------------------------------- |
|                    ^                     |
|                    |                     |
|                    v                     |
| Virtual Server B                         |
| ---------------------------------------- |
| | keepalived - htmld - accessd - mysql | |
| ---------------------------------------- |
--------------------------------------------
In your case, I would create two directories: vsa and vsb. Now let's go into these two directories.
Each has at least these files (you can have more, per your requirements):
--------------------------------------------
| /home/vsa/docker-compose.yml             |
| /home/vsa/keepalived/Dockerfile          |
| /home/vsa/htmld/Dockerfile               |
| /home/vsa/accessd/Dockerfile             |
| /home/vsa/mysql/Dockerfile               |
| ---------------------------------------- |
|                    ^                     |
|                    |                     |
|                    v                     |
| /home/vsb/docker-compose.yml             |
| /home/vsb/keepalived/Dockerfile          |
| /home/vsb/htmld/Dockerfile               |
| /home/vsb/accessd/Dockerfile             |
| /home/vsb/mysql/Dockerfile               |
| ---------------------------------------- |
--------------------------------------------
Note the file names exactly: Dockerfile starts with a capital D.
Let's look at docker-compose.yml:
version: '3.9'
services:
  keepalived:
    build: ./keepalived
    restart: always
  htmld:
    build: ./htmld
    restart: always
  accessd:
    build: ./accessd
    restart: always
  mysql:
    build: ./mysql
    restart: always
networks:
  default:
    external:
      name: some_network
volumes:
  some_name: {}
Let's dig into docker-compose.yml first:
The version key defines which Compose file format version to use. The services section declares the services and containers you want to create and run.
I've used names like keepalived under services. You can use any name you want there; it's your choice.
Under keepalived, the build keyword specifies the directory where the Dockerfile lives. Since this docker-compose.yml is in /home/vsa, the path ./keepalived means "the keepalived directory next to this file", and Compose searches it for a Dockerfile (in the docker-compose.yml for vsb, it searches /home/vsb/keepalived instead).
The networks part specifies the external network these containers use: when all of the containers from a docker-compose file are running, they're on the same Docker network, so they can see and talk to each other. The name field holds some_network; you can pick any name you want, as long as the network was created beforehand.
To create a network called some_network on Linux, run docker network create some_network before running docker-compose.
The volumes part declares the named volumes available to these services.
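A minimal sketch of preparing that external network up front (the name some_network matches the compose files in this answer):

```shell
# create the shared network once, before either compose project is started
docker network create some_network

# confirm it exists; both compose projects will attach to it
docker network ls --filter name=some_network
```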
And here is an example in keepalived directory for a file called Dockerfile:
# see the Dockerfile reference docs for more info
FROM ubuntu:latest
# after the FROM instruction, you can use
# other available instructions to reach
# your own goal
Now let's go through the Dockerfile:
The FROM instruction specifies which base image to use. In this case we use ubuntu, for example, so that we create our image based on Ubuntu.
There are other instructions; you can see them all in the Dockerfile reference linked above.
After having finished both the Dockerfile and docker-compose.yml files with your own commands and keywords, you can build and start everything with these commands:
docker-compose -f /home/vsa/docker-compose.yml up -d
docker-compose -f /home/vsb/docker-compose.yml up -d
Now we'll have eight containers named as follows (Compose names them automatically from the directory and service names, unless you name them explicitly yourself):
vsa_keepalived
vsa_htmld
vsa_accessd
vsa_mysql
vsb_keepalived
vsb_htmld
vsb_accessd
vsb_mysql
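To confirm both stacks came up with the expected name prefixes, something like this can be used (a sketch; the paths match the compose files above):

```shell
# list running containers from each compose project
docker-compose -f /home/vsa/docker-compose.yml ps
docker-compose -f /home/vsb/docker-compose.yml ps

# or list all container names at once
docker ps --format '{{.Names}}'
```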

Allow container to read host network statistics, but bind to docker network

tl; dr? Jump straight to Question ;)
Context & Architecture
In this application designed with a micro-service architecture in mind, one can find notably two components:
monitor: probes system metrics and reports them via HTTP
controller: reads the metrics reported by monitor and takes actions according to rules defined in a configuration file.
+------------------------------------------------------+
| host /                                               |
+-----/                                                |
|                                                      |
|   +-----------------+    +-------------------+       |
|   | monitor /       |    | controller /      |       |
|   +--------/        |    +-----------/       |       |
|   | +----------+    |    |  +-------------+  |       |
|   | | REST :80 |>---+----+->| application |  |       |
|   | +----------+    |    |  +-------------+  |       |
|   +-----------------+    +-------------------+       |
|                                                      |
+------------------------------------------------------+
Trouble with Docker
The only way I found for monitor to be able to read network statistics not confined to its own Docker network stack was to start its container with --network=host. The following question assumes this is the only solution. If (fortunately) I am mistaken, please do answer with an alternative.
version: "3.2"
services:
  monitor:
    build: monitor
    network_mode: host
  controller:
    build: controller
    network_mode: host
Question
Is there a way for monitor to serve its report on a docker network even though it reads statistics from the host network stack?
Or, is there a way for controller to not be on --network=host even though it connects to monitor which is?
(note: I use docker-compose but a pure docker answer suits me)

Nginx service not starting

I am trying to set up Minikube and have hit a challenge. My Minikube is set up, and I started the Nginx pod. I can see that the pod is up, but the service doesn't appear as active. On the dashboard too, although the pod appears, the deployment doesn't show up. Here are my PowerShell command outputs.
I am learning this technology and may have missed something. My understanding is that when using the Docker tools, no explicit configuration is necessary at the Docker level, other than setting it up. Am I wrong here? If so, where?
relevant PS output
Let's deploy the hello-nginx deployment
C:\> kubectl.exe run hello-nginx --image=nginx --port=80
deployment "hello-nginx" created
View List of pods
c:\> kubectl.exe get pods
NAME                           READY     STATUS    RESTARTS   AGE
hello-nginx-6d66656896-hqz9j   1/1       Running   0          6m
Expose as a Service
c:\> kubectl.exe expose deployment hello-nginx --type=NodePort
service "hello-nginx" exposed
List exposed services using minikube
c:\> minikube.exe service list
|-------------|----------------------|-----------------------------|
|  NAMESPACE  |         NAME         |             URL             |
|-------------|----------------------|-----------------------------|
| default     | hello-nginx          | http://192.168.99.100:31313 |
| default     | kubernetes           | No node port                |
| kube-system | kube-dns             | No node port                |
| kube-system | kubernetes-dashboard | http://192.168.99.100:30000 |
|-------------|----------------------|-----------------------------|
Access Nginx from browser http://192.168.99.100:31313
This method can be used; it worked for me on CentOS 7:
$ systemctl enable nginx
$ systemctl restart nginx
or
$ systemctl start nginx

No response from Docker container during load test

I'm trying to execute load testing on my architecture using Docker containers.
The architecture I designed:
+---------------+       +---------------+       +--------------+
| Docker Apache |------>| Docker Tomcat |------>| Docker MySQL |
+---------------+       +---------------+       +--------------+
I use JMeter to validate this architecture, but when I launch just 100 requests at the same time, I receive around 80% "Socket closed" errors, yet the Apache logs reveal no errors.
If I change the architecture to use a classic Apache instead of the Docker one, I don't experience these errors (all requests are successfully handled by Apache), but I now get 408 HTTP errors between Apache and the Docker Tomcat.
+---------------+       +---------------+       +--------------+
|    Apache     |------>| Docker Tomcat |------>| Docker MySQL |
+---------------+       +---------------+       +--------------+
I think docker-proxy has problems handling a lot of simultaneous connections, but is there a way to tune it to resolve this issue?
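One tunable worth trying (a hedged suggestion; whether it fixes this particular workload is an assumption) is disabling the userland docker-proxy so that published ports are handled by iptables rules instead of a per-port proxy process:

```shell
# /etc/docker/daemon.json -- add this setting (create the file if absent):
#   {
#     "userland-proxy": false
#   }

# then restart the Docker daemon so the change takes effect
sudo systemctl restart docker
```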
Thanks for your help!
