I have Ubuntu 20.04.4
I am new to VerneMQ and I was trying to set up a 3-node cluster.
I am successfully able to form a cluster of 2 nodes, but when I try to join the 3rd node it shows Done. However, when I type the command sudo vmq-admin cluster show, the output is
+-------------------------+---------+
| Node                    | Running |
+-------------------------+---------+
| VerneMQ#192.168.41.17   | true    |
+-------------------------+---------+
| VerneMQTR#192.168.41.20 | true    |
+-------------------------+---------+
It only shows 2 nodes, but when I check the GUI status on the web, it shows:
It should show a cluster size of 3, as it has 3 nodes.
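For reference, the join was done on the third node with a command of this general form (the discovery node is whichever node is already running in the cluster):
# run on the node that should join the cluster;
# <RunningNodeName> is the full name of a node that is already part of it,
# exactly as listed by "vmq-admin cluster show"
sudo vmq-admin cluster join discovery-node=<RunningNodeName>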
Context
I have a minikube cluster running in a WSL2 context with the docker driver (from docker-desktop).
~ » minikube profile list
|----------|-----------|---------|--------------|------|---------|---------|-------|--------|
| Profile  | VM Driver | Runtime |      IP      | Port | Version | Status  | Nodes | Active |
|----------|-----------|---------|--------------|------|---------|---------|-------|--------|
| minikube | docker    | docker  | 192.168.49.2 | 8443 | v1.24.3 | Running |     1 | *      |
|----------|-----------|---------|--------------|------|---------|---------|-------|--------|
I've set up a LoadBalancer service, and then I ran the minikube tunnel command.
~ » kubectl get svc my-svc-loadbalancer
NAME                  TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
my-svc-loadbalancer   LoadBalancer   10.105.11.182   127.0.0.1     8080:31684/TCP   18h
My problem
When trying to access 127.0.0.1:8080 from my browser, I get an ERR_EMPTY_RESPONSE error.
What I have noticed:
Without the minikube tunnel command, accessing 127.0.0.1:8080 results in ERR_CONNECTION_REFUSED, which makes sense.
The same service configuration, but as a NodePort together with the minikube service command, works and I can access my deployment; however, I would like to access it as a LoadBalancer.
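For reference, the Service manifest is roughly the following sketch; the selector and targetPort are placeholders standing in for my deployment's actual values:
apiVersion: v1
kind: Service
metadata:
  name: my-svc-loadbalancer
spec:
  type: LoadBalancer
  selector:
    app: my-app          # placeholder: must match the deployment's pod labels
  ports:
    - port: 8080         # port exposed on the external IP
      targetPort: 80     # placeholder: the container port of the deployment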
I am working on building a high-availability setup using keepalived, where each server will have its own set of Docker containers that will be handled appropriately depending on whether it is in the BACKUP or MASTER state. However, for testing purposes, I don't have 2 boxes available that I can turn on and off. So, is there a good (preferably lightweight) way I can set up multiple containers with the same name on the same machine?
Essentially, I would like it to look something like this:
Physical Server A
--------------------------------------------
| Virtual Server A                          |
| ---------------------------------------- |
| | keepalived - htmld - accessd - mysql | |
| ---------------------------------------- |
|                    ^                      |
|                    |                      |
|                    v                      |
| Virtual Server B                          |
| ---------------------------------------- |
| | keepalived - htmld - accessd - mysql | |
| ---------------------------------------- |
--------------------------------------------
Thanks
You cannot have multiple containers with the exact same name, but you can use docker-compose files in separate directories to run two sets of containers with the same service names (with some differences that I explain below).
You can read more about this in the Docker Docs; my explanation below follows them.
Let us take your setup:
Physical Server A
--------------------------------------------
| Virtual Server A                          |
| ---------------------------------------- |
| | keepalived - htmld - accessd - mysql | |
| ---------------------------------------- |
|                    ^                      |
|                    |                      |
|                    v                      |
| Virtual Server B                          |
| ---------------------------------------- |
| | keepalived - htmld - accessd - mysql | |
| ---------------------------------------- |
--------------------------------------------
In your case, I would create two directories: vsa and vsb. Now let's go into these two directories.
Each of them contains these files (at least; you can have more as your requirements grow):
--------------------------------------------
| /home/vsa/docker-compose.yml             |
| /home/vsa/keepalived/Dockerfile          |
| /home/vsa/htmld/Dockerfile               |
| /home/vsa/accessd/Dockerfile             |
| /home/vsa/mysql/Dockerfile               |
| ---------------------------------------- |
|                    ^                      |
|                    |                      |
|                    v                      |
| /home/vsb/docker-compose.yml             |
| /home/vsb/keepalived/Dockerfile          |
| /home/vsb/htmld/Dockerfile               |
| /home/vsb/accessd/Dockerfile             |
| /home/vsb/mysql/Dockerfile               |
| ---------------------------------------- |
--------------------------------------------
Note the file names exactly: Dockerfile starts with a capital D.
Let's look at docker-compose.yml:
version: '3.9'
services:
  keepalived:
    build: ./keepalived
    restart: always
  htmld:
    build: ./htmld
    restart: always
  accessd:
    build: ./accessd
    restart: always
  mysql:
    build: ./mysql
    restart: always
networks:
  default:
    external:
      name: some_network
volumes:
  some_name: {}
Let's dig into docker-compose.yml first:
The version part defines which compose file format version to use. The services part defines the services and containers you want to create and run.
I've used names like keepalived under services. You can use any name you want there, as it's your choice.
Under keepalived, the keyword build specifies the path where the Dockerfile lives. Since that path is /home/vsa/keepalived, we use ., which means "here", followed by the keepalived directory, where compose looks for the Dockerfile (in the docker-compose.yml for vsb, it searches for this file in /home/vsb/keepalived).
The networks part specifies the external network these containers use, so that once all of the containers from both compose files are running, they are on the same Docker network and can see and talk to each other. The name key holds the name some_network; you can choose any name you want, as long as that network has been created beforehand.
To create a network called some_network on Linux, you should run docker network create some_network before running the compose files.
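A minimal sketch of that step:
# create the shared external network once, before bringing either stack up
docker network create some_network
# check that it exists
docker network ls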
The volumes part declares the named volumes available to these services.
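As declared above, some_name is not yet attached to any service. A hypothetical way to use it, for example to persist MySQL data (the mount path is MySQL's default data directory), would be:
services:
  mysql:
    build: ./mysql
    restart: always
    volumes:
      - some_name:/var/lib/mysql   # keep the database files in the named volume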
And here is an example of the file called Dockerfile in the keepalived directory:
# see the Dockerfile reference docs for more info
FROM ubuntu:latest
# after the FROM instruction, you can use
# other available instructions to reach
# your own goal
Now let's go through the Dockerfile:
The FROM instruction specifies which base image to use. In this case we use ubuntu, so our image is built on top of Ubuntu.
There are other instructions; you can see them all in the Dockerfile reference linked above.
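For instance, a hypothetical, fuller Dockerfile for the keepalived service might look like this sketch (the package name, config path and options are assumptions for illustration):
# hypothetical example: install keepalived from the Ubuntu repositories
FROM ubuntu:latest
RUN apt-get update && apt-get install -y keepalived && rm -rf /var/lib/apt/lists/*
# copy your own configuration into the image (path assumed)
COPY keepalived.conf /etc/keepalived/keepalived.conf
# stay in the foreground and log to the console so Docker can supervise the process
CMD ["keepalived", "--dont-fork", "--log-console"]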
After you have finished both the Dockerfile and docker-compose.yml files with your own instructions and keywords, you can build and run everything with these commands:
docker-compose -f /home/vsa/docker-compose.yml up -d
docker-compose -f /home/vsb/docker-compose.yml up -d
Now we'll have eight containers with names like the following (docker-compose generates the names automatically, adding a numeric suffix such as _1, unless you name them explicitly yourself; see the sketch after the list):
vsa_keepalived
vsa_htmld
vsa_accessd
vsa_mysql
vsb_keepalived
vsb_htmld
vsb_accessd
vsb_mysql
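If you would rather pick the names yourself, compose has a container_name key for that; a minimal sketch for one service (remember that container names must still be unique across the whole host):
services:
  keepalived:
    build: ./keepalived
    restart: always
    container_name: vsa_keepalived   # explicit name instead of the generated one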
I need to reproduce a scenario where there are two machines on a network. Machine 1 has two network interfaces (say for control and data traffic), both on the same network.
 -----------------               -----------
|       M 1       |             |    M 2    |
|  eth0     eth1  |             |   eth0    |
| 1.1.1.1 1.1.1.2 |             |  1.1.1.3  |
 -----------------               -----------
     |        |                         |
     +--------+--network 1.1.1.0/24----+
I see there are solutions to have two networks on a docker container, but is it possible to have two interfaces on the same network in docker?
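For context, the two-network workaround I am referring to looks roughly like this (the image name and the second subnet are placeholders); what I actually need is both interfaces on 1.1.1.0/24:
# create two user-defined networks (the second subnet is a placeholder)
docker network create --subnet 1.1.1.0/24 ctrl_net
docker network create --subnet 1.1.2.0/24 data_net
# start the container on the first network, then attach the second one;
# this gives the container a second interface, but on a different subnet
docker run -d --name m1 --network ctrl_net some_image
docker network connect data_net m1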
I am trying to set up Minikube and have run into a challenge. My minikube is set up, and I started the Nginx pod. I can see that the pod is up, but the service doesn't appear as active. On the dashboard too, although the pod appears, the deployment doesn't show up. Here are my PowerShell command outputs.
I am learning this technology and may have missed something. My understanding is that when using the Docker tools, no explicit configuration is necessary at the Docker level other than setting it up. Am I wrong here? If so, where?
Relevant PowerShell output
Let's deploy the hello-nginx deployment
C:\> kubectl.exe run hello-nginx --image=nginx --port=80
deployment "hello-nginx" created
View List of pods
c:\> kubectl.exe get pods
NAME                           READY   STATUS    RESTARTS   AGE
hello-nginx-6d66656896-hqz9j   1/1     Running   0          6m
Expose as a Service
c:\> kubectl.exe expose deployment hello-nginx --type=NodePort
service "hello-nginx" exposed
List exposed services using minikube
c:\> minikube.exe service list
|-------------|----------------------|-----------------------------|
|  NAMESPACE  |         NAME         |             URL             |
|-------------|----------------------|-----------------------------|
| default     | hello-nginx          | http://192.168.99.100:31313 |
| default     | kubernetes           | No node port                |
| kube-system | kube-dns             | No node port                |
| kube-system | kubernetes-dashboard | http://192.168.99.100:30000 |
|-------------|----------------------|-----------------------------|
Access Nginx from browser http://192.168.99.100:31313
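For completeness, the standard checks I would run next on the deployment and the service (output omitted here):
c:\> kubectl.exe get deployments
c:\> kubectl.exe get svc hello-nginx
c:\> kubectl.exe describe svc hello-nginx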
This method can be used; it worked for me on CentOS 7:
$ systemctl enable nginx
$ systemctl restart nginx
or
$ systemctl start nginx
For a study, I deployed a cloud architecture on my computer using Docker (Nginx for load balancing and some Apache servers to run a simple PHP application).
I wanted to know if it was possible to use several computers to deploy my containers in order to increase the power available.
(I'm using a MacBook Pro with Yosemite. I've installed boot2docker with VirtualBox.)
Disclosure: I was a maintainer on Swarm Legacy and Swarm mode
Edit: This answer refers to Docker Swarm Legacy, the first version of Docker Swarm. Since then, a new version called Swarm mode has been included directly in the Docker engine; it behaves a bit differently in terms of topology and features, even though the big ideas remain.
Yes, you can deploy Docker on multiple machines and manage them together as a single pool of resources. There are several solutions you can use to orchestrate your containers across multiple machines with Docker.
You can use Docker Swarm, Kubernetes, Mesos/Marathon, or Fleet (there might be others, as this is a fast-moving area). There are also commercial solutions like Amazon ECS.
In the case of Swarm, it uses the Docker remote API to communicate with the remote Docker daemons and schedules containers according to the load or some extra constraints (the other systems are similar, with more or fewer features). Here is an example of a small Swarm deployment.
                         Docker CLI
                             +
                             |
                             |
                             | 4000 (or else)
                             | server
                    +--------v---------+
                    |                  |
          +--------->  Swarm Manager   <---------+
          |         |                  |         |
          |         +--------^---------+         |
          |                  |                   |
          |                  |                   |
          |                  |                   |
          |                  |                   |
          | client           | client            | client
          | 2376             | 2376              | 2376
          |                  |                   |
+---------v-------+ +--------v--------+ +--------v--------+
|                 | |                 | |                 |
|   Swarm Agent   | |   Swarm Agent   | |   Swarm Agent   |
|     Docker      | |     Docker      | |     Docker      |
|     Daemon      | |     Daemon      | |     Daemon      |
|                 | |                 | |                 |
+-----------------+ +-----------------+ +-----------------+
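For a rough idea, wiring up such a legacy Swarm cluster with the old hosted token discovery looked something like the sketch below (addresses, ports and the token are placeholders, and TLS is left out for brevity):
# generate a cluster token (hosted discovery, since retired)
docker run --rm swarm create                                   # prints <cluster_id>
# on the manager machine: expose the Swarm API on port 4000
docker run -d -p 4000:2375 swarm manage token://<cluster_id>
# on every agent machine: register the local Docker daemon with the cluster
docker run -d swarm join --addr=<node_ip>:2375 token://<cluster_id>
# from a client: point the Docker CLI at the manager and schedule a container
docker -H tcp://<manager_ip>:4000 run -d nginx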
Choosing one of those systems is basically a choice between:
Cluster deployment simplicity and maintenance
Flexibility of the scheduler
Completeness of the API
Support for running VMs
Higher abstraction for groups of containers: Pods
Networking model (Bridge/Host or Overlay or Flat network)
Compatibility with the Docker remote API
It depends mostly on the use case and what kinds of workload you are running. For more details on the differences between those systems, see this answer.
That sounds like clustering, which is what docker swarm does (see its github repo).
It turns a pool of Docker hosts into a single, virtual host.
See for example issue 247: How replication control and load balancing being taken care of?
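With the newer Swarm mode built into the engine, forming such a pool boils down to roughly this (the address and token are placeholders):
# on the manager node
docker swarm init --advertise-addr <manager_ip>
# on each worker node, using the join token printed by the init command
docker swarm join --token <worker_token> <manager_ip>:2377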