The current environment is as follows:
Host 1: Master in Docker Container + Sentinel in separate Docker Container
Host 2: Replica in Docker Container + Sentinel in separate Docker Container
Host 3: Sentinel in Docker Container
Replication is working fine between the Master and the Replica.
Sentinels on all the hosts are able to discover the Master and the Replica (verified with SENTINEL MASTER mymaster and SENTINEL REPLICAS mymaster).
However, no Sentinel is able to auto-discover any other Sentinel (SENTINEL SENTINELS mymaster returns an empty array).
In each Sentinel's sentinel.conf, the announce-ip and announce-port options have been set to the respective host's IP and the default Sentinel port (26379).
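For reference, this is roughly what that looks like in, say, Host 2's sentinel.conf (192.0.2.20 below is a placeholder for that host's IP):
sentinel announce-ip 192.0.2.20
sentinel announce-port 26379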
The __sentinel__:hello channel is receiving hello messages from all Sentinels (verified by running SUBSCRIBE __sentinel__:hello in redis-cli on the Master node). The channel name was taken from the Sentinel documentation.
Despite all this, the Sentinels are not able to auto-discover each other.
Any suggestions?
Hello docker swarm gurus!
I need your help today, please.
I am facing an issue trying to deploy a swarm between a manager on machine A and a worker on machine B reached via SSH. The challenge is that machine B can only be reached via SSH through a VPN, so all communication with machine A has to happen over the VPN link using SSH port forwarding.
I was able to build the swarm properly, but whenever a service spawns a replica on machine B, the container cannot connect to other services in the swarm, as if the DNS or the swarm routing mesh were not properly forwarded to machine B.
To set up my swarm from A, where the swarm manager is running, I forwarded ports 2377, 7946 and 4789:
machine-A$ ssh -R 2377:localhost:2377 -R 7946:localhost:7946 -R 4789:localhost:4789 machine-B
From there I was able to join the swarm:
machine-B$ docker swarm join --token xxxxx localhost:2377
I am able to start services on both machines A and B, and they show in the swarm as running. But the issue appears when I try to reach one service from another running container: only services spawned on node A can ping each other. Any service hosted on B does not respond to ping, and containers running on machine B cannot ping any service running on either B or A.
I checked /etc/resolv.conf in a container on B, and it reads:
search mydomain.local
nameserver 127.0.0.11
options ndots:0
127.0.0.11 answers pings, but a ping to any other service returns:
ping: <service_name>: Name or service not known
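For reference, a sketch of how one might check which of the swarm ports are actually open on machine B (swarm uses 2377/tcp for cluster management, 7946/tcp+udp for node gossip and 4789/udp for overlay traffic):
machine-B$ ss -lntu | grep -E ':(2377|7946|4789)'
machine-A$ docker node ls    # run on the manager to confirm machine B shows up as Ready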
Any idea what I am missing?
Is there anything more specific to do to configure the swarm routing mesh properly?
Thanks for any suggestions.
Sam.
I am building a Docker image that runs Elasticsearch v7.5.1 on Windows Server Core, but it doesn't seem to work.
When I start the container, I get a message saying that the node couldn't join the cluster:
[o.e.c.c.ClusterFormationFailureHelper] [66EADAF2C321] master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and [cluster.initial_master_nodes] is empty on this node: have discovered [{66EADAF2C321}{PLdolNAJSfC_tyPB32cLtQ}{YC0BB7okSFOBA_i9GqI6xA}{172.27.103.24}{172.27.103.24:9300}{dilm}{ml.machine_memory=1072611328, xpack.installed=true, ml.max_open_jobs=20}]; discovery will continue using [127.0.0.1:9300, [::1]:9300] from hosts providers and [{66EADAF2C321}{PLdolNAJSfC_tyPB32cLtQ}{YC0BB7okSFOBA_i9GqI6xA}{172.27.103.24}{172.27.103.24:9300}{dilm}{ml.machine_memory=1072611328, xpack.installed=true, ml.max_open_jobs=20}] from last-known cluster state; node term 0, last-accepted version 0 in term 0
If I run ES on my laptop it works without any issue (same elasticsearch.yml file).
Do you have any idea why it fails in Docker?
elasticsearch.yml file:
network.host: 0.0.0.0
cluster.name: elasticsearch
path.logs: L:/
path.data: D:/
discovery.seed_hosts: 127.0.0.1, [::1]
http.port: 9200
and the docker image:
docker pull mydockeruniversity/elasticsearchservercore:751-beta1-cfgchange1
The node cannot connect to other nodes and form a cluster because you haven't configured them in the discovery.seed_hosts setting. Right now you are telling your node to connect to localhost (127.0.0.1) to find the other nodes. Since you are inside a Docker container, there won't be any nodes at that address.
Instead, you need to provide the hostnames or IP addresses of the master-eligible nodes in that setting, like so:
discovery.seed_hosts:
- 192.168.1.10:9300
- 192.168.1.11
- seeds.mydomain.com
You might want to take a look at the discovery docs to get a better grasp of that topic.
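Since the log message above also says that cluster.initial_master_nodes is empty, a brand-new v7+ cluster additionally needs the bootstrap setting on its master-eligible nodes. A minimal sketch, assuming node names es01 and es02 (placeholders, not from the original post):
node.name: es01
cluster.initial_master_nodes:
  - es01
  - es02
If this is really just a single node for testing, setting discovery.type: single-node instead skips the cluster bootstrap entirely.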
If you tell docker-compose to scale a service, and do NOT expose its ports,
docker-compose scale dataservice=2
there will be two IPs on the network that the DNS name dataservice resolves to, so services that reach it by hostname will load-balance across them.
I would like to do the same for the edge proxy as well. The point would be that
docker-compose scale edgeproxy=2
would cause edgeproxy to resolve to one of two possible IP addresses.
But the semantics of exposing ports are wrong for this. If I expose:
8443:8443
then it will try to bind every edgeproxy replica to host port 8443. What I want is more like:
0.0.0.0:8443:edgeproxy:8443
where, when you come into the Docker network via host port 8443, it randomly selects one of the edgeproxy:8443 IPs to bind the incoming TCP connection to.
Is there an alternative to just doing a port-forward? I want a single host port that gets me in to talk to any IP that resolves as edgeproxy.
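For illustration, the setup described above would look roughly like this in a compose file (the service names come from the question; the images are placeholders):
version: "3"
services:
  dataservice:
    image: example/dataservice   # placeholder image
  edgeproxy:
    image: example/edgeproxy     # placeholder image
    ports:
      - "8443:8443"              # binds host port 8443, so only one replica can run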
This is provided by swarm mode. You can enable a single node swarm cluster with:
docker swarm init
And then deploy your compose file as a stack with:
docker stack deploy -c docker-compose.yml $stack_name
There are quite a few differences from docker-compose, including:
Swarm doesn't build images
You manage the target state with docker service commands; trying to stop a container with docker stop won't work, since swarm will restart it
The compose file needs to use v3 syntax
Networks will be overlay networks and, by default, not attachable by containers outside of the swarm
One of the main changes is that exposed ports are published on an ingress network managed by swarm mode, and connections are round-robin load balanced across your containers. You can also define a replica count inside the compose file, eliminating the need to run a scale command.
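A minimal sketch of what that looks like in a v3 compose file (the image name is a placeholder; the replica count and published port follow the example above):
version: "3.8"
services:
  edgeproxy:
    image: example/edgeproxy     # placeholder image
    deploy:
      replicas: 2                # replaces docker-compose scale edgeproxy=2
    ports:
      - "8443:8443"              # published on the ingress network, load balanced across replicas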
See more at: https://docs.docker.com/engine/swarm/
I am running into a peculiar problem.
I have a Kubernetes cluster, and I set up no_proxy for the master node of the cluster (in the Docker systemd environment) in order to be able to run docker build/push against a registry that is running in Docker on the master node.
Now I have a problem: my containers cannot access the outside network (because the communication happens through the k8s master node, I presume).
If I choose not to set no_proxy for the master node in Docker, then I cannot push images to my registry through the external IP of the master; I have to use localhost as the push destination, which breaks my app later on.
I use Weave as my CNI plugin.
The network communication of containers running on your nodes has nothing to do with the network communication of your master with the outside world, whether or not it goes through a proxy.
Basically, the network communication of containers running on a node goes through the node's own network interfaces, etc.
Having said that, are you running your workloads on your master? If yes, that could be affecting the communication of your master's containers (if you set no_proxy for some hostnames). It could also be affecting the communication of your kube-controller-manager, kube-apiserver, CoreDNS, kubelet and network overlay on the master.
Are you configuring your Docker client proxy correctly, as described in the Docker documentation?
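For reference, a sketch of the Docker systemd proxy drop-in the question refers to (the proxy address and the NO_PROXY entries below are placeholders):
# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1,registry.example.internal"
After editing it, reload and restart the daemon:
sudo systemctl daemon-reload
sudo systemctl restart docker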
I created an image with apache2 running locally in a Docker container via a Dockerfile exposing port 80, then pushed it to my Docker Hub repository.
I created a new Container Engine instance in my project on Google Cloud. Within it I have two machines, the Master and Node1.
Then I created a Pod, specifying the name of my image on Docker Hub and configuring the ports "containerPort" and "hostPort" as 6379 and 80 respectively.
I accessed Node1 via SSH and ran $ sudo docker ps -l, and found that my Docker container is there.
I created a Service, configuring the ports as in the Pod: "containerPort" and "hostPort" as 6379 and 80 respectively.
I checked that the firewall allows access to port 80. Even without being sure it was necessary, I also created a rule to allow access through port 6379.
But when I enter http://IP_ADDRESS:PORT it is not available.
Any idea about what's wrong?
If you are using a service to access your pod, you should configure the service to use an external load balancer (similarly to what is done in the guestbook example's frontend service definition) and you should not need to specify a host port in your pod definition.
Once you have an external load balancer created, then you should open a firewall rule to allow external access to the load balancer which will allow packets to reach the service (and pods backing it) running in your cluster.
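A minimal sketch of such a Service (the name and the selector label are placeholders; the apache2 container is assumed to listen on port 80):
apiVersion: v1
kind: Service
metadata:
  name: apache-frontend          # placeholder name
spec:
  type: LoadBalancer             # asks the cloud provider for an external load balancer
  selector:
    app: apache                  # must match the labels on your Pod
  ports:
    - port: 80                   # port exposed on the load balancer
      targetPort: 80             # port the apache2 container listens on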