How to do HA failover for mysqlrouter with keepalived using docker

So I have a mysqlrouter Docker container that is load balancing requests to my MySQL InnoDB cluster, and everything is working perfectly.
Now, since having a single mysqlrouter in my setup also means that the router is a single point of failure, I want to achieve HA by spinning up another Docker container, mysqlrouter-bkup. I also want to spin up two keepalived Docker containers monitoring the mysqlrouter pair so as to cause failover (i.e. make the backup instance the master when the master instance dies).
What I have managed to do so far (without finding any properly documented guidance online on how to achieve this with Docker containers) is:
Spin up the first mysqlrouter with the following command:
docker run --name mysqlrouter -p 6446:6446 -p 4667:4667 --net=smartdev-pro_internalnet -e MYSQL_HOST=mysql1 -e MYSQL_PORT=3306 -e MYSQL_USER=clusterAdmin -e MYSQL_PASSWORD=password -e MYSQL_INNODB_CLUSTER_MEMBERS=3 -e MYSQL_CREATE_ROUTER_USER=0 mysql/mysql-router
Spin up the second mysqlrouter with the following command:
docker run --name mysqlrouter-bkup -p 6448:6446 -p 4669:4667 --net=smartdev-pro_internalnet -e MYSQL_HOST=mysql1 -e MYSQL_PORT=3306 -e MYSQL_USER=clusterAdmin -e MYSQL_PASSWORD=password -e MYSQL_INNODB_CLUSTER_MEMBERS=3 -e MYSQL_CREATE_ROUTER_USER=0 mysql/mysql-router
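To sanity-check that each router works on its own before adding keepalived, I connect through each of them from a throwaway mysql client container on the same network (the image tag and the assumption that clusterAdmin may log in through the router are mine):
# check the primary router; 6446 is the router's read-write port inside the container
docker run --rm -it --net=smartdev-pro_internalnet mysql:8.0 mysql -h mysqlrouter -P 6446 -u clusterAdmin -p
# same check against the backup router
docker run --rm -it --net=smartdev-pro_internalnet mysql:8.0 mysql -h mysqlrouter-bkup -P 6446 -u clusterAdmin -p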
Spin up the first keepalived docker container:
docker run --name keepalived1 --env KEEPALIVED_STATE="MASTER" --env KEEPALIVED_INTERFACE="eth0" --env KEEPALIVED_UNICAST_PEERS="#PYTHON2BASH:['172.18.0.9', '172.18.0.10']" --env KEEPALIVED_ROUTER_ID="100" --env KEEPALIVED_PRIORITY="200" --env KEEPALIVED_PASSWORD="password" --env KEEPALIVED_VIRTUAL_IPS="172.18.0.100" --cap-add=NET_ADMIN --cap-add=NET_BROADCAST --cap-add=NET_RAW --net=host osixia/keepalived:2.0.20
Spin up the second keepalived docker container:
docker run --name keepalived2 --env KEEPALIVED_STATE="BACKUP" --env KEEPALIVED_INTERFACE="eth0" --env KEEPALIVED_UNICAST_PEERS="#PYTHON2BASH:['172.18.0.9', '172.18.0.10']" --env KEEPALIVED_ROUTER_ID="101" --env KEEPALIVED_PRIORITY="100" --env KEEPALIVED_PASSWORD="password" --env KEEPALIVED_VIRTUAL_IPS="172.18.0.100" --cap-add=NET_ADMIN --cap-add=NET_BROADCAST --cap-add=NET_RAW --net=host osixia/keepalived:2.0.20
172.18.0.9 is the IP address of mysqlrouter and 172.18.0.10 is the IP address of mysqlrouter-bkup.
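For reference, as far as I understand the osixia image, those environment variables roughly translate into a keepalived.conf along these lines (my own hand-written sketch, not the exact file the image generates; the vrrp_script health check of the router port is something I have not wired in yet, and it assumes nc is available inside the keepalived container and that the router IP is reachable from it):
vrrp_script chk_mysqlrouter {
    script "/usr/bin/nc -z 172.18.0.9 6446"   # TCP probe of the router port; nc is an assumption
    interval 2
    fall 2
    rise 2
}
vrrp_instance VI_1 {
    state MASTER                  # BACKUP on keepalived2
    interface eth0
    virtual_router_id 100         # VRRP peers must share the same id
    priority 200                  # 100 on keepalived2
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass password
    }
    unicast_src_ip 172.18.0.9     # this node; 172.18.0.10 on the other one
    unicast_peer {
        172.18.0.10               # 172.18.0.9 on the other one
    }
    virtual_ipaddress {
        172.18.0.100
    }
    track_script {
        chk_mysqlrouter
    }
}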
So all containers start successfully, and the complete logs for both keepalived services are posted here: click me (I couldn't post more than 30,000 characters here).
Now, in my Node.js MySQL connection settings, I supply 172.18.0.100 (the keepalived virtual IP) as the host.
const mysql = require('mysql'); // client library assumed; mysql2 exposes the same createPool API
const connection = mysql.createPool({
  host: '172.18.0.100',          // the keepalived virtual IP, as a string
  user: process.env.USER,
  password: process.env.PASSWORD,
  database: process.env.DATABASE,
  port: 6446,
  connectionLimit: 20
});
So with this setup everything works fine, but when I stop the mysqlrouter container to see whether it will fail over to mysqlrouter-bkup, it doesn't.
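While testing, these are the checks I run to see what keepalived is actually doing (the grep patterns assume keepalived's usual log wording, and ip assumes iproute2 is available inside the osixia image):
# watch the VRRP state transitions while stopping/starting mysqlrouter
docker logs keepalived1 | grep -iE 'entering (master|backup|fault) state'
docker logs keepalived2 | grep -iE 'entering (master|backup|fault) state'
# check whether the VIP is currently assigned; note both containers see the same interfaces because of --net=host
docker exec keepalived1 ip addr show eth0 | grep 172.18.0.100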
Can someone please point out what I am doing wrong?
Extra info: I am running these containers on Docker Desktop (with WSL 2) on Windows 11.
And if there is a better way of achieving my purpose using Docker on Windows, I am open to it.

Related

Docker's container name can not be resolved

I just tried to create two containers for Elasticsearch and Kibana.
docker network create esnetwork
docker run --name myes --net esnetwork -p 127.0.0.1:9200:9200 -p 127.0.0.1:9300:9300 -e "discovery.type=single-node" elasticsearch:7.9.3
and Elasticsearch works when I use http://localhost:9200 or http://internal-ip:9200
But when I use http://myes:9200, it just can't resolve the container name.
Thus when I run
docker run --name mykib --net esnetwork -p 5601:5601 -e "ELASTICSEARCH_HOSTS=http://myes:9200" docker.elastic.co/kibana/kibana:7.9.3
It couldn't be created because it cannot resolve myes:9200.
I also tried to replace "ELASTICSEARCH_HOSTS=http://myes:9200" with localhost:9200 or the internal IP instead, but nothing works.
So I think my question should be: how do I make the container's DNS work?
How are you resolving 'myes'?
Is it mapped in the hosts file and resolving to 127.0.0.1?
Also, use 127.0.0.1 wherever possible, as localhost could be pointing to something else and not get resolved.
It seems this problem doesn't arise from DNS. Both the Elasticsearch and Kibana containers should use the fixed name "elasticsearch", so the docker commands will be:
$ docker network create esnetwork
$ sudo vi /etc/sysctl.d/max_map_count.conf
vm.max_map_count=262144
$ docker run --name elasticsearch --net esnetwork -p 127.0.0.1:9200:9200 -p 127.0.0.1:9300:9300 -e "discovery.type=single-node" elasticsearch:7.9.3
$ docker run --name kib01-test --net esnetwork -p 5601:5601 -e "ELASTICSEARCH_HOSTS=http://elasticsearch:9200" docker.elastic.co/kibana/kibana:7.9.3
Then, once the terminals that ran these commands are no longer needed, just close them and restart the containers from the Docker Desktop manager. After that everything goes smoothly.
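One way to verify the name resolution before starting Kibana is to hit Elasticsearch by container name from another container on the same network, for example with a throwaway curl image (the image choice is just an example):
# should print the Elasticsearch banner JSON if DNS on esnetwork works
docker run --rm --net esnetwork curlimages/curl -s http://elasticsearch:9200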
My environment is Fedora 36, docker 20.10.18

Connect application to database when they are in separate docker containers

Well, the setup is simple: there should be two containers, one for the MySQL database and the other for the web application.
This is what I do to run the containers,
the first one for the database and the second for the app:
docker run --name mysql-container -e MYSQL_ROOT_PASSWORD=root -e MYSQL_DATABASE=db -p 3306:3306 -d mysql
docker run -p 8081:8081 myrepo/myapp
The application tries to connect to the database using localhost:3306, but as I found out, the issue is that each container has its own localhost.
One of the solutions I found was to put both containers on the same network using --net, so the docker commands ended up like the following:
docker network create my-network
docker run --name mysql-container -e MYSQL_ROOT_PASSWORD=root -e MYSQL_DATABASE=db -p 3306:3306 -d --net my-network mysql
docker run --net my-network -p 8081:8081 myrepo/myapp
Though, the web application is still not able to connect to the database. What am I doing wrong, and what is the proper flow for connecting an application to a database when they are both inside containers?
You could use the name of the container (i.e. mysql-container) to connect to mysql. Example:
Run the mysql container:
docker run --name mysql-container -e MYSQL_ROOT_PASSWORD=root -e MYSQL_DATABASE=db -p 3306:3306 -d --net my-network mysql
Connect from another container using the mysql client:
docker run --net my-network -it mysql mysql -u root -p db -h mysql-container
In your application, replace whatever IP you have in the database URL with mysql-container.
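For example, if the application reads its connection settings from environment variables (the variable names below are hypothetical, use whatever your app actually expects), the app container could be started like this:
# DB_HOST points at the container name, not localhost; the variable names are placeholders
docker run --net my-network -p 8081:8081 \
  -e DB_HOST=mysql-container -e DB_PORT=3306 \
  -e DB_NAME=db -e DB_USER=root -e DB_PASSWORD=root \
  myrepo/myapp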
Well, after additional research, I successfully managed to connect to the database.
The approach I used is the following:
On my host I grabbed the IP address of the docker0 bridge itself rather than of a specific container:
sudo ip addr show | grep docker0
I added the IP address of docker0 to the database connection URL inside my application, and the application managed to connect to the database (note: with this flow I don't add the --net option when starting the containers).
What is definitely strange is that even adding a shared network like --net=my-network for both containers didn't work. Moreover, I tried using --net=host to share the host's network with the containers, and it was still unsuccessful. If anyone can explain why it didn't work, please share your knowledge.
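While digging into it, these commands helped me confirm which network each container actually joined and what addresses it got:
# list the containers attached to my-network and their addresses
docker network inspect my-network
# print just the IPs a given container has on its networks
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' mysql-container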

KafkaTool: Can't connect to Kafka cluster

I'm trying to connect to Kafka using KafkaTool. I got an error:
Error connecting to the cluster. failed create new KafkaAdminClient
Kafka and Zookeeper are hosted in Docker. I ran the following commands:
docker network create kafka
docker run --network=kafka -d --name zookeeper -e ZOOKEEPER_CLIENT_PORT=2181 confluentinc/cp-zookeeper:latest
docker run --network=kafka -d -p 9092:9092 --name kafka -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092 -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 confluentinc/cp-kafka:latest
Settings for KafkaTool
Why does KafkaTool not connect to the Kafka instance hosted in Docker?
I'm assuming this GUI is not coming from a Docker container. Therefore, your host machine doesn't know what zookeeper or kafka are, only the Docker network does.
In the GUI, you will want to use localhost for both, then in your Kafka run command, leave all the other variables alone but change -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092
The Zookeeper run command is fine, but add -p 2181:2181 to expose the port to the host so that the GUI can connect.
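Putting that together, the adjusted commands would look roughly like this (everything else left exactly as in the question):
docker run --network=kafka -d -p 2181:2181 --name zookeeper -e ZOOKEEPER_CLIENT_PORT=2181 confluentinc/cp-zookeeper:latest
# note: with localhost advertised, other containers on the kafka network can no longer reach the broker by name
docker run --network=kafka -d -p 9092:9092 --name kafka -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092 -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 confluentinc/cp-kafka:latest
Then point KafkaTool at localhost:9092 for the broker and localhost:2181 for ZooKeeper.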

Can't access published port on a service on my host machine

The docker daemon is running on an Ubuntu machine. I'm trying to start up a zookeeper ensemble in a swarm. The zookeeper nodes themselves can talk to each other. However, from the host machine, I don't seem to be able to access the published ports.
If I start the container with -
docker run \
-p 2181:2181 \
--env ZOO_MY_ID=1 \
--env ZOO_SERVERS="server.1=0.0.0.0:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888" \
zookeeper
It works like a charm. On my host machine I can say echo conf | nc localhost 2181 and zookeeper says something back.
However if I do,
docker service create \
-p 2181:2181 \
--env ZOO_MY_ID=1 \
--env ZOO_SERVERS="server.1=0.0.0.0:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888" \
zookeeper
and run the same command echo conf | nc localhost 2181,
it just gets stuck. I don't even get a new prompt on my terminal.
This works just as expected on the Docker Playground on the official Zookeeper Docker Hub page. So I expect it should for me too.
But... If I docker exec -it $container sh and then try the command in there, it works again.
Aren't published ports supposed to be accessible even by the host machine for a service?
Is there some trick I'm missing about working with overlay networks?
Try to use docker service create --publish 2181:2181 instead.
I believe the container backing the service is not directly exposed and has to go through the Swarm networking.
Otherwise, inspect your service to check which ports are published: docker service inspect <service_name>
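A minimal example of what that looks like, with an explicit --publish and a check of what actually got published (the service name zoo1 is arbitrary):
docker service create --name zoo1 --publish 2181:2181 \
  --env ZOO_MY_ID=1 \
  --env ZOO_SERVERS="server.1=0.0.0.0:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888" \
  zookeeper
# shows TargetPort/PublishedPort/PublishMode for the service
docker service inspect --format '{{json .Endpoint.Ports}}' zoo1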
Source: documentation

Start Solr cloud on Docker Swarm (1.12) without Zookeeper

I am running Docker Swarm 1.12 on 3 CoreOS machines via Vagrant.
What is the best way to start a Solr cloud on the cluster?
Do I need Zookeeper?
I have gotten as far as this:
docker service create --mode=global --name solr -p 8983:8983 solr:5.3.1 bash -c "/opt/solr/bin/solr start -f -c"
But then the cloud is empty because it does not know about the other two machines. How can I use Swarm's power here?
Documentation
The container image documentation describes how to interconnect ZooKeeper as a backing store for a distributed Solr setup:
You can also run a distributed Solr configuration, with Solr nodes in
separate containers, sharing a single ZooKeeper server:
Run ZooKeeper, and define a name so we can link to it:
$ docker run --name zookeeper -d -p 2181:2181 -p 2888:2888 -p 3888:3888 jplock/zookeeper
Run two Solr nodes, linked to the zookeeper container:
$ docker run --name solr1 --link zookeeper:ZK -d -p 8983:8983 \
solr \
bash -c '/opt/solr/bin/solr start -f -z $ZK_PORT_2181_TCP_ADDR:$ZK_PORT_2181_TCP_PORT'
$ docker run --name solr2 --link zookeeper:ZK -d -p 8984:8983 \
solr \
bash -c '/opt/solr/bin/solr start -f -z $ZK_PORT_2181_TCP_ADDR:$ZK_PORT_2181_TCP_PORT'
Running distributed Solr on Docker Swarm
Starting with a 3 node swarm:
$ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
2wkpmybf4wni8wia1s46lm2ml * node1 Ready Active Leader
ajrnf5oibgm7b12ayy0hg5i32 node3 Ready Active
bbe8n1hybhruhhrhmswn7fjmd node2 Ready Active
Create a network
$ docker network create --driver overlay my-net
Start a zookeeper service and wait for it to start
$ docker service create --name zookeeper --replicas 1 --network my-net jplock/zookeeper
Start 2 solr instances configured to connect to the DNS address "zookeeper". For more information on swarm mode networking you can read the documentation
$ docker service create --name solr --replicas 2 --network my-net -p 8983:8983 \
solr \
bash -c '/opt/solr/bin/solr start -f -z zookeeper:2181'
The web UI will be available on any node in the cluster
http://192.168.99.100:8983/
http://192.168.99.101:8983/
http://192.168.99.102:8983/
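To confirm the Solr nodes actually registered with ZooKeeper, you can query the Collections API on any of those addresses, e.g.:
# lists live nodes and collections known to the SolrCloud cluster
curl 'http://192.168.99.100:8983/solr/admin/collections?action=CLUSTERSTATUS&wt=json'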
If you check the services, you'll notice the containers are spread across all 3 nodes in the cluster. This is the default scheduling behaviour.
$ docker service ps zookeeper
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR
3fhipbsd4jdazmx8d7zum0ohp zookeeper.1 jplock/zookeeper node1 Running Running 7 minutes ago
$ docker service ps solr
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR
bikggwpyz5q6vdxrpqwevlwsr solr.1 solr node2 Running Running 43 seconds ago
cutbmjsmcxrmi1ld75eox0s9m solr.2 solr node3 Running Running 43 seconds ago
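Since these are ordinary swarm services, you can also scale Solr out later and let the scheduler place the extra task:
docker service scale solr=3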
