Backup and restore Zookeeper and Kafka data - docker

Zookeeper cluster
sudo docker run -itd --name zookeeper-1 --restart=always -v /data/zookeeper/zoo.cfg:/opt/bitnami/zookeeper/conf/zoo.cfg -v /data/zookeeper/data:/bitnami --network sevenchats -e ALLOW_ANONYMOUS_LOGIN=yes -e ZOO_SERVER_ID=1 -e ZOO_SERVERS=0.0.0.0:2888:3888,ip2:2888:3888,ip3:2888:3888 -p 2181:2181 -p 2888:2888 -p 3888:3888 bitnami/zookeeper:3.8.0
sudo docker run -itd --name zookeeper-2 --restart=always --network sevenchats -e ALLOW_ANONYMOUS_LOGIN=yes -e ZOO_SERVER_ID=2 -e ZOO_SERVERS=ip1:2888:3888,0.0.0.0:2888:3888,ip3:2888:3888 -v /data/zookeeper/zoo.cfg:/opt/bitnami/zookeeper/conf/zoo.cfg -v /data/zookeeper/data:/bitnami -p 2181:2181 -p 2888:2888 -p 3888:3888 bitnami/zookeeper:3.8.0
sudo docker run -itd --name zookeeper-3 --restart=always --network sevenchats -e ALLOW_ANONYMOUS_LOGIN=yes -e ZOO_SERVER_ID=3 -e ZOO_SERVERS=ip1:2888:3888,ip2:2888:3888,0.0.0.0:2888:3888 -v /data/zookeeper/zoo.cfg:/opt/bitnami/zookeeper/conf/zoo.cfg -v /data/zookeeper/data:/bitnami -p 2181:2181 -p 2888:2888 -p 3888:3888 bitnami/zookeeper:3.8.0
Kafka cluster
sudo docker run -itd --name kafka-1 --network sevenchats -v /data/kafka/data:/bitnami --restart=always -v /data/kafka/server.properties:/bitnami/kafka/config/server.properties -e KAFKA_CFG_ZOOKEEPER_CONNECT=ip1:2181,ip2:2181,ip3:2181 -e ALLOW_PLAINTEXT_LISTENER=yes -p 9092:9092 bitnami/kafka:3.1.0
sudo docker run -itd --name kafka-2 --network sevenchats -v /data/kafka/data:/bitnami --restart=always -v /data/kafka/server.properties:/bitnami/kafka/config/server.properties -e KAFKA_CFG_ZOOKEEPER_CONNECT=ip1:2181,ip2:2181,ip3:2181 -e ALLOW_PLAINTEXT_LISTENER=yes -p 9092:9092 bitnami/kafka:3.1.0
sudo docker run -itd --name kafka-3 --network sevenchats -v /data/kafka/data:/bitnami --restart=always -v /data/kafka/server.properties:/bitnami/kafka/config/server.properties -e KAFKA_CFG_ZOOKEEPER_CONNECT=ip1:2181,ip2:2181,ip3:2181 -e ALLOW_PLAINTEXT_LISTENER=yes -p 9092:9092 bitnami/kafka:3.1.0
After running these commands, my cluster works fine.
I want to restore my Kafka data (topics) to a new cluster, so I took a backup of the Docker volumes on my host under the /data/kafka/ folder using tar:
tar cvzf data_kafka1.tar.gz data
On my existing cluster, the /data/kafka folder is around 200 MB.
When I copy the backup to the new cluster and extract it there with
tar xvzf data_kafka1.tar.gz
and check the size of the data folder with
du -hs data
it is 20 GB on the new cluster.
I want the backup size to be the same on both clusters.
A small difference is fine, but this is a huge difference.
What mistake am I making?
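A likely cause of this kind of size blow-up is sparse files: Kafka preallocates its log/index files, so their apparent size is much larger than the disk blocks actually allocated. Plain tar reads the holes as zero bytes and writes them out as real zeros on extraction; GNU tar's -S (--sparse) flag detects the holes and recreates them. A minimal demonstration of the difference (assumes GNU tar and coreutils):

```shell
# Create a 100 MB sparse file: large apparent size, near-zero disk usage
truncate -s 100M segment.log
du -h --apparent-size segment.log   # apparent size: 100M
du -h segment.log                   # actual allocation: ~0

# Default tar: the holes become real zero blocks when extracted
tar czf dense.tar.gz segment.log

# -S / --sparse: tar records the holes and recreates them on extraction
tar cSzf sparse.tar.gz segment.log

mkdir -p out_dense out_sparse
tar xzf dense.tar.gz  -C out_dense
tar xzf sparse.tar.gz -C out_sparse

du -sh out_dense out_sparse         # out_dense ~100M, out_sparse ~0
```

If this is the issue, backing up with `tar cSvzf data_kafka1.tar.gz data` should make the extracted size on the new cluster match the original.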

Related

What have I missed when linking the Tor container to microblog?

From the beginning, I was trying to link the microblog from here using the following commands:
sudo docker build -t microblog:latest .
sudo docker run --name mysql -d -e MYSQL_RANDOM_ROOT_PASSWORD=yes \
-e MYSQL_DATABASE=microblog -e MYSQL_USER=microblog \
-e MYSQL_PASSWORD=onion~12 \
mysql/mysql-server:5.7
sudo docker run --name elasticsearch -d -p 9200:9200 -p 9300:9300 \
-e "discovery.type=single-node" \
docker.elastic.co/elasticsearch/elasticsearch-oss:7.6.2
sudo docker run --name microblog -d -p 8000:5000 -e SECRET_KEY=my-secret-key \
-e MAIL_SERVER=smtp.googlemail.com -e MAIL_PORT=587 -e MAIL_USE_TLS=true \
-e MAIL_USERNAME=admin_onion123@gmail.com -e MAIL_PASSWORD=123456780 \
--link mysql:dbserver \
-e DATABASE_URL=mysql+pymysql://microblog:onion~12@dbserver/microblog \
--link elasticsearch:elasticsearch \
-e ELASTICSEARCH_URL=http://elasticsearch:9200 \
microblog:latest
Up to here, everything is perfect!
Then I ran the Docker container with the Tor hidden service from here, using the following command:
sudo docker run -itd --link microblog goldy/tor-hidden-service
The idea is that when I use the command:
sudo docker logs {container of tor}
it shows me myrandomonionaddress.onion:5000.
What did I do wrong or what have I missed, and why is it listening on port 5000 instead of 8000?
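One thing worth noting about the port numbers: `-p 8000:5000` only maps a port on the Docker host to the container's internal port. Linked containers bypass that mapping and reach the service on its internal port directly. A sketch of the two paths (the curl check is hypothetical):

```shell
# -p HOST:CONTAINER publishes a container port on the host interface only:
#   host:8000          -> microblog container:5000
# --link gives the Tor container direct access on the container network,
# with no port mapping involved, so it sees the app on 5000:
#   tor container      -> microblog:5000

# From the host, the app would answer on the published port, 8000:
curl http://localhost:8000/
```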

ACME certbot: Dry run is successful although no connection open

I have a question about certbot:
At the moment, I test it using the following command:
docker run -t --rm \
-v nginx-docker_certs:/etc/letsencrypt \
certbot/certbot \
certonly --dry-run --standalone \
-d mydomain.tld
When I run it, it fails because it cannot connect. So I add -p 80:80 -p 443:443:
docker run -t --rm \
-v nginx-docker_certs:/etc/letsencrypt \
-v nginx-docker_certs-data:/data/letsencrypt \
-p 80:80 -p 443:443 \
certbot/certbot \
certonly --dry-run --standalone \
-d mydomain.tld
Now it works.
When I then remove -p 80:80 -p 443:443 and do another test run, it is still successful. But why?

How to Update Graylog version in docker

I am new to Graylog. I installed Graylog in Docker, and after installing it I saw two notifications, one of which is about a Graylog upgrade. Can someone tell me how to update it using Docker commands?
Note: first I need to take a backup of my data, and then I need to update to version 2.4.6.
Note 2: I have already referred to the Graylog documentation:
http://docs.graylog.org/en/2.4/pages/upgrade.html
http://docs.graylog.org/en/2.4/pages/installation/docker.html
Graylog Installation process:
docker run --name mongo -d mongo:3
docker run --name elasticsearch \
-e "http.host=0.0.0.0" -e "xpack.security.enabled=false" \
-d docker.elastic.co/elasticsearch/elasticsearch:5.6.2
docker run --link mongo --link elasticsearch \
-p 9000:9000 -p 12201:12201 -p 514:514 \
-e GRAYLOG_WEB_ENDPOINT_URI="http://127.0.0.1:9000/api" \
-d graylog/graylog:2.4.0-1
To use the latest version, change the tag of the graylog image from 2.4.0-1 to 2.4 or 2.4.6-1.
It seems the documentation you found is not completely in line with the documentation on Docker Hub:
If you simply want to checkout Graylog without any further customization, you can run the following three commands to create the necessary environment:
docker run --name mongo -d mongo:3
docker run --name elasticsearch \
-e "http.host=0.0.0.0" -e "xpack.security.enabled=false" \
-d docker.elastic.co/elasticsearch/elasticsearch:5.6.12
docker run --link mongo --link elasticsearch \
-p 9000:9000 -p 12201:12201 -p 514:514 \
-e GRAYLOG_WEB_ENDPOINT_URI="http://127.0.0.1:9000/api" \
-d graylog/graylog:2.4
First, I installed Graylog with my own volumes:
docker run --link mongo --link elasticsearch \
-p 9000:9000 -p 12201:12201 -p 514:514 \
-e GRAYLOG_WEB_ENDPOINT_URI="http://127.0.0.1:9000/api" \
-d graylog/graylog:2.4.0-1
Now stop Graylog:
docker stop [graylog Container ID]
Now remove the container from Docker:
docker rm [graylog Container ID]
Now remove the Docker image:
docker rmi [graylog Image ID]
Now install Graylog again, changing the Graylog version:
docker run --link mongo --link elasticsearch \
-p 9000:9000 -p 12201:12201 -p 514:514 \
-e GRAYLOG_WEB_ENDPOINT_URI="http://127.0.0.1:9000/api" \
-d graylog/graylog:2.4.6-1
Note: only remove Graylog, not MongoDB/Elasticsearch. Then you won't lose any data.

--net equivalent in docker compose

When I run my Docker container, I explicitly provide the network as host.
docker run -d \
--net=host \
--name=zookeeper \
-e ZOOKEEPER_CLIENT_PORT=32181 \
-e ZOOKEEPER_TICK_TIME=2000 \
confluentinc/cp-zookeeper:4.1.0
How can I provide the same network in a docker-compose.yaml file so that all containers run on this host network?
You can use the following in your service definition:
network_mode: "host"
Ref - https://docs.docker.com/compose/compose-file/#network_mode
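As a sketch, the docker run command above might translate to a compose service like this (service name and file layout are assumptions; note that published ports: entries are ignored with host networking):

```yaml
version: "3"
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:4.1.0
    container_name: zookeeper
    network_mode: "host"
    environment:
      ZOOKEEPER_CLIENT_PORT: 32181
      ZOOKEEPER_TICK_TIME: 2000
```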

Kafka with Docker 3 nodes in different host - Broker may not be available

I'm playing with the wurstmeister/kafka image on three different Docker hosts, whose IPs are:
10.1.1.11
10.1.1.12
10.1.1.13
I ran these commands to start the image:
10.1.1.11:
sudo docker run --name kafka -p 9092:9092 --restart always \
-e KAFKA_BROKER_ID="1" \
-e KAFKA_ADVERTISED_HOST_NAME="10.1.1.11" \
-e KAFKA_ADVERTISED_PORT="9092" \
-e KAFKA_ZOOKEEPER_CONNECT="0.0.0.0:2181,10.1.1.12:2181,10.1.1.13:2181" \
-d wurstmeister/kafka
10.1.1.12:
sudo docker run --name kafka -p 9092:9092 --restart always \
-e KAFKA_BROKER_ID="2" \
-e KAFKA_ADVERTISED_HOST_NAME="10.1.1.12" \
-e KAFKA_ADVERTISED_PORT="9092" \
-e KAFKA_ZOOKEEPER_CONNECT="10.1.1.11:2181,0.0.0.0:2181,10.1.1.13:2181" \
-d wurstmeister/kafka
10.1.1.13:
sudo docker run --name kafka -p 9092:9092 --restart always \
-e KAFKA_BROKER_ID="3" \
-e KAFKA_ADVERTISED_HOST_NAME="10.1.1.13" \
-e KAFKA_ADVERTISED_PORT="9092" \
-e KAFKA_ZOOKEEPER_CONNECT="10.1.1.11:2181,10.1.1.12:2181,0.0.0.0:2181" \
-d wurstmeister/kafka
When I run these commands, the first broker always shows the "Broker may not be available" warning; the other two do not.
I also tested with a Kafka producer: on a host showing this problem, sending a message fails;
on a host without the problem, sending succeeds.
When I restart the container on 10.1.1.11, the problem there is fixed, but then 10.1.1.12 starts showing the same problem, and so on.
Every solution I found for this problem says to set KAFKA_ADVERTISED_HOST_NAME to the Docker host's address, but I have already done that.
I have no idea why this problem appears.
My Zookeeper command on 10.1.1.11:
sudo docker run --name zookeeper -p 2181:2181 -p 2888:2888 -p 3888:3888 \
--restart always \
-e ZOO_MY_ID="1" \
-e ZOO_SERVERS="server.1=0.0.0.0:2888:3888 server.2=10.1.1.12:2888:3888 server.3=10.1.1.13:2888:3888" \
-d zookeeper:latest
Solution from OP:
The problem was that the firewall blocked Docker containers from connecting to the Docker host,
so I couldn't telnet to the Docker host from inside a container.
The solution was to add a rule to iptables:
sudo iptables -I INPUT 1 -i <docker-bridge-interface> -j ACCEPT
I found the solution at https://github.com/moby/moby/issues/24370
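On finding the right interface name to substitute into the rule: for the default bridge network it is usually docker0, while user-defined bridges show up as br-<network id> interfaces. A sketch (interface names are environment-specific):

```shell
# List Docker-created bridge interfaces (docker0 is the default bridge;
# user-defined bridges appear as br-<network id>)
ip -br link | grep -E '^(docker0|br-)'

# Then allow container-to-host traffic from that interface, e.g. for the
# default bridge (adjust the name to match the output above):
sudo iptables -I INPUT 1 -i docker0 -j ACCEPT
```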
