ACME certbot: Dry run is successful although no connection open - docker

I have a question about certbot:
At the moment, I test it using the following command:
docker run -t --rm \
-v nginx-docker_certs:/etc/letsencrypt \
certbot/certbot \
certonly --dry-run --standalone \
-d mydomain.tld
When I run it, it fails because it cannot connect. So I add -p 80:80 -p 443:443:
docker run -t --rm \
-v nginx-docker_certs:/etc/letsencrypt \
-v nginx-docker_certs-data:/data/letsencrypt \
-p 80:80 -p 443:443 \
certbot/certbot \
certonly --dry-run --standalone \
-d mydomain.tld
Now it works.
When I then remove -p 80:80 -p 443:443 and run the test again, it still succeeds. But why?
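A likely explanation (an assumption; the dry-run output would confirm it): the Let's Encrypt staging server remembers a recent successful validation for the same account (kept in the nginx-docker_certs volume) and domain, so a repeated --dry-run can pass without any new inbound connection. One way to rule out a leftover listener on the host is to check during the second run:
# While the second dry run is in flight, confirm nothing is
# listening on ports 80/443 on the host
ss -tln '( sport = :80 or sport = :443 )'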

Related

Backup and restore Zookeeper and Kafka data

zookeeper cluster
sudo docker run -itd --name zookeeper-1 --restart=always \
-v /data/zookeeper/zoo.cfg:/opt/bitnami/zookeeper/conf/zoo.cfg \
-v /data/zookeeper/data:/bitnami \
--network sevenchats \
-e ALLOW_ANONYMOUS_LOGIN=yes \
-e ZOO_SERVER_ID=1 \
-e ZOO_SERVERS=0.0.0.0:2888:3888,ip2:2888:3888,ip3:2888:3888 \
-p 2181:2181 -p 2888:2888 -p 3888:3888 \
bitnami/zookeeper:3.8.0
sudo docker run -itd --name zookeeper-2 --restart=always \
--network sevenchats \
-e ALLOW_ANONYMOUS_LOGIN=yes \
-e ZOO_SERVER_ID=2 \
-e ZOO_SERVERS=ip1:2888:3888,0.0.0.0:2888:3888,ip3:2888:3888 \
-v /data/zookeeper/zoo.cfg:/opt/bitnami/zookeeper/conf/zoo.cfg \
-v /data/zookeeper/data:/bitnami \
-p 2181:2181 -p 2888:2888 -p 3888:3888 \
bitnami/zookeeper:3.8.0
sudo docker run -itd --name zookeeper-3 --restart=always \
--network sevenchats \
-e ALLOW_ANONYMOUS_LOGIN=yes \
-e ZOO_SERVER_ID=3 \
-e ZOO_SERVERS=ip1:2888:3888,ip2:2888:3888,0.0.0.0:2888:3888 \
-v /data/zookeeper/zoo.cfg:/opt/bitnami/zookeeper/conf/zoo.cfg \
-v /data/zookeeper/data:/bitnami \
-p 2181:2181 -p 2888:2888 -p 3888:3888 \
bitnami/zookeeper:3.8.0
kafka cluster
sudo docker run -itd --name kafka-1 --network sevenchats \
-v /data/kafka/data:/bitnami \
--restart=always \
-v /data/kafka/server.properties:/bitnami/kafka/config/server.properties \
-e KAFKA_CFG_ZOOKEEPER_CONNECT=ip1:2181,ip2:2181,ip3:2181 \
-e ALLOW_PLAINTEXT_LISTENER=yes \
-p 9092:9092 \
bitnami/kafka:3.1.0
sudo docker run -itd --name kafka-2 --network sevenchats \
-v /data/kafka/data:/bitnami \
--restart=always \
-v /data/kafka/server.properties:/bitnami/kafka/config/server.properties \
-e KAFKA_CFG_ZOOKEEPER_CONNECT=ip1:2181,ip2:2181,ip3:2181 \
-e ALLOW_PLAINTEXT_LISTENER=yes \
-p 9092:9092 \
bitnami/kafka:3.1.0
sudo docker run -itd --name kafka-3 --network sevenchats \
-v /data/kafka/data:/bitnami \
--restart=always \
-v /data/kafka/server.properties:/bitnami/kafka/config/server.properties \
-e KAFKA_CFG_ZOOKEEPER_CONNECT=ip1:2181,ip2:2181,ip3:2181 \
-e ALLOW_PLAINTEXT_LISTENER=yes \
-p 9092:9092 \
bitnami/kafka:3.1.0
After running these commands, my cluster works fine.
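For reference, one quick way to sanity-check the ensemble from a host (a sketch; assumes nc is installed, and relies on srvr being in ZooKeeper's default four-letter-word whitelist):
# Each node should answer; one reports Mode: leader, the other two Mode: follower
echo srvr | nc ip1 2181
echo srvr | nc ip2 2181
echo srvr | nc ip3 2181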
I want to restore my Kafka data (topics) to a new cluster, so I took a backup of the Docker volumes on the host under the /data/kafka/ folder using tar:
tar cvzf data_kafka1.tar.gz data
On the existing cluster, the /data/kafka folder is around 200 MB. I then copy the backup to the new cluster and extract it there with
tar xvzf data_kafka1.tar.gz
but when I check the size of the data folder,
du -hs data
it is 20 GB on the new cluster.
I want the backup to be roughly the same size on both clusters. A small difference is fine, but this is a huge difference.
What mistake am I making?
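One likely culprit (an assumption, not confirmed by the question): Kafka preallocates log segments as sparse files, and a plain tar writes the holes out as real zeros on extraction, so the extracted tree occupies far more disk than the source. Comparing disk usage with apparent size, and archiving with GNU tar's --sparse flag, would confirm and avoid this:
# A large gap between these two numbers points to sparse files
du -hs data
du -hs --apparent-size data
# Recreate the archive so the holes are preserved (GNU tar)
tar --sparse -cvzf data_kafka1.tar.gz data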

translate docker run command (tensorflow-serving) into docker-compose

Here is the docker run command:
docker run -p 8501:8501 \
--name tfserving_classifier \
-e MODEL_NAME=img_classifier \
-t tensorflow/serving
Here is what I tried, but I am not able to get the MODEL_NAME to work:
tensorflow-servings:
  container_name: tfserving_classifier
  ports:
    - 8501:8501
  command:
    - -e MODEL_NAME=img_classifier
    - -t tensorflow/serving
The working version: -e maps to the environment: key, and the image belongs under image: rather than in command::
tensorflow-servings:
  container_name: tfserving_classifier
  image: tensorflow/serving
  environment:
    - MODEL_NAME=img_classifier
  ports:
    - 8501:8501
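To check that the variable actually reached the container (a quick sketch; assumes the snippet sits under the top-level services: key, and a Compose v2 CLI — with the older docker-compose binary the commands take a hyphen instead of the space):
docker compose up -d
docker compose exec tensorflow-servings env | grep MODEL_NAME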

How to correct invalid escape sequence in nginx unit file

I'm trying to mount the host path where my SSL certificate is generated into the container. However, when I add the line -v /etc/ssl:/etc/ssl \ to my nginx.service unit file:
[Service]
Restart=always
ExecStartPre=-/usr/bin/docker kill %p
ExecStartPre=-/usr/bin/docker rm -f %p
ExecStart=/usr/bin/docker run -t --rm --name %p \
-p 80:80 -p 443:443 \
--link custodian:custodian --volumes-from custodian \
-v /etc/ssl:/etc/ssl \
-v /etc/ssl/certs:/etc/ssl/certs \
-v /var/www/letsencrypt/.well-known/acme-challenge:/var/www/letsencrypt/.well-known/acme-challenge \
-v /etc/ssl/private:/etc/ssl/private %p
ExecStop=/usr/bin/docker stop %p
the log for the nginx service shows:
[/etc/systemd/system/nginx.service:10] Invalid escape sequences in line, correcting: "/usr/bin/docker run -t --rm --name %p -p 80:80 -p 443:443 --link custodian:custodian --volumes-from custodian -v /etc/ssl:/etc/ssl \"
How can I adjust the unit file to mount correctly?
The problem is a trailing space after the \ in the line -v /etc/ssl:/etc/ssl \. The backslash then escapes that space instead of the newline, so the line continuation is broken.
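A quick way to hunt down such trailing whitespace and apply the fix (a sketch):
# Flag any line where a backslash is followed by spaces at end of line
grep -nE '\\ +$' /etc/systemd/system/nginx.service
# After removing the stray spaces, reload systemd and restart the service
sudo systemctl daemon-reload
sudo systemctl restart nginx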

How to Update Graylog version in docker

I am new to Graylog. I have installed Graylog in Docker, and after installing it I saw two notifications, one of which is about a Graylog upgrade. Can someone tell me how to update it using Docker commands?
Note: First I need to take a backup of my data, and then I need to update to version 2.4.6.
Note 2: I have already referred to the Graylog documentation:
http://docs.graylog.org/en/2.4/pages/upgrade.html
http://docs.graylog.org/en/2.4/pages/installation/docker.html
Graylog Installation process:
docker run --name mongo -d mongo:3
docker run --name elasticsearch \
-e "http.host=0.0.0.0" -e "xpack.security.enabled=false" \
-d docker.elastic.co/elasticsearch/elasticsearch:5.6.2
docker run --link mongo --link elasticsearch \
-p 9000:9000 -p 12201:12201 -p 514:514 \
-e GRAYLOG_WEB_ENDPOINT_URI="http://127.0.0.1:9000/api" \
-d graylog/graylog:2.4.0-1
To use the latest version, change the tag of the graylog image from 2.4.0-1 to 2.4 or 2.4.6-1.
It seems the documentation you found is not completely in line with the documentation on Docker Hub:
If you simply want to checkout Graylog without any further customization, you can run the following three commands to create the necessary environment:
docker run --name mongo -d mongo:3
docker run --name elasticsearch \
-e "http.host=0.0.0.0" -e "xpack.security.enabled=false" \
-d docker.elastic.co/elasticsearch/elasticsearch:5.6.12
docker run --link mongo --link elasticsearch \
-p 9000:9000 -p 12201:12201 -p 514:514 \
-e GRAYLOG_WEB_ENDPOINT_URI="http://127.0.0.1:9000/api" \
-d graylog/graylog:2.4
First I installed Graylog with my own volumes:
docker run --link mongo --link elasticsearch \
-p 9000:9000 -p 12201:12201 -p 514:514 \
-e GRAYLOG_WEB_ENDPOINT_URI="http://127.0.0.1:9000/api" \
-d graylog/graylog:2.4.0-1
Now stop the Graylog container:
docker stop [graylog Container ID]
Now remove the container:
docker rm [graylog Container ID]
Now remove the Graylog image:
docker rmi [graylog Image ID]
Now install Graylog again with the new version tag:
docker run --link mongo --link elasticsearch \
-p 9000:9000 -p 12201:12201 -p 514:514 \
-e GRAYLOG_WEB_ENDPOINT_URI="http://127.0.0.1:9000/api" \
-d graylog/graylog:2.4.6-1
Note: Only remove Graylog, not MongoDB/Elasticsearch. Then you won't lose any data.
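If you need the container and image IDs used in the steps above, a quick lookup (sketch):
# Containers created from the old image, and the local graylog images
docker ps -a --filter ancestor=graylog/graylog:2.4.0-1
docker images graylog/graylog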

Kafka with Docker 3 nodes in different host - Broker may not be available

I'm playing with the wurstmeister/kafka image on three different Docker hosts, whose IPs are
10.1.1.11
10.1.1.12
10.1.1.13
I run these commands to start the containers:
10.1.1.11:
sudo docker run --name kafka -p 9092:9092 --restart always \
-e KAFKA_BROKER_ID="1" \
-e KAFKA_ADVERTISED_HOST_NAME="10.1.1.11" \
-e KAFKA_ADVERTISED_PORT="9092" \
-e KAFKA_ZOOKEEPER_CONNECT="0.0.0.0:2181,10.1.1.12:2181,10.1.1.13:2181" \
-d wurstmeister/kafka
10.1.1.12:
sudo docker run --name kafka -p 9092:9092 --restart always \
-e KAFKA_BROKER_ID="2" \
-e KAFKA_ADVERTISED_HOST_NAME="10.1.1.12" \
-e KAFKA_ADVERTISED_PORT="9092" \
-e KAFKA_ZOOKEEPER_CONNECT="10.1.1.11:2181,0.0.0.0:2181,10.1.1.13:2181" \
-d wurstmeister/kafka
10.1.1.13:
sudo docker run --name kafka -p 9092:9092 --restart always \
-e KAFKA_BROKER_ID="3" \
-e KAFKA_ADVERTISED_HOST_NAME="10.1.1.13" \
-e KAFKA_ADVERTISED_PORT="9092" \
-e KAFKA_ZOOKEEPER_CONNECT="10.1.1.11:2181,10.1.1.12:2181,0.0.0.0:2181" \
-d wurstmeister/kafka
When I run those commands, the first broker always logs the warning
Broker may not be available
while the other two do not.
I tested with a Kafka producer as well: on a host showing this problem, sending a message fails; on the others, it succeeds.
When I restart the container on 10.1.1.11, the problem there goes away, but then 10.1.1.12 starts showing the same problem, and so on.
Every solution I found says to set KAFKA_ADVERTISED_HOST_NAME to the Docker host's address, but I have already done that.
I have no idea why this problem appears.
My Zookeeper command on 10.1.1.11:
sudo docker run --name zookeeper -p 2181:2181 -p 2888:2888 -p 3888:3888 \
--restart always \
-e ZOO_MY_ID="1" \
-e ZOO_SERVERS="server.1=0.0.0.0:2888:3888 server.2=10.1.1.12:2888:3888 server.3=10.1.1.13:2888:3888" \
-d zookeeper:latest
Solution from OP:
The problem was that a firewall blocked the Docker containers from connecting to the Docker host, so I couldn't even telnet to the Docker host from inside a container.
The solution was to add an iptables rule:
sudo iptables -I INPUT 1 -i <docker-bridge-interface> -j ACCEPT
I found the solution in https://github.com/moby/moby/issues/24370
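To confirm the rule really sits at the top of the INPUT chain after adding it (a quick check):
sudo iptables -L INPUT -n --line-numbers | head -n 5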
