How to start a consul client running in a docker container? - docker

Update: I overlooked the progrium/consul page on Docker Hub, which had the solution to my question.
Question:
So I am running Consul in the progrium/consul container. I am running 3 servers joined together and would like to add some Consul clients. However, I have not been able to find any guides that detail how to start Consul clients using the progrium/consul container. Here is my current attempt to start a client:
Note that $CLIENT_IP_ADDR is my client's IP address, and $CONSUL_SERVER0, $CONSUL_SERVER1 and $CONSUL_SERVER2 are the IP addresses of my Consul servers.
docker run -d -h client0 --name client0 -v /mnt:/data \
-p $CLIENT_IP_ADDR:8300:8300 \
-p $CLIENT_IP_ADDR:8301:8301 \
-p $CLIENT_IP_ADDR:8301:8301/udp \
-p $CLIENT_IP_ADDR:8302:8302 \
-p $CLIENT_IP_ADDR:8302:8302/udp \
-p $CLIENT_IP_ADDR:8400:8400 \
-p $CLIENT_IP_ADDR:8500:8500 \
-p 172.17.0.1:53:53/udp \
progrium/consul -client -advertise $CLIENT_IP_ADDR \
-join $CONSUL_SERVER0 -join $CONSUL_SERVER1 -join $CONSUL_SERVER2
Here are the error messages I get when I check the logs for my container:
myUserName@myHostName:~$ docker logs client0
==> WARNING: It is highly recommended to set GOMAXPROCS higher than 1
==> Starting Consul agent...
==> Error starting RPC listener: listen tcp $CLIENT_IP_ADDR:8400: bind: cannot assign requested address

I think the answer was just to remove the -client flag from my command:
docker run -d -h client0 --name client0 -v /mnt:/data \
-p $CLIENT_IP_ADDR:8300:8300 \
-p $CLIENT_IP_ADDR:8301:8301 \
-p $CLIENT_IP_ADDR:8301:8301/udp \
-p $CLIENT_IP_ADDR:8302:8302 \
-p $CLIENT_IP_ADDR:8302:8302/udp \
-p $CLIENT_IP_ADDR:8400:8400 \
-p $CLIENT_IP_ADDR:8500:8500 \
-p 172.17.0.1:53:53/udp \
progrium/consul -advertise $CLIENT_IP_ADDR \
-join $CONSUL_SERVER0 -join $CONSUL_SERVER1 -join $CONSUL_SERVER2
Apparently that's the container's default mode: https://hub.docker.com/r/progrium/consul/. I think it's running in client mode because my client0 node does not appear under the consul service; only my 3 Consul servers appear there.
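For anyone who wants to double-check which mode the agent ended up in, you can ask it directly from inside the container (a quick sketch using the container name client0 from the commands above; output format varies slightly between Consul versions):

```shell
# List cluster membership; client agents show Type "client",
# servers show Type "server" in the output.
docker exec client0 consul members

# The agent's own status also reports "server = false" for a client.
docker exec client0 consul info
```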

Related

starting a NIFI container with my template/flow automatically loaded

I want to create a NiFi container and pass it a template/flow to be loaded automatically when the container is created (without human intervention).
I couldn't find any volumes/environment variables related to this.
I tried the following (suggested by ChatGPT):
docker run -d -p 8443:8443 \
-v /path/to/templates:/templates \
-e TEMPLATE_FILE_PATH=/templates/template.xml \
--name nifi apache/nifi
and
docker run -d -p 8443:8443 \
-e NIFI_INIT_FLOW_FILE=/Users/l1/Desktop/kaleidoo/devops/files/git_aliases/flow.json \
-v /Users/l1/Desktop/kaleidoo/docker/test-envs/flow.json:/flow.json:ro \
--name nifi apache/nifi
Neither of them worked, and I couldn't find any documentation for NIFI_INIT_FLOW_FILE or TEMPLATE_FILE_PATH.
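Those two environment variables don't appear in the apache/nifi image documentation, so they are likely ChatGPT inventions. One workaround that may achieve the goal is to mount a pre-built, gzipped flow at the location nifi.properties points to (the /opt/nifi/nifi-current/conf path is an assumption based on the default image layout; recent NiFi versions use flow.json.gz, older ones flow.xml.gz):

```shell
# Sketch: pre-seed the canvas by mounting a gzipped flow definition
# at the path named by nifi.flow.configuration.file in nifi.properties.
gzip -k flow.json   # produces flow.json.gz next to flow.json

docker run -d -p 8443:8443 \
  -v "$(pwd)/flow.json.gz":/opt/nifi/nifi-current/conf/flow.json.gz \
  --name nifi apache/nifi
```

Verify the paths against your image version before relying on this; if the container rewrites its conf directory on startup, you may need to mount the whole conf volume instead.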

kafka: Connection to node 1001 could not be established. Broker may not be available

I started zookeeper and kafka container using below commands in CentOS 7.9:
docker run -it -d --net=sup-network --name zookeeper --ip 200.100.0.140 -p 2181:2181 zookeeper:3.7.0
docker run -it --net=sup-network --name kafka -p 9092:9092 \
-e KAFKA_ZOOKEEPER_CONNECT=200.100.0.140:2181 \
-e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 \
-e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://200.100.0.141:9092 \
-e ALLOW_PLAINTEXT_LISTENER=yes \
-d bitnami/kafka:3.0.0
The 200.100.0.xxx IPs are defined in Docker Swarm.
But Kafka consistently logged:
WARN [Controller id=1001, targetBrokerId=1001] Connection to node 1001 (/200.100.0.141:9092)
could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
How can I fix it?
Additional info:
I removed -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://200.100.0.141:9092 \, and then Kafka stopped logging Broker may not be available. But why do so many posts suggest that this line should be added?
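For context: KAFKA_ADVERTISED_LISTENERS is the address the broker publishes for clients (and for its own controller connections) to dial, so it must be reachable from wherever the connection originates; advertising 200.100.0.141 fails if nothing answers on that address. A common pattern on a user-defined network is to advertise the container name, which Docker's embedded DNS resolves (a sketch reusing the names from the commands above; whether external clients also need access determines whether you need a second listener):

```shell
# Advertise an address that is actually reachable. On the sup-network,
# the container name "kafka" resolves via Docker DNS, so no fixed IP
# is needed for in-network clients.
docker run -it --net=sup-network --name kafka -p 9092:9092 \
  -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
  -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 \
  -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092 \
  -e ALLOW_PLAINTEXT_LISTENER=yes \
  -d bitnami/kafka:3.0.0
```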

Can't log into private registry between instances on Play-with-docker

I am very new to Docker, so please bear with me. I am following the documentation at https://docs.docker.com/registry/deploying/#running-a-domain-registry
I have spun up 2 nodes on play-with-docker.com for my learning.
On Node1 I was able to set up a private registry successfully using the following command:
docker run -d \
-p 5000:5000 \
--restart=always \
--name registry \
-v "$(pwd)"/auth:/auth \
-e "REGISTRY_AUTH=htpasswd" \
-e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
-e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
-v "$(pwd)"/certs:/certs \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
-e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
registry:2
I was also able to pull and push images from Node1 to the registry. However, when I go on Node2 and try to log in to the registry it gives the following error:
[node2] (local) root@192.168.0.7 ~
$ docker login 192.168.0.8:5000
Username: testuser
Password:
Error response from daemon: Get https://192.168.0.8:5000/v2/: dial tcp 192.168.0.8:5000: connect: connection refused
Please let me know what I am missing.
Node2 can't access port 5000 on 192.168.0.8. This looks like a network issue.
Are your nodes in the same network?
Are there firewall rules that might be blocking access to port 5000?
Are you sure 192.168.0.8 is the correct IP address of your Node1 machine?
To test the TCP connection, use telnet, for example: telnet 192.168.0.8 5000 (assuming 192.168.0.8 is the correct IP address).
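If the port does turn out to be reachable, the next common stumbling block with a self-signed registry is trust: Node2 must have the registry's certificate before docker login over HTTPS will succeed. A sketch (the address 192.168.0.8:5000 comes from the question; the certs.d layout is Docker's standard per-registry location, and the source path for domain.crt is a placeholder):

```shell
# On Node2: trust the registry's certificate by placing it in Docker's
# per-registry certificate directory, then log in.
mkdir -p /etc/docker/certs.d/192.168.0.8:5000
# Copy domain.crt over from Node1 (scp shown as one option; adjust the path).
scp root@192.168.0.8:certs/domain.crt \
    /etc/docker/certs.d/192.168.0.8:5000/ca.crt
docker login 192.168.0.8:5000
```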

Kafka FQDN in a container environment

I am running Kafka in a container and trying to create a new pgsql container on the same host.
The pgsql container keeps exiting, and the logs indicate:
ERROR: Failed to connect to Kafka at kafka.domain, check the docker run -e KAFKA_FQDN= value
The Kafka container is started with the following attributes:
docker run -d \
--name=app_kafka \
-e KAFKA_FQDN=localhost \
-v /var/app/kafka:/data/kafka \
-p 2181:2181 -p 9092:9092 \
app/kafka
and the pgsql container with:
docker run -d --name app_psql \
-h app-psql \
-e KAFKA_FQDN=kafka.domain \
--add-host kafka.domain:172.17.0.1 \
-e MEM=16 \
--shm-size=512m \
-v /var/app/config:/config \
-v /var/app/postgres/main:/data/main \
-v /var/app/postgres/ts:/data/ts \
-p 5432:5432 -p 9005:9005 -p 8080:8080 \
app/postgres
If I use the docker0 IP address, the logs indicate no route to host; if I use the Kafka container's IP, I get connection refused.
I guess I'm missing something basic here that needs to be adapted to my environment, but I'm lacking knowledge here.
I will appreciate any assistance.
You need to edit the container's hosts file. You can pass a script in the Dockerfile, like so:
COPY domain.sh .
ENTRYPOINT ["sh","domain.sh"]
And domain.sh:
#!/bin/sh
echo Environment container kafka is: "kafka.domain"
echo PGSQL container is "pgsql.domain"
echo "127.0.0.1 kafka.domain" >> /etc/hosts
echo "127.0.0.1 pgsql.domain" >> /etc/hosts
# hand off to the container's original command so it keeps running
exec "$@"
Feel free to change the IP or domain as needed.
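An alternative that avoids patching /etc/hosts at all: put both containers on the same user-defined network and give the Kafka container a network alias, so Docker's embedded DNS resolves kafka.domain directly. A sketch reusing the names from the question (the network name app-net is made up; ports trimmed for brevity):

```shell
# Containers on a user-defined network resolve each other by name
# or by --network-alias, no hosts-file editing required.
docker network create app-net

docker run -d --name=app_kafka --net=app-net \
  --network-alias kafka.domain \
  -v /var/app/kafka:/data/kafka \
  -p 2181:2181 -p 9092:9092 \
  app/kafka

docker run -d --name app_psql --net=app-net \
  -e KAFKA_FQDN=kafka.domain \
  -p 5432:5432 \
  app/postgres
```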

Docker can't expose mesos port 5050

I have a Mesos container running with the port mapping 0.0.0.0:32772->5050/tcp.
If I run docker exec CONTAINER_ID curl 0.0.0.0:5050, I can see the thing I want. However, I can't access HOST_IP:32772. I've tried running nginx in the same container, and I can connect to the nginx server from the host, so I think it's a Mesos configuration problem? How can I fix it?
If I understand correctly, you're running your Mesos Master(s) in a Docker container. You should use host networking instead of bridge networking.
These settings work, at least for me:
docker run \
--name=mesos_master \
--net=host \
-e MESOS_IP={YOUR_HOST_IP} \
-e MESOS_HOSTNAME={YOUR_HOST_IP} \
-e MESOS_CLUSTER=mesos-cluster \
-e MESOS_ZK=zk://{YOUR_ZK_SERVERS}/mesos \
-e MESOS_LOG_DIR=/var/log/mesos/master \
-e MESOS_WORK_DIR=/var/lib/mesos/master \
-e MESOS_QUORUM=2 \
mesosphere/mesos-master:0.27.1-2.0.226.ubuntu1404
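With --net=host there is no port mapping at all: the master binds directly to the host interface, so it is reachable on 5050 itself. A quick sanity check from the host (using the same {YOUR_HOST_IP} placeholder as above):

```shell
# The master's state endpoint returns JSON describing the cluster;
# if this answers, the web UI on port 5050 should work too.
curl http://{YOUR_HOST_IP}:5050/master/state
```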
