Kafka FQDN in container environment - docker

I'm running Kafka in a container and trying to create a new pgsql container on the same host.
The pgsql container keeps exiting, and the logs indicate:
ERROR: Failed to connect to Kafka at kafka.domain, check the docker run -e KAFKA_FQDN= value
The Kafka container is started with the following attributes:
docker run -d \
--name=app_kafka \
-e KAFKA_FQDN=localhost \
-v /var/app/kafka:/data/kafka \
-p 2181:2181 -p 9092:9092 \
app/kafka
and the pgsql container with:
docker run -d --name app_psql \
-h app-psql \
-e KAFKA_FQDN=kafka.domain \
--add-host kafka.domain:172.17.0.1 \
-e MEM=16 \
--shm-size=512m \
-v /var/app/config:/config \
-v /var/app/postgres/main:/data/main \
-v /var/app/postgres/ts:/data/ts \
-p 5432:5432 -p 9005:9005 -p 8080:8080 \
app/postgres
If I use the docker0 IP address, the logs indicate "no route to host"; if I use the Kafka container's IP, I get "connection refused".
I guess I'm missing something basic here that needs to be adapted to my environment, but I'm lacking knowledge in this area.
I will appreciate any assistance.

You need to edit the container's hosts file. You can do that with a script invoked from the Dockerfile, for example:
COPY domain.sh .
ENTRYPOINT ["sh","domain.sh"]
And domain.sh:
#!/bin/sh
echo "The Kafka container domain is: kafka.domain"
echo "The PGSQL container domain is: pgsql.domain"
echo "127.0.0.1 kafka.domain" >> /etc/hosts
echo "127.0.0.1 pgsql.domain" >> /etc/hosts
Feel free to change the IPs or domains to your needs.
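One caveat: an ENTRYPOINT script like the one above replaces the image's default startup command, so the container would exit as soon as the script finishes. A minimal sketch of a wrapper that appends the entries idempotently and then hands off is shown below (hypothetical; the `add_host_entry` helper name and the gateway IP 172.17.0.1, taken from the question's --add-host, are assumptions):

```shell
#!/bin/sh
# domain.sh - hypothetical entrypoint wrapper.
# 172.17.0.1 matches the --add-host value used in the question.

# Append an "IP name" entry to a hosts file only if it is not already there.
add_host_entry() {
  file="${2:-/etc/hosts}"
  grep -qF "$1" "$file" 2>/dev/null || printf '%s\n' "$1" >> "$file"
}

# In the real entrypoint you would append the entries and then hand off to
# the image's original command so the container keeps running:
#   add_host_entry "172.17.0.1 kafka.domain"
#   add_host_entry "172.17.0.1 pgsql.domain"
#   exec "$@"
```

The `exec "$@"` hand-off is what keeps the container alive: the script's process is replaced by the image's original command instead of exiting after the hosts entries are written.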

Related

Starting a NiFi container with my template/flow automatically loaded

I want to create a NiFi container and pass it a template/flow to be loaded automatically when the container is created (without human intervention).
I couldn't find any volumes/environment variables related to this.
I tried the following (suggested by ChatGPT):
docker run -d -p 8443:8443 \
-v /path/to/templates:/templates \
-e TEMPLATE_FILE_PATH=/templates/template.xml \
--name nifi apache/nifi
and
docker run -d -p 8443:8443 \
-e NIFI_INIT_FLOW_FILE=/Users/l1/Desktop/kaleidoo/devops/files/git_aliases/flow.json \
-v /Users/l1/Desktop/kaleidoo/docker/test-envs/flow.json:/flow.json:ro \
--name nifi apache/nifi
Neither of them worked, and I couldn't find anything about NIFI_INIT_FLOW_FILE or TEMPLATE_FILE_PATH in the documentation.

How to set QuestDB configuration defaults when using in Docker?

I spin up QuestDB in a Docker container as suggested in the docs:
docker run -p 9000:9000 \
-p 9009:9009 \
-p 8812:8812 \
-p 9003:9003 \
questdb/questdb
How can I override the worker pool's default thread count for the container from 2 to 8?
If the property appears in the configuration list, you can specify it as an environment variable for the container: add the QDB_ prefix and replace each . with _ in the variable name. For the shared worker count it would be:
docker run -p 9000:9000 \
-p 9009:9009 \
-p 8812:8812 \
-p 9003:9003 \
-e QDB_SHARED_WORKER_COUNT=8 \
questdb/questdb
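The property-to-variable mapping described above can be sketched as a small shell helper (hypothetical, for illustration only; it is not part of QuestDB):

```shell
# Convert a QuestDB property name (e.g. shared.worker.count) to its
# container env var form: QDB_ prefix, dots to underscores, uppercase.
qdb_env_name() {
  printf 'QDB_%s' "$1" | tr '.a-z' '_A-Z'
}

qdb_env_name "shared.worker.count"   # prints QDB_SHARED_WORKER_COUNT
```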

How to start a consul client running in a docker container?

Update: I overlooked the progrium/consul page on Docker Hub, which had the solution to my question.
Question:
So I am running Consul in the progrium/consul container. I am running 3 servers joined together and would like to add some Consul clients. However, I have not been able to find any guides that detail how to start Consul clients using the progrium/consul container. Here is my current attempt to start a client:
Note that $CLIENT_IP_ADDR is my client's IP address, and $CONSUL_SERVER0, $CONSUL_SERVER1 and $CONSUL_SERVER2 are the IP addresses of my Consul servers.
docker run -d -h client0 --name client0 -v /mnt:/data \
-p $CLIENT_IP_ADDR:8300:8300 \
-p $CLIENT_IP_ADDR:8301:8301 \
-p $CLIENT_IP_ADDR:8301:8301/udp \
-p $CLIENT_IP_ADDR:8302:8302 \
-p $CLIENT_IP_ADDR:8302:8302/udp \
-p $CLIENT_IP_ADDR:8400:8400 \
-p $CLIENT_IP_ADDR:8500:8500 \
-p 172.17.0.1:53:53/udp \
progrium/consul -client -advertise $CLIENT_IP_ADDR \
-join $CONSUL_SERVER0 -join $CONSUL_SERVER1 -join $CONSUL_SERVER2
Here are the error messages I get when I check the logs for my container:
myUserName@myHostName:~$ docker logs client0
==> WARNING: It is highly recommended to set GOMAXPROCS higher than 1
==> Starting Consul agent...
==> Error starting RPC listener: listen tcp $CLIENT_IP_ADDR:8400: bind: cannot assign requested address
I think the answer was just to remove the -client flag from my container:
docker run -d -h client0 --name client0 -v /mnt:/data \
-p $CLIENT_IP_ADDR:8300:8300 \
-p $CLIENT_IP_ADDR:8301:8301 \
-p $CLIENT_IP_ADDR:8301:8301/udp \
-p $CLIENT_IP_ADDR:8302:8302 \
-p $CLIENT_IP_ADDR:8302:8302/udp \
-p $CLIENT_IP_ADDR:8400:8400 \
-p $CLIENT_IP_ADDR:8500:8500 \
-p 172.17.0.1:53:53/udp \
progrium/consul -advertise $CLIENT_IP_ADDR \
-join $CONSUL_SERVER0 -join $CONSUL_SERVER1 -join $CONSUL_SERVER2
Apparently it's the default mode of this container (https://hub.docker.com/r/progrium/consul/). I think it's running in client mode because my client0 node does not appear under the consul service; only my 3 consul servers appear there.

Docker can't expose mesos port 5050

I have a Mesos container running; the container has the port mapping 0.0.0.0:32772->5050/tcp.
If I run docker exec CONTAINER_ID curl 0.0.0.0:5050, I can see what I want. However, I can't access HOST_IP:32772. I've tried running nginx in the same container, and I can connect to the nginx server from the host, so I think it's a Mesos configuration problem. How can I fix it?
If I understand correctly, you're running your Mesos master(s) from a Docker container. You should use host networking instead of bridge networking.
The following settings work, at least for me:
docker run \
--name=mesos_master \
--net=host \
-e MESOS_IP={YOUR_HOST_IP} \
-e MESOS_HOSTNAME={YOUR_HOST_IP} \
-e MESOS_CLUSTER=mesos-cluster \
-e MESOS_ZK=zk://{YOUR_ZK_SERVERS}/mesos \
-e MESOS_LOG_DIR=/var/log/mesos/master \
-e MESOS_WORK_DIR=/var/lib/mesos/master \
-e MESOS_QUORUM=2 \
mesosphere/mesos-master:0.27.1-2.0.226.ubuntu1404
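For reading bridge-mode mappings like the one in the question, note that in "0.0.0.0:32772->5050/tcp" the number before the arrow is the host port and the number after it is the container port; with --net=host the mapping disappears and the service is reached on its own port directly. A tiny helper to pull the host port out of such a mapping string (hypothetical, for illustration):

```shell
# Extract the host port from a `docker ps`/`docker port` style mapping,
# e.g. "0.0.0.0:32772->5050/tcp" -> 32772.
host_port() {
  printf '%s\n' "$1" | sed -n 's/^.*:\([0-9][0-9]*\)->.*$/\1/p'
}

host_port "0.0.0.0:32772->5050/tcp"   # prints 32772
```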

How to store my docker registry in the file system

I want to set up a private registry behind an nginx server. To do that I configured nginx with basic auth and started a docker container like this:
docker run -d \
-e STANDALONE=true \
-e INDEX_ENDPOINT=https://docker.example.com \
-e SETTINGS_FLAVOR=local \
-e STORAGE_PATH=/home/example/registry \
-p 5000:5000 \
registry
By doing that, I can log in to my registry and push/pull images... But if I stop the container and start it again, everything is lost. I would have expected my registry to be saved in /home/example/registry, but this is not the case. Can someone tell me what I missed?
I would have expected my registry to be saved in /home/example/registry but this is not the case
It is the case; it's just that the /home/example/registry directory is on the Docker container's file system, not on the Docker host's file system.
If you run your container with one of your Docker host's directories mounted as a volume in the container, it will achieve what you want:
docker run -d \
-e STANDALONE=true \
-e INDEX_ENDPOINT=https://docker.example.com \
-e SETTINGS_FLAVOR=local \
-e STORAGE_PATH=/registry \
-p 5000:5000 \
-v /home/example/registry:/registry \
registry
Just make sure that /home/example/registry exists on the Docker host side.
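A small pre-flight check for that last point might look like the following (a hypothetical helper; the path is the one from the answer):

```shell
# Ensure the host-side storage directory exists and is writable
# before starting the registry container with -v DIR:/registry.
ensure_storage_dir() {
  dir="$1"
  mkdir -p "$dir" || return 1
  [ -w "$dir" ]   # fail if the directory is not writable
}

# Example usage:
#   ensure_storage_dir /home/example/registry && docker run ... -v /home/example/registry:/registry registry
```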
