dynamic KAFKA_ADVERTISED_HOST_NAME on kafka container - docker

I run Kafka in a Docker container.
One of the requirements is that Kafka must be reachable by a producer running natively on the host machine.
This is why I set KAFKA_ADVERTISED_HOST_NAME to my host IP.
My docker-compose.yml looks like this:
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.1.10
      KAFKA_CREATE_TOPICS: "test:1:1"
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERES: PLAINTEXT://localhost:9092
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
and it works.
The problem is that I want to be able to use this docker-compose file on other machines as well, and I don't know what their IP will be.
Trying to change the IP to a name like 'kafka' made the broker unreachable from the host machine (although it was still reachable from other containers).
Is there a way to use the host IP in the docker-compose file without hardcoding it (so that it resolves to a different IP address on different machines)?
Is there another way of addressing this issue?

Did you try using the host.docker.internal property? It is not recommended for production usage, but it might work for you:
https://docs.docker.com/docker-for-mac/networking/#there-is-no-docker0-bridge-on-macos#i-want-to-connect-from-a-container-to-a-service-on-the-host
https://docs.docker.com/docker-for-windows/networking/#there-is-no-docker0-bridge-on-windows#i-want-to-connect-from-a-container-to-a-service-on-the-host
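If host.docker.internal doesn't fit, another option is Compose variable substitution, so the host IP is injected at startup instead of being hardcoded. A minimal sketch, assuming you export a variable named DOCKER_HOST_IP (the name is arbitrary, not part of the original question) before running docker-compose up:

kafka:
  image: wurstmeister/kafka
  ports:
    - "9092:9092"
  environment:
    # falls back to 127.0.0.1 when DOCKER_HOST_IP is not set
    KAFKA_ADVERTISED_HOST_NAME: ${DOCKER_HOST_IP:-127.0.0.1}
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181

On Linux, for example, export DOCKER_HOST_IP=$(hostname -I | awk '{print $1}') before starting Compose, or put the value in a .env file next to docker-compose.yml so each machine supplies its own IP.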

Kafka container connecting from inside the container and outside using the same hostname

I am aware there are already several questions and blog posts about this topic. I am following these two:
https://rmoff.net/2018/08/02/kafka-listeners-explained/
https://www.confluent.io/blog/kafka-client-cannot-connect-to-broker-on-aws-on-docker-etc/
But unfortunately, without success.
I'm trying to make the same code work whether I run it from my IDE on the host (with the Kafka broker in a container) or from a container inside the same Docker network. I can make each scenario work on its own, but not both together.
My docker compose:
zoo1:
  image: confluentinc/cp-zookeeper:7.2.1
  hostname: zoo1
  container_name: zoo1
  ports:
    - "2181:2181"
  environment:
    ZOOKEEPER_CLIENT_PORT: 2181
    ZOOKEEPER_SERVER_ID: 1
    ZOOKEEPER_SERVERS: zoo1:2888:3888

kafka1:
  image: confluentinc/cp-kafka:7.2.1
  hostname: kafka1
  container_name: kafka1
  ports:
    - "9092:9092"
    - "29092:29092"
  environment:
    KAFKA_LISTENERS: INTERNAL://0.0.0.0:29092,EXTERNAL://0.0.0.0:9092
    KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka1:29092,EXTERNAL://localhost:9092
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
    KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
    KAFKA_ZOOKEEPER_CONNECT: "zoo1:2181"
  depends_on:
    - zoo1
In this docker-compose setup, communication within the Docker network using kafka1:29092 as the bootstrap server works great, but from my laptop the same address doesn't work.
Is there any way to ensure that I can bootstrap to kafka1:29092 both locally and inside the container network? Do I even need the external listener?
Thanks
any way to ensure that both locally and inside the container network I can bootstrap to kafka1:29092?
No.
Your host isn't aware of the DNS / service names used by Docker.
Instead, add an environment variable in your code like KAFKA_BOOTSTRAP_SERVERS, then set it in your IDE via a run configuration (as localhost:9092), or as a container environment variable (kafka1:29092).
You can also remove - "29092:29092" from your compose file, since your host will never need that port to connect to the broker.
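For the in-network case, that variable can be set on the application container in the same compose file. A rough sketch (the service name myapp, its image, and the variable name KAFKA_BOOTSTRAP_SERVERS are illustrative, not from the original post):

myapp:
  image: myapp:latest   # hypothetical application image
  depends_on:
    - kafka1
  environment:
    # inside the Compose network, use the broker's service name and internal listener
    KAFKA_BOOTSTRAP_SERVERS: kafka1:29092

When running from the IDE on the host, the run configuration would instead set KAFKA_BOOTSTRAP_SERVERS=localhost:9092, so the application code itself never changes.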

accessing kafka running in docker-compose from other machines

I want to run Kafka as a single node, single broker, on one of the computers on our network and be able to access it from other machines. For example, by running docker-compose on 192.168.0.36, I want to access it from 192.168.0.19.
Since we can't use any Linux distribution, I have to run Kafka as a Docker container on Windows.
I know there are already a ton of questions and documents on this topic, including this question and this example and also this blog post, but unfortunately none of them worked out for me.
This is the compose file I'm using right now:
version: '3.7'
services:
  zookeeper:
    image: wurstmeister/zookeeper:3.4.6
    ports:
      - "2181:2181"
    expose:
      - "2181"
    volumes:
      - type: bind
        source: "G:\\path\\to\\zookeeper"
        target: /opt/zookeeper-3.4.6/data
  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092:9092"
    expose:
      - "9092"
    environment:
      KAFKA_ADVERTISED_LISTENERS: INSIDE://:9093, OUTSIDE://192.168.0.36:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT, OUTSIDE:PLAINTEXT
      KAFKA_LISTENERS: INSIDE://:9093,OUTSIDE://:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_BROKER_ID: 1
      KAFKA_LOG_DIRS: "/kafka"
    volumes:
      - type: bind
        source: "G:\\path\\to\\logs"
        target: /kafka/
Things I tried to debug the issue:
- Already tried all the different configurations in the mentioned questions and blog posts.
- I can access Kafka from 192.168.0.36, which is the machine running docker-compose, but not from 192.168.0.19 (NoBrokersAvailable error in kafka-python).
- Just to see whether it's an internal networking problem or not, I tried a similar docker-compose file running a Falcon API with gunicorn, and I can call that API from 192.168.0.19.
- I also used the Windows telnet tool to check whether port 9092 is accessible from different machines; it's accessible from 0.36 but not from 0.19.
- Tried using a custom network like this one.
I'm testing the connection using Python's kafka-python package. We have a multi-broker Kafka running on our on-premise Kubernetes cluster and it's working fine, so I don't think my testing scripts have any issues.
UPDATE
As OneCricketeer suggested, I tried this solution with different configurations, like 0.0.0.0:9092=>127.0.0.1:9092 and 192.168.0.36:9092=>127.0.0.1:9092. I also disabled the firewall. I'm still getting NoBrokersAvailable, but at least I can now reach 0.36:9092 from the other machine via telnet.

Docker Compose network_mode and port_binding compatibility issue

My docker-compose.yml contains this:
version: '3.2'
services:
  mysql:
    image: mysql:latest
    container_name: mysql
    restart: always
    network_mode: "host"
    hostname: localhost
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
    volumes:
      - $HOME/data/datasql:/var/lib/mysql
    ports:
      - 3306:3306
  user-management-service:
    build: user-management-service/
    container_name: user-management-service
    restart: always
    depends_on:
      - mysql
      - rabbitmq
      - eureka
    network_mode: "host"
    hostname: localhost
    ports:
      - 8089:8089
When I try to do docker-compose up, I get the following error:
"host" network_mode is incompatible with port_bindings
Can anyone help me with the solution?
network_mode: host is almost never necessary. For straightforward servers, like the MySQL server you show or what looks like a normal HTTP application, it's enough to use normal (bridged) Docker networking and ports:, like you show.
If you do set up host networking, it completely disables Docker's networking stack: you can't reach other containers by their service names, and you can't remap a container's port using ports: (or choose not to publish it at all).
You should delete the network_mode: lines from your docker-compose.yml file. The container_name: and hostname: lines are also unnecessary, and you can delete those too (one specific exception: RabbitMQ needs a fixed hostname:).
The two places I see host networking endorsed are either to call back to the host machine (see From inside of a Docker container, how do I connect to the localhost of the machine?), or because the application code has hard-coded localhost as the host name of the database or other components (in which case Docker and a non-Docker development setup fundamentally behave differently, and you should configure these locations with environment variables or another mechanism).
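As a minimal sketch of that advice, here is the mysql service from the question with network_mode:, hostname:, and container_name: removed, relying on ordinary bridged networking and the published port:

services:
  mysql:
    image: mysql:latest
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
    volumes:
      - $HOME/data/datasql:/var/lib/mysql
    ports:
      # other containers reach it as mysql:3306; the host as localhost:3306
      - "3306:3306"

Other services in the same compose file can then use mysql as the database host name.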
Quick solution:
Downgrade the docker-compose version and you'll be fine. The issue is with the latest docker-compose versions and network_mode: "host".
I faced the issue on v1.29.2, while everything worked smoothly on v1.27.4.
I had the same problem with network_mode: 'host'.
When I downgraded docker-compose from 1.29.2 to 1.25.4, it worked fine. Maybe some bug was introduced in the newer versions?
Get rid of the ports parameter in any service that uses network_mode; it's like doing the mapping twice.
mysql:
  image: mysql:latest
  container_name: mysql
  restart: always
  network_mode: "host"
  hostname: localhost
  environment:
    MYSQL_ROOT_PASSWORD: root
    MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
  volumes:
    - $HOME/data/datasql:/var/lib/mysql
....
....
To access the host's http://localhost from inside your Docker container, you need to replace:
network_mode: host
with:
ports:
  - 80:80
You can do the same with any other port.
If you want to connect to a local database, then when connecting to it don't use "localhost" or "127.0.0.1". Use "host.docker.internal" instead; that will allow traffic between your container and the database.
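One caveat: host.docker.internal works out of the box on Docker Desktop for Windows and macOS. On plain Linux with Docker 20.10 or newer, you can map it yourself; a minimal sketch (myservice and its image are placeholders):

services:
  myservice:
    image: myapp:latest   # hypothetical image
    extra_hosts:
      # makes host.docker.internal resolve to the host's gateway IP on Linux
      - "host.docker.internal:host-gateway"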

Kafka broker not accessible in docker compose

I have created a docker compose file where my application wants to use kafka.
docker-compose.yaml is:
version: '3.7'
services:
  api:
    depends_on:
      - kafka
    restart: on-failure
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 8080:8080
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092:9092"
    depends_on:
      - zookeeper
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.1.7
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_CREATE_TOPICS: "mytopic:1:1"
192.168.1.7 is my IP, which I got from ifconfig.
In my service I am giving the broker as 192.168.1.7:9092.
When I do docker ps and exec into my Kafka container, I am not able to access 192.168.1.0.
What am I doing wrong here? The strange thing is that in my application logs I see that the topic is created.
When I try to create the topic:
You don't need IP addresses other than 127.0.0.1.
192.168.1.7 seems to be your host IP, not the Docker IP, and yet you are not using network_mode: host, so the network is not allowing you to connect to the broker.
I recommend finding existing, functional Docker Compose files, such as the ones in this answer.
As posted above by @OneCricketeer, you don't have to hardcode any of your host IP addresses.
You can connect to the broker from within your api service using the Kafka service name itself, and the same name can be set as the advertised host name.
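A rough sketch of that idea with the compose file from the question, advertising the Compose service name instead of a hardcoded IP (only the relevant lines shown; this assumes the api container is the only client, since processes on the host would not be able to resolve the name kafka):

  kafka:
    image: wurstmeister/kafka
    environment:
      # advertise the service name; other containers on the Compose network can resolve it
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
  api:
    environment:
      KAFKA_BROKER: kafka:9092   # hypothetical variable the application reads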

Figure out IP address within docker container

I have a docker-compose file with several service-container definitions. One of the services communicates with Apache Kafka within the same docker-compose run.
So I have the kafka docker definition like this:
kafka:
  image: spotify/kafka
  ports:
    - "2181:2181"
    - "9092:9092"
  environment:
    ADVERTISED_HOST: 127.0.0.1
    ADVERTISED_PORT: 9092
I have my service definition in the same docker-compose file. In the startup script of the service, I have to somehow figure out the IP address of the Kafka instance.
I know I can use something like docker inspect to find out which IP address a container uses.
But how can I do it dynamically in a docker-compose environment?
EDIT
So, the right configuration should be (thank you, @nwinkler):
kafka:
  image: spotify/kafka
  ports:
    - "2181:2181"
    - "9092:9092"
  environment:
    ADVERTISED_HOST: kafka
    ADVERTISED_PORT: 9092

myservice:
  image: foo
  links:
    - kafka:kafka
Don't forget to set ADVERTISED_HOST to kafka (or whatever you named your Kafka container within docker-compose).
You can use the Docker Compose Links feature for this. If you provide a link to the kafka container from your other container, Docker Compose will ensure that your other container can access the Kafka container through its hostname - you will not have to know its IP address.
Example:
kafka:
  image: spotify/kafka
  ports:
    - "2181:2181"
    - "9092:9092"
  environment:
    ADVERTISED_HOST: 127.0.0.1
    ADVERTISED_PORT: 9092

myservice:
  image: foo
  links:
    - kafka:kafka
This will allow your myservice container to access the Kafka container through the kafka hostname. So from your myservice container, you can do something like curl http://kafka:9092 to access the service on the Kafka container.
Docker Compose does this through DNS: it creates a hostname/IP mapping in your container, allowing you to access the other container without knowing its IP address.
The IP of your container will be the IP you are looking for.
Append the port number (9092 in your case) to the container's IP to reach whatever Kafka is serving.
