Can someone point out how to set up Kafka with Docker? I have tried every tutorial I could find and I got the same error:
Can't resolve YYYYYYY:PORT address, where YYYYYYY is the container ID
I tried the KAFKA_LISTENERS, KAFKA_ADVERTISED_HOST_NAME, KAFKA_PORT and KAFKA_ADVERTISED_LISTENERS environment variables, but nothing worked. I mapped ports 9092:9092 and 2181:2181.
If someone has a working Dockerfile with Kafka I would appreciate it.
YYYYYYY:PORT address where YYYYYYY is the container ID
Without seeing your Dockerfile and the commands you've tried, it sounds to me like you are either not using localhost from outside the container to reach it, or you are using the Docker image name rather than the container ID.
If a tutorial shows it working, then I wouldn't think seeing another Dockerfile would help... TBH, it just seems like a misconception that the container ID is relevant; even if you used the container name from outside the container, you'd get a network error because that name is not resolvable by your DNS servers.
That all being said, the Confluent Quick Start (Docker) gives a good overview of not just Kafka, but also Zookeeper and other Kafka-related components.
See https://github.com/confluentinc/cp-docker-images/blob/5.0.0-post/examples/cp-all-in-one/docker-compose.yml for an example of a working Docker Compose.
Also, you need to get your networking configuration right, as Kafka works across hosts and needs to be able to access them all. This post explains it in detail.
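Distilled from that linked file, a minimal Compose sketch looks roughly like this (image versions and the single-listener layout are assumptions; the linked cp-all-in-one file is more complete):

```yaml
# Minimal Zookeeper + Kafka pair for host-only access (a sketch, not
# the full cp-all-in-one stack; versions are assumptions).
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:5.0.0
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
  kafka:
    image: confluentinc/cp-kafka:5.0.0
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      # advertising localhost means only host-side clients can connect
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
```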
Related
I have a server application (that I cannot change) that, when you connect as a client, will give you other URLs to interact with. Those URLs are also part of the same server so the URL advertised uses the hostname of a docker container.
We are running in a mixed economy (some docker containers, some regular applications). We actually need to set up where we have the server running as a docker application on a single VM, and that server will be accessed by non-docker clients (as well as docker clients not running on the same docker network).
So you have a server hostname (the Docker container's hostname) and a Docker host hostname (the hostname of the VM running Docker).
The client's initial connection is to dockerhostname:1234, but when the server sends URLs to the client, it sends serverhostname:5678 ... which is not resolvable by the client. So far, we've addressed this by adding "serverhostname" to the client's /etc/hosts file, but this is a pain to maintain.
I have also set the --hostname of the server docker container to the same name as the docker host and it has mostly worked but I've seen where a docker container running on the same docker network as the server had issues connecting to the server.
I realize this is not an ideal docker setup. We're migrating from a history of delivering as rpm's to delivering containers .. but it's a slow process. Our company has lots of applications.
I'm really curious if anyone has advice/lessons learned with this situation. What is the best solution to my URL problem? (I'm guessing it is the /etc/hosts we're already doing)
You can do port mapping: -p 8080:80
How do you build and run your container? With a shell command, a Dockerfile, or a Compose file?
Check this:
docker port
Then connect like this and it will work:
[SERVERIP]:[PORT MAPPED ON THE DOCKER HOST]
To work with hostnames you need DNS, or you can use the hosts file.
The hosts file solution is not a good idea; that's how name resolution worked in the early days of the internet ^^
If something changes, you have to update the hosts file on every client!
Or use a static ip for your container:
docker network ls
docker network create my-network
docker network create --subnet=172.18.0.0/16 mynet123
docker run --net mynet123 --ip 172.18.0.22 -it ubuntu bash
Assign static IP to Docker container
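The same static-IP setup can also be expressed in Compose (a sketch reusing the subnet and address from the commands above):

```yaml
# Compose equivalent of the `docker network create` / `--ip` commands
version: '2'
services:
  app:
    image: ubuntu
    networks:
      mynet123:
        ipv4_address: 172.18.0.22    # fixed address on the user-defined network
networks:
  mynet123:
    ipam:
      config:
        - subnet: 172.18.0.0/16
```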
You're describing a situation that requires a ton of work. The shortest path to success is your "adding things to /etc/hosts file" process. You can use configuration management, like ansible/chef/puppet to only have to update one location and distribute it out.
But at that point, you should look into something called "service discovery." There are a ton of ways to skin this cat, but the short of it is this: you need some place (lazy mode is DNS) that stores a database of your different machines/services. When a machine needs to connect to another machine for a service, it asks that database. Hence the "service discovery" part.
Implementing that database is the hardest part; there are a bunch of different approaches, and you'll need to spend some time with your team to figure out which one is best.
Normally running an internal DNS server like dnsmasq or bind should get you most of the way, but if you need something like consul that's a whole other conversation. There are a lot of options, and the best thing to do is research, and audit what you actually need for your situation.
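As a sketch of the "lazy mode is DNS" option, a single dnsmasq entry on an internal resolver can map the advertised hostname to the docker host's address (the hostname and IP here are hypothetical):

```
# /etc/dnsmasq.conf
# resolve the container's advertised hostname to the VM running docker
address=/serverhostname/192.168.1.50
```

Every client pointed at that resolver then gets the mapping automatically, instead of each one carrying its own /etc/hosts entry.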
We're trying to run a consumer in a Docker container and have Kafka, ZK and Schema Registry run outside Docker. Most of the examples I see are for running Kafka inside Docker and making sure it is accessible from outside; are there any examples of the other way round, i.e., making Kafka accessible from inside Docker? Any leads/examples will help. Thank you!
A consumer inside a container should work the same as a consumer just on your host.
The broker's advertised.listeners should be the broker's external IP. When any client connects to this from inside a container, it will be routed through the host's network interface.
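As a sketch of that advice, the broker's server.properties on the host might look like this (the IP is a placeholder for your broker's actual external address):

```properties
# listen on all interfaces so containers can reach the broker...
listeners=PLAINTEXT://0.0.0.0:9092
# ...but advertise the host's external IP, which is routable from
# inside containers via the host's network interface
advertised.listeners=PLAINTEXT://192.168.1.50:9092
```

The container's consumer then uses bootstrap.servers=192.168.1.50:9092, the same address a host-side client would use.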
How do I use the confluent/cp-kafka image in Docker Compose, exposing it both on localhost and under my container's network name, kafka?
Do not link this as duplicate of:
Connect to docker kafka container from localhost and another docker container
Cannot produce message to kafka from service running in docker
These do not solve my issue because the methods they use are deprecated by confluent/cp-kafka, and I want to connect on localhost and on the Docker network.
In the configure script on confluent/cp-kafka they do this annoying task:
# By default, LISTENERS is derived from ADVERTISED_LISTENERS by replacing
# hosts with 0.0.0.0. This is good default as it ensures that the broker
# process listens on all ports.
if [[ -z "${KAFKA_LISTENERS-}" ]]
then
  export KAFKA_LISTENERS
  KAFKA_LISTENERS=$(cub listeners "$KAFKA_ADVERTISED_LISTENERS")
fi
It always sets whatever I give in KAFKA_ADVERTISED_LISTENERS to 0.0.0.0! Using the Docker network, with
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9093,PLAINTEXT://kafka:9093
I expect the listeners to be either localhost:9092 or 0.0.0.0:9092, plus some Docker IP such as PLAINTEXT://172.17.0.1:9093 (whatever kafka resolves to on the Docker network).
Currently I can get only one or the other to work. So using localhost, it only works on the host system, no docker containers can access it. Using kafka, it only works in the docker network, no host applications can access it. I want it to work with both. I am using docker compose so that I can have zookeeper, kafka, redis, and my application start up. I have other applications that will startup without docker.
Update
So when I set PLAINTEXT://localhost:9092, I can access Kafka running in Docker from outside Docker.
When I set PLAINTEXT://kafka:9092, I cannot access Kafka running in Docker from outside Docker.
This is expected; however, with PLAINTEXT://localhost:9092,PLAINTEXT://kafka:9093 I would expect to be able to access Kafka running in Docker both inside and outside Docker. The confluent/cp-kafka image is wiping out localhost and kafka, setting them both to 0.0.0.0, then throwing an error that I set two different ports on the same IP...
Maybe I'm just clashing into some opinionated docker image and should look for a different image...
Maybe I'm just clashing into some opinionated docker image and should look for a different image...
The image is fine. You might want to read this explanation of the listeners.
tl;dr - you don't want to (and shouldn't?) use the same listener "protocol" in different networks.
Use the advertised.listeners, no need to edit the listeners
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:9092,PLAINTEXT_HOST://localhost:29092
When PLAINTEXT://localhost:9093 is loaded inside the container, you need to add a port mapping for 9093, which should be self-explanatory; then you connect to localhost:9093 and it should work.
Then, if you also had PLAINTEXT://kafka:9092, that will only work within the Docker Compose network, not externally to your DNS servers, because that's how Docker networking works. You should be able to run other applications as part of that Docker network with the --network flag, or link containers using Docker Compose.
Keep in mind that if you're running on a Mac, the recommended way (per the Confluent docs) is to run these containers in Docker Machine, in a VM, where you can manage the external port mappings correctly using Docker's --net=host flag. However, using the blog above, it all works fine on a Mac outside a VM.
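Putting that listener layout into a full Compose file, a dual-listener sketch for confluentinc/cp-kafka could look like this (the service names and the 29092 host port follow the line above; the versions and remaining settings are assumptions):

```yaml
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:5.0.0
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
  broker:
    image: confluentinc/cp-kafka:5.0.0
    depends_on:
      - zookeeper
    ports:
      - "29092:29092"    # only the host-facing listener is published
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      # two listener *names*, so each network gets its own advertised address
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
```

Containers on the Compose network connect to broker:9092; applications on the host connect to localhost:29092.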
I have a Kafka server on my local host, and I want to connect to it from my Docker container.
I searched for how to connect to local services from a Docker container and found this: how-to-connect-to-local-mysql-server-through-docker
But it didn't work. Please help me, thanks~
Try updating the Kafka config as shown below:
$ nano config/server.properties
Uncomment listeners and paste the following:
listeners=PLAINTEXT://<local-ip>:9092
Save the file and restart Kafka.
Hope this helps!
https://github.com/provectus/kafka-ui/discussions/1081
In my Kafka server.properties I had advertised.listeners=PLAINTEXT://localhost:9092. This only allows access via localhost. Once I changed it to my machine's IP address, everything worked fine.
If I understand right, your question can be rephrased as "How can I access my host machine from within my Docker container?"
As I wrote in another answer, you can set the gateway when starting your container, create some kind of proxy to access your Kafka, or take the host IP from inside the container.
I have built a Docker image for Kafka (wurstmeister/kafka-docker). Inside the Docker container I am able to create topics, produce messages and consume messages using the built-in shell scripts. Now I am using the code hosted at https://github.com/mapr-demos/kafka-sample-programs to connect to the Kafka broker from my host machine. After building and running the program, nothing happens and it hangs. I guess producer.send is not able to connect to the Kafka broker. Any help is appreciated.
You can see that both the consumer.properties and the producer.properties files in that project specify bootstrap.servers=localhost:9092.
Since you cannot connect to the dockerized Kafka service using localhost:9092, you might try finding the IP address of the Docker container with, for example, docker inspect kafka | grep IPA (assuming that the name of your container is kafka). Then replace localhost with that IP address in those two properties files.
I am using the ches/kafka Docker image. Have a look at its explanation of KAFKA_ADVERTISED_HOST_NAME.