Docker + Kafka: Is it possible to run a Consumer on Docker which reads from a Kafka topic outside Docker?

We're trying to run a Consumer in a Docker container while Kafka, ZooKeeper, and the Schema Registry run outside Docker. Most of the examples I see are for running Kafka inside Docker and making sure it is accessible from outside; are there any examples of the other way round, i.e., making Kafka accessible from inside Docker? Any leads/examples will help. Thank you!

A consumer inside a container should work the same as a consumer running directly on your host.
The broker's advertised.listeners should be the broker's external IP. When a client inside a container connects to that address, the connection is routed through the host's network interface.
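As a rough sketch of that setup: assuming the host's external IP is 192.168.1.50 and a hypothetical image name, the broker outside Docker would set advertised.listeners=PLAINTEXT://192.168.1.50:9092 in server.properties, and the containerized consumer would simply point at that address:

services:
  consumer:
    image: my-consumer   # hypothetical consumer image
    environment:
      # The broker outside Docker advertises this host address, so the
      # container connects to it like any other external client would.
      BOOTSTRAP_SERVERS: "192.168.1.50:9092"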

Related

How can I pipe Kafka messages into a docker container?

I have a computer currently hosting the zookeeper and kafka servers.
I also have, in the same machine, a script that consumes messages sent to the local kafka server. The consumer script works as intended if I run it directly.
I want to run the consumer script from inside a docker container.
I have successfully built and run a container that runs the consumer script, but it waits forever for the kafka messages.
How can I make the kafka messages be redirected into the container? Is the only way to do this to host the zookeeper and kafka servers directly in the container?
By default, the consumer script's container is isolated from the host's networking stack. The Kafka consumer needs to be able to see your brokers and Zookeeper instances running on your host machine.
There are a number of solutions to this issue discussed here: Forward host port to docker container
A simple short-term solution is to run your container on host networking by passing --network=host, which lets the consumer container share the host's network namespace (e.g. you can then use localhost:9092). Note that this only works on Linux hosts.
Docker docs on using host networking: https://docs.docker.com/network/host/
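In docker-compose terms, the same short-term fix looks like the sketch below (the image name consumer-script is an assumption):

services:
  consumer:
    image: consumer-script   # hypothetical image wrapping the consumer script
    # Share the host's network namespace, so localhost:9092 inside the
    # container reaches the broker running directly on the host (Linux only).
    network_mode: host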

How do Docker containers expose services?

I'm deploying a stack of services through the command:
docker stack deploy -c <docker-compose.yml> <stack-name>
And I'm mapping the ports of one of these services in docker compose with ports: 8000:8000.
The network driver being used is overlay.
I can access these services via localhost:8000 and via the peers' IPs(?).
When I inspect the network created, I can see the local IP of each container (for instance, 10.0.1.2). But where is the external IP of the container (the one like 172.0. ...)?
I am running these docker containers on an Ubuntu virtual machine.
How can I access the services running in containers from nodes on other networks? Isn't it possible to access them via hostIP:port?
If so, how do I get the host IP? When I run docker-machine ip I get "host is not running".
[EDIT: I wasn't doing port mapping between the host and the VM in virtualbox. Now it works!]
What's the best way to communicate between containers on the same swarm?
Thanks
What's the best way to communicate between containers on the same swarm? Through name discovery?
In general, if you communicate between containers, you should use the container/service name.
And for your other problem you probably want a reverse proxy like nginx or traefik.
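For example, on a shared compose/swarm network Docker's embedded DNS resolves service names, so containers can address each other without hard-coded IPs. A minimal sketch, with both image names assumed:

services:
  api:
    image: my-api   # hypothetical backend image
  web:
    image: my-web   # hypothetical frontend image
    environment:
      # Docker's embedded DNS resolves the service name "api" on the
      # network both services share, so no IPs are hard-coded.
      API_URL: "http://api:8000"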

How to use confluent/cp-kafka image in docker compose with advertising on localhost and my network container name kafka?

Do not link this as duplicate of:
Connect to docker kafka container from localhost and another docker container
Cannot produce message to kafka from service running in docker
These do not solve my issue because the methods they use are deprecated by confluent/cp-kafka, and I want to connect both on localhost and on the docker network.
In the configure script on confluent/cp-kafka they do this annoying task:
# By default, LISTENERS is derived from ADVERTISED_LISTENERS by replacing
# hosts with 0.0.0.0. This is good default as it ensures that the broker
# process listens on all ports.
if [[ -z "${KAFKA_LISTENERS-}" ]]
then
  export KAFKA_LISTENERS
  KAFKA_LISTENERS=$(cub listeners "$KAFKA_ADVERTISED_LISTENERS")
fi
It always sets whatever I give in KAFKA_ADVERTISED_LISTENERS to 0.0.0.0! Using the docker network, doing
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092,PLAINTEXT://kafka:9093
I expect the listeners to be either localhost:9092 or 0.0.0.0:9092, plus some docker IP, PLAINTEXT://172.17.0.1:9093 (whatever kafka resolves to on the docker network).
Currently I can get only one or the other to work. Using localhost, it only works on the host system; no docker containers can access it. Using kafka, it only works on the docker network; no host applications can access it. I want it to work with both. I am using docker compose so that zookeeper, kafka, redis, and my application all start up; I have other applications that will start up without docker.
Update
So when I set PLAINTEXT://localhost:9092, I can access kafka running in docker from outside of docker.
When I set PLAINTEXT://kafka:9092, I cannot access kafka running in docker from outside of docker.
This is expected. However, with PLAINTEXT://localhost:9092,PLAINTEXT://kafka:9093 I would expect to access kafka running in docker both inside and outside docker. Instead, the confluent/cp-kafka image is wiping out localhost and kafka, setting them both to 0.0.0.0, then throwing an error because I set 2 different ports on the same IP...
Maybe I'm just clashing into some opinionated docker image and should look for a different image...
The image is fine. You might want to read this explanation of the listeners.
tl;dr - you don't want to (and shouldn't) use the same listener "protocol" name on different networks.
Use advertised.listeners; there's no need to edit listeners:
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:9092,PLAINTEXT_HOST://localhost:29092
When PLAINTEXT://localhost:9093 is loaded inside the container, you need to add a port mapping for 9093; then you connect to localhost:9093 from the host and it should work.
Then, if you also had PLAINTEXT://kafka:9092, that will only resolve within the Docker Compose network, not from your external DNS, because that's how Docker networking works. You should be able to run other applications as part of that Docker network with the --network flag, or link containers using Docker Compose.
Keep in mind that if you're running on Mac, the recommended way (as per the Confluent docs) is to run these containers in Docker Machine, in a VM, where you can manage the external port mappings correctly using the --net=host flag of Docker. However, using the blog above, it all works fine on a Mac outside a VM.
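Putting the answer's listener layout into a compose file, a minimal single-broker sketch might look like this (replication factor 1 and the zookeeper wiring are just the usual quickstart assumptions, not something specific to the question):

version: "3"
services:
  zookeeper:
    image: confluentinc/cp-zookeeper
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
  broker:
    image: confluentinc/cp-kafka
    depends_on:
      - zookeeper
    ports:
      - "29092:29092"   # only the host-facing listener needs a mapping
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      # Two listener names on two ports: broker:9092 for clients on the
      # compose network, localhost:29092 for clients on the host.
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1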

Docker swarm, Consul and Spring Boot

I have 6 microservices packed in docker containers. On every swarm node, I have installed a consul agent, bound to the host IP, with the client in 0.0.0.0 mode.
All microservices are in a docker-compose file which I am running from the Swarm manager.
The microservices are written in Java, and in bootstrap.yml I must specify the consul agent endpoint. Possible choices are:
localhost
${HOSTIP} environment variable
Problems:
- localhost is not the host's localhost but the container's localhost, and the consul agent is on the host, not on the container's localhost.
- ${HOSTIP}: in the compose file I have to supply this env var, but I don't know where the Swarm manager will schedule the microservice, so I cannot know which IP address will be used.
I tried to export the host IP address on each node, but since I am running compose from the manager, it will not read this variable.
Do you have any proposal how to solve this? I have a consul cluster: 3 managers and 3 nodes. On each manager and node I have a consul agent started (as a docker container). No matter what type of networking I use, I am not able to start up the microservices. I started consul with --net=host and --net=bridge, but neither works.
Is there anyone with some idea?
Thanks ahead.
So you are running consul in containers too, right? Is it possible in your setup to link containers? You could start the consul container as "consul" on each host and link your microservices to it. Linked containers get a hosts entry, so the consul service should be reachable at "consul:8500" from within your services.
Edit: If you are using the official Consul Docker image from HashiCorp, you can configure the client address to 0.0.0.0; this should make the consul API available to the other containers running on the host.
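A rough compose sketch of that edit; the -dev flag and both image/service names here are assumptions, and the Spring Cloud Consul variables map to the spring.cloud.consul.host/port properties mentioned for bootstrap.yml:

services:
  consul:
    image: consul
    # -client=0.0.0.0 binds the HTTP API to all interfaces instead of just
    # loopback, so other containers on the host can reach the agent.
    command: agent -dev -client=0.0.0.0
  microservice:
    image: my-java-service   # hypothetical Spring Boot service
    environment:
      # Spring Cloud Consul settings, pointing at the agent by service name
      SPRING_CLOUD_CONSUL_HOST: consul
      SPRING_CLOUD_CONSUL_PORT: "8500"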
Let me answer my own question: this is not the way we want to do it. I mean, we cannot put some things in Swarm and some things outside Swarm and expect it to work. It will not. Consul as service discovery cannot be used outside Swarm either. The simple answer is to use Docker's own orchestration and service discovery and not involve Consul. If you are using Swarm, everything should be in overlay networks (rabbit, redis, elk and so on)...

How can a container enumerate hosts available on the network?

Use case: an haproxy container running with docker compose. I want the container to discover which hosts are available in order to recreate the haproxy config and reload it.
I know there will be one or more containers named server1 and server2 available. From inside the haproxy container I can query DNS for server1 and receive more than one IP address. Is that the only way to know when a new server1 container becomes available or dies? I know I can use the docker API from python running inside a container that has the docker host socket mapped to it, but I'm not sure that will work when running on swarm.
The perfect solution would be an API or command that lets me register an event handler that is called when a new container joins the network.
There is a solution using Registrator (https://github.com/gliderlabs/registrator), Consul, and Consul Template.
Consul is the service discovery store.
Consul Template watches Consul, then updates the HAProxy config and reloads it.
Registrator listens to the Docker Engine and updates Consul whenever a container comes up or goes down.
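Wired together in compose, the trio might look like this minimal sketch (the HAProxy image bundling Consul Template is hypothetical; the socket mount follows Registrator's documented usage):

services:
  consul:
    image: consul
    command: agent -dev -client=0.0.0.0   # dev-mode agent, API open to containers
  registrator:
    image: gliderlabs/registrator
    volumes:
      # Registrator watches Docker Engine events through the socket
      - /var/run/docker.sock:/tmp/docker.sock
    command: consul://consul:8500   # write container (de)registrations to Consul
  haproxy:
    image: my-haproxy-with-consul-template   # hypothetical image bundling
    depends_on:                              # HAProxy and Consul Template
      - consul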
For the full tutorial, you can refer to my blog (https://sonnguyen.ws/microservices-with-docker-swarm-and-consul/) to know how to implement it.
