When I was running Kafka and Zookeeper without Docker, I could see the topic partition log files (directories named like "TOPICNAME-PARTITIONNUMBER") in the /tmp/kafka-logs directory. Now with Docker, even though I specify the log directory in the volumes section of docker-compose.yml, I can't see those files in the Docker VM. Is there anything I'm missing here? Any idea where I could find these directories in Docker VMs?
zookeeper:
  image: confluent/zookeeper
  container_name: zookeeper
  ports:
    - "2181:2181"
    - "15001:15000"
  environment:
    ZK_SERVER_ID: 1
  volumes:
    - /tmp/docker/zk1/logs:/logs
    - /tmp/docker/zk1/data:/data
kafka1:
  image: confluent/kafka
  container_name: kafka1
  ports:
    - "9092:9092"
    - "15002:15000"
  links:
    - zookeeper
  environment:
    KAFKA_BROKER_ID: 1
    KAFKA_OFFSETS_STORAGE: kafka
    # This is the container IP
    KAFKA_ADVERTISED_HOST_NAME: 192.168.99.100
  volumes:
    - /tmp/docker/kafka1/logs:/logs
    - /tmp/docker/kafka1/data:/data
This is how we configured logs in our compose file, and it has the log files in it. You should jump onto the container and look at the /var/lib/kafka/data directory and the data inside it:
volumes:
  - kb1_data:/var/lib/kafka/data
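If you use a named volume like this, note that in a version 2+ compose file it must also be declared under a top-level volumes key, and you can inspect the data from the host with docker exec. A minimal sketch (kb1_data is the name from the snippet above; kafka1 is the container name from the question):

volumes:
  kb1_data:

# list the topic partition directories inside the running container
docker exec -it kafka1 ls /var/lib/kafka/data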
Remember that for volumes, ports, and the other resource-sharing fields in docker-compose, the first part of each entry refers to the host and the second to the container. So you should swap the order of your volumes values:
zookeeper:
  image: confluent/zookeeper
  container_name: zookeeper
  ports:
    - "2181:2181"
    - "15001:15000"
  environment:
    ZK_SERVER_ID: 1
  volumes:
    # - ./host/folder:/container/folder
    - ./logs:/tmp/docker/zk1/logs
    - ./data:/tmp/docker/zk1/data
kafka1:
  image: confluent/kafka
  container_name: kafka1
  ports:
    # - "host-port:container-port"
    - "9092:9092"
    - "15002:15000"
  links:
    - zookeeper
  environment:
    KAFKA_BROKER_ID: 1
    KAFKA_OFFSETS_STORAGE: kafka
    # This is the container IP
    KAFKA_ADVERTISED_HOST_NAME: 192.168.99.100
  volumes:
    # - ./host/folder:/container/folder
    - ./logs:/tmp/docker/kafka1/logs
    - ./data:/tmp/docker/kafka1/data
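After recreating the containers you can verify the mount from both sides. A quick check, assuming the broker's log.dirs actually points at the mounted container path (/tmp/docker/kafka1/logs here):

# inside the container
docker exec -it kafka1 ls /tmp/docker/kafka1/logs
# the same files on the host side of the bind mount
ls ./logs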
I am starting Zookeeper, Kafka, and Kafdrop with docker-compose locally, and everything works.
When I try to do the same thing inside an EC2 instance, I get this error.
The EC2 type that I'm using is a t2.micro with an EBS volume in the default VPC and subnet.
docker-compose.yaml
version: "2"
services:
kafdrop:
image: obsidiandynamics/kafdrop
container_name: kafka-web
restart: "no"
ports:
- "9000:9000"
environment:
KAFKA_BROKERCONNECT: "kafka:9092"
JVM_OPTS: "-Xms16M -Xmx48M -Xss180K -XX:-TieredCompilation -XX:+UseStringDeduplication -noverify"
depends_on:
- "kafka"
networks:
- nesjs-network
zookeeper:
image: 'docker.io/bitnami/zookeeper:3-debian-10'
container_name: zookeeper
ports:
- 2181:2181
environment:
- ALLOW_ANONYMOUS_LOGIN=yes
networks:
- nesjs-network
kafka:
image: 'docker.io/bitnami/kafka:2-debian-10'
container_name: kafka
ports:
- 9092:9092
- 9093:9093
environment:
- KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
- ALLOW_PLAINTEXT_LISTENER=yes
- KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE=true
- KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CLIENT:PLAINTEXT,EXTERNAL:PLAINTEXT
- KAFKA_CFG_LISTENERS=CLIENT://:9092,EXTERNAL://:9093
- KAFKA_CFG_ADVERTISED_LISTENERS=CLIENT://kafka:9092,EXTERNAL://kafka:9093
- KAFKA_INTER_BROKER_LISTENER_NAME=CLIENT
depends_on:
- zookeeper
networks:
- nesjs-network
This docker-compose.yaml works locally without any issue, but it doesn't in my EC2 instance.
The problem is at the EC2 configuration level.
Kafka and Kafdrop need a certain amount of resources, such as RAM and vCPUs.
Instead of a t2.micro, use a t2.medium with a 30 GB EBS volume and the other resources (VPC, subnet, security group) left at their defaults.
This config works for me.
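If you do want to squeeze the stack onto a smaller instance instead, one option is to cap the JVM heaps the same way the file already does for Kafdrop. A sketch, assuming the bitnami image passes the standard KAFKA_HEAP_OPTS variable through to the broker (the 256m figure is just an illustration):

  kafka:
    environment:
      # shrink the broker heap so Kafka, Zookeeper and Kafdrop fit in ~1 GB of RAM
      - KAFKA_HEAP_OPTS=-Xmx256m -Xms256m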
I have 3 docker-compose files: one to start Kafka, and the other two for a consumer and a producer. I added external_links to kafka in the other docker-compose files, but I am still unable to access Kafka from inside those containers. From outside a container I can access Kafka through localhost:9092, but what do I use inside a Docker container?
# docker-compose1.yml
version: "3.6"
services:
  zookeeper:
    image: 'docker.io/bitnami/zookeeper:3.7'
    container_name: zookeeper
    ports:
      - '2181:2181'
    volumes:
      - 'zookeeper_data:/bitnami'
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
  kafka:
    image: 'docker.io/bitnami/kafka:3'
    container_name: kafka
    ports:
      - '9092:9092'
    volumes:
      - 'kafka_data:/bitnami'
    environment:
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_LISTENERS=PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
      - KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      - KAFKA_INTER_BROKER_LISTENER_NAME=PLAINTEXT
      - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
      - ALLOW_PLAINTEXT_LISTENER=yes
      - KAFKA_ADVERTISED_HOST_NAME=localhost
      - KAFKA_ADVERTISED_PORT=9092
      - KAFKA_AUTO_CREATE_TOPICS_ENABLE=true
    depends_on:
      - zookeeper
volumes:
  zookeeper_data:
    external: true
  kafka_data:
    external: true
# docker-compose2.yml
version: "3.6"
services:
  web:
    hostname: ocp-transmitter
    image: 'ocp/transmitter'
    command: bash -c "bundle install && foreman start"
    ports:
      - '3000:3000'
    volumes:
      - .:/app:cached
    stdin_open: true
    tty: true
    external_links:
      - kafka
First, remove these; they are deprecated:
- KAFKA_ADVERTISED_HOST_NAME=localhost
- KAFKA_ADVERTISED_PORT=9092
Second, read the bitnami image documentation more carefully: all the Kafka properties start with KAFKA_CFG_. Then read the section about internal/external listeners.
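As a sketch of what that renaming looks like for the variables in the question (same values, only the prefix changes; the bitnami entrypoint maps a variable like KAFKA_CFG_FOO_BAR to foo.bar in server.properties):

    environment:
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_CFG_LISTENERS=PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      - KAFKA_CFG_INTER_BROKER_LISTENER_NAME=PLAINTEXT
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
      - ALLOW_PLAINTEXT_LISTENER=yes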
The linked answer is correct: Communication between multiple docker-compose projects.
Run docker network create with a name to set up an external bridge network separately from Compose, then add a networks section to each service that should join it (Zookeeper, Kafka, and your Kafka clients). Then make sure the network is marked external:
networks:
  example-net:
    external: true
Then you'd use kafka:29092 in your apps, not localhost, and not port 9092
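Putting it together, a minimal sketch (example-net is the placeholder name from above). First create the network once, outside Compose:

docker network create example-net

Then, in each compose file, attach the relevant services to it and declare the network external:

services:
  kafka:
    networks:
      - example-net
networks:
  example-net:
    external: true

With that in place, a client container in the other compose project resolves the broker as kafka:29092.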
I am trying to use kompose convert on my docker-compose.yaml files. However, when I run the command:
kompose convert -f docker-compose.yaml
I get the output:
WARN Volume mount on the host "/home/centos/Sprint0Demo/Servers/elasticSearchConnector/etc/kafka-connect" isn't supported - ignoring path on the host
WARN Volume mount on the host "/home/centos/Sprint0Demo/Servers/elasticSearchConnector/etc/kafka-elasticsearch" isn't supported - ignoring path on the host
WARN Volume mount on the host "/home/centos/Sprint0Demo/Servers/elasticSearchConnector/etc/kafak" isn't supported - ignoring path on the host
It also prints similar warnings for the other persistent volumes.
My docker-compose file is:
version: '3'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.2.1
    container_name: es01
    environment:
      [env]
    ulimits:
      nproc: 3000
      nofile: 65536
      memlock: -1
    volumes:
      - /home/centos/Sprint0Demo/Servers/elasticsearch:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - kafka_demo
  zookeeper:
    image: confluentinc/cp-zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
    volumes:
      - /home/centos/Sprint0Demo/Servers/zookeeper/zk-data:/var/lib/zookeeper/data
      - /home/centos/Sprint0Demo/Servers/zookeeper/zk-txn-logs:/var/lib/zookeeper/log
    networks:
      kafka_demo:
  kafka0:
    image: confluentinc/cp-kafka
    container_name: kafka0
    environment:
      [env]
    volumes:
      - /home/centos/Sprint0Demo/Servers/kafkaData:/var/lib/kafka/data
    ports:
      - "9092:9092"
    depends_on:
      - zookeeper
      - es01
    networks:
      kafka_demo:
  schema_registry:
    image: confluentinc/cp-schema-registry:latest
    container_name: schema_registry
    environment:
      [env]
    ports:
      - 8081:8081
    networks:
      - kafka_demo
    depends_on:
      - kafka0
      - es01
  elasticSearchConnector:
    image: confluentinc/cp-kafka-connect:latest
    container_name: elasticSearchConnector
    environment:
      [env]
    volumes:
      - /home/centos/Sprint0Demo/Servers/elasticSearchConnector/etc/kafka-connect:/etc/kafka-connect
      - /home/centos/Sprint0Demo/Servers/elasticSearchConnector/etc/kafka-elasticsearch:/etc/kafka-elasticsearch
      - /home/centos/Sprint0Demo/Servers/elasticSearchConnector/etc/kafak:/etc/kafka
    ports:
      - "28082:28082"
    networks:
      - kafka_demo
    depends_on:
      - kafka0
      - es01
networks:
  kafka_demo:
    driver: bridge
Does anyone know how I can fix this issue? I was thinking it has to do with the warning message saying that it's a volume mount vs. a host mount?
I have done some research, and there are three things to point out:
kompose does not support volume mounts on the host. You might consider using emptyDir instead.
Kubernetes makes it difficult to pass in host/root volumes. You can try hostPath.
kompose convert --volumes hostPath works for k8s.
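For reference, the command and a sketch of the kind of volume it generates (the volume name below is illustrative, not kompose's exact output):

kompose convert -f docker-compose.yaml --volumes hostPath

# excerpt from a generated Deployment spec
spec:
  containers:
    - name: elasticsearchconnector
      volumeMounts:
        - mountPath: /etc/kafka-connect
          name: connector-hostpath0
  volumes:
    - name: connector-hostpath0
      hostPath:
        path: /home/centos/Sprint0Demo/Servers/elasticSearchConnector/etc/kafka-connect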
Also you can check out Compose on Kubernetes if you'd like to run things on a single machine.
Please let me know if that helped.
I have a docker-compose.yml file:
zookeeper:
  image: zookeeper:3.4
  ports:
    - 2181:2181
kafka:
  image: ches/kafka:latest
  ports:
    - 9092:9092
  links:
    - zookeeper:3.4
myDpm:
  image: dpm-image:latest
  ports:
    - 9000:9000
  links:
    - kafka:latest
mySql:
  image: mysql:latest
  environment:
    MYSQL_USERNAME: root
    MYSQL_ROOT_PASSWORD: root
myMc3:
  image: mc3-v3:3.0
  ports:
    - 9001:9000
  links:
    - mySql:latest
  environment:
    runMode: dev
myElastic:
  image: elasticsearch:2.4.0
  ports:
    - 9200:9200
Running docker-compose up -d creates containers for all of these images.
Please note that I have already built all the images, so none of them are pulled from a registry when I run docker-compose up.
All the containers run successfully, but the problem is that they cannot interact with each other, even though I used links in my docker-compose.yml to provide communication between them. I think the links option is not working for me: Kafka is not able to communicate with Zookeeper (I used links to connect Zookeeper and Kafka).
In short, why is the links option not working?
Or am I going wrong somewhere?
Can anyone please point me in the right direction?
Note: all the containers work separately, but they are not able to communicate with each other.
The issue is that you are linking your containers improperly. To link to a container in another service, either specify both the service name and a link alias (SERVICE:ALIAS), or just the service name. See the Docker Compose documentation for further information. Corrected compose file:
zookeeper:
  image: zookeeper:3.4
  ports:
    - 2181:2181
kafka:
  image: ches/kafka:latest
  ports:
    - 9092:9092
  links:
    - zookeeper
myDpm:
  image: dpm-image:latest
  ports:
    - 9000:9000
  links:
    - kafka
mySql:
  image: mysql:latest
  environment:
    MYSQL_USERNAME: root
    MYSQL_ROOT_PASSWORD: root
myMc3:
  image: mc3-v3:3.0
  ports:
    - 9001:9000
  links:
    - mySql
  environment:
    runMode: dev
myElastic:
  image: elasticsearch:2.4.0
  ports:
    - 9200:9200
You can also specify the link with an alias like so:
...
myMc3:
  image: mc3-v3:3.0
  ports:
    - 9001:9000
  links:
    - mySql:mysqldb
  environment:
    runMode: dev
...
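Inside myMc3 the database is then reachable under the alias hostname rather than the service name, e.g. (illustrative, assuming the mysql client is available in the image):

mysql -h mysqldb -P 3306 -u root -proot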
Links are to the service name, not to the image name:
zookeeper:
  image: zookeeper:3.4
  ports:
    - 2181:2181
kafka:
  image: ches/kafka:latest
  ports:
    - 9092:9092
  links:
    - zookeeper
myDpm:
  image: dpm-image:latest
  ports:
    - 9000:9000
  links:
    - kafka
mySql:
  image: mysql:latest
  environment:
    MYSQL_USERNAME: root
    MYSQL_ROOT_PASSWORD: root
myMc3:
  image: mc3-v3:3.0
  ports:
    - 9001:9000
  links:
    - mySql
  environment:
    runMode: dev
myElastic:
  image: elasticsearch:2.4.0
  ports:
    - 9200:9200
So, for example, you can point to Zookeeper from Kafka like this:
zookeeper:2181
PS: You don't need to expose ports if you only use inter-container connections (as in the example above). You expose ports when you need to reach a service's port from outside the containers, e.g. through your localhost.
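For instance, in this minimal sketch (my-app is a hypothetical client image) the app reaches the broker at kafka:9092 even though neither service publishes a port:

zookeeper:
  image: zookeeper:3.4
kafka:
  image: ches/kafka:latest
  links:
    - zookeeper
app:
  image: my-app:latest
  links:
    - kafka
  environment:
    KAFKA_URL: kafka:9092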
I have written a docker-compose.yml file to create the following containers:
Confluent-Zookeeper
Confluent-Kafka
Confluent-Schema Registry
I want a single docker-compose file to spin up the necessary containers, expose the required ports, and interconnect the dependent containers. The goal is to have the whole stack come up with a single docker-compose up.
I am using the official confluent images from Docker Hub.
My docker-compose file looks like this:
zookeeper:
  image: confluent/zookeeper
  container_name: confluent-zookeeper
  hostname: zookeeper
  environment:
    ZOOKEEPER_CLIENT_PORT: 2181
  ports:
    - "2181:2181"
kafka:
  environment:
    KAFKA_ZOOKEEPER_CONNECTION_STRING: zookeeper:2181
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
  image: confluent/kafka
  container_name: confluent-kafka
  hostname: kafka
  links:
    - zookeeper
  ports:
    - "9092:9092"
schema-registry:
  image: confluent/schema-registry
  container_name: confluent-schema_registry
  environment:
    SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: zookeeper:2181
    SCHEMA_REGISTRY_HOSTNAME: schema-registry
    SCHEMA_REGISTRY_LISTENERS: http://schema-registry:8081
    SCHEMA_REGISTRY_DEBUG: 'true'
    SCHEMA_REGISTRY_KAFKASTORE_TOPIC_REPLICATION_FACTOR: '1'
  links:
    - kafka
    - zookeeper
  ports:
    - "8081:8081"
Now when I run docker-compose up, all these containers are created and launched, but the Schema Registry container exits immediately. docker logs gives the following output:
(io.confluent.kafka.schemaregistry.rest.SchemaRegistryConfig:135)
[2017-05-17 06:06:33,415] ERROR Server died unexpectedly: (io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain:51)
org.apache.kafka.common.config.ConfigException: Only plaintext and SSL Kafka endpoints are supported and none are configured.
at io.confluent.kafka.schemaregistry.storage.KafkaStore.getBrokerEndpoints(KafkaStore.java:254)
at io.confluent.kafka.schemaregistry.storage.KafkaStore.<init>(KafkaStore.java:111)
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.<init>(KafkaSchemaRegistry.java:136)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:53)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:37)
at io.confluent.rest.Application.createServer(Application.java:117)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain.main(SchemaRegistryMain.java:43)
I searched for this issue but nothing helped. I tried various other configurations, like providing KAFKA_ADVERTISED_HOSTNAME, changing the SCHEMA_REGISTRY_LISTENERS value, etc., but no luck.
Can anybody point out the exact configuration issue that is causing the Schema Registry container to fail?
Those are old, deprecated Docker images. Use the latest supported images from confluentinc: https://hub.docker.com/u/confluentinc/
You can find a full compose file here: confluentinc/cp-docker-images
You're missing the hostname entry (hostname: schema-registry) in the failing container. By default, Docker populates a container's /etc/hosts with the linked containers' aliases and names, plus the container's own hostname.
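Applied to the compose file from the question, that is a one-line addition to the failing service:

schema-registry:
  image: confluent/schema-registry
  container_name: confluent-schema_registry
  hostname: schema-registry   # the missing entry
  ...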
The question is old, but it might be helpful to leave a solution that worked for me. I am using docker-compose:
version: '3.3'
services:
  zookeeper:
    image: confluent/zookeeper:3.4.6-cp1
    hostname: "zookeeper"
    networks:
      - test-net
    ports:
      - 2181:2181
    environment:
      zk_id: "1"
  kafka:
    image: confluent/kafka:0.10.0.0-cp1
    hostname: "kafka"
    depends_on:
      - zookeeper
    networks:
      - test-net
    ports:
      - 9092:9092
    environment:
      KAFKA_ADVERTISED_HOST_NAME: "kafka"
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
      KAFKA_BROKER_ID: "0"
      KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"
  schema-registry:
    image: confluent/schema-registry:3.0.0
    hostname: "schema-registry"
    depends_on:
      - kafka
      - zookeeper
    networks:
      - test-net
    ports:
      - 8081:8081
    environment:
      SR_HOSTNAME: schema-registry
      SR_LISTENERS: http://schema-registry:8081
      SR_DEBUG: 'true'
      SR_KAFKASTORE_TOPIC_REPLICATION_FACTOR: '1'
      SR_KAFKASTORE_TOPIC_SERVERS: PLAINTEXT://kafka:9092
networks:
  test-net:
    driver: bridge