Kafka docker - ERROR: "Unable to find leader for partition 0" - docker

Below is part of my YML file for the ZooKeeper and Kafka Docker services.
I've added the KAFKA_CREATE_TOPICS variable to create topics at startup, but it takes a long time to create all 6 topics, so every time I bring the containers down and up I have to wait until all topics are created.
*Creating the topics from a KafkaProducer is not an option for me.
When Kafka/ZooKeeper come up, I want them to remember all topics, so I added volumes to ZooKeeper. That worked, and the topics now exist on startup, but I cannot consume from any topic because I get the following error:
"Unable to find leader for partition 0"
zoo1:
  image: xx.xx.xx.xx:5002/zookeeper
  hostname: zoo1
  ports:
    - 2181:2181
  environment:
    ZOO_MY_ID: 1
    ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
  networks:
    - xxnet
  deploy:
    restart_policy:
      condition: on-failure
  logging:
    driver: json-file
    options:
      max-size: 50m
  volumes:
    - /home/docker-volumes/zoo1-conf/:/conf
    - /home/docker-volumes/zoo1-data/:/data
    - /home/docker-volumes/zoo1-datalog/:/datalog
kafka:
  image: xx.xx.xx.xx:5002/wurstmeister/kafka
  ports:
    - 9092:9092
  environment:
    KAFKA_ZOOKEEPER_CONNECT: zoo1:2181
    KAFKA_ADVERTISED_HOST_NAME: xx.xx.xx.xx
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
    KAFKA_ZOOKEEPER_CONNECTION_TIMEOUT_MS: 1600000
    #KAFKA_AUTO_CREATE_TOPICS_ENABLE=true
    KAFKA_LOG_RETENTION_HOURS: 0
    KAFKA_LOG_RETENTION_MINUTES: 0
    KAFKA_LOG_RETENTION_MS: 120000
    RETENTION.MS: 900000
    ADVERTISED_HOST: kafkaserver
    ADVERTISED_PORT: 9092
    KAFKA_SEGMENT_BYTES: 1073741824
    KAFKA_RETENTION_CHECK_INTERVAL_MS: 300000
    KAFKA_CREATE_TOPICS: topic1:1:1,topic2:1:1,topic3:1:1,topic4:1:1,topic5:1:1,topic6:1:1
  networks:
    - xxnet
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - /home/docker-volumes/kafka-logs/:/tmp/kafka-logs
    - /home/docker-volumes/kafka-logs/kafka:/kafka
  deploy:
    restart_policy:
      condition: on-failure
  logging:
    driver: json-file
    options:
      max-size: 50m
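For reference, the KAFKA_CREATE_TOPICS value above is a comma-separated list of topic:partitions:replicas entries. As a quick sanity check of such a spec before starting the stack, here is a minimal sketch in stdlib Python (parse_create_topics is a hypothetical helper for illustration, not part of the wurstmeister image):

```python
# Minimal sketch: parse a KAFKA_CREATE_TOPICS-style spec such as
# "topic1:1:1,topic2:1:1" into (name, partitions, replication_factor)
# tuples. Hypothetical helper for illustration only.
from typing import List, Tuple

def parse_create_topics(spec: str) -> List[Tuple[str, int, int]]:
    topics = []
    for entry in spec.split(","):
        name, partitions, replicas = entry.strip().split(":")
        topics.append((name, int(partitions), int(replicas)))
    return topics

print(parse_create_topics("topic1:1:1,topic2:1:1"))
```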
Thanks,
Larry

Related

Connecting Presto to Kafka fails - Catalog 'kafka' does not exist

I tried to do something similar to the instructions outlined here, except that in my case I wanted to start Presto and Kafka in Docker using docker-compose.
So my docker-compose.yaml looks like this:
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    ports:
      - 22181:2181
    networks:
      - shared
  kafka1:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper
    ports:
      - 29092:29092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka1:9092,PLAINTEXT_HOST://localhost:29092,LISTENER_DOCKER_INTERNAL://kafka1:19092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT,LISTENER_DOCKER_INTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    networks:
      - shared
  kafdrop:
    image: obsidiandynamics/kafdrop:latest
    restart: "no"
    ports:
      - "8089:9000"
    environment:
      KAFKA_BROKERCONNECT: "kafka1:19092"
    depends_on:
      - "kafka1"
    networks:
      - shared
  presto:
    image: ahanaio/prestodb-sandbox:latest
    ports:
      - 8087:8080
    volumes:
      - ./presto/kafka.properties:/etc/catalog/kafka.properties
    networks:
      - shared
networks:
  shared:
    name: kappa-playground
    driver: bridge
The mounted file kafka.properties has the following content:
connector.name=kafka
kafka.nodes=kafka1:19092
kafka.table-names=example_topic
I ensure Kafka has the topic created with the following little script:
# requires kafka-python
from kafka import KafkaClient
from kafka.admin import KafkaAdminClient, NewTopic

client = KafkaClient(bootstrap_servers='localhost:29092')
admin_client = KafkaAdminClient(
    bootstrap_servers="localhost:29092",
    client_id='setup'
)

# refresh cluster metadata before reading the topic list
future = client.cluster.request_update()
client.poll(future=future)

metadata = client.cluster
topics = metadata.topics()
if len(topics) > 0:
    print("topics: " + " ".join(topics))
else:
    print("no topics exist yet")

if "example_topic" not in topics:
    topic_list = [NewTopic(name="example_topic", num_partitions=1, replication_factor=1)]
    admin_client.create_topics(new_topics=topic_list, validate_only=False)
I can verify the topic "example_topic" exists with Kafdrop.
Now I try to verify that Presto can read the topics from Kafka like this:
presto --server=localhost:8087 --catalog kafka --schema default
presto:default> SHOW TABLES;
Which shows the following error:
Query 20220622_080948_00005_t2k7a failed: line 1:1: Catalog 'kafka' does not exist
What is going wrong here?
Found the issue. The kafka.properties file was mounted to the wrong path.
It should rather be:
presto:
  image: ahanaio/prestodb-sandbox:latest
  ports:
    - 8087:8080
  volumes:
    - ./presto/kafka.properties:/opt/presto-server/etc/catalog/kafka.properties
  networks:
    - shared
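Presto names each catalog after its properties file (kafka.properties yields the kafka catalog), and the file itself is plain key=value lines. A minimal sketch of reading such a file in stdlib Python (parse_catalog_properties is a hypothetical helper for illustration):

```python
# Minimal sketch: parse a Presto catalog properties file (plain
# key=value lines, '#' comments) into a dict. Hypothetical helper.
def parse_catalog_properties(text: str) -> dict:
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            key, _, value = line.partition("=")
            props[key.strip()] = value.strip()
    return props

example = """connector.name=kafka
kafka.nodes=kafka1:19092
kafka.table-names=example_topic"""
print(parse_catalog_properties(example)["connector.name"])
```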

kafka broker connection failed when create new topic with a cluster of 3 brokers

I'm trying to set up a Kafka cluster with 3 brokers on Docker.
The problem: whenever I do an operation (i.e. create/list/delete topics), one broker always fails to connect and its Docker container restarts. This doesn't happen with a cluster of 2 brokers or a single broker.
My steps to reproduce:
Run docker-compose up
Open a shell in one of the Kafka containers and create a topic: kafka-topics --bootstrap-server ":9092" --create --topic topic-name --partitions 3 --replication-factor 3
After this, one random broker is disconnected and removed from the cluster. Sometimes the response to the above command is an error saying that the replication factor cannot be larger than 2 (since one broker has been removed from the cluster).
I'm new to Kafka. I think I'm just making some silly mistake, but I don't have any clue what it is. I've searched through the docs but haven't found anything yet.
Here is my docker-compose file:
version: "3.9"
networks:
  kafka-cluster:
    driver: bridge
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    environment:
      # ZOOKEEPER_SERVER_ID: 1
      ZOOKEEPER_CLIENT_PORT: 2181
      # ZOOKEEPER_TICK_TIME: 2000
      # ZOOKEEPER_SERVERS: "zookeeper:22888:23888"
      KAFKA_OPTS: "-Dzookeeper.4lw.commands.whitelist=*"
    ports:
      - 2181:2181
    restart: unless-stopped
    networks:
      - kafka-cluster
  kafka1:
    image: confluentinc/cp-kafka:latest
    container_name: kafka1
    depends_on:
      - zookeeper
    ports:
      - "9093:9093"
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENERS: CLIENT://:9092,EXTERNAL://:9093
      KAFKA_ADVERTISED_LISTENERS: CLIENT://kafka1:9092,EXTERNAL://localhost:9093
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CLIENT:PLAINTEXT,EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: CLIENT
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
    restart: unless-stopped
    networks:
      - kafka-cluster
  kafka2:
    image: confluentinc/cp-kafka:latest
    container_name: kafka2
    depends_on:
      - zookeeper
    ports:
      - "9094:9094"
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENERS: CLIENT://:9092,EXTERNAL://:9094
      KAFKA_ADVERTISED_LISTENERS: CLIENT://kafka2:9092,EXTERNAL://localhost:9094
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CLIENT:PLAINTEXT,EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: CLIENT
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
    restart: unless-stopped
    networks:
      - kafka-cluster
  kafka3:
    image: confluentinc/cp-kafka:latest
    container_name: kafka3
    depends_on:
      - zookeeper
    ports:
      - "9095:9095"
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENERS: CLIENT://:9092,EXTERNAL://:9095
      KAFKA_ADVERTISED_LISTENERS: CLIENT://kafka3:9092,EXTERNAL://localhost:9095
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CLIENT:PLAINTEXT,EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: CLIENT
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
    restart: unless-stopped
    networks:
      - kafka-cluster
  kafdrop:
    image: obsidiandynamics/kafdrop:latest
    container_name: kafdrop
    ports:
      - "9000:9000"
    environment:
      - KAFKA_BROKERCONNECT=kafka1:9092,kafka2:9092,kafka3:9092
      - JVM_OPTS="-Xms32M -Xmx64M"
      - SERVER_SERVLET_CONTEXTPATH="/"
    depends_on:
      - kafka1
    networks:
      - kafka-cluster
Here is the error log on the other 2 brokers:
[2022-01-17 04:32:40,078] WARN [ReplicaFetcher replicaId=1002, leaderId=1001, fetcherId=0] Error in response for fetch request (type=FetchRequest, replicaId=1002, maxWait=500, minBytes=1, maxBytes=10485760, fetchData={test-topic-3-1=PartitionData(fetchOffset=0, logStartOffset=0, maxBytes=1048576, currentLeaderEpoch=Optional[0], lastFetchedEpoch=Optional.empty), test-topic-2-1=PartitionData(fetchOffset=0, logStartOffset=0, maxBytes=1048576, currentLeaderEpoch=Optional[0], lastFetchedEpoch=Optional.empty)}, isolationLevel=READ_UNCOMMITTED, toForget=, metadata=(sessionId=28449961, epoch=INITIAL), rackId=) (kafka.server.ReplicaFetcherThread)
java.io.IOException: Connection to kafka1:9092 (id: 1001 rack: null) failed.
at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:71)
at kafka.server.ReplicaFetcherBlockingSend.sendRequest(ReplicaFetcherBlockingSend.scala:104)
at kafka.server.ReplicaFetcherThread.fetchFromLeader(ReplicaFetcherThread.scala:218)
at kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:321)
at kafka.server.AbstractFetcherThread.$anonfun$maybeFetch$3(AbstractFetcherThread.scala:137)
at kafka.server.AbstractFetcherThread.$anonfun$maybeFetch$3$adapted(AbstractFetcherThread.scala:136)
at scala.Option.foreach(Option.scala:437)
at kafka.server.AbstractFetcherThread.maybeFetch(AbstractFetcherThread.scala:136)
at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:119)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:96)
[2022-01-17 04:32:42,088] WARN [ReplicaFetcher replicaId=1002, leaderId=1001, fetcherId=0] Connection to node 1001 (kafka1/192.168.48.3:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2022-01-17 04:32:42,088] INFO [ReplicaFetcher replicaId=1002, leaderId=1001, fetcherId=0] Error sending fetch request (sessionId=28449961, epoch=INITIAL) to node 1001: (org.apache.kafka.clients.FetchSessionHandler)
java.io.IOException: Connection to kafka1:9092 (id: 1001 rack: null) failed.
at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:71)
at kafka.server.ReplicaFetcherBlockingSend.sendRequest(ReplicaFetcherBlockingSend.scala:104)
at kafka.server.ReplicaFetcherThread.fetchFromLeader(ReplicaFetcherThread.scala:218)
at kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:321)
at kafka.server.AbstractFetcherThread.$anonfun$maybeFetch$3(AbstractFetcherThread.scala:137)
at kafka.server.AbstractFetcherThread.$anonfun$maybeFetch$3$adapted(AbstractFetcherThread.scala:136)
at scala.Option.foreach(Option.scala:437)
at kafka.server.AbstractFetcherThread.maybeFetch(AbstractFetcherThread.scala:136)
at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:119)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:96)
Assuming you don't need host connections (since you're running the Kafka CLI commands directly in the containers), you could greatly simplify your Compose file:
Remove the host ports.
Remove the non-CLIENT listeners and stick to the defaults.
Remove the Compose network (used for debugging), since one is created automatically.
All in all, you'd end up with something like this:
x-kafka-setup: &kafka-setup
  KAFKA_CFG_ZOOKEEPER_CONNECT: zookeeper:2181
  ALLOW_PLAINTEXT_LISTENER: 'yes'
version: "3.8"
services:
  zookeeper:
    image: docker.io/bitnami/zookeeper:3.7
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
  kafka1:
    image: &broker-image docker.io/bitnami/kafka:3
    environment:
      KAFKA_BROKER_ID: 1
      <<: *kafka-setup
    depends_on:
      - zookeeper
  kafka2:
    image: *broker-image
    environment:
      KAFKA_BROKER_ID: 2
      <<: *kafka-setup
    depends_on:
      - zookeeper
  kafka3:
    image: *broker-image
    environment:
      KAFKA_BROKER_ID: 3
      <<: *kafka-setup
    depends_on:
      - zookeeper
  kafdrop:
    image: obsidiandynamics/kafdrop:latest
    ports:
      - "9000:9000"
    environment:
      KAFKA_BROKERCONNECT: kafka1:9092,kafka2:9092,kafka3:9092
      JVM_OPTS: "-Xms32M -Xmx64M"
      SERVER_SERVLET_CONTEXTPATH: /
    depends_on:
      - kafka1
      - kafka2
      - kafka3
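The x-kafka-setup anchor above merges one shared environment block into each broker, with only KAFKA_BROKER_ID varying per service. The same merge, sketched in Python for clarity (broker_env is a hypothetical illustration, not part of Compose):

```python
# Sketch of the YAML anchor/merge pattern above: one shared env
# block, one per-broker override. Hypothetical helper for illustration.
KAFKA_SETUP = {
    "KAFKA_CFG_ZOOKEEPER_CONNECT": "zookeeper:2181",
    "ALLOW_PLAINTEXT_LISTENER": "yes",
}

def broker_env(broker_id: int) -> dict:
    # the broker id is the only per-service value; the rest is shared
    return {"KAFKA_BROKER_ID": str(broker_id), **KAFKA_SETUP}

for i in (1, 2, 3):
    print(broker_env(i))
```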

.NET client unable to send messages to Kafka on Docker [duplicate]

This question already has answers here:
Connect to Kafka running in Docker
(5 answers)
Closed 1 year ago.
I have set up Zookeeper and Apache Kafka on Docker (Windows) using the following docker-compose.yml:
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    container_name: zookeeper
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    ports:
      - 2181:2181
    extra_hosts:
      - "localhost: 127.0.0.1"
    networks:
      - app-network
  kafka:
    image: confluentinc/cp-kafka:latest
    container_name: kafka
    depends_on:
      - zookeeper
    ports:
      - 9094:9092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENERS: INTERNAL://kafka:9092,OUTSIDE://localhost:9094
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka:9092,OUTSIDE://localhost:9094
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
      KAFKA_DELETE_TOPIC_ENABLE: "true"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    extra_hosts:
      - "localhost: 127.0.0.1"
    networks:
      - app-network
networks:
  app-network:
    driver: bridge
The Docker containers are up and running, and I verified Kafka's external listener via telnet (telnet localhost 9094). I followed Confluent's guide on setting these up - https://www.confluent.io/blog/kafka-listeners-explained/. I also verified Kafka's external connectivity via clients such as https://www.kafkamagic.com/.
However, my own .NET client, which tries to connect to Kafka via localhost:9094 using the Confluent.Kafka NuGet package, throws the following errors:
%3|1630480353.603|FAIL|rdkafka#producer-3| [thrd:kafka:9092/1]: kafka:9092/1: Failed to resolve 'kafka:9092': No such host is known. (after 2343ms in state CONNECT)
%3|1630480353.603|ERROR|rdkafka#producer-3| [thrd:app]: rdkafka#producer-3: kafka:9092/1: Failed to resolve 'kafka:9092': No such host is known. (after 2343ms in state CONNECT)
%3|1630480356.907|FAIL|rdkafka#producer-3| [thrd:kafka:9092/1]: kafka:9092/1: Failed to resolve 'kafka:9092': No such host is known. (after 2295ms in state CONNECT, 1 identical error(s) suppressed)
%3|1630480356.907|ERROR|rdkafka#producer-3| [thrd:app]: rdkafka#producer-3: kafka:9092/1: Failed to resolve 'kafka:9092': No such host is known. (after 2295ms in state CONNECT, 1 identical error(s) suppressed)
How come the errors are showing connectivity errors on :9092 if I specified the broker as :9094? Is there something wrong with Kafka's setup on docker-compose file?
Managed to get it working by modifying the docker-compose.yml, specifically Kafka's ports and KAFKA_ADVERTISED_LISTENERS, and removing KAFKA_LISTENERS.
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    container_name: zookeeper
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    ports:
      - 2181:2181
    extra_hosts:
      - "localhost: 127.0.0.1"
    networks:
      - app-network
  kafka:
    image: confluentinc/cp-kafka:latest
    container_name: kafka
    depends_on:
      - zookeeper
    ports:
      - 9092:9092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka:29092,OUTSIDE://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
      KAFKA_DELETE_TOPIC_ENABLE: "true"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    extra_hosts:
      - "localhost: 127.0.0.1"
    networks:
      - app-network
networks:
  app-network:
    driver: bridge
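The reason the client originally saw kafka:9092 is that the broker returns the addresses from KAFKA_ADVERTISED_LISTENERS in its metadata response, and clients then reconnect to those addresses rather than the bootstrap one. A small sketch of how such a listener string breaks down (parse_listeners is a hypothetical stdlib-only helper for illustration):

```python
# Sketch: split a KAFKA_ADVERTISED_LISTENERS value like
# "INTERNAL://kafka:29092,OUTSIDE://localhost:9092" into
# {listener_name: (host, port)}. Hypothetical helper for illustration.
def parse_listeners(value: str) -> dict:
    listeners = {}
    for part in value.split(","):
        name, hostport = part.strip().split("://")
        host, port = hostport.rsplit(":", 1)
        listeners[name] = (host, int(port))
    return listeners

print(parse_listeners("INTERNAL://kafka:29092,OUTSIDE://localhost:9092"))
```

Only the OUTSIDE entry is reachable from the host, which is why the advertised address for the external listener must be localhost.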

Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected

I'm trying to set up my cluster with ELK and Kafka inside Docker containers, but Logstash can never consume data from Kafka. The producer runs on my local machine, not inside Docker. I appreciate any help.
docker-compose:
zoo1:
  image: confluentinc/cp-zookeeper
  restart: always
  container_name: zoo1
  ports:
    - "2181:2181"
  environment:
    - ZOO_MY_ID=1
    - ZOO_SERVERS=2181
    - ZOOKEEPER_CLIENT_PORT=2181
    - ALLOW_ANONYMOUS_LOGIN=yes
kafka:
  image: confluentinc/cp-kafka
  hostname: kafka
  container_name: kafka
  depends_on:
    - zoo1
  ports:
    - "9092:9092"
  environment:
    KAFKA_BROKER_ID: 1
    KAFKA_ZOOKEEPER_CONNECT: zoo1:2181
    KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
    KAFKA_LISTENERS: PLAINTEXT_HOST://0.0.0.0:9092, PLAINTEXT://kafka:9093
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9093,PLAINTEXT_HOST://kafka:9092
logstash:
  build:
    context: logstash/
    args:
      ELK_VERSION: $ELK_VERSION
  volumes:
    - type: bind
      source: ./logstash/config/logstash.yml
      target: /usr/share/logstash/config/logstash.yml
      read_only: true
    - type: bind
      source: ./logstash/pipeline
      target: /usr/share/logstash/pipeline
      read_only: true
  ports:
    - "5044:5044"
    - "5000:5000/tcp"
    - "5000:5000/udp"
    - "9600:9600"
  environment:
    LS_JAVA_OPTS: "-Xmx256m -Xms256m"
  networks:
    - elk
  depends_on:
    - elasticsearch
    - kafka
logstash.conf:
input {
  kafka {
    topics => ["topic-ex"]
    bootstrap_servers => "localhost:9092"
  }
}
Trace:
logstash_1 | [2021-03-28T04:39:54,855][WARN ][org.apache.kafka.clients.NetworkClient][main][540d5db3f43043788c8c88c0e41536de536f338e7ba9b86852861fc54f459599] [Consumer clientId=logstash-0, groupId=logstash] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
logstash_1 | [2021-03-28T04:39:54,856][WARN ][org.apache.kafka.clients.NetworkClient][main][540d5db3f43043788c8c88c0e41536de536f338e7ba9b86852861fc54f459599] [Consumer clientId=logstash-0, groupId=logstash] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected
Edit: added logstash container and stack trace
With a Compose YAML you can no longer use localhost as the hostname. First add a network bridge to both of your services; then kafka will be your hostname: kafka:9093
I just changed bootstrap_servers in logstash.conf from "localhost:9092" to "kafka:9092".
I thought Logstash and Kafka had to connect from outside the container. Thanks for the help, everyone.

Kafka producer error "1 partitions have leader brokers without a matching listener"

First time working with Kafka and docker-compose. I'm trying to publish a message to Kafka but I get an error (see below). What is the issue?
2020-07-21 16:37:40,274 WARN [kafka-producer-network-thread | producer-1] org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater: [Producer clientId=producer-1] 1 partitions have leader brokers without a matching listener, including [demo-topic-0]
Here is my docker-compose.yml:
version: '3'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka
    links:
      - zookeeper:zk
    ports:
      - "9092:9092"
    expose:
      - "9093"
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zk:2181
      KAFKA_MESSAGE_MAX_BYTES: 2000000
      KAFKA_CREATE_TOPICS: "demo-topic:1:1"
      KAFKA_BROKER_ID: 1
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9093,PLAINTEXT_HOST://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_LISTENERS: PLAINTEXT://kafka:9093,PLAINTEXT_HOST://localhost:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    depends_on:
      - zookeeper
  player-service-ci:
    image: player/player-service:latest
    container_name: player-service-ci
    restart: unless-stopped
    volumes:
      - /tmp/app/logs:/logs
    environment:
      - "JAVA_OPTS=-Xmx256m -Xms128m"
      - "spring.profiles.active=ci"
      - "LOGS_FILENAME=player-service-logger-ci"
      - "SPRING_KAFKA_BOOTSTRAPSERVERS=kafka:9093"
    ports:
      - 17500:17500
networks:
  default:
    external:
      name: ci
My question was partially answered here: Leader brokers without a matching listener error in kafka.
docker-compose rm -sfv
The above command ultimately resolved the issue of multiple consumers.