Spring Boot & Kafka: producer throws exception with key='null' - Docker

I'm trying to use Spring Boot with Kafka and ZooKeeper in Docker:
docker-compose.yml:
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    restart: always
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka
    restart: always
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: localhost
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
docker ps output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
980e6b09f4e3 wurstmeister/kafka "start-kafka.sh" 29 minutes ago Up 29 minutes 0.0.0.0:9092->9092/tcp samplespringkafkaproducerconsumermaster_kafka_1
64519d4808aa wurstmeister/zookeeper "/bin/sh -c '/usr/sb…" 2 hours ago Up 29 minutes 22/tcp, 2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp samplespringkafkaproducerconsumermaster_zookeeper_1
docker-compose up output log:
kafka_1 | [2018-01-12 13:14:49,545] INFO Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2018-01-12 13:14:49,546] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2018-01-12 13:14:49,546] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2018-01-12 13:14:49,547] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2018-01-12 13:14:49,547] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2018-01-12 13:14:49,548] INFO Client environment:os.version=4.9.60-linuxkit-aufs (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2018-01-12 13:14:49,548] INFO Client environment:user.name=root (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2018-01-12 13:14:49,549] INFO Client environment:user.home=/root (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2018-01-12 13:14:49,549] INFO Client environment:user.dir=/ (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2018-01-12 13:14:49,552] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient#1534f01b (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2018-01-12 13:14:49,574] INFO Waiting for keeper state SyncConnected (org.I0Itec.zkclient.ZkClient)
kafka_1 | [2018-01-12 13:14:49,578] INFO Opening socket connection to server samplespringkafkaproducerconsumermaster_zookeeper_1.samplespringkafkaproducerconsumermaster_default/192.168.32.2:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
zookeeper_1 | 2018-01-12 13:14:49,591 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory#192] - Accepted socket connection from /192.168.32.3:51466
kafka_1 | [2018-01-12 13:14:49,593] INFO Socket connection established to samplespringkafkaproducerconsumermaster_zookeeper_1.samplespringkafkaproducerconsumermaster_default/192.168.32.2:2181, initiating session (org.apache.zookeeper.ClientCnxn)
zookeeper_1 | 2018-01-12 13:14:49,600 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer#928] - Client attempting to establish new session at /192.168.32.3:51466
zookeeper_1 | 2018-01-12 13:14:49,603 [myid:] - INFO [SyncThread:0:FileTxnLog#203] - Creating new log file: log.fd
zookeeper_1 | 2018-01-12 13:14:49,613 [myid:] - INFO [SyncThread:0:ZooKeeperServer#673] - Established session 0x160ea8232b00000 with negotiated timeout 6000 for client /192.168.32.3:51466
kafka_1 | [2018-01-12 13:14:49,616] INFO Session establishment complete on server samplespringkafkaproducerconsumermaster_zookeeper_1.samplespringkafkaproducerconsumermaster_default/192.168.32.2:2181, sessionid = 0x160ea8232b00000, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
kafka_1 | [2018-01-12 13:14:49,619] INFO zookeeper state changed (SyncConnected) (org.I0Itec.zkclient.ZkClient)
kafka_1 | [2018-01-12 13:14:49,992] INFO Cluster ID = Fgy9ybPPQQ-QdLINzHpmVA (kafka.server.KafkaServer)
kafka_1 | [2018-01-12 13:14:50,003] WARN No meta.properties file under dir /kafka/kafka-logs-980e6b09f4e3/meta.properties (kafka.server.BrokerMetadataCheckpoint)
kafka_1 | [2018-01-12 13:14:50,065] INFO [ThrottledRequestReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
kafka_1 | [2018-01-12 13:14:50,065] INFO [ThrottledRequestReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
kafka_1 | [2018-01-12 13:14:50,067] INFO [ThrottledRequestReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
kafka_1 | [2018-01-12 13:14:50,167] INFO Log directory '/kafka/kafka-logs-980e6b09f4e3' not found, creating it. (kafka.log.LogManager)
kafka_1 | [2018-01-12 13:14:50,183] INFO Loading logs. (kafka.log.LogManager)
kafka_1 | [2018-01-12 13:14:50,199] INFO Logs loading complete in 15 ms. (kafka.log.LogManager)
kafka_1 | [2018-01-12 13:14:50,283] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
kafka_1 | [2018-01-12 13:14:50,291] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
kafka_1 | [2018-01-12 13:14:50,633] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
kafka_1 | [2018-01-12 13:14:50,639] INFO [SocketServer brokerId=1005] Started 1 acceptor threads (kafka.network.SocketServer)
kafka_1 | [2018-01-12 13:14:50,673] INFO [ExpirationReaper-1005-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_1 | [2018-01-12 13:14:50,674] INFO [ExpirationReaper-1005-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_1 | [2018-01-12 13:14:50,675] INFO [ExpirationReaper-1005-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_1 | [2018-01-12 13:14:50,691] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
kafka_1 | [2018-01-12 13:14:50,753] INFO [ExpirationReaper-1005-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_1 | [2018-01-12 13:14:50,757] INFO [ExpirationReaper-1005-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_1 | [2018-01-12 13:14:50,762] INFO [ExpirationReaper-1005-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_1 | [2018-01-12 13:14:50,777] INFO Creating /controller (is it secure? false) (kafka.utils.ZKCheckedEphemeral)
kafka_1 | [2018-01-12 13:14:50,791] INFO [GroupCoordinator 1005]: Starting up. (kafka.coordinator.group.GroupCoordinator)
kafka_1 | [2018-01-12 13:14:50,791] INFO Result of znode creation is: OK (kafka.utils.ZKCheckedEphemeral)
kafka_1 | [2018-01-12 13:14:50,793] INFO [GroupCoordinator 1005]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
kafka_1 | [2018-01-12 13:14:50,798] INFO [GroupMetadataManager brokerId=1005] Removed 0 expired offsets in 5 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-01-12 13:14:50,811] INFO [ProducerId Manager 1005]: Acquired new producerId block (brokerId:1005,blockStartProducerId:5000,blockEndProducerId:5999) by writing to Zk with path version 6 (kafka.coordinator.transaction.ProducerIdManager)
kafka_1 | [2018-01-12 13:14:50,848] INFO [TransactionCoordinator id=1005] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
kafka_1 | [2018-01-12 13:14:50,850] INFO [Transaction Marker Channel Manager 1005]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
kafka_1 | [2018-01-12 13:14:50,850] INFO [TransactionCoordinator id=1005] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
kafka_1 | [2018-01-12 13:14:50,949] INFO Creating /brokers/ids/1005 (is it secure? false) (kafka.utils.ZKCheckedEphemeral)
zookeeper_1 | 2018-01-12 13:14:50,952 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor#649] - Got user-level KeeperException when processing sessionid:0x160ea8232b00000 type:create cxid:0x70 zxid:0x102 txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NodeExists for /brokers
zookeeper_1 | 2018-01-12 13:14:50,952 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor#649] - Got user-level KeeperException when processing sessionid:0x160ea8232b00000 type:create cxid:0x71 zxid:0x103 txntype:-1 reqpath:n/a Error Path:/brokers/ids Error:KeeperErrorCode = NodeExists for /brokers/ids
kafka_1 | [2018-01-12 13:14:50,957] INFO Result of znode creation is: OK (kafka.utils.ZKCheckedEphemeral)
kafka_1 | [2018-01-12 13:14:50,959] INFO Registered broker 1005 at path /brokers/ids/1005 with addresses: EndPoint(localhost,9092,ListenerName(PLAINTEXT),PLAINTEXT) (kafka.utils.ZkUtils)
kafka_1 | [2018-01-12 13:14:50,961] WARN No meta.properties file under dir /kafka/kafka-logs-980e6b09f4e3/meta.properties (kafka.server.BrokerMetadataCheckpoint)
kafka_1 | [2018-01-12 13:14:50,992] INFO Kafka version : 1.0.0 (org.apache.kafka.common.utils.AppInfoParser)
kafka_1 | [2018-01-12 13:14:50,993] INFO Kafka commitId : aaa7af6d4a11b29d (org.apache.kafka.common.utils.AppInfoParser)
kafka_1 | [2018-01-12 13:14:51,004] INFO [KafkaServer id=1005] started (kafka.server.KafkaServer)
zookeeper_1 | 2018-01-12 13:14:51,263 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor#649] - Got user-level KeeperException when processing sessionid:0x160ea8232b00000 type:delete cxid:0xe3 zxid:0x105 txntype:-1 reqpath:n/a Error Path:/admin/preferred_replica_election Error:KeeperErrorCode = NoNode for /admin/preferred_replica_election
kafka_1 | [2018-01-12 13:24:50,793] INFO [GroupMetadataManager brokerId=1005] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-01-12 13:34:50,795] INFO [GroupMetadataManager brokerId=1005] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
Kafka Maven dependency in Producer and Consumer:
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>1.5.9.RELEASE</version>
    <relativePath/>
</parent>
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>
application.properties in Producer:
spring.kafka.producer.bootstrap-servers=0.0.0.0:9092
spring.kafka.consumer.topic=kafka_topic
server.port=8080
application.properties in Consumer:
spring.kafka.consumer.bootstrap-servers=0.0.0.0:9092
spring.kafka.consumer.group-id=WorkUnitApp
spring.kafka.consumer.topic=kafka_topic
server.port=8081
Consumer:
@Component
public class Consumer {

    private static final Logger LOGGER = LoggerFactory.getLogger(Consumer.class);

    @KafkaListener(topics = "${spring.kafka.consumer.topic}")
    public void receive(ConsumerRecord<?, ?> consumerRecord) {
        LOGGER.info("received payload='{}'", consumerRecord.toString());
    }
}
Producer:
@Component
public class Producer {

    private static final Logger LOGGER = LoggerFactory.getLogger(Producer.class);

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    public void send(String topic, String payload) {
        LOGGER.info("sending payload='{}' to topic='{}'", payload, topic);
        kafkaTemplate.send(topic, payload);
    }
}
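Note that kafkaTemplate.send() is asynchronous: it returns immediately, and any failure (like the TimeoutException below) only shows up later on the producer's network thread. A minimal sketch of the same send() with a callback attached, assuming the ListenableFuture API of spring-kafka 1.x (which the 1.5.9.RELEASE parent pulls in), so failures are logged at the call site:

import org.springframework.kafka.support.SendResult;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.util.concurrent.ListenableFutureCallback;

public void send(String topic, String payload) {
    LOGGER.info("sending payload='{}' to topic='{}'", payload, topic);
    // send() returns a future; without a callback, errors are only reported
    // by the LoggingProducerListener, as in the exception log below
    ListenableFuture<SendResult<String, String>> future = kafkaTemplate.send(topic, payload);
    future.addCallback(new ListenableFutureCallback<SendResult<String, String>>() {
        @Override
        public void onSuccess(SendResult<String, String> result) {
            LOGGER.info("sent ok, offset={}", result.getRecordMetadata().offset());
        }

        @Override
        public void onFailure(Throwable ex) {
            LOGGER.error("send failed", ex); // the TimeoutException surfaces here
        }
    });
}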
ConsumerConfig log:
2018-01-12 15:25:48.220 INFO 20919 --- [ main] o.a.k.clients.consumer.ConsumerConfig : ConsumerConfig values:
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [0.0.0.0:9092]
check.crcs = true
client.id = consumer-1
connections.max.idle.ms = 540000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = WorkUnitApp
heartbeat.interval.ms = 3000
interceptor.classes = null
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.ms = 50
request.timeout.ms = 305000
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
session.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
ProducerConfig log:
2018-01-12 15:26:27.956 INFO 20924 --- [nio-8080-exec-1] o.a.k.clients.producer.ProducerConfig : ProducerConfig values:
acks = 1
batch.size = 16384
block.on.buffer.full = false
bootstrap.servers = [0.0.0.0:9092]
buffer.memory = 33554432
client.id = producer-1
compression.type = none
connections.max.idle.ms = 540000
interceptor.classes = null
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.fetch.timeout.ms = 60000
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 0
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
timeout.ms = 30000
value.serializer = class org.apache.kafka.common.serialization.StringSerializer
When I try to send a message, I get an exception:
producer.send("kafka_topic", "test")
exception log:
2018-01-12 15:26:27.975 INFO 20924 --- [nio-8080-exec-1] o.a.kafka.common.utils.AppInfoParser : Kafka version : 0.10.1.1
2018-01-12 15:26:27.975 INFO 20924 --- [nio-8080-exec-1] o.a.kafka.common.utils.AppInfoParser : Kafka commitId : f10ef2720b03b247
2018-01-12 15:26:58.152 ERROR 20924 --- [ad | producer-1] o.s.k.support.LoggingProducerListener : Exception thrown when sending a message with key='null' and payload='test' to topic kafka_topic:
org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for kafka_topic-0 due to 30033 ms has passed since batch creation plus linger time
How can I fix it?

The problem is not with sending the key as null; the connection to the broker may not be established.
Try using a local Kafka installation.
If you are on a Mac, Docker for Mac has some networking limitations:
https://docs.docker.com/docker-for-mac/networking/#known-limitations-use-cases-and-workarounds
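To check whether the broker is reachable at all from the machine running the producer, independently of Spring, a quick probe like the following can help. This is only a sketch, and it assumes kafka-clients 0.11+ on the classpath (AdminClient did not exist in the 0.10.x client shown in the logs):

import java.util.Properties;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class BrokerProbe {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, "5000");
        try (AdminClient admin = AdminClient.create(props)) {
            // listTopics() forces a metadata round-trip; a timeout here means the
            // broker's advertised listener is not reachable from this host
            System.out.println(admin.listTopics().names().get(10, TimeUnit.SECONDS));
        }
    }
}

If this times out, the problem is the broker's advertised address rather than the Spring configuration.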

I ran into the same issue. The issue was with my docker-compose file. I'm not 100% sure, but I think KAFKA_ADVERTISED_HOST_NAME and KAFKA_ADVERTISED_LISTENERS both need to reference localhost. My working compose file:
version: '2'
networks:
  sb_net:
    driver: bridge
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    hostname: zookeeper
    networks:
      - sb_net
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  kafka:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper
    networks:
      - sb_net
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_HOST_NAME: localhost
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1

My error was a careless mistake: I needed to add a link:
links:
  - zookeeper:zookeeper
full docker-compose.yml:
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    container_name: zookeeper
    restart: always
    ports:
      - 2181:2181
  kafka:
    image: wurstmeister/kafka
    container_name: kafka
    restart: always
    ports:
      - 9092:9092
    depends_on:
      - zookeeper
    links:
      - zookeeper:zookeeper
    environment:
      KAFKA_ADVERTISED_HOST_NAME: localhost
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181

I got the same problem; Kafka only allows 127.0.0.1/localhost by default.
My solution:
Add this line to Kafka's server.properties, and restart the Kafka service:
listeners=PLAINTEXT://192.168.31.72:9092

Related

Producer cannot connect to kafka in docker compose in Docker Desktop on mac

I'm working on a Mac with Docker Desktop. I'm trying to run wurstmeister/kafka from Docker Compose and connect a producer to it.
This is my docker-compose.yml:
version: '3.8'
services:
  zookeeper:
    container_name: zookeeper
    image: zookeeper:3.7.0
    ports:
      - "2181:2181"
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=zookeeper:2888:3888;2181
    restart: on-failure
  kafka:
    container_name: kafka
    image: wurstmeister/kafka:2.13-2.7.0
    ports:
      - "9092:9092"
    environment:
      KAFKA_LISTENERS: INTERNAL://kafka:19092,EXTERNAL://localhost:9092
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka:19092,EXTERNAL://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"
      KAFKA_BROKER_ID: 1
    restart: on-failure
    depends_on:
      - zookeeper
Then I have a producer connecting to localhost:9092 and sending a simple message. The producer itself works fine - I tested it with another Kafka image, confluentinc/cp-kafka:6.2.0.
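For reference, since the producer code itself isn't shown in the question, a minimal stand-in for "a producer connecting to localhost:9092" might look like this (plain kafka-clients, matching the 2.7.0 client in the logs; the topic name here is just a placeholder):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SimpleProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.CLIENT_ID_CONFIG, "simple-producer");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test-topic", "a simple message"));
            producer.flush(); // make sure the record actually leaves the client
        }
    }
}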
When I try to use the producer with wurstmeister/kafka, I get a lot of these errors:
22:07:13.421 [kafka-producer-network-thread | simple-producer] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=simple-producer] Initialize connection to node localhost:9092 (id: -1 rack: null) for sending metadata request
22:07:13.421 [kafka-producer-network-thread | simple-producer] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=simple-producer] Initiating connection to node localhost:9092 (id: -1 rack: null) using address localhost/127.0.0.1
22:07:13.421 [kafka-producer-network-thread | simple-producer] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=simple-producer] Created socket with SO_RCVBUF = 326640, SO_SNDBUF = 146988, SO_TIMEOUT = 0 to node -1
22:07:13.421 [kafka-producer-network-thread | simple-producer] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=simple-producer] Completed connection to node -1. Fetching API versions.
22:07:13.421 [kafka-producer-network-thread | simple-producer] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=simple-producer] Initiating API versions fetch from node -1.
22:07:13.422 [kafka-producer-network-thread | simple-producer] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=simple-producer] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=simple-producer, correlationId=20) and timeout 30000 to node -1: {client_software_name=apache-kafka-java,client_software_version=2.7.0,_tagged_fields={}}
22:07:13.423 [kafka-producer-network-thread | simple-producer] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=simple-producer] Connection with localhost/127.0.0.1 disconnected
java.io.EOFException: null
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:447)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:397)
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576)
at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:561)
at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:325)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:240)
at java.base/java.lang.Thread.run(Thread.java:829)
22:07:13.424 [kafka-producer-network-thread | simple-producer] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=simple-producer] Node -1 disconnected.
22:07:13.424 [kafka-producer-network-thread | simple-producer] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=simple-producer] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected
Why does this happen? What is the cause of this error? And how can I make it work?
EDIT: added Kafka container logs.
The last Kafka container logs are below, and no new logs appear when I try to connect the producer:
[2021-09-09 21:07:28,227] INFO Kafka version: 2.7.0 (org.apache.kafka.common.utils.AppInfoParser)
[2021-09-09 21:07:28,230] INFO Kafka commitId: 448719dc99a19793 (org.apache.kafka.common.utils.AppInfoParser)
[2021-09-09 21:07:28,231] INFO Kafka startTimeMs: 1631221648211 (org.apache.kafka.common.utils.AppInfoParser)
[2021-09-09 21:07:28,238] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
[2021-09-09 21:07:28,365] INFO [broker-1-to-controller-send-thread]: Recorded new controller, from now on will use broker 1 (kafka.server.BrokerToControllerRequestThread)

Alfresco deployment with Docker on a system that has a RabbitMQ instance running

I am trying to deploy Alfresco Community Edition with its official docker-compose file. The problem I am facing is that there is a RabbitMQ instance running on the host system (with default configs), and I think ActiveMQ and RabbitMQ interfere with each other, causing the Alfresco Content Service (ACS) to get stuck at "Starting 'Messaging' subsystem, ID: [Messaging, default]", although ActiveMQ itself seems to run properly.
This is my docker-compose.yml (I changed the ActiveMQ ports):
version: "2"
services:
alfresco:
image: alfresco/alfresco-content-repository-community:6.2.0-ga
mem_limit: 1500m
environment:
JAVA_OPTS: "
-Ddb.driver=org.postgresql.Driver
-Ddb.username=alfresco
-Ddb.password=alfresco
-Ddb.url=jdbc:postgresql://postgres:5432/alfresco
-Dsolr.host=solr6
-Dsolr.port=8983
-Dsolr.secureComms=none
-Dsolr.base.url=/solr
-Dindex.subsystem.name=solr6
-Dshare.host=127.0.0.1
-Dshare.port=8080
-Dalfresco.host=localhost
-Dalfresco.port=8080
-Daos.baseUrlOverwrite=http://localhost:8080/alfresco/aos
-Dmessaging.broker.url=\"failover:(nio://activemq:11617)?timeout=3000&jms.useCompression=true\"
-Ddeployment.method=DOCKER_COMPOSE
-Dlocal.transform.service.enabled=true
-DlocalTransform.pdfrenderer.url=http://alfresco-pdf-renderer:8090/
-DlocalTransform.imagemagick.url=http://imagemagick:8090/
-DlocalTransform.libreoffice.url=http://libreoffice:8090/
-DlocalTransform.tika.url=http://tika:8090/
-DlocalTransform.misc.url=http://transform-misc:8090/
-Dlegacy.transform.service.enabled=true
-Dalfresco-pdf-renderer.url=http://alfresco-pdf-renderer:8090/
-Djodconverter.url=http://libreoffice:8090/
-Dimg.url=http://imagemagick:8090/
-Dtika.url=http://tika:8090/
-Dtransform.misc.url=http://transform-misc:8090/
-Dcsrf.filter.enabled=false
-Xms1500m -Xmx1500m
"
alfresco-pdf-renderer:
image: alfresco/alfresco-pdf-renderer:2.1.0
mem_limit: 1g
environment:
JAVA_OPTS: " -Xms256m -Xmx512m"
ports:
- 8090:8090
imagemagick:
image: alfresco/alfresco-imagemagick:2.1.0
mem_limit: 1g
environment:
JAVA_OPTS: " -Xms256m -Xmx512m"
ports:
- 8091:8090
libreoffice:
image: alfresco/alfresco-libreoffice:2.1.0
mem_limit: 1g
environment:
JAVA_OPTS: " -Xms256m -Xmx512m"
ports:
- 8092:8090
tika:
image: alfresco/alfresco-tika:2.1.0
mem_limit: 1g
environment:
JAVA_OPTS: " -Xms256m -Xmx512m"
ports:
- 8093:8090
transform-misc:
image: alfresco/alfresco-transform-misc:2.1.0
mem_limit: 1g
environment:
JAVA_OPTS: " -Xms256m -Xmx512m"
ports:
- 8094:8090
share:
image: alfresco/alfresco-share:6.2.0
mem_limit: 1g
environment:
REPO_HOST: "alfresco"
REPO_PORT: "8080"
JAVA_OPTS: "
-Xms500m
-Xmx500m
-Dalfresco.host=localhost
-Dalfresco.port=8080
-Dalfresco.context=alfresco
-Dalfresco.protocol=http
"
postgres:
image: postgres:11.4
mem_limit: 512m
environment:
- POSTGRES_PASSWORD=alfresco
- POSTGRES_USER=alfresco
- POSTGRES_DB=alfresco
command: postgres -c max_connections=300 -c log_min_messages=LOG
ports:
- 5432:5432
solr6:
image: alfresco/alfresco-search-services:1.4.0
mem_limit: 2g
environment:
#Solr needs to know how to register itself with Alfresco
- SOLR_ALFRESCO_HOST=alfresco
- SOLR_ALFRESCO_PORT=8080
#Alfresco needs to know how to call solr
- SOLR_SOLR_HOST=solr6
- SOLR_SOLR_PORT=8983
#Create the default alfresco and archive cores
- SOLR_CREATE_ALFRESCO_DEFAULTS=alfresco,archive
#HTTP by default
- ALFRESCO_SECURE_COMMS=none
- "SOLR_JAVA_MEM=-Xms2g -Xmx2g"
ports:
- 8083:8983 #Browser port
activemq:
image: alfresco/alfresco-activemq:5.15.8
mem_limit: 1g
ports:
- 1162:8161 # Web Console
- 1673:5672 # AMQP
- 11617:61616 # OpenWire
- 11614:61613 # STOMP
proxy:
image: alfresco/acs-community-ngnix:1.0.0
mem_limit: 128m
depends_on:
- alfresco
ports:
- 8080:8080
links:
- alfresco
- share
These are the ActiveMQ logs:
activemq_1 | INFO: Loading '/opt/activemq/bin/env'
activemq_1 | INFO: Using java '/usr/java/default/bin/java'
activemq_1 | INFO: Starting in foreground, this is just for debugging purposes (stop process by pressing CTRL+C)
activemq_1 | INFO: Creating pidfile /opt/activemq/data/activemq.pid
activemq_1 | Extensions classpath:
activemq_1 | [/opt/activemq/lib,/opt/activemq/lib/camel,/opt/activemq/lib/optional,/opt/activemq/lib/web,/opt/activemq/lib/extra]
activemq_1 | ACTIVEMQ_HOME: /opt/activemq
activemq_1 | ACTIVEMQ_BASE: /opt/activemq
activemq_1 | ACTIVEMQ_CONF: /opt/activemq/conf
activemq_1 | ACTIVEMQ_DATA: /opt/activemq/data
activemq_1 | Loading message broker from: xbean:activemq.xml
activemq_1 | INFO | Refreshing org.apache.activemq.xbean.XBeanBrokerFactory$1#73ad2d6: startup date [Mon Apr 27 09:57:23 UTC 2020]; root of context hierarchy
activemq_1 | INFO | Using Persistence Adapter: KahaDBPersistenceAdapter[/opt/activemq/data/kahadb]
activemq_1 | INFO | PListStore:[/opt/activemq/data/localhost/tmp_storage] started
activemq_1 | INFO | Apache ActiveMQ 5.15.8 (localhost, ID:7f445cd32cc5-39441-1587981447728-0:1) is starting
activemq_1 | INFO | Listening for connections at: tcp://7f445cd32cc5:61616?maximumConnections=1000&wireFormat.maxFrameSize=104857600
activemq_1 | INFO | Connector openwire started
activemq_1 | INFO | Listening for connections at: amqp://7f445cd32cc5:5672?maximumConnections=1000&wireFormat.maxFrameSize=104857600
activemq_1 | INFO | Connector amqp started
activemq_1 | INFO | Listening for connections at: stomp://7f445cd32cc5:61613?maximumConnections=1000&wireFormat.maxFrameSize=104857600
activemq_1 | INFO | Connector stomp started
activemq_1 | INFO | Listening for connections at: mqtt://7f445cd32cc5:1883?maximumConnections=1000&wireFormat.maxFrameSize=104857600
activemq_1 | INFO | Connector mqtt started
activemq_1 | INFO | Starting Jetty server
activemq_1 | INFO | Creating Jetty connector
activemq_1 | WARN | ServletContext#o.e.j.s.ServletContextHandler#8e50104{/,null,STARTING} has uncovered http methods for path: /
activemq_1 | INFO | Listening for connections at ws://7f445cd32cc5:61614?maximumConnections=1000&wireFormat.maxFrameSize=104857600
activemq_1 | INFO | Connector ws started
activemq_1 | INFO | Apache ActiveMQ 5.15.8 (localhost, ID:7f445cd32cc5-39441-1587981447728-0:1) started
activemq_1 | INFO | For help or more information please see: http://activemq.apache.org
activemq_1 | WARN | Store limit is 102400 mb (current store usage is 0 mb). The data directory: /opt/activemq/data/kahadb only has 20358 mb of usable space. - resetting to maximum available disk space: 20358 mb
activemq_1 | WARN | Temporary Store limit is 51200 mb (current store usage is 0 mb). The data directory: /opt/activemq/data only has 20358 mb of usable space. - resetting to maximum available disk space: 20358 mb
and this is the last Alfresco log line, where it gets stuck forever:
alfresco_1 | 2020-04-27 09:59:50,116 INFO [management.subsystems.ChildApplicationContextFactory] [localhost-startStop-1] Starting 'Messaging' subsystem, ID: [Messaging, default]

Kubernetes-Kafka unable to write message on topic

I am trying to write data to a Kafka topic but am stuck with some errors. Below are my configuration and error details.
Kubernetes Service:
kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kafka-service NodePort 10.105.214.246 <none> 9092:30998/TCP 17m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4d
zoo1 ClusterIP 10.101.3.128 <none> 2181/TCP,2888/TCP,3888/TCP 20m
Kubernetes Pods:
kubectl get pods
NAME READY STATUS RESTARTS AGE
kafka-broker0-69c97b67f-4pmw9 1/1 Running 1 1m
zookeeper-deployment-1-796f9d9bcc-cr756 1/1 Running 0 20m
Kafka Docker Process:
docker ps | grep kafka
f79cd0196083 wurstmeister/kafka@sha256:d04dafd2b308f26dbeed8454f67c321579c2818c1eff5e8f695e14a19b1d599b "start-kafka.sh" About a minute ago Up About a minute k8s_kafka_kafka-broker0-69c97b67f-4pmw9_default_a747d38a-0da6-11e9-bd84-fa163e7d3173_1
75393e9e25c1 k8s.gcr.io/pause-amd64:3.1 "/pause" About a minute ago Up About a minute k8s_POD_kafka-broker0-69c97b67f-4pmw9_default_a747d38a-0da6-11e9-bd84-fa163e7d3173_0
Topic test is created successfully in Kafka as shown below:
docker exec k8s_kafka_kafka-broker0-69c97b67f-4pmw9_default_a747d38a-0da6-11e9-bd84-fa163e7d3173_1 /opt/kafka_2.12-2.1.0/bin/kafka-topics.sh --list --zookeeper zoo1:2181
OR
docker exec k8s_kafka_kafka-broker0-69c97b67f-4pmw9_default_a747d38a-0da6-11e9-bd84-fa163e7d3173_1 /opt/kafka_2.12-2.1.0/bin/kafka-topics.sh --list --zookeeper 10.101.3.128:2181
Output of above command:
test
As the topic is available to write data to, I executed the command below with the host machine IP 10.225.36.98 or with the service IP 10.105.214.246:
kubectl exec kafka-broker0-69c97b67f-4pmw9 -c kafka -i -t --
/opt/kafka_2.12-2.1.0/bin/kafka-console-producer.sh [ --broker-list
10.225.36.98:30998 --topic test ]
>{"k":"v"}
But none of them works for me; they throw the exception below:
[2019-01-01 09:26:52,215] ERROR Error when sending message to topic test with key: null, value: 9 bytes with error:
(org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
>[2019-01-01 09:27:59,513] WARN [Producer clientId=console-producer]
Connection to node -1 (/10.225.36.98:30998) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
When I tried to write to the broker with the hostname kafka:
kubectl exec kafka-broker0-69c97b67f-4pmw9 -c kafka -i -t -- /opt/kafka_2.12-2.1.0/bin/kafka-console-producer.sh [ --broker-list kafka:9092 --topic test ]
[2019-01-01 09:34:41,293] WARN Couldn't resolve server kafka:9092 from bootstrap.servers as DNS resolution failed for kafka
(org.apache.kafka.clients.ClientUtils)
org.apache.kafka.common.KafkaException: Failed to construct kafka producer
As the host and service IPs were not working, I tried the pod IP, but got a test=LEADER_NOT_AVAILABLE error.
kubectl exec kafka-broker0-69c97b67f-4pmw9 -c kafka -i -t -- /opt/kafka_2.12-2.1.0/bin/kafka-console-producer.sh [ --broker-list 172.17.0.7:9092 --topic test ]
>{"k":"v"}
[2019-01-01 09:52:30,733] WARN [Producer clientId=console-producer] Error while fetching metadata with correlation id 1 : {test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
After searching Google I found a command to get the list of available brokers from ZooKeeper. So I tried to run it from the container and got stuck on the error below:
bash-4.4# ./opt/zookeeper/bin/zkCli.sh -server zoo1:2181 ls /brokers/ids
Connecting to zoo1:2181
Exception from Zookeeper:
2019-01-01 09:18:05,215 [myid:] - INFO [main:Environment#100] - Client environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, built on 03/23/2017 10:13 GMT
2019-01-01 09:18:05,219 [myid:] - INFO [main:Environment#100] - Client environment:host.name=zookeeper-deployment-1-796f9d9bcc-cr756
2019-01-01 09:18:05,220 [myid:] - INFO [main:Environment#100] - Client environment:java.version=1.8.0_151
2019-01-01 09:18:05,223 [myid:] - INFO [main:Environment#100] - Client environment:java.vendor=Oracle Corporation
2019-01-01 09:18:05,223 [myid:] - INFO [main:Environment#100] - Client environment:java.home=/usr/lib/jvm/java-1.8-openjdk/jre
2019-01-01 09:18:05,223 [myid:] - INFO [main:Environment#100] - Client environment:java.class.path=/opt/zookeeper/bin/../build/classes:/opt/zookeeper/b
in/../build/lib/*.jar:/opt/zookeeper/bin/../lib/slf4j-log4j12-1.6.1.jar:/opt/zookeeper/bin/../lib/slf4j-api-
1.6.1.jar:/opt/zookeeper/bin/../lib/netty-3.10.5.Final.jar:/opt/zookeeper/bin/../lib/log4j-
1.2.16.jar:/opt/zookeeper/bin/../lib/jline-0.9.94.jar:/opt/zookeeper/bin/../zookeeper-3.4.10.jar:/opt/zookeeper/bin/../src/java/lib/*.jar:/opt/zookeeper/bin/../conf:
2019-01-01 09:18:05,223 [myid:] - INFO [main:Environment#100] - Client environment:java.library.path=/usr/lib/jvm/java-1.8-
openjdk/jre/lib/amd64/server:/usr/lib/jvm/java-1.8-openjdk/jre/lib/amd64:/usr/lib/jvm/java-1.8-openjdk/jre/../lib/amd64:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2019-01-01 09:18:05,223 [myid:] - INFO [main:Environment#100] - Client environment:java.io.tmpdir=/tmp
2019-01-01 09:18:05,224 [myid:] - INFO [main:Environment#100] - Client environment:java.compiler=<NA>
2019-01-01 09:18:05,224 [myid:] - INFO [main:Environment#100] - Client environment:os.name=Linux
2019-01-01 09:18:05,224 [myid:] - INFO [main:Environment#100] - Client environment:os.arch=amd64
2019-01-01 09:18:05,224 [myid:] - INFO [main:Environment#100] - Client environment:os.version=3.10.0-693.11.6.el7.x86_64
2019-01-01 09:18:05,224 [myid:] - INFO [main:Environment#100] - Client environment:user.name=root
2019-01-01 09:18:05,224 [myid:] - INFO [main:Environment#100] - Client environment:user.home=/root
2019-01-01 09:18:05,224 [myid:] - INFO [main:Environment#100] - Client environment:user.dir=/
2019-01-01 09:18:05,225 [myid:] - INFO [main:ZooKeeper#438] - Initiating client connection, connectString=zoo1:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher#25f38edc
2019-01-01 09:18:05,259 [myid:] - INFO [main-SendThread(zoo1.default.svc.cluster.local:2181):ClientCnxn$SendThread#1032] - Opening socket connection to server zoo1.default.svc.cluster.local/10.101.3.128:2181. Will not attempt to authenticate using SASL (unknown error)
2019-01-01 09:18:35,280 [myid:] - WARN [main-SendThread(zoo1.default.svc.cluster.local:2181):ClientCnxn$SendThread#1108] - Client session timed out, have not heard from server in 30027ms for sessionid 0x0
2019-01-01 09:18:35,282 [myid:] - INFO [main-SendThread(zoo1.default.svc.cluster.local:2181):ClientCnxn$SendThread#1156] - Client session timed out, have not heard from server in 30027ms for sessionid 0x0, closing socket connection and attempting reconnect
Exception in thread "main" org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /brokers/ids
at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1532)
at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1560)
at org.apache.zookeeper.ZooKeeperMain.processZKCmd(ZooKeeperMain.java:731)
at org.apache.zookeeper.ZooKeeperMain.processCmd(ZooKeeperMain.java:599)
at org.apache.zookeeper.ZooKeeperMain.run(ZooKeeperMain.java:362)
at org.apache.zookeeper.ZooKeeperMain.main(ZooKeeperMain.java:290)
I also tried to create a Kafka service of type LoadBalancer, but no LoadBalancer IP was assigned to the service.
References I consulted while trying to resolve this issue:
https://rmoff.net/2018/08/02/kafka-listeners-explained/
https://github.com/wurstmeister/kafka-docker/wiki/Connectivity#additional-listener-information
https://github.com/kubernetes/contrib/issues/2891
https://dzone.com/articles/ultimate-guide-to-installing-kafka-docker-on-kuber
https://github.com/wurstmeister/kafka-docker/issues/85
Any help would be appreciated.
Try the following command to send data to the topic:
docker exec k8s_kafka_kafka-broker0-69c97b67f-4pmw9_default_a747d38a-0da6-11e9-bd84-fa163e7d3173_1
/opt/kafka_2.12-2.1.0/bin/kafka-console-producer.sh
--broker-list kafka-service:30998 --topic test
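For completeness, the same produce can be done from Java; which bootstrap address works depends on where the client runs. Inside the cluster, the Service name plus the service port should resolve (kafka-service:9092, per the Service listing above, where 9092:30998/TCP maps service port 9092 to node port 30998); from outside the cluster, use a node IP plus the node port. A sketch under those assumptions:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class TopicWriter {
    public static void main(String[] args) {
        Properties props = new Properties();
        // in-cluster address; from outside the cluster this would be <node-ip>:30998
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka-service:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test", "{\"k\":\"v\"}"));
            producer.flush();
        }
    }
}

Note this still requires the broker's advertised listener to match an address the client can reach, which is usually the root cause of LEADER_NOT_AVAILABLE errors like the one above.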

Unable to run Kibana and Logstash with Elasticsearch

Elasticsearch is running fine on port 9201, but I am unable to run Kibana and Logstash with docker-compose.
For Logstash it throws the error:
Attempted to resurrect connection to dead ES instance, but got an
error.
For Kibana it throws warnings:
"warning","elasticsearch","admin"],"pid":1,"message":"No living
connections"
Below is the docker-compose.yml file:
version: '2'
services:
  # Service 1 : elasticsearch
  elasticsearch-5-6:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.3
    container_name: elasticsearch-5-6
    ports:
      - "9201:9200"
    volumes:
      - /etc/elasticsearch/elasticsearch-5-6.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - /var/elasticsearch/data/immunedata-5-6/:/usr/share/elasticsearch/data/
      #- /etc/elasticsearch/logging.yml:/usr/share/elasticsearch/config/logging.yml
      #- /var/log/elasticsearch/:/usr/share/elasticsearch/logs/
    environment:
      - cluster.name=docker-cluster-elasticsearch-5-6
      #- bootstrap.memory_lock=true
      - "ES_JAVA_OPTS: -Xmx2048m -Xms2048m"
      # Disabling the xpack security as it costs after one month of free trial.
      - xpack.security.enabled=false
  # Service 2 : logstash
  logstash-5-6:
    image: docker.elastic.co/logstash/logstash:5.6.3
    container_name: logstash-5-6
    ports:
      #- "5044:5044"
      - "5001:5001"
    volumes:
      - /etc/logstash/logstash.yml:/usr/share/logstash/config/logstash.yml
      - /etc/logstash/pipeline:/usr/share/logstash/pipeline
      #- /etc/logstash/logstash.yml:/usr/share/logstash/config/logstash.yml
      #- /var/logstash/pipeline:/usr/share/logstash/pipeline
    environment:
      - "ES_JAVA_OPTS: -Xmx2048m -Xms2048m"
    depends_on:
      - elasticsearch-5-6
  # Service 3 : kibana
  kibana-5-6:
    image: docker.elastic.co/kibana/kibana:5.6.3
    container_name: kibana-5-6
    ports:
      - "5601:5601"
    volumes:
      - /etc/kibana/kibana.yml:/usr/share/kibana/config/kibana.yml
      #- /var/kibana/immunedata-5-6/:/usr/share/kibana/data/
    environment:
      - xpack.security.enabled=false
      - xpack.graph.enabled = false
      - xpack.ml.enabled = false
      - xpack.monitoring.enabled = false
      - xpack.watcher.enabled = false
      - xpack.reporting.enabled = false
    depends_on:
      - elasticsearch-5-6
  # Service 4 : elasticseach-head
  elasticsearch-head:
    image: mobz/elasticsearch-head:5
    container_name: elasticsearch-head
    # will not wait for elasticsearch to be ready.
    ports:
      - "9100:9100"
elasticsearch.yml
cluster.name: immunedata-cluster-5.6
node.name: "immunedata-cluster-5-6.node-1"
# Elasticsearch in docker access different data directory, defined mapping directory in docker-compose.yml
#path.data: /var/elasticsearch/data/immunedata-5-6/
path.data: /usr/share/elasticsearch/data/
#path.data: /var/elasticsearch/data
# NOTE : Since elasticsearch 5.x index level settings can NOT be set on the nodes configuration like the elasticsearch.yaml
#index.number_of_shards: 1
#index.number_of_replicas: 0
# Allow all host access
network.bind_host: 0.0.0.0
http.port: 9200
# To enable cross-origin resource sharing (Accessing on browser)
http.cors.enabled: true
http.cors.allow-origin : "*"
logstash.yml file
http.host: "0.0.0.0"
path.config: /usr/share/logstash/pipeline
#xpack.monitoring.elasticsearch.url: http://localhost:9201
##xpack.monitoring.elasticsearch.url: http://elasticsearch:9201
#xpack.monitoring.elasticsearch.username: logstash_system
#xpack.monitoring.elasticsearch.password: changeme
xpack.monitoring.enabled: false
kibana.yml file
server.name: kibana
server.host: "0"
elasticsearch.url: http://192.168.56.10:9201
xpack.monitoring.ui.container.elasticsearch.enabled: false
#elasticsearch.url: http://elasticsearch:9201
xpack.security.enabled: false
## Above I tried this - not working
#elasticsearch.username: elastic
#elasticsearch.password: changeme
#xpack.monitoring.ui.container.elasticsearch.enabled: false
#xpack.monitoring.ui.container.elasticsearch.enabled: true
# Extra:
ssl.verificationMode: false
Logs:
[elasticsearch-5.6.3.jar:5.6.3]
elasticsearch-5-6 | at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_141]
elasticsearch-5-6 | at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_141]
elasticsearch-5-6 | at java.lang.Thread.run(Thread.java:748) [?:1.8.0_141]
elasticsearch-5-6 | [2017-11-26T06:07:57,084][WARN ][o.e.m.j.JvmGcMonitorService] [immunedata-cluster-5-6.node-1] [gc][young][14][6] duration [18.2s], collections [1]/[18.5s], total [18.2s]/[23.5s], memory [178.2mb]->[79.5mb]/[1.9gb], all_pools {[young] [132.1mb]->[964kb]/[133.1mb]}{[survivor] [16.6mb]->[12.5mb]/[16.6mb]}{[old] [29.4mb]->[66.5mb]/[1.8gb]}
elasticsearch-5-6 | [2017-11-26T06:07:57,085][WARN ][o.e.m.j.JvmGcMonitorService] [immunedata-cluster-5-6.node-1] [gc][14] overhead, spent [18.2s] collecting in the last [18.5s]
elasticsearch-5-6 | [2017-11-26T06:07:57,298][ERROR][o.e.x.m.c.i.IndexRecoveryCollector] [immunedata-cluster-5-6.node-1] collector [index-recovery] failed to collect data
elasticsearch-5-6 | org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];
elasticsearch-5-6 | at org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedException(ClusterBlocks.java:165) ~[elasticsearch-5.6.3.jar:5.6.3]
elasticsearch-5-6 | at org.elasticsearch.action.admin.indices.recovery.TransportRecoveryAction.checkGlobalBlock(TransportRecoveryAction.java:114) ~[elasticsearch-5.6.3.jar:5.6.3]
elasticsearch-5-6 | at org.elasticsearch.action.admin.indices.recovery.TransportRecoveryAction.checkGlobalBlock(TransportRecoveryAction.java:52) ~[elasticsearch-5.6.3.jar:5.6.3]
elasticsearch-5-6 | at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$AsyncAction.<init>(TransportBroadcastByNodeAction.java:256) ~[elasticsearch-5.6.3.jar:5.6.3]
elasticsearch-5-6 | at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction.doExecute(TransportBroadcastByNodeAction.java:234) ~[elasticsearch-5.6.3.jar:5.6.3]
elasticsearch-5-6 | at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction.doExecute(TransportBroadcastByNodeAction.java:79) ~[elasticsearch-5.6.3.jar:5.6.3]
elasticsearch-5-6 | at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:170) ~[elasticsearch-5.6.3.jar:5.6.3]
elasticsearch-5-6 | at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:142) ~[elasticsearch-5.6.3.jar:5.6.3]
elasticsearch-5-6 | at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:84) ~[elasticsearch-5.6.3.jar:5.6.3]
elasticsearch-5-6 | at org.elasticsearch.client.node.NodeClient.executeLocally(NodeClient.java:83) ~[elasticsearch-5.6.3.jar:5.6.3]
elasticsearch-5-6 | at
elasticsearch-5-6 | [2017-11-26T06:08:45,238][WARN ][o.e.x.w.e.ExecutionService] [immunedata-cluster-5-6.node-1] Failed to execute watch [XYNCje-TQzKm9OLdiH60gQ_elasticsearch_cluster_status_60e3c208-acca-4462-ba47-0711279d8f5e-2017-11-26T06:08:35.573Z]
elasticsearch-5-6 | [2017-11-26T06:08:54,886][WARN ][o.e.m.j.JvmGcMonitorService] [immunedata-cluster-5-6.node-1] [gc][young][63][9] duration [3.6s], collections [1]/[4.6s], total [3.6s]/[30.2s], memory [226.9mb]->[103.5mb]/[1.9gb], all_pools {[young] [127.5mb]->[1mb]/[133.1mb]}{[survivor] [16.6mb]->[11.3mb]/[16.6mb]}{[old] [82.7mb]->[91.2mb]/[1.8gb]}
elasticsearch-5-6 | [2017-11-26T06:08:54,886][WARN ][o.e.m.j.JvmGcMonitorService] [immunedata-cluster-5-6.node-1] [gc][63] overhead, spent [3.6s] collecting in the last [4.6s]
logstash-5-6 | Sending Logstash's logs to /usr/share/logstash/logs which is now configured via log4j2.properties
elasticsearch-5-6 | [2017-11-26T06:08:55,988][INFO ][o.e.c.r.a.AllocationService] [immunedata-cluster-5-6.node-1] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.watcher-history-6-2017.11.20][0], [.monitoring-es-6-2017.11.20][0]] ...]).
logstash-5-6 | [2017-11-26T06:08:56,786][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
logstash-5-6 | [2017-11-26T06:08:56,891][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
logstash-5-6 | [2017-11-26T06:08:57,558][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"arcsight", :directory=>"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/x-pack-5.6.3-java/modules/arcsight/configuration"}
logstash-5-6 | [2017-11-26T06:09:04,121][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elastic:xxxxxx#elasticsearch-5-6:9201/]}}
logstash-5-6 | [2017-11-26T06:09:04,123][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elastic:xxxxxx#elasticsearch-5-6:9201/, :path=>"/"}
elasticsearch-5-6 | [2017-11-26T06:09:04,687][WARN ][o.e.c.r.a.DiskThresholdMonitor] [immunedata-cluster-5-6.node-1] high disk watermark [90%] exceeded on [eAlcHBJ2QVG58e0HJsgrdQ][immunedata-cluster-5-6.node-1][/usr/share/elasticsearch/data/nodes/0] free: 1.9gb[7.4%], shards will be relocated away from this node
elasticsearch-5-6 | [2017-11-26T06:09:04,687][INFO ][o.e.c.r.a.DiskThresholdMonitor] [immunedata-cluster-5-6.node-1] rerouting shards: [high disk watermark exceeded on one or more nodes]
logstash-5-6 | [2017-11-26T06:09:06,450][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elastic:xxxxxx#elasticsearch-5-6:9201/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elastic:xxxxxx#elasticsearch-5-6:9201/][Manticore::SocketException] Connection refused (Connection refused)"}
logstash-5-6 | [2017-11-26T06:09:06,452][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
logstash-5-6 | [2017-11-26T06:09:06,455][ERROR][logstash.outputs.elasticsearch] Failed to install template. {:message=>"Template file '' could not be found!", :class=>"ArgumentError", :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.2-java/lib/logstash/outputs/elasticsearch/template_manager.rb:37:in `read_template_file'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.2-java/lib/logstash/outputs/elasticsearch/template_manager.rb:23:in `get_template'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.2-java/lib/logstash/outputs/elasticsearch/template_manager.rb:7:in `install_template'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.2-java/lib/logstash/outputs/elasticsearch/common.rb:58:in `install_template'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.2-java/lib/logstash/outputs/elasticsearch/common.rb:25:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator_strategies/shared.rb:9:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator.rb:43:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:290:in `register_plugin'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:301:in `register_plugins'", "org/jruby/RubyArray.java:1613:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:301:in `register_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:310:in `start_workers'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:235:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:398:in `start_pipeline'"]}
logstash-5-6 | [2017-11-26T06:09:06,455][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//elasticsearch-5-6:9201"]}
logstash-5-6 | [2017-11-26T06:09:06,462][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>250}
logstash-5-6 | [2017-11-26T06:09:09,818][INFO ][logstash.pipeline ] Pipeline main started
logstash-5-6 | [2017-11-26T06:09:10,341][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
logstash-5-6 | [2017-11-26T06:09:11,460][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elastic:xxxxxx#elasticsearch-5-6:9201/, :path=>"/"}
logstash-5-6 | [2017-11-26T06:09:11,484][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elastic:xxxxxx#elasticsearch-5-6:9201/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elastic:xxxxxx#elasticsearch-5-6:9201/][Manticore::SocketException] Connection refused (Connection refused)"}
logstash-5-6 | [2017-11-26T06:09:16,491][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elastic:xxxxxx#elasticsearch-5-6:9201/, :path=>"/"}
logstash-5-6 | [2017-11-26T06:09:16,500][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elastic:xxxxxx#elasticsearch-5-6:9201/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elastic:xxxxxx#elasticsearch-5-6:9201/][Manticore::SocketException] Connection refused (Connection refused)"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:21Z","tags":["warning","elasticsearch","config","deprecation"],"pid":1,"message":"Config key \"ssl.verify\" is deprecated. It has been replaced with \"ssl.verificationMode\""}
logstash-5-6 | [2017-11-26T06:09:21,513][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elastic:xxxxxx#elasticsearch-5-6:9201/, :path=>"/"}
logstash-5-6 | [2017-11-26T06:09:21,523][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elastic:xxxxxx#elasticsearch-5-6:9201/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elastic:xxxxxx#elasticsearch-5-6:9201/][Manticore::SocketException] Connection refused (Connection refused)"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:26Z","tags":["status","plugin:kibana#5.6.3","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
logstash-5-6 | [2017-11-26T06:09:26,536][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elastic:xxxxxx#elasticsearch-5-6:9201/, :path=>"/"}
logstash-5-6 | [2017-11-26T06:09:26,570][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elastic:xxxxxx#elasticsearch-5-6:9201/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elastic:xxxxxx#elasticsearch-5-6:9201/][Manticore::SocketException] Connection refused (Connection refused)"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:26Z","tags":["status","plugin:elasticsearch#5.6.3","info"],"pid":1,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:26Z","tags":["status","plugin:xpack_main#5.6.3","info"],"pid":1,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:29Z","tags":["status","plugin:graph#5.6.3","info"],"pid":1,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:29Z","tags":["error","elasticsearch","admin"],"pid":1,"message":"Request error, retrying\nHEAD http://elasticsearch-5-6:9201/ => connect ECONNREFUSED 172.21.0.2:9201"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:29Z","tags":["status","plugin:monitoring#5.6.3","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:29Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch-5-6:9201/"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:29Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
logstash-5-6 | [2017-11-26T06:09:31,585][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elastic:xxxxxx#elasticsearch-5-6:9201/, :path=>"/"}
logstash-5-6 | [2017-11-26T06:09:31,603][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elastic:xxxxxx#elasticsearch-5-6:9201/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elastic:xxxxxx#elasticsearch-5-6:9201/][Manticore::SocketException] Connection refused (Connection refused)"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:33Z","tags":["reporting","warning"],"pid":1,"message":"Generating a random key for xpack.reporting.encryptionKey. To prevent pending reports from failing on restart, please set xpack.reporting.encryptionKey in kibana.yml"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:33Z","tags":["status","plugin:reporting#5.6.3","info"],"pid":1,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:33Z","tags":["status","plugin:xpack_main#5.6.3","error"],"pid":1,"state":"red","message":"Status changed from yellow to red - Unable to connect to Elasticsearch at http://elasticsearch-5-6:9201.","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:33Z","tags":["status","plugin:graph#5.6.3","error"],"pid":1,"state":"red","message":"Status changed from yellow to red - Unable to connect to Elasticsearch at http://elasticsearch-5-6:9201.","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:33Z","tags":["status","plugin:reporting#5.6.3","error"],"pid":1,"state":"red","message":"Status changed from yellow to red - Unable to connect to Elasticsearch at http://elasticsearch-5-6:9201.","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:33Z","tags":["status","plugin:elasticsearch#5.6.3","error"],"pid":1,"state":"red","message":"Status changed from yellow to red - Unable to connect to Elasticsearch at http://elasticsearch-5-6:9201.","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:33Z","tags":["status","plugin:searchprofiler#5.6.3","error"],"pid":1,"state":"red","message":"Status changed from uninitialized to red - Unable to connect to Elasticsearch at http://elasticsearch-5-6:9201.","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:34Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch-5-6:9201/"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:34Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
kibana-5-6 | {"type":"log","#timestamp":"2017-11-26T06:09:34Z","tags":["status","plugin:ml#5.6.3","error"],"pid":1,"state":"red","message":"Status changed from uninitialized to red - Unable to connect to Elasticsearch at http://elasticsearch-5-6:9201.","prevState":"uninitialized","prevMsg":"uninitialized"}
elasticsearch-5-6 | [2017-11-26T06:09:34,750][WARN ][o.e.c.r.a.DiskThresholdMonitor] [immunedata-cluster-5-6.node-1] high disk watermark [90%] exceeded on [eAlcHBJ2QVG58e0HJsgrdQ][immunedata-cluster-5-6.node-1][/usr/share/elasticsearch/data/nodes/0] free: 1.9gb[7.4%], shards will be relocated away from this node
logstash-5-6 | [2017-11-26T06:09:36,692][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elastic:xxxxxx@elasticsearch-5-6:9201/, :path=>"/"}
kibana-5-6 | {"type":"log","@timestamp":"2017-11-26T06:09:37Z","tags":["status","plugin:ml@5.6.3","info"],"pid":1,"state":"yellow","message":"Status changed from red to yellow - Waiting for Elasticsearch","prevState":"red","prevMsg":"Unable to connect to Elasticsearch at http://elasticsearch-5-6:9201."}
logstash-5-6 | [2017-11-26T06:09:37,366][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elastic:xxxxxx@elasticsearch-5-6:9201/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elastic:xxxxxx@elasticsearch-5-6:9201/][Manticore::SocketException] Connection refused (Connection refused)"}
kibana-5-6 | {"type":"log","@timestamp":"2017-11-26T06:09:37Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch-5-6:9201/"}
kibana-5-6 | {"type":"log","@timestamp":"2017-11-26T06:09:37Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
kibana-5-6 | {"type":"log","@timestamp":"2017-11-26T06:09:37Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch-5-6:9201/"}
kibana-5-6 | {"type":"log","@timestamp":"2017-11-26T06:09:37Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
kibana-5-6 | {"type":"log","@timestamp":"2017-11-26T06:09:37Z","tags":
You called the Elasticsearch service elasticsearch-5-6 in your docker-compose.yml. That means the Elasticsearch container is reachable at http://elasticsearch-5-6:9200 from all other containers in the same compose file, and at http://127.0.0.1:9201 from the host machine (the compose file evidently publishes host port 9201 to container port 9200).
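You can see the difference by probing both addresses (a quick sketch; it assumes host port 9201 is mapped to container port 9200 and that curl is available inside the client container):
# From the host machine, go through the published port:
curl http://127.0.0.1:9201
# From any other container on the same compose network, the service name resolves
# and the container port (9200) is used directly:
docker exec logstash-5-6 curl -s http://elasticsearch-5-6:9200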
In order to have a workable ELK stack, you need to change the logstash config to:
http.host: "0.0.0.0"
path.config: /usr/share/logstash/pipeline
xpack.monitoring.elasticsearch.url: http://elasticsearch-5-6:9200
#xpack.monitoring.elasticsearch.username: logstash_system
#xpack.monitoring.elasticsearch.password: changeme
xpack.monitoring.enabled: false
and the kibana config to:
server.name: kibana
server.host: "0"
elasticsearch.url: http://elasticsearch-5-6:9200
xpack.monitoring.ui.container.elasticsearch.enabled: false
xpack.security.enabled: false
## Above I tried this - not working
#elasticsearch.username: elastic
#elasticsearch.password: changeme
#xpack.monitoring.ui.container.elasticsearch.enabled: false
#xpack.monitoring.ui.container.elasticsearch.enabled: true
# Extra:
ssl.verificationMode: false
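After updating both files, recreate the containers and confirm that Elasticsearch now answers (a sketch; the service names here are assumptions based on the container names in the logs, and the -u flag is only needed while X-Pack security is still enabled on Elasticsearch):
docker-compose up -d --force-recreate logstash-5-6 kibana-5-6
curl -u elastic:changeme http://127.0.0.1:9201/_cluster/health?pretty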
ELK Cluster with X-Pack disabled
You are missing ELASTICSEARCH_URL: "http://elasticsearch:9200" in Kibana and xpack.monitoring.elasticsearch.url: http://elasticsearch:9200 in Logstash.
Here is a sample yml configuration with all possible environment variables defined under environment:
version: '3.4'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.6.0
    container_name: elasticsearch
    environment:
      ES_JAVA_OPTS: '-Xms2048m -Xmx2048m'
      cluster.name: es-cluster
      node.name: es1
      network.bind_host: 0.0.0.0
      discovery.zen.minimum_master_nodes: 1
      discovery.zen.ping.unicast.hosts: elasticsearch1
      xpack.security.enabled: 'false'
      xpack.monitoring.enabled: 'false'
      xpack.watcher.enabled: 'false'
      xpack.ml.enabled: 'false'
      http.cors.enabled: 'true'
      http.cors.allow-origin: "*"
      http.cors.allow-methods: OPTIONS, HEAD, GET, POST, PUT, DELETE
      http.cors.allow-headers: X-Requested-With,X-Auth-Token,Content-Type, Content-Length
      logger.level: debug
    volumes:
      - /var/elasticsearch/db/elasticsearch/data:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9300:9300
    networks:
      - elastic
  logstash:
    image: docker.elastic.co/logstash/logstash:6.6.0
    container_name: logstash
    ports:
      - 5044:5044
      - 5001:5001
    volumes:
      - /var/elasticsearch/logstash/pipeline:/usr/share/logstash/pipeline
    environment:
      ES_JAVA_OPTS: '-Xmx2048m -Xms2048m'
      http.host: 0.0.0.0
      xpack.monitoring.enabled: 'false'
      xpack.monitoring.elasticsearch.url: http://elasticsearch:9200
    networks:
      - elastic
    depends_on:
      - elasticsearch
  kibana:
    image: docker.elastic.co/kibana/kibana:6.6.0
    container_name: kibana
    environment:
      ELASTICSEARCH_URL: "http://elasticsearch:9200"
      xpack.security.enabled: 'false'
      xpack.graph.enabled: 'false'
      xpack.ml.enabled: 'false'
      xpack.monitoring.enabled: 'false'
      xpack.watcher.enabled: 'false'
      xpack.reporting.enabled: 'false'
    ports:
      - 5601:5601
    networks:
      - elastic
    depends_on:
      - elasticsearch
  elasticsearch-head:
    image: mobz/elasticsearch-head:5
    container_name: elasticsearch-head
    ports:
      - "9100:9100"
    networks:
      - elastic
networks:
  elastic:
    driver: bridge
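To try the stack, bring it up and check the ports published above:
docker-compose up -d
docker-compose ps
# Elasticsearch health; should report cluster es-cluster with status green or yellow
curl http://localhost:9200/_cluster/health?pretty
Kibana is then reachable at http://localhost:5601 and elasticsearch-head at http://localhost:9100.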

kafka zookeeper docker no connection

I am new to docker-compose.
I am trying to run https://github.com/wurstmeister/kafka-docker with an adapted docker-compose.yml file: https://github.com/geoHeil/sparkplay
This is the output of docker-compose ps.
Name Command State Ports
-----------------------------------------------------------------------------------------------------------------
docker_kafka_1 /bin/sh -c start-kafka.sh Up 0.0.0.0:32780->9092/tcp
docker_kafka_2 /bin/sh -c start-kafka.sh Up 0.0.0.0:32782->9092/tcp
docker_kafka_3 /bin/sh -c start-kafka.sh Up 0.0.0.0:32781->9092/tcp
docker_zookeeper_1 /bin/sh -c /usr/sbin/sshd ... Up 0.0.0.0:32779->2181/tcp, 22/tcp, 2888/tcp, 3888/tcp
docker_zookeeper_2 /bin/sh -c /usr/sbin/sshd ... Up 0.0.0.0:32783->2181/tcp, 22/tcp, 2888/tcp, 3888/tcp
docker_zookeeper_3 /bin/sh -c /usr/sbin/sshd ... Up 0.0.0.0:32784->2181/tcp, 22/tcp, 2888/tcp, 3888/tcp
So apparently some ports should be available. I am using OS X with Kitematic / the Docker Toolbox. However, if I hit any of these IP addresses with my browser, no connection can be established.
edit:
This is the docker-compose.yml file: https://github.com/geoHeil/sparkplay/blob/master/docker-compose.yml
The logs of the docker containers after connecting with the browser:
kafka_1 | [2015-11-14 16:15:34,000] INFO Closing socket connection to /192.168.99.1 due to invalid request: Request of length 1195725856 is not valid, it is larger than the maximum size of 104857600 bytes. (kafka.network.Processor)
kafka_1 | [2015-11-14 16:15:34,002] INFO Closing socket connection to /192.168.99.1 due to invalid request: Request of length 1195725856 is not valid, it is larger than the maximum size of 104857600 bytes. (kafka.network.Processor)
kafka_1 | [2015-11-14 16:15:34,004] INFO Closing socket connection to /192.168.99.1 due to invalid request: Request of length 1195725856 is not valid, it is larger than the maximum size of 104857600 bytes. (kafka.network.Processor)
kafka_1 | [2015-11-14 16:15:34,092] INFO Closing socket connection to /192.168.99.1 due to invalid request: Request of length 1195725856 is not valid, it is larger than the maximum size of 104857600 bytes. (kafka.network.Processor)
zookeeper_1 | 2015-11-14 16:15:51,375 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /192.168.99.1:52315
zookeeper_1 | 2015-11-14 16:15:51,375 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /192.168.99.1:52316
zookeeper_1 | 2015-11-14 16:15:51,549 [myid:] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@362] - Exception causing close of session 0x0 due to java.io.IOException: Len error 1195725856
zookeeper_1 | 2015-11-14 16:15:51,549 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1007] - Closed socket connection for client /192.168.99.1:52315 (no session established for client)
zookeeper_1 | 2015-11-14 16:15:51,550 [myid:] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@362] - Exception causing close of session 0x0 due to java.io.IOException: Len error 1195725856
zookeeper_1 | 2015-11-14 16:15:51,550 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1007] - Closed socket connection for client /192.168.99.1:52316 (no session established for client)
zookeeper_1 | 2015-11-14 16:15:51,552 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /192.168.99.1:52317
zookeeper_1 | 2015-11-14 16:15:51,552 [myid:] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@362] - Exception causing close of session 0x0 due to java.io.IOException: Len error 1195725856
zookeeper_1 | 2015-11-14 16:15:51,553 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1007] - Closed socket connection for client /192.168.99.1:52317 (no session established for client)
zookeeper_1 | 2015-11-14 16:15:51,651 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /192.168.99.1:52318
zookeeper_1 | 2015-11-14 16:15:51,651 [myid:] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@362] - Exception causing close of session 0x0 due to java.io.IOException: Len error 1195725856
zookeeper_1 | 2015-11-14 16:15:51,652 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1007] - Closed socket connection for client /192.168.99.1:52318 (no session established for client)
And:
zookeeper_1 | 2015-11-14 16:25:09,810 [myid:] - INFO [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor@645] - Got user-level KeeperException when processing sessionid:0x15106d08b9c0000 type:create cxid:0x4 zxid:0x3 txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NoNode for /brokers
zookeeper_1 | 2015-11-14 16:25:09,820 [myid:] - INFO [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor@645] - Got user-level KeeperException when processing sessionid:0x15106d08b9c0000 type:create cxid:0xa zxid:0x7 txntype:-1 reqpath:n/a Error Path:/config Error:KeeperErrorCode = NoNode for /config
zookeeper_1 | 2015-11-14 16:25:09,825 [myid:] - INFO [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor@645] - Got user-level KeeperException when processing sessionid:0x15106d08b9c0000 type:create cxid:0x10 zxid:0xb txntype:-1 reqpath:n/a Error Path:/admin Error:KeeperErrorCode = NoNode for /admin
zookeeper_1 | 2015-11-14 16:25:09,991 [myid:] - INFO [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor@645] - Got user-level KeeperException when processing sessionid:0x15106d08b9c0000 type:setData cxid:0x1a zxid:0xf txntype:-1 reqpath:n/a Error Path:/controller_epoch Error:KeeperErrorCode = NoNode for /controller_epoch
zookeeper_1 | 2015-11-14 16:25:10,030 [myid:] - INFO [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor@645] - Got user-level KeeperException when processing sessionid:0x15106d08b9c0000 type:delete cxid:0x27 zxid:0x11 txntype:-1 reqpath:n/a Error Path:/admin/preferred_replica_election Error:KeeperErrorCode = NoNode for /admin/preferred_replica_election
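Incidentally, those Len error 1195725856 lines explain the browser problem: 1195725856 is 0x47455420, i.e. the four ASCII bytes "GET " at the start of an HTTP request, which ZooKeeper and Kafka read as an absurdly large binary length prefix and reject. A browser simply cannot speak these ports' protocols, so a refused/closed connection there does not mean the brokers are down. A quick check (GNU od):
printf 'GET ' | od -An -t u4 --endian=big
# prints 1195725856 (= 0x47455420)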
Following the instructions at http://sookocheff.com/post/kafka/kafka-quick-start/ helped me a lot to get Kafka up and running in Docker.
edit:
I re-cloned the wurstmeister/kafka repo and started from scratch. This seemed to work.
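For reference, the wurstmeister/kafka-docker README produces a multi-broker cluster like the docker-compose ps output above by scaling the kafka service after startup (a sketch using the older compose scale syntax from that repo's era):
docker-compose up -d
docker-compose scale kafka=3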
