Kafka bootstrap broker disconnected using docker-compose

Good morning.
First of all, I've created a Docker swarm with 2 physical hosts and an overlay network. On the same host I've created 2 containers (the postgres and Ambari servers) and one with the Ambari agent, on which I'll install Kafka, ZooKeeper, Spark, etc. from Ambari. The goal is to spread several containers across several hosts, but I'm trying it like this first, since I haven't got it working yet.
The thing is, once everything is deployed with Ambari, I change the Kafka configuration to add advertised.host.name pointing to the physical host's IP and advertised.port set to 9092, to bind it to the physical host's port 9092.
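In other words, something like this in the broker config (a sketch of the two properties described, using the host IP that appears in the error below):
advertised.host.name=192.168.0.28
advertised.port=9092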
When trying it, I'm always getting the following errors:
WARN Error while fetching metadata with correlation id 17 : {prueba2=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
if I try sending to the container's port 6667,
or
[2018-08-30 12:06:28,758] WARN Bootstrap broker 192.168.0.28:9092 disconnected (org.apache.kafka.clients.NetworkClient)
if I try the physical host's port 9092.
Thanks for your help, and please ask for any further information needed to solve this problem.
EDIT1:
I changed the configuration to the following properties, but the Kafka broker is not running. server.log shows the following trace:
root#host1:/usr/hdp/2.6.3.0-235/kafka# cat /var/log/kafka/server.log
[2018-09-04 09:17:46,964] INFO KafkaConfig values:
advertised.host.name = host1.ambari
advertised.listeners = INTERNO://host1.ambari:6667,EXTERNO://192.168.0.28:9092
advertised.port = 9092
authorizer.class.name =
auto.create.topics.enable = true
auto.leader.rebalance.enable = true
background.threads = 10
broker.id = -1
broker.id.generation.enable = true
broker.rack = null
compression.type = producer
connections.max.idle.ms = 600000
controlled.shutdown.enable = true
controlled.shutdown.max.retries = 3
controlled.shutdown.retry.backoff.ms = 5000
controller.socket.timeout.ms = 30000
default.replication.factor = 1
delete.topic.enable = false
fetch.purgatory.purge.interval.requests = 10000
group.max.session.timeout.ms = 300000
group.min.session.timeout.ms = 6000
host.name =
inter.broker.protocol.version = 0.10.1-IV2
leader.imbalance.check.interval.seconds = 300
leader.imbalance.per.broker.percentage = 10
listeners = INTERNO://host1.ambari:6667,EXTERNO://host1.ambari:6667
log.cleaner.backoff.ms = 15000
log.cleaner.dedupe.buffer.size = 134217728
log.cleaner.delete.retention.ms = 86400000
log.cleaner.enable = true
log.cleaner.io.buffer.load.factor = 0.9
log.cleaner.io.buffer.size = 524288
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
log.cleaner.min.cleanable.ratio = 0.5
log.cleaner.min.compaction.lag.ms = 0
log.cleaner.threads = 1
log.cleanup.policy = [delete]
log.dir = /tmp/kafka-logs
log.dirs = /kafka-logs
log.flush.interval.messages = 9223372036854775807
log.flush.interval.ms = null
log.flush.offset.checkpoint.interval.ms = 60000
log.flush.scheduler.interval.ms = 9223372036854775807
log.index.interval.bytes = 4096
log.index.size.max.bytes = 10485760
log.message.format.version = 0.10.1-IV2
log.message.timestamp.difference.max.ms = 9223372036854775807
log.message.timestamp.type = CreateTime
log.preallocate = false
log.retention.bytes = -1
log.retention.check.interval.ms = 300000
log.retention.hours = 168
log.retention.minutes = null
log.retention.ms = null
log.roll.hours = 168
log.roll.jitter.hours = 0
log.roll.jitter.ms = null
log.roll.ms = null
log.segment.bytes = 1073741824
log.segment.delete.delay.ms = 60000
max.connections.per.ip = 2147483647
max.connections.per.ip.overrides =
message.max.bytes = 1000000
metric.reporters = []
metrics.num.samples = 2
metrics.sample.window.ms = 30000
min.insync.replicas = 1
num.io.threads = 8
num.network.threads = 3
num.partitions = 1
num.recovery.threads.per.data.dir = 1
num.replica.fetchers = 1
offset.metadata.max.bytes = 4096
offsets.commit.required.acks = -1
offsets.commit.timeout.ms = 5000
offsets.load.buffer.size = 5242880
offsets.retention.check.interval.ms = 600000
offsets.retention.minutes = 86400000
offsets.topic.compression.codec = 0
offsets.topic.num.partitions = 50
offsets.topic.replication.factor = 3
offsets.topic.segment.bytes = 104857600
port = 6667
principal.builder.class = class org.apache.kafka.common.security.auth.DefaultPrincipalBuilder
producer.purgatory.purge.interval.requests = 10000
queued.max.requests = 500
quota.consumer.default = 9223372036854775807
quota.producer.default = 9223372036854775807
quota.window.num = 11
quota.window.size.seconds = 1
replica.fetch.backoff.ms = 1000
replica.fetch.max.bytes = 1048576
replica.fetch.min.bytes = 1
replica.fetch.response.max.bytes = 10485760
replica.fetch.wait.max.ms = 500
replica.high.watermark.checkpoint.interval.ms = 5000
replica.lag.time.max.ms = 10000
replica.socket.receive.buffer.bytes = 65536
replica.socket.timeout.ms = 30000
replication.quota.window.num = 11
replication.quota.window.size.seconds = 1
request.timeout.ms = 30000
reserved.broker.max.id = 1000
sasl.enabled.mechanisms = [GSSAPI]
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.principal.to.local.rules = [DEFAULT]
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism.inter.broker.protocol = GSSAPI
security.inter.broker.protocol = PLAINTEXT
socket.receive.buffer.bytes = 102400
socket.request.max.bytes = 104857600
socket.send.buffer.bytes = 102400
ssl.cipher.suites = null
ssl.client.auth = none
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
unclean.leader.election.enable = true
zookeeper.connect = host1.ambari:2181
zookeeper.connection.timeout.ms = 25000
zookeeper.session.timeout.ms = 30000
zookeeper.set.acl = false
zookeeper.sync.time.ms = 2000
(kafka.server.KafkaConfig)
[2018-09-04 09:17:46,974] FATAL (kafka.Kafka$)
java.lang.IllegalArgumentException: Error creating broker listeners from 'INTERNO://host1.ambari:6667,EXTERNO://host1.ambari:6667': No enum constant org.apache.kafka.common.protocol.SecurityProtocol.INTERNO
at kafka.server.KafkaConfig.validateUniquePortAndProtocol(KafkaConfig.scala:994)
at kafka.server.KafkaConfig.getListeners(KafkaConfig.scala:1013)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:966)
at kafka.server.KafkaConfig$.fromProps(KafkaConfig.scala:779)
at kafka.server.KafkaConfig$.fromProps(KafkaConfig.scala:776)
at kafka.server.KafkaServerStartable$.fromProps(KafkaServerStartable.scala:28)
at kafka.Kafka$.main(Kafka.scala:58)
at kafka.Kafka.main(Kafka.scala)

If you're deploying Kafka within Docker, you need to configure the listeners so the broker is also accessible from outside your Docker network, if that's what you want. This article explains the details.
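With the HDP Kafka shown in the log above (inter.broker.protocol.version = 0.10.1-IV2), custom listener names such as INTERNO/EXTERNO only work if the broker supports mapping them to a security protocol via listener.security.protocol.map (added in later Kafka releases), which is why startup fails with "No enum constant ... SecurityProtocol.INTERNO". A minimal sketch that sticks to the built-in protocol name, assuming the physical host publishes its port 9092 to the container's 6667:
# bind on all interfaces inside the container
listeners=PLAINTEXT://0.0.0.0:6667
# address external clients should use (the physical host from the question);
# advertised.listeners takes precedence over advertised.host.name / advertised.port
advertised.listeners=PLAINTEXT://192.168.0.28:9092
On a broker recent enough to support named listeners, the equivalent would keep the INTERNO/EXTERNO names but would additionally need listener.security.protocol.map=INTERNO:PLAINTEXT,EXTERNO:PLAINTEXT and inter.broker.listener.name=INTERNO.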

Related

How do I setup Kafka with SSL?

I am trying to run Kafka in Docker. It works with plaintext but does not work with SSL.
I performed the SSL setup according to this documentation:
#!/bin/bash
#Step 1
keytool -keystore server.keystore.jks -alias localhost -validity 365 -genkey
#Step 2
openssl req -new -x509 -keyout ca-key -out ca-cert -days 365
keytool -keystore server.truststore.jks -alias CARoot -import -file ca-cert
keytool -keystore client.truststore.jks -alias CARoot -import -file ca-cert
#Step 3
keytool -keystore server.keystore.jks -alias localhost -certreq -file cert-file
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days 365 -CAcreateserial -passin pass:test1234
keytool -keystore server.keystore.jks -alias CARoot -import -file ca-cert
keytool -keystore server.keystore.jks -alias localhost -import -file cert-signed
Then I copied all the SSL files into the /tmp/ssl/1/ directory.
Here is my docker-compose:
version: '2'
volumes:
  data-volume: {}
services:
  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092:9092"
    environment:
      - KAFKA_ADVERTISED_HOST_NAME=127.0.0.1
      - KAFKA_ADVERTISED_PORT=9092
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
    volumes:
      - "/tmp/ssl/:/tmp/ssl/"
    depends_on:
      - zookeeper
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
    environment:
      - KAFKA_ADVERTISED_HOST_NAME=zookeeper
server.properties
advertised.host.name = 127.0.0.1
advertised.listeners = null
advertised.port = 9092
alter.config.policy.class.name = null
alter.log.dirs.replication.quota.window.num = 11
alter.log.dirs.replication.quota.window.size.seconds = 1
authorizer.class.name =
auto.create.topics.enable = true
auto.leader.rebalance.enable = true
background.threads = 10
broker.id = -1
broker.id.generation.enable = true
broker.rack = null
client.quota.callback.class = null
compression.type = producer
connection.failed.authentication.delay.ms = 100
connections.max.idle.ms = 600000
connections.max.reauth.ms = 0
control.plane.listener.name = null
controlled.shutdown.enable = true
controlled.shutdown.max.retries = 3
controlled.shutdown.retry.backoff.ms = 5000
controller.socket.timeout.ms = 30000
create.topic.policy.class.name = null
default.replication.factor = 1
delegation.token.expiry.check.interval.ms = 3600000
delegation.token.expiry.time.ms = 86400000
delegation.token.master.key = null
delegation.token.max.lifetime.ms = 604800000
delete.records.purgatory.purge.interval.requests = 1
delete.topic.enable = true
fetch.purgatory.purge.interval.requests = 1000
group.initial.rebalance.delay.ms = 0
group.max.session.timeout.ms = 300000
group.max.size = 2147483647
group.min.session.timeout.ms = 6000
host.name =
inter.broker.listener.name = null
inter.broker.protocol.version = 2.2-IV1
kafka.metrics.polling.interval.secs = 10
kafka.metrics.reporters = []
leader.imbalance.check.interval.seconds = 300
leader.imbalance.per.broker.percentage = 10
listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
listeners = PLAINTEXT://:9092,SSL://:9093
log.cleaner.backoff.ms = 15000
log.cleaner.dedupe.buffer.size = 134217728
log.cleaner.delete.retention.ms = 86400000
log.cleaner.enable = true
log.cleaner.io.buffer.load.factor = 0.9
log.cleaner.io.buffer.size = 524288
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
log.cleaner.min.cleanable.ratio = 0.5
log.cleaner.min.compaction.lag.ms = 0
log.cleaner.threads = 1
log.cleanup.policy = [delete]
log.dir = /tmp/kafka-logs
log.dirs = /kafka/kafka-logs-935db2aeed2f
log.flush.interval.messages = 9223372036854775807
log.flush.interval.ms = null
log.flush.offset.checkpoint.interval.ms = 60000
log.flush.scheduler.interval.ms = 9223372036854775807
log.flush.start.offset.checkpoint.interval.ms = 60000
log.index.interval.bytes = 4096
log.index.size.max.bytes = 10485760
log.message.downconversion.enable = true
log.message.format.version = 2.2-IV1
log.message.timestamp.difference.max.ms = 9223372036854775807
log.message.timestamp.type = CreateTime
log.preallocate = false
log.retention.bytes = -1
log.retention.check.interval.ms = 300000
log.retention.hours = 168
log.retention.minutes = null
log.retention.ms = null
log.roll.hours = 168
log.roll.jitter.hours = 0
log.roll.jitter.ms = null
log.roll.ms = null
log.segment.bytes = 1073741824
log.segment.delete.delay.ms = 60000
max.connections.per.ip = 2147483647
max.connections.per.ip.overrides =
max.incremental.fetch.session.cache.slots = 1000
message.max.bytes = 1000012
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
min.insync.replicas = 1
num.io.threads = 8
num.network.threads = 3
num.partitions = 1
num.recovery.threads.per.data.dir = 1
num.replica.alter.log.dirs.threads = null
num.replica.fetchers = 1
offset.metadata.max.bytes = 4096
offsets.commit.required.acks = -1
offsets.commit.timeout.ms = 5000
offsets.load.buffer.size = 5242880
offsets.retention.check.interval.ms = 600000
offsets.retention.minutes = 10080
offsets.topic.compression.codec = 0
offsets.topic.num.partitions = 50
offsets.topic.replication.factor = 1
offsets.topic.segment.bytes = 104857600
password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
password.encoder.iterations = 4096
password.encoder.key.length = 128
password.encoder.keyfactory.algorithm = null
password.encoder.old.secret = null
password.encoder.secret = null
port = 9092
principal.builder.class = null
producer.purgatory.purge.interval.requests = 1000
queued.max.request.bytes = -1
queued.max.requests = 500
quota.consumer.default = 9223372036854775807
quota.producer.default = 9223372036854775807
quota.window.num = 11
quota.window.size.seconds = 1
replica.fetch.backoff.ms = 1000
replica.fetch.max.bytes = 1048576
replica.fetch.min.bytes = 1
replica.fetch.response.max.bytes = 10485760
replica.fetch.wait.max.ms = 500
replica.high.watermark.checkpoint.interval.ms = 5000
replica.lag.time.max.ms = 10000
replica.socket.receive.buffer.bytes = 65536
replica.socket.timeout.ms = 30000
replication.quota.window.num = 11
replication.quota.window.size.seconds = 1
request.timeout.ms = 30000
reserved.broker.max.id = 1000
sasl.client.callback.handler.class = null
sasl.enabled.mechanisms = [GSSAPI]
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.principal.to.local.rules = [DEFAULT]
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism.inter.broker.protocol = GSSAPI
sasl.server.callback.handler.class = null
security.inter.broker.protocol = PLAINTEXT
socket.receive.buffer.bytes = 102400
socket.request.max.bytes = 104857600
socket.send.buffer.bytes = 102400
ssl.cipher.suites = []
ssl.client.auth = none
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = [hidden]
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = /tmp/ssl/1/server.keystore.jks
ssl.keystore.password = [hidden]
ssl.keystore.type = JKS
ssl.principal.mapping.rules = [DEFAULT]
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = /tmp/ssl/1/server.truststore.jks
ssl.truststore.password = [hidden]
ssl.truststore.type = JKS
transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
transaction.max.timeout.ms = 900000
transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
transaction.state.log.load.buffer.size = 5242880
transaction.state.log.min.isr = 1
transaction.state.log.num.partitions = 50
transaction.state.log.replication.factor = 1
transaction.state.log.segment.bytes = 104857600
transactional.id.expiration.ms = 604800000
unclean.leader.election.enable = false
zookeeper.connect = zookeeper:2181
zookeeper.connection.timeout.ms = 6000
zookeeper.max.in.flight.requests = 10
zookeeper.session.timeout.ms = 6000
zookeeper.set.acl = false
zookeeper.sync.time.ms = 2000
server log:
[2019-04-29 13:12:33,935] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
[2019-04-29 13:12:33,975] INFO [SocketServer brokerId=1001] Created data-plane acceptor and processors for endpoint : EndPoint(null,9092,ListenerName(PLAINTEXT),PLAINTEXT) (kafka.network.SocketServer)
[2019-04-29 13:12:33,975] INFO Awaiting socket connections on 0.0.0.0:9093. (kafka.network.Acceptor)
[2019-04-29 13:12:34,117] INFO [SocketServer brokerId=1001] Created data-plane acceptor and processors for endpoint : EndPoint(null,9093,ListenerName(SSL),SSL) (kafka.network.SocketServer)
[2019-04-29 13:12:34,122] INFO [SocketServer brokerId=1001] Started 2 acceptor threads for data-plane (kafka.network.SocketServer)
[2019-04-29 13:12:34,160] INFO [ExpirationReaper-1001-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-04-29 13:12:34,162] INFO [ExpirationReaper-1001-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-04-29 13:12:34,163] INFO [ExpirationReaper-1001-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-04-29 13:12:34,164] INFO [ExpirationReaper-1001-ElectPreferredLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-04-29 13:12:34,180] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
[2019-04-29 13:12:34,264] INFO Creating /brokers/ids/1001 (is it secure? false) (kafka.zk.KafkaZkClient)
/opt/kafka/client-ssl.properties:
security.protocol=SSL
ssl.truststore.location=/tmp/ssl/1/kafka.client.truststore.jks
ssl.truststore.password=test1234
I run the following:
/opt/kafka/bin/kafka-console-producer.sh --broker-list localhost:9093 --topic sample-sink-data --producer.config /opt/kafka/client-ssl.properties
and see this in the kafka server log:
[2019-04-29 13:28:13,654] INFO [SocketServer brokerId=1001] Failed authentication with /127.0.0.1 (SSL handshake failed) (org.apache.kafka.common.network.Selector)
What am I missing?
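Not a full answer, but two checks that usually narrow down a broker-side "SSL handshake failed": look at what the broker actually presents on port 9093, and make sure the client truststore really exists at the configured path (note the setup script above creates client.truststore.jks, while client-ssl.properties points at kafka.client.truststore.jks, so the file may need renaming or the path adjusting). A sketch, run from inside the Kafka container, with the password taken from client-ssl.properties:
# show the certificate chain the broker presents on the SSL listener
openssl s_client -connect localhost:9093 </dev/null
# confirm the truststore the client uses exists and contains the CA certificate
keytool -list -v -keystore /tmp/ssl/1/kafka.client.truststore.jks -storepass test1234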

High RAM Consumption with MariaDB

Hi people, I would like to ask a question.
I have replication set up between a laptop and a server.
On the server I have no problems, but on the laptop...
Over time the RAM usage keeps increasing until it reaches the limit, and when it reaches the limit the MariaDB service is restarted.
What is the reason for the increase in RAM usage?
Server version: 10.1.32-MariaDB MariaDB Server
Excuse me, my English is not very good.
socket = /run/mysqld/mysqld.sock
skip-external-locking
key_buffer_size = 16M
max_allowed_packet = 1M
table_open_cache = 64
sort_buffer_size = 512K
net_buffer_length = 8K
read_buffer_size = 256K
read_rnd_buffer_size = 512K
myisam_sort_buffer_size = 8M
slave-net-timeout = 30
binlog_ignore_db = mysql
binlog_ignore_db = zoom
binlog_ignore_db = performance_schema
binlog_ignore_db = information_schema
binlog_do_db = TK09
replicate_do_db = TK09
binlog_ignore_db = TK09_user
log-bin=binlog
log-slave-updates=1
binlog_format=mixed
innodb_buffer_pool_size = 2G
innodb_buffer_pool_instances = 8
innodb_log_buffer_size = 8M
query_cache_size = 40M
[mysqldump]
quick
max_allowed_packet = 16M
[mysql]
no-auto-rehash
[myisamchk]
key_buffer_size = 20M
sort_buffer_size = 20M
read_buffer = 2M
write_buffer = 2M
[mysqlhotcopy]
interactive-timeout
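For what it's worth, a rough worst-case memory estimate can be worked out from the settings above (a sketch only; max_connections is not in the pasted config, so the MariaDB default of 151 is assumed):
global buffers  ~ innodb_buffer_pool_size (2G) + key_buffer_size (16M) + query_cache_size (40M) + innodb_log_buffer_size (8M) ~ 2.06 GB
per connection  ~ sort_buffer_size (512K) + read_buffer_size (256K) + read_rnd_buffer_size (512K) + net_buffer_length (8K) ~ 1.3 MB
worst case      ~ 2.06 GB + 151 x 1.3 MB ~ 2.3 GB
If the laptop has little headroom above that, the 2G buffer pool alone can push memory use toward the limit over time.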

Configuration for Flume to write large files to HDFS

My config file :
agent1.sources = source1
agent1.channels = channel1
agent1.sinks = sink1
agent1.sources.source1.type = spooldir
agent1.sources.source1.spoolDir = /var/SpoolDir
agent1.sinks.sink1.type = hdfs
agent1.sinks.sink1.hdfs.path = hdfs://templatecentosbase.oneglobe.com:8020/user/Banking4
agent1.sinks.sink1.hdfs.filePrefix = Banking_Details
agent1.sinks.sink1.hdfs.fileSuffix = .avro
agent1.sinks.sink1.hdfs.serializer = avro_event
agent1.sinks.sink1.hdfs.serializer = DataStream
#agent1.sinks.sink1.hdfs.callTimeout = 20000
agent1.sinks.sink1.hdfs.rollCount = 0
agent1.sinks.sink1.hdfs.rollsize = 100000000
#agent1.sinks.sink1.hdfs.txnEventMax = 40000
agent1.sinks.sink1.hdfs.rollInterval = 0
#agent1.sinks.sink1.serializer.codeC =
agent1.channels.channel1.type = memory
agent1.channels.channel1.capacity = 100000000
agent1.channels.channel1.transactionCapacity = 100000000
agent1.sources.source1.channels = channel1
agent1.sinks.sink1.channel = channel1
Can anyone help me get this resolved? The source file is nearly 400 MB, but it's being written to HDFS in bits and pieces (for example, 1.5 MB to 2 MB files).
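Not sure this explains everything, but two things in the sink section stand out: Flume's HDFS sink properties are case-sensitive, so hdfs.rollsize is ignored and size-based rolling falls back to its small default, and hdfs.serializer is assigned twice (DataStream is normally a value of hdfs.fileType, while the event serializer is set with the sink-level serializer property). A sketch of how the sink section might look if the goal is roughly 100 MB files:
agent1.sinks.sink1.type = hdfs
agent1.sinks.sink1.hdfs.path = hdfs://templatecentosbase.oneglobe.com:8020/user/Banking4
agent1.sinks.sink1.hdfs.filePrefix = Banking_Details
agent1.sinks.sink1.hdfs.fileSuffix = .avro
agent1.sinks.sink1.hdfs.fileType = DataStream
agent1.sinks.sink1.serializer = avro_event
# roll only on size (~100 MB), never on event count or time
agent1.sinks.sink1.hdfs.rollCount = 0
agent1.sinks.sink1.hdfs.rollInterval = 0
agent1.sinks.sink1.hdfs.rollSize = 100000000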

Kafka producer throws 'TimeoutException: Batch Expired' exception

I'm testing the Spring Cloud Stream app for Twitter.
I started the Docker container with the following environment properties related to Kafka:
KAFKA_ADVERTISED_HOST_NAME=<ip>
advertised.host.name=<ip>:9092
spring.cloud.stream.bindings.output.destination=twitter-source-test
spring.cloud.stream.kafka.binder.brokers=<ip>:9092
spring.cloud.stream.kafka.binder.zkNodes=<ip>:2181
My Kafka ProducerConfig values are as follows:
2017-01-12 14:47:09.979 INFO 1 --- [itterSource-1-1] o.a.k.clients.producer.ProducerConfig : ProducerConfig values:
compression.type = none
metric.reporters = []
metadata.max.age.ms = 300000
metadata.fetch.timeout.ms = 60000
reconnect.backoff.ms = 50
sasl.kerberos.ticket.renew.window.factor = 0.8
bootstrap.servers = [192.168.127.188:9092]
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
buffer.memory = 33554432
timeout.ms = 30000
key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.keystore.type = JKS
ssl.trustmanager.algorithm = PKIX
block.on.buffer.full = false
ssl.key.password = null
max.block.ms = 60000
sasl.kerberos.min.time.before.relogin = 60000
connections.max.idle.ms = 540000
ssl.truststore.password = null
max.in.flight.requests.per.connection = 5
metrics.num.samples = 2
client.id =
ssl.endpoint.identification.algorithm = null
ssl.protocol = TLS
request.timeout.ms = 30000
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
acks = 1
batch.size = 16384
ssl.keystore.location = null
receive.buffer.bytes = 32768
ssl.cipher.suites = null
ssl.truststore.type = JKS
security.protocol = PLAINTEXT
retries = 0
max.request.size = 1048576
value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
ssl.truststore.location = null
ssl.keystore.password = null
ssl.keymanager.algorithm = SunX509
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
send.buffer.bytes = 131072
linger.ms = 0
2017-01-12 14:47:09.985 INFO 1 --- [itterSource-1-1] o.a.kafka.common.utils.AppInfoParser : Kafka version : 0.9.0.1
But the producer continuously throws the following exception:
2017-01-12 14:47:42.196 ERROR 1 --- [ad | producer-3] o.s.k.support.LoggingProducerListener : Exception thrown when sending a message with key='null' and payload='{-1, 1, 11, 99, 111, 110, 116, 101, 110, 116, 84, 121, 112, 101, 0, 0, 0, 12, 34, 116, 101, 120, 116...' to topic twitter-source-test:
org.apache.kafka.common.errors.TimeoutException: Batch Expired
I can telnet from my Docker container to the broker at 192.168.127.188 on ports 9092 and 2181. Also, my Kafka server is not a Docker container.
I saw solutions suggesting adding 'advertised.host.name', but that didn't work; or is the way I've set the environment properties above correct?
Any help?
Sharing the fix.
Setting listeners in server.properties resolves the problem, e.g. listeners = PLAINTEXT://your.host.name:9092
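In other words, something along these lines on the (non-Docker) broker host, a sketch using the broker address from the question:
# server.properties on the Kafka host; the address must be reachable from inside the Docker container
listeners=PLAINTEXT://192.168.127.188:9092
A common variation is to bind to all interfaces with listeners=PLAINTEXT://0.0.0.0:9092 and return the routable address to clients via advertised.listeners=PLAINTEXT://192.168.127.188:9092.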

Everything in one line [duplicate]

This question already has answers here:
Lua indentation code in Lua [closed]
(6 answers)
Closed 7 years ago.
My example is below. I found the code on the internet, but it's a little difficult to understand because everything is on one line. Is there a program or website to decode it? I mean, to add the spaces and line breaks to make it more readable?
require("libs.Utils")require("libs.Res") require("libs.SideMessage") local adae = false local ddasf = true local ddasfcfa = false local ggsa = {} local fasca = {} local vvsa = {} local ggaw = {} local fefsg = nil local hhasf = true local gggqas = 4000 local bwe = false local bwefqa = {} print(math.floor(client.screenRatio*100)) --[[Config. --If u have some problem with positioning u can add screen ration(64 line) and create config for yourself.]] if math.floor(client.screenRatio*100) == 177 then testX = 1600 testY = 900 tpanelHeroSize = 55 tpanelHeroDown = 25.714 tpanelHeroSS = 20 txxB = 2.535 txxG = 3.485 elseif math.floor(client.screenRatio*100) == 166 then testX = 1280 testY = 768 tpanelHeroSize = 47.1 tpanelHeroDown = 25.714 tpanelHeroSS = 18 txxB = 2.59 txxG = 3.66 elseif math.floor(client.screenRatio*100) == 160 then testX = 1280 testY = 800 tpanelHeroSize = 48.5 tpanelHeroDown = 25.714 tpanelHeroSS = 20 txxB = 2.579 txxG = 3.74 elseif math.floor(client.screenRatio*100) == 133 then testX = 1024 testY = 768 tpanelHeroSize = 47 tpanelHeroDown = 25.714 tpanelHeroSS = 18 txxB = 2.78 txxG = 4.63 elseif math.floor(client.screenRatio*100) == 125 then testX = 1280 testY = 1024 tpanelHeroSize = 58 tpanelHeroDown = 25.714 tpanelHeroSS = 23 txxB = 2.747 txxG = 4.54 else testX = 1600 testY = 900 tpanelHeroSize = 55 tpanelHeroDown = 25.714
I don't know if such a program exists, but you can paste it into a text editor that supports Lua syntax highlighting. Then it's mostly a case of hitting Enter where the Lua keywords are (things like local and require, for instance). Lua is nice in that it allows you to put everything on one line, but I can understand how hard that makes the code to read and follow.
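For illustration, here is the opening of the pasted code with line breaks and indentation added by hand (the remaining elseif branches continue in exactly the same pattern):
require("libs.Utils")
require("libs.Res")
require("libs.SideMessage")

local adae = false
local ddasf = true
local ddasfcfa = false
local ggsa = {}
local fasca = {}
local vvsa = {}
local ggaw = {}
local fefsg = nil
local hhasf = true
local gggqas = 4000
local bwe = false
local bwefqa = {}

print(math.floor(client.screenRatio*100))

--[[Config.
--If u have some problem with positioning u can add screen ration(64 line) and create config for yourself.]]

if math.floor(client.screenRatio*100) == 177 then
    testX = 1600
    testY = 900
    tpanelHeroSize = 55
    tpanelHeroDown = 25.714
    tpanelHeroSS = 20
    txxB = 2.535
    txxG = 3.485
elseif math.floor(client.screenRatio*100) == 166 then
    -- ... the 166 / 160 / 133 / 125 cases and the final else follow the same pattern
end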
