Hello everyone, I would like to ask a question.
I have replication set up between a laptop and a server.
On the server I have no problems, but on the laptop, over time the RAM usage keeps increasing until it reaches the limit; when it does, the MariaDB service is restarted.
What is the reason for the increase in RAM usage?
Server version: 10.1.32-MariaDB MariaDB Server
Excuse me, my English is not very good.
socket = /run/mysqld/mysqld.sock
skip-external-locking
key_buffer_size = 16M
max_allowed_packet = 1M
table_open_cache = 64
sort_buffer_size = 512K
net_buffer_length = 8K
read_buffer_size = 256K
read_rnd_buffer_size = 512K
myisam_sort_buffer_size = 8M
slave-net-timeout = 30
binlog_ignore_db = mysql
binlog_ignore_db = zoom
binlog_ignore_db = performance_schema
binlog_ignore_db = information_schema
binlog_do_db = TK09
replicate_do_db = TK09
binlog_ignore_db = TK09_user
log-bin=binlog
log-slave-updates=1
binlog_format=mixed
innodb_buffer_pool_size = 2G
innodb_buffer_pool_instances = 8
innodb_log_buffer_size = 8M
query_cache_size = 40M
[mysqldump]
quick
max_allowed_packet = 16M
[mysql]
no-auto-rehash
[myisamchk]
key_buffer_size = 20M
sort_buffer_size = 20M
read_buffer = 2M
write_buffer = 2M
[mysqlhotcopy]
interactive-timeout
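As a first check on the laptop, it can help to compare the configured global buffers and per-connection buffers against the number of connections actually used. A minimal sketch (assuming the mysql client is available locally; join_buffer_size, tmp_table_size and max_connections are not set in the config above, so defaults apply):
# Rough memory audit (sketch): global buffers plus per-connection buffers x connections.
mysql -u root -p -e "
  SHOW GLOBAL VARIABLES WHERE Variable_name IN
    ('innodb_buffer_pool_size','key_buffer_size','query_cache_size',
     'sort_buffer_size','read_buffer_size','read_rnd_buffer_size',
     'join_buffer_size','tmp_table_size','max_connections');
  SHOW GLOBAL STATUS LIKE 'Max_used_connections';"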
I am trying to echo some string to /dev/ttyS0 and the rx and tx stats in /proc/tty/driver/serial remain 0.
If I change the irq of ttyS0 to 0 and then echo some string to ttyS0, the stats are changing according to the length of the string.
Would anyone be able to help me with this?
Output of setserial -g /dev/ttyS[0123]:
/dev/ttyS0, UART: 16550A, Port: 0x03f8, IRQ: 11
/dev/ttyS1, UART: 16550A, Port: 0x02f8, IRQ: 10
/dev/ttyS2, UART: unknown, Port: 0x03e8, IRQ: 4
/dev/ttyS3, UART: unknown, Port: 0x02e8, IRQ: 3
Output of stty -F /dev/ttyS0 -a:
speed 115200 baud; rows 24; columns 80; line = 0;
intr = ^C; quit = ^\; erase = ^?; kill = ^U; eof = ^D; eol = <undef>; eol2 = <undef>; swtch = <undef>; start = ^Q; stop = ^S;
susp = ^Z; rprnt = ^R; werase = ^W; lnext = ^V; discard = ^O; min = 1; time = 0;
-parenb -parodd -cmspar cs8 hupcl -cstopb cread clocal -crtscts
-ignbrk -brkint -ignpar -parmrk -inpck -istrip -inlcr -igncr -icrnl -ixon -ixoff -iuclc -ixany -imaxbel iutf8
opost -olcuc -ocrnl onlcr -onocr -onlret -ofill -ofdel nl0 cr0 tab0 bs0 vt0 ff0
-isig -icanon -iexten -echo -echoe -echok -echonl -noflsh -xcase -tostop -echoprt -echoctl -echoke -flusho -extproc
Output of cat /proc/interrupts for IRQ 11 (16 CPUs):
11: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 IO-APIC 11-edge ttyS0
Output of cat /proc/tty/driver/serial:
0: uart:16550A port:000003F8 irq:11 tx:228 rx:0 RTS|DTR|DSR|CD|RI
1: uart:16550A port:000002F8 irq:10 tx:0 rx:0 CTS|DSR|CD|RI
2: uart:unknown port:000003E8 irq:4
3: uart:unknown port:000002E8 irq:3
As I said, I have tried changing the IRQ to 0 and it worked. But what is wrong with IRQ 11? There is no device registered for IRQ 11 in /proc/interrupts except ttyS0.
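For reference, a minimal sequence to watch the counters while transmitting (a sketch; nothing needs to answer on the line, so rx staying at 0 is expected):
# Transmit a short string and watch the tx counter and the IRQ count (sketch).
stty -F /dev/ttyS0 115200 raw -echo   # match the settings shown above
echo "hello" > /dev/ttyS0             # transmit a few bytes
cat /proc/tty/driver/serial           # tx: on line 0 should grow by the bytes written
grep ttyS0 /proc/interrupts           # interrupt count for the IRQ assigned to ttyS0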
I am working on a Hyperledger blockchain. I am running a 4-organization network and have developed a docker-compose.yml file. When I start docker-compose, all containers start, but the orderer.example.com container does not, i.e. the container exits with code 2.
Below are the orderer.example.com docker-compose configuration details and the logs of the orderer.example.com container.
docker-compose configuration details
###########################################
# Orderer Docker Container Config
###########################################
orderer.example.com:
  container_name: orderer.example.com
  image: hyperledger/fabric-orderer
  environment:
    - ORDERER_GENERAL_LOGLEVEL=debug
    - FABRIC_LOGGING_SPEC=info
    - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
    - ORDERER_GENERAL_GENESISMETHOD=file
    - OERDERER_GENERAL_GENESISFILE=/etc/hyperledger/configtx/genesis.block
    - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
    - ORDERER_GENERAL_LOCALMSPDIR=/etc/hyperledger/msp/orderer/msp
  working_dir: /opt/gopath/src/github.com/hyperledger/fabric/orderer
  command: orderer
  ports:
    - 7050:7050
  volumes:
    - ./channel-artifacts/:/etc/hyperledger/configtx
    - ./crypto-config/ordererOrganizations/example.com/orderers/OrdererPeer.example.com/:/etc/hyperledger/msp/orderer
    - ./crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/:/etc/hyperledger/msp/peerOrg1/peer0
    - ./crypto-config/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/:/etc/hyperledger/msp/peerOrg2/peer0
    - ./crypto-config/peerOrganizations/org3.example.com/peers/peer0.org3.example.com/:/etc/hyperledger/msp/peerOrg3/peer0
    - ./crypto-config/peerOrganizations/org4.example.com/peers/peer0.org4.example.com/:/etc/hyperledger/msp/peerOrg4/peer0
  networks:
    - basic
Error logs of orderer.example.com container
2019-10-18 08:08:51.429 UTC [localconfig] completeInitialization -> INFO 001 Kafka.Version unset, setting to 0.10.2.0
2019-10-18 08:08:51.556 UTC [orderer.common.server] prettyPrintStruct -> INFO 002 Orderer config values:
General.LedgerType = "file"
General.ListenAddress = "0.0.0.0"
General.ListenPort = 7050
General.TLS.Enabled = false
General.TLS.PrivateKey = "/etc/hyperledger/fabric/tls/server.key"
General.TLS.Certificate = "/etc/hyperledger/fabric/tls/server.crt"
General.TLS.RootCAs = [/etc/hyperledger/fabric/tls/ca.crt]
General.TLS.ClientAuthRequired = false
General.TLS.ClientRootCAs = []
General.Cluster.ListenAddress = ""
General.Cluster.ListenPort = 0
General.Cluster.ServerCertificate = ""
General.Cluster.ServerPrivateKey = ""
General.Cluster.ClientCertificate = ""
General.Cluster.ClientPrivateKey = ""
General.Cluster.RootCAs = []
General.Cluster.DialTimeout = 5s
General.Cluster.RPCTimeout = 7s
General.Cluster.ReplicationBufferSize = 20971520
General.Cluster.ReplicationPullTimeout = 5s
General.Cluster.ReplicationRetryTimeout = 5s
General.Cluster.ReplicationBackgroundRefreshInterval = 5m0s
General.Cluster.ReplicationMaxRetries = 12
General.Cluster.SendBufferSize = 10
General.Cluster.CertExpirationWarningThreshold = 168h0m0s
General.Cluster.TLSHandshakeTimeShift = 0s
General.Keepalive.ServerMinInterval = 1m0s
General.Keepalive.ServerInterval = 2h0m0s
General.Keepalive.ServerTimeout = 20s
General.ConnectionTimeout = 0s
General.GenesisMethod = "file"
General.GenesisProfile = "SampleInsecureSolo"
General.SystemChannel = "test-system-channel-name"
General.GenesisFile = "/etc/hyperledger/fabric/genesisblock"
General.Profile.Enabled = false
General.Profile.Address = "0.0.0.0:6060"
General.LocalMSPDir = "/etc/hyperledger/msp/orderer/msp"
General.LocalMSPID = "OrdererMSP"
General.BCCSP.ProviderName = "SW"
General.BCCSP.SwOpts.SecLevel = 256
General.BCCSP.SwOpts.HashFamily = "SHA2"
General.BCCSP.SwOpts.Ephemeral = false
General.BCCSP.SwOpts.FileKeystore.KeyStorePath = "/etc/hyperledger/msp/orderer/msp/keystore"
General.BCCSP.SwOpts.DummyKeystore =
General.BCCSP.SwOpts.InmemKeystore =
General.BCCSP.PluginOpts =
General.Authentication.TimeWindow = 15m0s
General.Authentication.NoExpirationChecks = false
FileLedger.Location = "/var/hyperledger/production/orderer"
FileLedger.Prefix = "hyperledger-fabric-ordererledger"
RAMLedger.HistorySize = 1000
Kafka.Retry.ShortInterval = 5s
Kafka.Retry.ShortTotal = 10m0s
Kafka.Retry.LongInterval = 5m0s
Kafka.Retry.LongTotal = 12h0m0s
Kafka.Retry.NetworkTimeouts.DialTimeout = 10s
Kafka.Retry.NetworkTimeouts.ReadTimeout = 10s
Kafka.Retry.NetworkTimeouts.WriteTimeout = 10s
Kafka.Retry.Metadata.RetryMax = 3
Kafka.Retry.Metadata.RetryBackoff = 250ms
Kafka.Retry.Producer.RetryMax = 3
Kafka.Retry.Producer.RetryBackoff = 100ms
Kafka.Retry.Consumer.RetryBackoff = 2s
Kafka.Verbose = false
Kafka.Version = 0.10.2.0
Kafka.TLS.Enabled = false
Kafka.TLS.PrivateKey = ""
Kafka.TLS.Certificate = ""
Kafka.TLS.RootCAs = []
Kafka.TLS.ClientAuthRequired = false
Kafka.TLS.ClientRootCAs = []
Kafka.SASLPlain.Enabled = false
Kafka.SASLPlain.User = ""
Kafka.SASLPlain.Password = ""
Kafka.Topic.ReplicationFactor = 3
Debug.BroadcastTraceDir = ""
Debug.DeliverTraceDir = ""
Consensus = map[WALDir:/var/hyperledger/production/orderer/etcdraft/wal SnapDir:/var/hyperledger/production/orderer/etcdraft/snapshot]
Operations.ListenAddress = "127.0.0.1:8443"
Operations.TLS.Enabled = false
Operations.TLS.PrivateKey = ""
Operations.TLS.Certificate = ""
Operations.TLS.RootCAs = []
Operations.TLS.ClientAuthRequired = false
Operations.TLS.ClientRootCAs = []
Metrics.Provider = "disabled"
Metrics.Statsd.Network = "udp"
Metrics.Statsd.Address = "127.0.0.1:8125"
Metrics.Statsd.WriteInterval = 30s
Metrics.Statsd.Prefix = ""
panic: unable to bootstrap orderer. Error reading genesis block file: open /etc/hyperledger/fabric/genesisblock: no such file or directory
goroutine 1 [running]:
github.com/hyperledger/fabric/orderer/common/bootstrap/file.(*fileBootstrapper).GenesisBlock(0xc0002bb2b0, 0xc0002bb2b0)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/bootstrap/file/bootstrap.go:39 +0x1d0
github.com/hyperledger/fabric/orderer/common/server.extractBootstrapBlock(0xc000444900, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/server/main.go:532 +0x1bd
github.com/hyperledger/fabric/orderer/common/server.Start(0x1018e03, 0x5, 0xc000444900)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/server/main.go:96 +0x43
github.com/hyperledger/fabric/orderer/common/server.Main()
/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/server/main.go:91 +0x1ce
main.main()
/opt/gopath/src/github.com/hyperledger/fabric/orderer/main.go:15 +0x20
When I got this error, I made the following changes to the code:
environment:
  - OERDERER_GENERAL_GENESISFILE=/etc/hyperledger/fabric/genesis.block
volumes:
  - ./channel-artifacts/:/etc/hyperledger/fabric
But then I got the same error; below are the error logs:
2019-10-18 07:49:44.841 UTC [orderer.common.server] Main -> ERRO 001 failed to parse config: Error reading configuration: Unsupported Config Type ""
Please help me with this issue, and please ignore any indentation errors.
It's weird. According to your YAML manifest, the supplied path is:
- OERDERER_GENERAL_GENESISFILE=/etc/hyperledger/configtx/genesis.block
but the error log states:
Error reading genesis block file: open /etc/hyperledger/fabric/genesisblock: no such file or directory
You are missing something; please upload the configuration to GitHub so that I can have a look.
In your volumes, you have to specify the path to your genesis block, and you have to give the same path in the genesis-file environment variable:
environment:
  - OERDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.pb
volumes:
  - ./channel-artifacts/:/etc/hyperledger/fabric
  - ./orderer/genesis_block.pb:/var/hyperledger/orderer/orderer.genesis.pb
Please check the genesis block path; Docker is not able to locate the file.
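One way to double-check what path the orderer actually sees (a sketch; it only lists the host-side channel-artifacts directory through a throwaway container, so the alpine image and the /data mount point are illustrative):
# Verify that genesis.block really exists in the directory mounted into the orderer (sketch).
docker-compose config | grep -A 3 configtx            # show the rendered volume mapping
docker run --rm -v "$(pwd)/channel-artifacts:/data" alpine ls -l /data
# The file name listed here must match ORDERER_GENERAL_GENESISFILE exactly.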
I was able to solve this issue by regenerating the cryptography files in the "channel-artifacts" and "crypto-config" folders:
1) Regenerating the cryptography files by running configtxgen again to create the artifacts.
2) Renaming the environment variable: OERDERER_GENERAL_GENESISFILE >> ORDERER_GENERAL_GENESISFILE
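For completeness, the regeneration step looks roughly like this (a sketch; the profile and channel names are placeholders and must match your configtx.yaml):
# Regenerate crypto material and the genesis block (sketch; profile names are placeholders).
cryptogen generate --config=./crypto-config.yaml --output=./crypto-config
configtxgen -profile FourOrgsOrdererGenesis -outputBlock ./channel-artifacts/genesis.block
configtxgen -profile FourOrgsChannel -outputCreateChannelTx ./channel-artifacts/channel.tx -channelID mychannel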
I am trying to run Kafka in Docker. It works with plaintext but does not work with SSL.
I performed SSL setup according to this documentation:
#!/bin/bash
#Step 1
keytool -keystore server.keystore.jks -alias localhost -validity 365 -genkey
#Step 2
openssl req -new -x509 -keyout ca-key -out ca-cert -days 365
keytool -keystore server.truststore.jks -alias CARoot -import -file ca-cert
keytool -keystore client.truststore.jks -alias CARoot -import -file ca-cert
#Step 3
keytool -keystore server.keystore.jks -alias localhost -certreq -file cert-file
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days 365 -CAcreateserial -passin pass:test1234
keytool -keystore server.keystore.jks -alias CARoot -import -file ca-cert
keytool -keystore server.keystore.jks -alias localhost -import -file cert-signed
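Before wiring these files into Kafka, it can be worth a quick sanity check that the signed certificate and the CA actually ended up in the stores (a sketch; the passwords are whatever you entered when prompted, test1234 is only a placeholder):
# List keystore/truststore contents to confirm the certificate chain (sketch).
keytool -list -v -keystore server.keystore.jks -storepass test1234 | grep -E 'Alias name|Owner|Issuer'
keytool -list -v -keystore client.truststore.jks -storepass test1234 | grep -E 'Alias name|Owner'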
Then I copied all the SSL files into the /tmp/ssl/1/ directory.
Here is my docker-compose:
version: '2'
volumes:
  data-volume: {}
services:
  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092:9092"
    environment:
      - KAFKA_ADVERTISED_HOST_NAME=127.0.0.1
      - KAFKA_ADVERTISED_PORT=9092
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
    volumes:
      - "/tmp/ssl/:/tmp/ssl/"
    depends_on:
      - zookeeper
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
    environment:
      - KAFKA_ADVERTISED_HOST_NAME=zookeeper
server.properties
advertised.host.name = 127.0.0.1
advertised.listeners = null
advertised.port = 9092
alter.config.policy.class.name = null
alter.log.dirs.replication.quota.window.num = 11
alter.log.dirs.replication.quota.window.size.seconds = 1
authorizer.class.name =
auto.create.topics.enable = true
auto.leader.rebalance.enable = true
background.threads = 10
broker.id = -1
broker.id.generation.enable = true
broker.rack = null
client.quota.callback.class = null
compression.type = producer
connection.failed.authentication.delay.ms = 100
connections.max.idle.ms = 600000
connections.max.reauth.ms = 0
control.plane.listener.name = null
controlled.shutdown.enable = true
controlled.shutdown.max.retries = 3
controlled.shutdown.retry.backoff.ms = 5000
controller.socket.timeout.ms = 30000
create.topic.policy.class.name = null
default.replication.factor = 1
delegation.token.expiry.check.interval.ms = 3600000
delegation.token.expiry.time.ms = 86400000
delegation.token.master.key = null
delegation.token.max.lifetime.ms = 604800000
delete.records.purgatory.purge.interval.requests = 1
delete.topic.enable = true
fetch.purgatory.purge.interval.requests = 1000
group.initial.rebalance.delay.ms = 0
group.max.session.timeout.ms = 300000
group.max.size = 2147483647
group.min.session.timeout.ms = 6000
host.name =
inter.broker.listener.name = null
inter.broker.protocol.version = 2.2-IV1
kafka.metrics.polling.interval.secs = 10
kafka.metrics.reporters = []
leader.imbalance.check.interval.seconds = 300
leader.imbalance.per.broker.percentage = 10
listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
listeners = PLAINTEXT://:9092,SSL://:9093
log.cleaner.backoff.ms = 15000
log.cleaner.dedupe.buffer.size = 134217728
log.cleaner.delete.retention.ms = 86400000
log.cleaner.enable = true
log.cleaner.io.buffer.load.factor = 0.9
log.cleaner.io.buffer.size = 524288
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
log.cleaner.min.cleanable.ratio = 0.5
log.cleaner.min.compaction.lag.ms = 0
log.cleaner.threads = 1
log.cleanup.policy = [delete]
log.dir = /tmp/kafka-logs
log.dirs = /kafka/kafka-logs-935db2aeed2f
log.flush.interval.messages = 9223372036854775807
log.flush.interval.ms = null
log.flush.offset.checkpoint.interval.ms = 60000
log.flush.scheduler.interval.ms = 9223372036854775807
log.flush.start.offset.checkpoint.interval.ms = 60000
log.index.interval.bytes = 4096
log.index.size.max.bytes = 10485760
log.message.downconversion.enable = true
log.message.format.version = 2.2-IV1
log.message.timestamp.difference.max.ms = 9223372036854775807
log.message.timestamp.type = CreateTime
log.preallocate = false
log.retention.bytes = -1
log.retention.check.interval.ms = 300000
log.retention.hours = 168
log.retention.minutes = null
log.retention.ms = null
log.roll.hours = 168
log.roll.jitter.hours = 0
log.roll.jitter.ms = null
log.roll.ms = null
log.segment.bytes = 1073741824
log.segment.delete.delay.ms = 60000
max.connections.per.ip = 2147483647
max.connections.per.ip.overrides =
max.incremental.fetch.session.cache.slots = 1000
message.max.bytes = 1000012
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
min.insync.replicas = 1
num.io.threads = 8
num.network.threads = 3
num.partitions = 1
num.recovery.threads.per.data.dir = 1
num.replica.alter.log.dirs.threads = null
num.replica.fetchers = 1
offset.metadata.max.bytes = 4096
offsets.commit.required.acks = -1
offsets.commit.timeout.ms = 5000
offsets.load.buffer.size = 5242880
offsets.retention.check.interval.ms = 600000
offsets.retention.minutes = 10080
offsets.topic.compression.codec = 0
offsets.topic.num.partitions = 50
offsets.topic.replication.factor = 1
offsets.topic.segment.bytes = 104857600
password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
password.encoder.iterations = 4096
password.encoder.key.length = 128
password.encoder.keyfactory.algorithm = null
password.encoder.old.secret = null
password.encoder.secret = null
port = 9092
principal.builder.class = null
producer.purgatory.purge.interval.requests = 1000
queued.max.request.bytes = -1
queued.max.requests = 500
quota.consumer.default = 9223372036854775807
quota.producer.default = 9223372036854775807
quota.window.num = 11
quota.window.size.seconds = 1
replica.fetch.backoff.ms = 1000
replica.fetch.max.bytes = 1048576
replica.fetch.min.bytes = 1
replica.fetch.response.max.bytes = 10485760
replica.fetch.wait.max.ms = 500
replica.high.watermark.checkpoint.interval.ms = 5000
replica.lag.time.max.ms = 10000
replica.socket.receive.buffer.bytes = 65536
replica.socket.timeout.ms = 30000
replication.quota.window.num = 11
replication.quota.window.size.seconds = 1
request.timeout.ms = 30000
reserved.broker.max.id = 1000
sasl.client.callback.handler.class = null
sasl.enabled.mechanisms = [GSSAPI]
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.principal.to.local.rules = [DEFAULT]
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism.inter.broker.protocol = GSSAPI
sasl.server.callback.handler.class = null
security.inter.broker.protocol = PLAINTEXT
socket.receive.buffer.bytes = 102400
socket.request.max.bytes = 104857600
socket.send.buffer.bytes = 102400
ssl.cipher.suites = []
ssl.client.auth = none
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = [hidden]
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = /tmp/ssl/1/server.keystore.jks
ssl.keystore.password = [hidden]
ssl.keystore.type = JKS
ssl.principal.mapping.rules = [DEFAULT]
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = /tmp/ssl/1/server.truststore.jks
ssl.truststore.password = [hidden]
ssl.truststore.type = JKS
transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
transaction.max.timeout.ms = 900000
transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
transaction.state.log.load.buffer.size = 5242880
transaction.state.log.min.isr = 1
transaction.state.log.num.partitions = 50
transaction.state.log.replication.factor = 1
transaction.state.log.segment.bytes = 104857600
transactional.id.expiration.ms = 604800000
unclean.leader.election.enable = false
zookeeper.connect = zookeeper:2181
zookeeper.connection.timeout.ms = 6000
zookeeper.max.in.flight.requests = 10
zookeeper.session.timeout.ms = 6000
zookeeper.set.acl = false
zookeeper.sync.time.ms = 2000
server log:
[2019-04-29 13:12:33,935] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
[2019-04-29 13:12:33,975] INFO [SocketServer brokerId=1001] Created data-plane acceptor and processors for endpoint : EndPoint(null,9092,ListenerName(PLAINTEXT),PLAINTEXT) (kafka.network.SocketServer)
[2019-04-29 13:12:33,975] INFO Awaiting socket connections on 0.0.0.0:9093. (kafka.network.Acceptor)
[2019-04-29 13:12:34,117] INFO [SocketServer brokerId=1001] Created data-plane acceptor and processors for endpoint : EndPoint(null,9093,ListenerName(SSL),SSL) (kafka.network.SocketServer)
[2019-04-29 13:12:34,122] INFO [SocketServer brokerId=1001] Started 2 acceptor threads for data-plane (kafka.network.SocketServer)
[2019-04-29 13:12:34,160] INFO [ExpirationReaper-1001-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-04-29 13:12:34,162] INFO [ExpirationReaper-1001-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-04-29 13:12:34,163] INFO [ExpirationReaper-1001-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-04-29 13:12:34,164] INFO [ExpirationReaper-1001-ElectPreferredLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-04-29 13:12:34,180] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
[2019-04-29 13:12:34,264] INFO Creating /brokers/ids/1001 (is it secure? false) (kafka.zk.KafkaZkClient)
/opt/kafka/client-ssl.properties:
security.protocol=SSL
ssl.truststore.location=/tmp/ssl/1/kafka.client.truststore.jks
ssl.truststore.password=test1234
I run the following:
/opt/kafka/bin/kafka-console-producer.sh --broker-list localhost:9093 --topic sample-sink-data --producer.config /opt/kafka/client-ssl.properties
and see this in the kafka server log:
[2019-04-29 13:28:13,654] INFO [SocketServer brokerId=1001] Failed authentication with /127.0.0.1 (SSL handshake failed) (org.apache.kafka.common.network.Selector)
What am I missing?
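To narrow this down, I would look at the handshake from both sides, roughly like this (a debugging sketch, not a fix):
# Inspect the certificate the broker presents, then rerun the producer with JSSE handshake logging.
openssl s_client -connect localhost:9093 </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer
export KAFKA_OPTS="-Djavax.net.debug=ssl,handshake"
/opt/kafka/bin/kafka-console-producer.sh --broker-list localhost:9093 \
  --topic sample-sink-data --producer.config /opt/kafka/client-ssl.properties
It is also worth double-checking that the truststore file named in client-ssl.properties (kafka.client.truststore.jks) matches what the script above actually produced (client.truststore.jks).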
Good morning.
First of all, I've created a Docker swarm with 2 physical hosts and an overlay network. On the same host I've created 2 containers (the postgres and Ambari servers) and one with the Ambari agent, in which I'll install Kafka, ZooKeeper, Spark, etc. from Ambari. The goal is to install several containers across several hosts, but I'm trying it like this first, since I can't get it working.
The thing is, once everything is deployed with Ambari, I change the Kafka configuration, setting advertised.host.name to the physical host's IP and advertised.port to 9092, to bind it to the physical host's port 9092.
When trying it, I always get one of the following errors:
WARN Error while fetching metadata with correlation id 17 : {prueba2=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
when sending to the container's port 6667,
or
[2018-08-30 12:06:28,758] WARN Bootstrap broker 192.168.0.28:9092 disconnected (org.apache.kafka.clients.NetworkClient)
when sending to the physical host's port 9092.
Thanks for the help, and please ask for any further information needed to solve this problem.
EDIT 1:
I changed the configuration to the following properties, and now kafka-broker is not running. server.log shows the following trace:
root@host1:/usr/hdp/2.6.3.0-235/kafka# cat /var/log/kafka/server.log
[2018-09-04 09:17:46,964] INFO KafkaConfig values:
advertised.host.name = host1.ambari
advertised.listeners = INTERNO://host1.ambari:6667,EXTERNO://192.168.0.28:9092
advertised.port = 9092
authorizer.class.name =
auto.create.topics.enable = true
auto.leader.rebalance.enable = true
background.threads = 10
broker.id = -1
broker.id.generation.enable = true
broker.rack = null
compression.type = producer
connections.max.idle.ms = 600000
controlled.shutdown.enable = true
controlled.shutdown.max.retries = 3
controlled.shutdown.retry.backoff.ms = 5000
controller.socket.timeout.ms = 30000
default.replication.factor = 1
delete.topic.enable = false
fetch.purgatory.purge.interval.requests = 10000
group.max.session.timeout.ms = 300000
group.min.session.timeout.ms = 6000
host.name =
inter.broker.protocol.version = 0.10.1-IV2
leader.imbalance.check.interval.seconds = 300
leader.imbalance.per.broker.percentage = 10
listeners = INTERNO://host1.ambari:6667,EXTERNO://host1.ambari:6667
log.cleaner.backoff.ms = 15000
log.cleaner.dedupe.buffer.size = 134217728
log.cleaner.delete.retention.ms = 86400000
log.cleaner.enable = true
log.cleaner.io.buffer.load.factor = 0.9
log.cleaner.io.buffer.size = 524288
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
log.cleaner.min.cleanable.ratio = 0.5
log.cleaner.min.compaction.lag.ms = 0
log.cleaner.threads = 1
log.cleanup.policy = [delete]
log.dir = /tmp/kafka-logs
log.dirs = /kafka-logs
log.flush.interval.messages = 9223372036854775807
log.flush.interval.ms = null
log.flush.offset.checkpoint.interval.ms = 60000
log.flush.scheduler.interval.ms = 9223372036854775807
log.index.interval.bytes = 4096
log.index.size.max.bytes = 10485760
log.message.format.version = 0.10.1-IV2
log.message.timestamp.difference.max.ms = 9223372036854775807
log.message.timestamp.type = CreateTime
log.preallocate = false
log.retention.bytes = -1
log.retention.check.interval.ms = 300000
log.retention.hours = 168
log.retention.minutes = null
log.retention.ms = null
log.roll.hours = 168
log.roll.jitter.hours = 0
log.roll.jitter.ms = null
log.roll.ms = null
log.segment.bytes = 1073741824
log.segment.delete.delay.ms = 60000
max.connections.per.ip = 2147483647
max.connections.per.ip.overrides =
message.max.bytes = 1000000
metric.reporters = []
metrics.num.samples = 2
metrics.sample.window.ms = 30000
min.insync.replicas = 1
num.io.threads = 8
num.network.threads = 3
num.partitions = 1
num.recovery.threads.per.data.dir = 1
num.replica.fetchers = 1
offset.metadata.max.bytes = 4096
offsets.commit.required.acks = -1
offsets.commit.timeout.ms = 5000
offsets.load.buffer.size = 5242880
offsets.retention.check.interval.ms = 600000
offsets.retention.minutes = 86400000
offsets.topic.compression.codec = 0
offsets.topic.num.partitions = 50
offsets.topic.replication.factor = 3
offsets.topic.segment.bytes = 104857600
port = 6667
principal.builder.class = class org.apache.kafka.common.security.auth.DefaultPrincipalBuilder
producer.purgatory.purge.interval.requests = 10000
queued.max.requests = 500
quota.consumer.default = 9223372036854775807
quota.producer.default = 9223372036854775807
quota.window.num = 11
quota.window.size.seconds = 1
replica.fetch.backoff.ms = 1000
replica.fetch.max.bytes = 1048576
replica.fetch.min.bytes = 1
replica.fetch.response.max.bytes = 10485760
replica.fetch.wait.max.ms = 500
replica.high.watermark.checkpoint.interval.ms = 5000
replica.lag.time.max.ms = 10000
replica.socket.receive.buffer.bytes = 65536
replica.socket.timeout.ms = 30000
replication.quota.window.num = 11
replication.quota.window.size.seconds = 1
request.timeout.ms = 30000
reserved.broker.max.id = 1000
sasl.enabled.mechanisms = [GSSAPI]
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.principal.to.local.rules = [DEFAULT]
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism.inter.broker.protocol = GSSAPI
security.inter.broker.protocol = PLAINTEXT
socket.receive.buffer.bytes = 102400
socket.request.max.bytes = 104857600
socket.send.buffer.bytes = 102400
ssl.cipher.suites = null
ssl.client.auth = none
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
unclean.leader.election.enable = true
zookeeper.connect = host1.ambari:2181
zookeeper.connection.timeout.ms = 25000
zookeeper.session.timeout.ms = 30000
zookeeper.set.acl = false
zookeeper.sync.time.ms = 2000
(kafka.server.KafkaConfig)
[2018-09-04 09:17:46,974] FATAL (kafka.Kafka$)
java.lang.IllegalArgumentException: Error creating broker listeners from 'INTERNO://host1.ambari:6667,EXTERNO://host1.ambari:6667': No enum constant org.apache.kafka.common.protocol.SecurityProtocol.INTERNO
at kafka.server.KafkaConfig.validateUniquePortAndProtocol(KafkaConfig.scala:994)
at kafka.server.KafkaConfig.getListeners(KafkaConfig.scala:1013)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:966)
at kafka.server.KafkaConfig$.fromProps(KafkaConfig.scala:779)
at kafka.server.KafkaConfig$.fromProps(KafkaConfig.scala:776)
at kafka.server.KafkaServerStartable$.fromProps(KafkaServerStartable.scala:28)
at kafka.Kafka$.main(Kafka.scala:58)
at kafka.Kafka.main(Kafka.scala)
If you're deploying Kafka within Docker, you need to configure the listeners to be accessible from outside your Docker network if you want to reach the broker from there too. This article explains the details.
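For the INTERNO/EXTERNO error specifically, note that on the Kafka version shown in the log (0.10.1 on HDP 2.6.3) listener names must be one of the SecurityProtocol values (PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL); custom listener names mapped through listener.security.protocol.map only arrived in Kafka 0.10.2. A sketch of a listener setup this version accepts, with the address taken from the logs above (the file path is illustrative, and with Ambari these values belong in the kafka-broker configuration screen rather than a hand-edited file):
# Listener names restricted to real security protocols for this Kafka version (sketch).
# Assumes the host publishes port 9092 to the container's 6667, as described in the question.
cat >> server.properties <<'EOF'
listeners=PLAINTEXT://0.0.0.0:6667
advertised.listeners=PLAINTEXT://192.168.0.28:9092
EOF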
My config file:
agent1.sources = source1
agent1.channels = channel1
agent1.sinks = sink1
agent1.sources.source1.type = spooldir
agent1.sources.source1.spoolDir = /var/SpoolDir
agent1.sinks.sink1.type = hdfs
agent1.sinks.sink1.hdfs.path = hdfs://templatecentosbase.oneglobe.com:8020/user/Banking4
agent1.sinks.sink1.hdfs.filePrefix = Banking_Details
agent1.sinks.sink1.hdfs.fileSuffix = .avro
agent1.sinks.sink1.hdfs.serializer = avro_event
agent1.sinks.sink1.hdfs.serializer = DataStream
#agent1.sinks.sink1.hdfs.callTimeout = 20000
agent1.sinks.sink1.hdfs.rollCount = 0
agent1.sinks.sink1.hdfs.rollsize = 100000000
#agent1.sinks.sink1.hdfs.txnEventMax = 40000
agent1.sinks.sink1.hdfs.rollInterval = 0
#agent1.sinks.sink1.serializer.codeC =
agent1.channels.channel1.type = memory
agent1.channels.channel1.capacity = 100000000
agent1.channels.channel1.transactionCapacity = 100000000
agent1.sources.source1.channels = channel1
agent1.sinks.sink1.channel = channel1
Can anyone help me get this resolved? The source file is nearly 400 MB, but it is being written to HDFS in bits and pieces (for example, 1.5 MB to 2 MB).
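One thing that stands out in the config above (offered as a guess, with a sketch below): the HDFS sink property names are case-sensitive, so hdfs.rollsize is not read as hdfs.rollSize and the sink falls back to its small default roll size; also, DataStream is a value of hdfs.fileType rather than of the serializer. The corrected keys would look roughly like this, set once in the agent's configuration file (the file path in the sketch is illustrative):
# Roll by size only (~100 MB) using the camelCase keys Flume expects (sketch).
cat >> /path/to/agent1.conf <<'EOF'
agent1.sinks.sink1.hdfs.fileType = DataStream
agent1.sinks.sink1.hdfs.rollSize = 100000000
agent1.sinks.sink1.hdfs.rollCount = 0
agent1.sinks.sink1.hdfs.rollInterval = 0
EOF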