I am trying to echo some string to /dev/ttyS0 and the rx and tx stats in /proc/tty/driver/serial remain 0.
If I change the IRQ of ttyS0 to 0 and then echo some string to ttyS0, the stats change according to the length of the string.
Would anyone be able to help me with this?
Output of setserial -g /dev/ttyS[0123]:
/dev/ttyS0, UART: 16550A, Port: 0x03f8, IRQ: 11
/dev/ttyS1, UART: 16550A, Port: 0x02f8, IRQ: 10
/dev/ttyS2, UART: unknown, Port: 0x03e8, IRQ: 4
/dev/ttyS3, UART: unknown, Port: 0x02e8, IRQ: 3
Output of stty -F /dev/ttyS0 -a:
speed 115200 baud; rows 24; columns 80; line = 0;
intr = ^C; quit = ^\; erase = ^?; kill = ^U; eof = ^D; eol = <undef>; eol2 = <undef>; swtch = <undef>; start = ^Q; stop = ^S;
susp = ^Z; rprnt = ^R; werase = ^W; lnext = ^V; discard = ^O; min = 1; time = 0;
-parenb -parodd -cmspar cs8 hupcl -cstopb cread clocal -crtscts
-ignbrk -brkint -ignpar -parmrk -inpck -istrip -inlcr -igncr -icrnl -ixon -ixoff -iuclc -ixany -imaxbel iutf8
opost -olcuc -ocrnl onlcr -onocr -onlret -ofill -ofdel nl0 cr0 tab0 bs0 vt0 ff0
-isig -icanon -iexten -echo -echoe -echok -echonl -noflsh -xcase -tostop -echoprt -echoctl -echoke -flusho -extproc
Output of cat /proc/interrupts for IRQ 11 (16 CPUs):
11: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 IO-APIC 11-edge ttyS0
Output of cat /proc/tty/driver/serial:
0: uart:16550A port:000003F8 irq:11 tx:228 rx:0 RTS|DTR|DSR|CD|RI
1: uart:16550A port:000002F8 irq:10 tx:0 rx:0 CTS|DSR|CD|RI
2: uart:unknown port:000003E8 irq:4
3: uart:unknown port:000002E8 irq:3
As I said, I have tried changing the IRQ to 0 and it worked. But what is wrong with IRQ 11? There is no device registered for IRQ 11 in /proc/interrupts except ttyS0.
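For reference, this is roughly how I switch the port between interrupt-driven and polled mode while testing (a sketch of the commands; the IRQ numbers are the ones from the setserial output above):

# tell the driver to stop using IRQ 11 and poll the port instead
setserial /dev/ttyS0 irq 0
echo test > /dev/ttyS0          # tx counter in /proc/tty/driver/serial now increases

# switch back to the interrupt-driven configuration
setserial /dev/ttyS0 irq 11
grep ttyS0 /proc/interrupts     # interrupt count for IRQ 11 stays at 0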
I'm using AVAssetWriter to write audio CMSampleBuffer to an mp4 file, but when I later read that file using AVAssetReader, it seems to be missing the initial chunk of data.
Here's the debug description of the first CMSampleBuffer passed to the writer input's append method (notice the priming duration attachment of 1024/44_100):
CMSampleBuffer 0x102ea5b60 retainCount: 7 allocator: 0x1c061f840
invalid = NO
dataReady = YES
makeDataReadyCallback = 0x0
makeDataReadyRefcon = 0x0
buffer-level attachments:
TrimDurationAtStart = {
epoch = 0;
flags = 1;
timescale = 44100;
value = 1024;
}
formatDescription = <CMAudioFormatDescription 0x281fd9720 [0x1c061f840]> {
mediaType:'soun'
mediaSubType:'aac '
mediaSpecific: {
ASBD: {
mSampleRate: 44100.000000
mFormatID: 'aac '
mFormatFlags: 0x2
mBytesPerPacket: 0
mFramesPerPacket: 1024
mBytesPerFrame: 0
mChannelsPerFrame: 2
mBitsPerChannel: 0 }
cookie: {<CFData 0x2805f50a0 [0x1c061f840]>{length = 39, capacity = 39, bytes = 0x03808080220000000480808014401400 ... 1210068080800102}}
ACL: {(null)}
FormatList Array: {
Index: 0
ChannelLayoutTag: 0x650002
ASBD: {
mSampleRate: 44100.000000
mFormatID: 'aac '
mFormatFlags: 0x0
mBytesPerPacket: 0
mFramesPerPacket: 1024
mBytesPerFrame: 0
mChannelsPerFrame: 2
mBitsPerChannel: 0 }}
}
extensions: {(null)}
}
sbufToTrackReadiness = 0x0
numSamples = 1
outputPTS = {6683542167/44100 = 151554.244, rounded}(based on cachedOutputPresentationTimeStamp)
sampleTimingArray[1] = {
{PTS = {6683541143/44100 = 151554.221, rounded}, DTS = {6683541143/44100 = 151554.221, rounded}, duration = {1024/44100 = 0.023}},
}
sampleSizeArray[1] = {
sampleSize = 163,
}
dataBuffer = 0x281cc7a80
Here's the debug description of the second CMSampleBuffer (notice the priming duration attachment of 1088/44_100, which combined with the previous trim duration yields the standard value of 2112):
CMSampleBuffer 0x102e584f0 retainCount: 7 allocator: 0x1c061f840
invalid = NO
dataReady = YES
makeDataReadyCallback = 0x0
makeDataReadyRefcon = 0x0
buffer-level attachments:
TrimDurationAtStart = {
epoch = 0;
flags = 1;
timescale = 44100;
value = 1088;
}
formatDescription = <CMAudioFormatDescription 0x281fd9720 [0x1c061f840]> {
mediaType:'soun'
mediaSubType:'aac '
mediaSpecific: {
ASBD: {
mSampleRate: 44100.000000
mFormatID: 'aac '
mFormatFlags: 0x2
mBytesPerPacket: 0
mFramesPerPacket: 1024
mBytesPerFrame: 0
mChannelsPerFrame: 2
mBitsPerChannel: 0 }
cookie: {<CFData 0x2805f50a0 [0x1c061f840]>{length = 39, capacity = 39, bytes = 0x03808080220000000480808014401400 ... 1210068080800102}}
ACL: {(null)}
FormatList Array: {
Index: 0
ChannelLayoutTag: 0x650002
ASBD: {
mSampleRate: 44100.000000
mFormatID: 'aac '
mFormatFlags: 0x0
mBytesPerPacket: 0
mFramesPerPacket: 1024
mBytesPerFrame: 0
mChannelsPerFrame: 2
mBitsPerChannel: 0 }}
}
extensions: {(null)}
}
sbufToTrackReadiness = 0x0
numSamples = 1
outputPTS = {6683543255/44100 = 151554.269, rounded}(based on cachedOutputPresentationTimeStamp)
sampleTimingArray[1] = {
{PTS = {6683542167/44100 = 151554.244, rounded}, DTS = {6683542167/44100 = 151554.244, rounded}, duration = {1024/44100 = 0.023}},
}
sampleSizeArray[1] = {
sampleSize = 179,
}
dataBuffer = 0x281cc4750
Now, when I read the audio track using AVAssetReader, the first CMSampleBuffer I get is:
CMSampleBuffer 0x102ed7b20 retainCount: 7 allocator: 0x1c061f840
invalid = NO
dataReady = YES
makeDataReadyCallback = 0x0
makeDataReadyRefcon = 0x0
buffer-level attachments:
EmptyMedia(P) = true
formatDescription = (null)
sbufToTrackReadiness = 0x0
numSamples = 0
outputPTS = {0/1 = 0.000}(based on outputPresentationTimeStamp)
sampleTimingArray[1] = {
{PTS = {0/1 = 0.000}, DTS = {INVALID}, duration = {0/1 = 0.000}},
}
dataBuffer = 0x0
and the next one contains priming info of 1088/44_100:
CMSampleBuffer 0x10318bc00 retainCount: 7 allocator: 0x1c061f840
invalid = NO
dataReady = YES
makeDataReadyCallback = 0x0
makeDataReadyRefcon = 0x0
buffer-level attachments:
FillDiscontinuitiesWithSilence(P) = true
GradualDecoderRefresh(P) = 1
TrimDurationAtStart(P) = {
epoch = 0;
flags = 1;
timescale = 44100;
value = 1088;
}
IsGradualDecoderRefreshAuthoritative(P) = false
formatDescription = <CMAudioFormatDescription 0x281fdcaa0 [0x1c061f840]> {
mediaType:'soun'
mediaSubType:'aac '
mediaSpecific: {
ASBD: {
mSampleRate: 44100.000000
mFormatID: 'aac '
mFormatFlags: 0x0
mBytesPerPacket: 0
mFramesPerPacket: 1024
mBytesPerFrame: 0
mChannelsPerFrame: 2
mBitsPerChannel: 0 }
cookie: {<CFData 0x2805f3800 [0x1c061f840]>{length = 39, capacity = 39, bytes = 0x03808080220000000480808014401400 ... 1210068080800102}}
ACL: {Stereo (L R)}
FormatList Array: {
Index: 0
ChannelLayoutTag: 0x650002
ASBD: {
mSampleRate: 44100.000000
mFormatID: 'aac '
mFormatFlags: 0x0
mBytesPerPacket: 0
mFramesPerPacket: 1024
mBytesPerFrame: 0
mChannelsPerFrame: 2
mBitsPerChannel: 0 }}
}
extensions: {{
VerbatimISOSampleEntry = {length = 87, bytes = 0x00000057 6d703461 00000000 00000001 ... 12100680 80800102 };
}}
}
sbufToTrackReadiness = 0x0
numSamples = 43
outputPTS = {83/600 = 0.138}(based on outputPresentationTimeStamp)
sampleTimingArray[1] = {
{PTS = {1024/44100 = 0.023}, DTS = {1024/44100 = 0.023}, duration = {1024/44100 = 0.023}},
}
sampleSizeArray[43] = {
sampleSize = 179,
sampleSize = 173,
sampleSize = 178,
sampleSize = 172,
sampleSize = 172,
sampleSize = 159,
sampleSize = 180,
sampleSize = 200,
sampleSize = 187,
sampleSize = 189,
sampleSize = 206,
sampleSize = 192,
sampleSize = 195,
sampleSize = 186,
sampleSize = 183,
sampleSize = 189,
sampleSize = 211,
sampleSize = 198,
sampleSize = 204,
sampleSize = 211,
sampleSize = 204,
sampleSize = 202,
sampleSize = 218,
sampleSize = 210,
sampleSize = 206,
sampleSize = 207,
sampleSize = 221,
sampleSize = 219,
sampleSize = 236,
sampleSize = 219,
sampleSize = 227,
sampleSize = 225,
sampleSize = 225,
sampleSize = 229,
sampleSize = 225,
sampleSize = 236,
sampleSize = 233,
sampleSize = 231,
sampleSize = 249,
sampleSize = 234,
sampleSize = 250,
sampleSize = 249,
sampleSize = 259,
}
dataBuffer = 0x281cde370
The input append method keeps returning true, which in principle means that all sample buffers got appended, but the reader for some reason skips the first chunk of data. Is there anything I'm doing wrong here?
I'm using the following code to read the file:
let asset = AVAsset(url: fileURL)
guard let assetReader = try? AVAssetReader(asset: asset) else {
return
}
asset.loadValuesAsynchronously(forKeys: ["tracks"]) {
    guard let audioTrack = asset.tracks(withMediaType: .audio).first else { return }
    let audioOutput = AVAssetReaderTrackOutput(track: audioTrack, outputSettings: nil)
    // The track output must be attached to the reader before reading starts.
    assetReader.add(audioOutput)
    assetReader.startReading()
    while assetReader.status == .reading {
        if let sampleBuffer = audioOutput.copyNextSampleBuffer() {
            // do something
        } else {
            break
        }
    }
}
First some pedantry: you haven't lost your first sample buffer, but rather the first packet within your first sample buffer.
The behaviour of AVAssetReader with nil outputSettings when reading AAC packet data has changed on iOS 13 and macOS 10.15 (Catalina).
Previously you would get the first AAC packet, that packet's presentation timestamp (zero) and a trim attachment instructing you to discard the usual first 2112 frames of decoded audio.
Now [iOS 13, macOS 10.15] AVAssetReader seems to discard the first packet, leaving you with the second packet, whose presentation timestamp is 1024, and you need only discard 2112 - 1024 = 1088 of the decoded frames.
Something that might not be immediately obvious in the above situations is that AVAssetReader is talking about TWO timelines, not one. The packet timestamps are referenced to one, the untrimmed timeline, and the trim instruction implies the existence of another: the trimmed timeline.
The transformation from untrimmed to trimmed timestamps is very simple: it's usually trimmed = untrimmed - 2112.
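To make that mapping concrete, here's a tiny CMTime sketch (my own illustration, not an AVFoundation API):

import CoreMedia

// trimmed = untrimmed - priming, where the priming is usually 2112 frames at 44100 Hz.
func trimmedTime(fromUntrimmed untrimmed: CMTime,
                 priming: CMTime = CMTime(value: 2112, timescale: 44100)) -> CMTime {
    return CMTimeSubtract(untrimmed, priming)
}

// With the numbers from the question: the first non-priming frame sits at
// untrimmed 2112/44100, which lands at 0 on the trimmed timeline.
let firstAudibleFrame = CMTime(value: 2112, timescale: 44100)
print(CMTimeGetSeconds(trimmedTime(fromUntrimmed: firstAudibleFrame))) // 0.0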
So is the new behaviour a bug? The fact that, if you decode to LPCM and correctly follow the trim instructions, you should still get the same audio leads me to believe the change was intentional (NB: I haven't yet personally confirmed the LPCM samples are the same).
However, the documentation says:
A value of nil for outputSettings configures the output to vend samples in their original format as stored by the specified track.
I don't think you can both discard packets [even the first one, which is basically a constant] and claim to be vending samples in their "original format", so from this point of view I think the change has a bug-like quality.
I also think it's an unfortunate change as I used to consider nil outputSettings AVAssetReader to be a sort of "raw" mode, but now it assumes your only use case is decoding to LPCM.
There's only one thing that could downgrade "unfortunate" to "serious bug", and that's if this new "let's pretend the first AAC packet doesn't exist" approach extends to files created with AVAssetWriter because that would break interoperability with non-AVAssetReader code, where the number of frames to trim has congealed to a constant 2112 frames. I also haven't personally confirmed this. Do you have a file created with the above sample buffers that you can share?
p.s. I don't think your input sample buffers are relevant here; I think you'd lose the first packet reading from any AAC file. However your input sample buffers seem slightly unusual in that they have hosttime [capture session?] style timestamps, yet are AAC, and only have one packet per sample buffer, which isn't very many and seems like a lot of overhead for 23ms of audio. Are you creating them yourself in an AVCaptureSession -> AVAudioConverter chain?
I am trying to run kafka in docker. It works with plaintext but does not work with SSL.
I performed SSL setup according to this documentation:
#!/bin/bash
# Step 1: generate the broker's key pair in its keystore
keytool -keystore server.keystore.jks -alias localhost -validity 365 -genkey
# Step 2: create a CA and import its certificate into the server and client truststores
openssl req -new -x509 -keyout ca-key -out ca-cert -days 365
keytool -keystore server.truststore.jks -alias CARoot -import -file ca-cert
keytool -keystore client.truststore.jks -alias CARoot -import -file ca-cert
# Step 3: export a certificate request from the keystore, sign it with the CA,
# then import the CA certificate and the signed certificate into the broker keystore
keytool -keystore server.keystore.jks -alias localhost -certreq -file cert-file
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days 365 -CAcreateserial -passin pass:test1234
keytool -keystore server.keystore.jks -alias CARoot -import -file ca-cert
keytool -keystore server.keystore.jks -alias localhost -import -file cert-signed
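To sanity-check the result (a generic keytool command, not part of the guide), the keystore contents can be listed; after Step 3 it should contain both the CARoot certificate and the signed localhost key entry:

keytool -list -v -keystore server.keystore.jks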
Then I copied all the SSL files into the /tmp/ssl/1/ directory.
Here is my docker-compose:
version: '2'
volumes:
data-volume: {}
services:
kafka:
image: wurstmeister/kafka
ports:
- "9092:9092"
environment:
- KAFKA_ADVERTISED_HOST_NAME=127.0.0.1
- KAFKA_ADVERTISED_PORT=9092
- KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
volumes:
- "/tmp/ssl/:/tmp/ssl/"
depends_on:
- zookeeper
zookeeper:
image: wurstmeister/zookeeper
ports:
- "2181:2181"
environment:
- KAFKA_ADVERTISED_HOST_NAME=zookeeper
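(Not shown in the compose file above: the SSL listener itself also has to be configured on the broker and its port published. With the wurstmeister image that can be done through KAFKA_* environment variables; the sketch below is based on the server.properties values that follow, with placeholder passwords.)

    ports:
      - "9092:9092"
      - "9093:9093"                                          # publish the SSL listener as well
    environment:
      - KAFKA_LISTENERS=PLAINTEXT://:9092,SSL://:9093
      - KAFKA_SSL_KEYSTORE_LOCATION=/tmp/ssl/1/server.keystore.jks
      - KAFKA_SSL_KEYSTORE_PASSWORD=<keystore password>      # placeholder
      - KAFKA_SSL_KEY_PASSWORD=<key password>                # placeholder
      - KAFKA_SSL_TRUSTSTORE_LOCATION=/tmp/ssl/1/server.truststore.jks
      - KAFKA_SSL_TRUSTSTORE_PASSWORD=<truststore password>  # placeholder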
server.properties
advertised.host.name = 127.0.0.1
advertised.listeners = null
advertised.port = 9092
alter.config.policy.class.name = null
alter.log.dirs.replication.quota.window.num = 11
alter.log.dirs.replication.quota.window.size.seconds = 1
authorizer.class.name =
auto.create.topics.enable = true
auto.leader.rebalance.enable = true
background.threads = 10
broker.id = -1
broker.id.generation.enable = true
broker.rack = null
client.quota.callback.class = null
compression.type = producer
connection.failed.authentication.delay.ms = 100
connections.max.idle.ms = 600000
connections.max.reauth.ms = 0
control.plane.listener.name = null
controlled.shutdown.enable = true
controlled.shutdown.max.retries = 3
controlled.shutdown.retry.backoff.ms = 5000
controller.socket.timeout.ms = 30000
create.topic.policy.class.name = null
default.replication.factor = 1
delegation.token.expiry.check.interval.ms = 3600000
delegation.token.expiry.time.ms = 86400000
delegation.token.master.key = null
delegation.token.max.lifetime.ms = 604800000
delete.records.purgatory.purge.interval.requests = 1
delete.topic.enable = true
fetch.purgatory.purge.interval.requests = 1000
group.initial.rebalance.delay.ms = 0
group.max.session.timeout.ms = 300000
group.max.size = 2147483647
group.min.session.timeout.ms = 6000
host.name =
inter.broker.listener.name = null
inter.broker.protocol.version = 2.2-IV1
kafka.metrics.polling.interval.secs = 10
kafka.metrics.reporters = []
leader.imbalance.check.interval.seconds = 300
leader.imbalance.per.broker.percentage = 10
listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
listeners = PLAINTEXT://:9092,SSL://:9093
log.cleaner.backoff.ms = 15000
log.cleaner.dedupe.buffer.size = 134217728
log.cleaner.delete.retention.ms = 86400000
log.cleaner.enable = true
log.cleaner.io.buffer.load.factor = 0.9
log.cleaner.io.buffer.size = 524288
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
log.cleaner.min.cleanable.ratio = 0.5
log.cleaner.min.compaction.lag.ms = 0
log.cleaner.threads = 1
log.cleanup.policy = [delete]
log.dir = /tmp/kafka-logs
log.dirs = /kafka/kafka-logs-935db2aeed2f
log.flush.interval.messages = 9223372036854775807
log.flush.interval.ms = null
log.flush.offset.checkpoint.interval.ms = 60000
log.flush.scheduler.interval.ms = 9223372036854775807
log.flush.start.offset.checkpoint.interval.ms = 60000
log.index.interval.bytes = 4096
log.index.size.max.bytes = 10485760
log.message.downconversion.enable = true
log.message.format.version = 2.2-IV1
log.message.timestamp.difference.max.ms = 9223372036854775807
log.message.timestamp.type = CreateTime
log.preallocate = false
log.retention.bytes = -1
log.retention.check.interval.ms = 300000
log.retention.hours = 168
log.retention.minutes = null
log.retention.ms = null
log.roll.hours = 168
log.roll.jitter.hours = 0
log.roll.jitter.ms = null
log.roll.ms = null
log.segment.bytes = 1073741824
log.segment.delete.delay.ms = 60000
max.connections.per.ip = 2147483647
max.connections.per.ip.overrides =
max.incremental.fetch.session.cache.slots = 1000
message.max.bytes = 1000012
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
min.insync.replicas = 1
num.io.threads = 8
num.network.threads = 3
num.partitions = 1
num.recovery.threads.per.data.dir = 1
num.replica.alter.log.dirs.threads = null
num.replica.fetchers = 1
offset.metadata.max.bytes = 4096
offsets.commit.required.acks = -1
offsets.commit.timeout.ms = 5000
offsets.load.buffer.size = 5242880
offsets.retention.check.interval.ms = 600000
offsets.retention.minutes = 10080
offsets.topic.compression.codec = 0
offsets.topic.num.partitions = 50
offsets.topic.replication.factor = 1
offsets.topic.segment.bytes = 104857600
password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
password.encoder.iterations = 4096
password.encoder.key.length = 128
password.encoder.keyfactory.algorithm = null
password.encoder.old.secret = null
password.encoder.secret = null
port = 9092
principal.builder.class = null
producer.purgatory.purge.interval.requests = 1000
queued.max.request.bytes = -1
queued.max.requests = 500
quota.consumer.default = 9223372036854775807
quota.producer.default = 9223372036854775807
quota.window.num = 11
quota.window.size.seconds = 1
replica.fetch.backoff.ms = 1000
replica.fetch.max.bytes = 1048576
replica.fetch.min.bytes = 1
replica.fetch.response.max.bytes = 10485760
replica.fetch.wait.max.ms = 500
replica.high.watermark.checkpoint.interval.ms = 5000
replica.lag.time.max.ms = 10000
replica.socket.receive.buffer.bytes = 65536
replica.socket.timeout.ms = 30000
replication.quota.window.num = 11
replication.quota.window.size.seconds = 1
request.timeout.ms = 30000
reserved.broker.max.id = 1000
sasl.client.callback.handler.class = null
sasl.enabled.mechanisms = [GSSAPI]
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.principal.to.local.rules = [DEFAULT]
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism.inter.broker.protocol = GSSAPI
sasl.server.callback.handler.class = null
security.inter.broker.protocol = PLAINTEXT
socket.receive.buffer.bytes = 102400
socket.request.max.bytes = 104857600
socket.send.buffer.bytes = 102400
ssl.cipher.suites = []
ssl.client.auth = none
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = [hidden]
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = /tmp/ssl/1/server.keystore.jks
ssl.keystore.password = [hidden]
ssl.keystore.type = JKS
ssl.principal.mapping.rules = [DEFAULT]
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = /tmp/ssl/1/server.truststore.jks
ssl.truststore.password = [hidden]
ssl.truststore.type = JKS
transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
transaction.max.timeout.ms = 900000
transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
transaction.state.log.load.buffer.size = 5242880
transaction.state.log.min.isr = 1
transaction.state.log.num.partitions = 50
transaction.state.log.replication.factor = 1
transaction.state.log.segment.bytes = 104857600
transactional.id.expiration.ms = 604800000
unclean.leader.election.enable = false
zookeeper.connect = zookeeper:2181
zookeeper.connection.timeout.ms = 6000
zookeeper.max.in.flight.requests = 10
zookeeper.session.timeout.ms = 6000
zookeeper.set.acl = false
zookeeper.sync.time.ms = 2000
server log:
[2019-04-29 13:12:33,935] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
[2019-04-29 13:12:33,975] INFO [SocketServer brokerId=1001] Created data-plane acceptor and processors for endpoint : EndPoint(null,9092,ListenerName(PLAINTEXT),PLAINTEXT) (kafka.network.SocketServer)
[2019-04-29 13:12:33,975] INFO Awaiting socket connections on 0.0.0.0:9093. (kafka.network.Acceptor)
[2019-04-29 13:12:34,117] INFO [SocketServer brokerId=1001] Created data-plane acceptor and processors for endpoint : EndPoint(null,9093,ListenerName(SSL),SSL) (kafka.network.SocketServer)
[2019-04-29 13:12:34,122] INFO [SocketServer brokerId=1001] Started 2 acceptor threads for data-plane (kafka.network.SocketServer)
[2019-04-29 13:12:34,160] INFO [ExpirationReaper-1001-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-04-29 13:12:34,162] INFO [ExpirationReaper-1001-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-04-29 13:12:34,163] INFO [ExpirationReaper-1001-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-04-29 13:12:34,164] INFO [ExpirationReaper-1001-ElectPreferredLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-04-29 13:12:34,180] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
[2019-04-29 13:12:34,264] INFO Creating /brokers/ids/1001 (is it secure? false) (kafka.zk.KafkaZkClient)
/opt/kafka/client-ssl.properties:
security.protocol=SSL
ssl.truststore.location=/tmp/ssl/1/kafka.client.truststore.jks
ssl.truststore.password=test1234
I run the following:
/opt/kafka/bin/kafka-console-producer.sh --broker-list localhost:9093 --topic sample-sink-data --producer.config /opt/kafka/client-ssl.properties
and see this in the kafka server log:
[2019-04-29 13:28:13,654] INFO [SocketServer brokerId=1001] Failed authentication with /127.0.0.1 (SSL handshake failed) (org.apache.kafka.common.network.Selector)
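For reference, the SSL listener can also be probed directly with openssl s_client (a generic OpenSSL check, independent of the Kafka clients), which prints either the broker's certificate chain or the handshake error:

openssl s_client -debug -connect localhost:9093 -tls1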
What am I missing?
Hello people, I would like to ask a question.
I have a replication setup between a laptop and a server.
On the server I have no problems,
but on the laptop...
Over time the RAM usage increases until it reaches the limit, and when it reaches the limit the MariaDB service is restarted.
What is the reason for the increase in RAM usage?
Server version: 10.1.32-MariaDB MariaDB Server
Excuse me, my English is not very good.
socket = /run/mysqld/mysqld.sock
skip-external-locking
key_buffer_size = 16M
max_allowed_packet = 1M
table_open_cache = 64
sort_buffer_size = 512K
net_buffer_length = 8K
read_buffer_size = 256K
read_rnd_buffer_size = 512K
myisam_sort_buffer_size = 8M
slave-net-timeout = 30
binlog_ignore_db = mysql
binlog_ignore_db = zoom
binlog_ignore_db = performance_schema
binlog_ignore_db = information_schema
binlog_do_db = TK09
replicate_do_db = TK09
binlog_ignore_db = TK09_user
log-bin=binlog
log-slave-updates=1
binlog_format=mixed
innodb_buffer_pool_size = 2G
innodb_buffer_pool_instances = 8
innodb_log_buffer_size = 8M
query_cache_size = 40M
[mysqldump]
quick
max_allowed_packet = 16M
[mysql]
no-auto-rehash
[myisamchk]
key_buffer_size = 20M
sort_buffer_size = 20M
read_buffer = 2M
write_buffer = 2M
[mysqlhotcopy]
interactive-timeout
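For comparing the laptop with the server, these generic MariaDB statements (not specific to this setup) show the effective values of the settings that drive mysqld's memory use, and how much memory the server currently reports:

-- settings that contribute most to mysqld's memory footprint
SHOW GLOBAL VARIABLES WHERE Variable_name IN
  ('innodb_buffer_pool_size', 'key_buffer_size', 'query_cache_size',
   'max_connections', 'sort_buffer_size', 'read_buffer_size',
   'read_rnd_buffer_size', 'tmp_table_size', 'max_heap_table_size');

-- total memory currently allocated by the server (MariaDB 10.x)
SHOW GLOBAL STATUS LIKE 'Memory_used';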
Good morning.
First of all, I've created a Docker swarm with 2 physical hosts and an overlay network. On the same host I've created 2 containers (the postgres and ambari servers) and one with the ambari agent, in which I'll install kafka, zookeeper, spark, ... from ambari. The goal is to install several containers across several hosts, but I'm trying it like this first because I can't get it working.
The thing is that once everything is deployed with Ambari, I change the kafka configuration to set advertised.host.name to the physical host's IP and advertised.port to 9092, to bind it to the physical host's port 9092.
When trying it, I always get one of the following errors:
WARN Error while fetching metadata with correlation id 17 : {prueba2=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
If sending to the container's port 6667,
or
[2018-08-30 12:06:28,758] WARN Bootstrap broker 192.168.0.28:9092 disconnected (org.apache.kafka.clients.NetworkClient)
If sending to the physical host's port 9092.
Thanks for your help, and please ask for any further information needed to solve this problem.
EDIT1:
I changed the configuration to the following properties, and now the kafka broker is not running. server.log shows the following trace:
root@host1:/usr/hdp/2.6.3.0-235/kafka# cat /var/log/kafka/server.log
[2018-09-04 09:17:46,964] INFO KafkaConfig values:
advertised.host.name = host1.ambari
advertised.listeners = INTERNO://host1.ambari:6667,EXTERNO://192.168.0.28:9092
advertised.port = 9092
authorizer.class.name =
auto.create.topics.enable = true
auto.leader.rebalance.enable = true
background.threads = 10
broker.id = -1
broker.id.generation.enable = true
broker.rack = null
compression.type = producer
connections.max.idle.ms = 600000
controlled.shutdown.enable = true
controlled.shutdown.max.retries = 3
controlled.shutdown.retry.backoff.ms = 5000
controller.socket.timeout.ms = 30000
default.replication.factor = 1
delete.topic.enable = false
fetch.purgatory.purge.interval.requests = 10000
group.max.session.timeout.ms = 300000
group.min.session.timeout.ms = 6000
host.name =
inter.broker.protocol.version = 0.10.1-IV2
leader.imbalance.check.interval.seconds = 300
leader.imbalance.per.broker.percentage = 10
listeners = INTERNO://host1.ambari:6667,EXTERNO://host1.ambari:6667
log.cleaner.backoff.ms = 15000
log.cleaner.dedupe.buffer.size = 134217728
log.cleaner.delete.retention.ms = 86400000
log.cleaner.enable = true
log.cleaner.io.buffer.load.factor = 0.9
log.cleaner.io.buffer.size = 524288
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
log.cleaner.min.cleanable.ratio = 0.5
log.cleaner.min.compaction.lag.ms = 0
log.cleaner.threads = 1
log.cleanup.policy = [delete]
log.dir = /tmp/kafka-logs
log.dirs = /kafka-logs
log.flush.interval.messages = 9223372036854775807
log.flush.interval.ms = null
log.flush.offset.checkpoint.interval.ms = 60000
log.flush.scheduler.interval.ms = 9223372036854775807
log.index.interval.bytes = 4096
log.index.size.max.bytes = 10485760
log.message.format.version = 0.10.1-IV2
log.message.timestamp.difference.max.ms = 9223372036854775807
log.message.timestamp.type = CreateTime
log.preallocate = false
log.retention.bytes = -1
log.retention.check.interval.ms = 300000
log.retention.hours = 168
log.retention.minutes = null
log.retention.ms = null
log.roll.hours = 168
log.roll.jitter.hours = 0
log.roll.jitter.ms = null
log.roll.ms = null
log.segment.bytes = 1073741824
log.segment.delete.delay.ms = 60000
max.connections.per.ip = 2147483647
max.connections.per.ip.overrides =
message.max.bytes = 1000000
metric.reporters = []
metrics.num.samples = 2
metrics.sample.window.ms = 30000
min.insync.replicas = 1
num.io.threads = 8
num.network.threads = 3
num.partitions = 1
num.recovery.threads.per.data.dir = 1
num.replica.fetchers = 1
offset.metadata.max.bytes = 4096
offsets.commit.required.acks = -1
offsets.commit.timeout.ms = 5000
offsets.load.buffer.size = 5242880
offsets.retention.check.interval.ms = 600000
offsets.retention.minutes = 86400000
offsets.topic.compression.codec = 0
offsets.topic.num.partitions = 50
offsets.topic.replication.factor = 3
offsets.topic.segment.bytes = 104857600
port = 6667
principal.builder.class = class org.apache.kafka.common.security.auth.DefaultPrincipalBuilder
producer.purgatory.purge.interval.requests = 10000
queued.max.requests = 500
quota.consumer.default = 9223372036854775807
quota.producer.default = 9223372036854775807
quota.window.num = 11
quota.window.size.seconds = 1
replica.fetch.backoff.ms = 1000
replica.fetch.max.bytes = 1048576
replica.fetch.min.bytes = 1
replica.fetch.response.max.bytes = 10485760
replica.fetch.wait.max.ms = 500
replica.high.watermark.checkpoint.interval.ms = 5000
replica.lag.time.max.ms = 10000
replica.socket.receive.buffer.bytes = 65536
replica.socket.timeout.ms = 30000
replication.quota.window.num = 11
replication.quota.window.size.seconds = 1
request.timeout.ms = 30000
reserved.broker.max.id = 1000
sasl.enabled.mechanisms = [GSSAPI]
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.principal.to.local.rules = [DEFAULT]
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism.inter.broker.protocol = GSSAPI
security.inter.broker.protocol = PLAINTEXT
socket.receive.buffer.bytes = 102400
socket.request.max.bytes = 104857600
socket.send.buffer.bytes = 102400
ssl.cipher.suites = null
ssl.client.auth = none
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
unclean.leader.election.enable = true
zookeeper.connect = host1.ambari:2181
zookeeper.connection.timeout.ms = 25000
zookeeper.session.timeout.ms = 30000
zookeeper.set.acl = false
zookeeper.sync.time.ms = 2000
(kafka.server.KafkaConfig)
[2018-09-04 09:17:46,974] FATAL (kafka.Kafka$)
java.lang.IllegalArgumentException: Error creating broker listeners from 'INTERNO://host1.ambari:6667,EXTERNO://host1.ambari:6667': No enum constant org.apache.kafka.common.protocol.SecurityProtocol.INTERNO
at kafka.server.KafkaConfig.validateUniquePortAndProtocol(KafkaConfig.scala:994)
at kafka.server.KafkaConfig.getListeners(KafkaConfig.scala:1013)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:966)
at kafka.server.KafkaConfig$.fromProps(KafkaConfig.scala:779)
at kafka.server.KafkaConfig$.fromProps(KafkaConfig.scala:776)
at kafka.server.KafkaServerStartable$.fromProps(KafkaServerStartable.scala:28)
at kafka.Kafka$.main(Kafka.scala:58)
at kafka.Kafka.main(Kafka.scala)
If you're deploying Kafka within Docker, you need to configure the listeners so that the broker is reachable from outside the Docker network as well. This article explains the details.
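For illustration (not a drop-in fix): on a Kafka version that supports named listeners via listener.security.protocol.map (0.10.2 and later, as far as I know; the 0.10.1 broker in your trace only accepts the built-in security-protocol names such as PLAINTEXT, which is why it rejects INTERNO), the internal/external split would look roughly like this:

# every custom listener name must be mapped to one of the built-in security protocols
listener.security.protocol.map=INTERNO:PLAINTEXT,EXTERNO:PLAINTEXT

# bind the two listeners to different ports inside the container
listeners=INTERNO://0.0.0.0:6667,EXTERNO://0.0.0.0:9092

# advertise the container hostname internally and the physical host's IP externally
advertised.listeners=INTERNO://host1.ambari:6667,EXTERNO://192.168.0.28:9092

# brokers talk to each other over the internal listener
inter.broker.listener.name=INTERNO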