Not able to produce after setting up ACLs in Kafka (Docker)

I am using the wurstmeister Kafka and ZooKeeper Docker images locally to test SASL and ACLs in Kafka.
My docker-compose.yml is -
version: '3'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    hostname: zookeeper
    container_name: zookeeper
    volumes:
      - ./zookeeper/zookeeper.sasl.jaas.config:/etc/kafka/zookeeper_server_jaas.conf
      - ./zk/data:/var/lib/zookeeper/data
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_SET_ACL: 'true'
      KAFKA_OPTS: -Djava.security.auth.login.config=/etc/kafka/zookeeper_server_jaas.conf
        -Dzookeeper.authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
        -Dzookeeper.allowSaslFailedClients=false
        -Dzookeeper.requireClientAuthScheme=sasl
  broker:
    image: wurstmeister/kafka:2.13-2.6.0
    hostname: broker
    container_name: broker
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    volumes:
      - ./kafka/kafka.jaas.conf:/etc/kafka/kafka_server_jaas.conf
      - ./kfk/data:/kafka
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: EXTERNAL:SASL_PLAINTEXT
      KAFKA_AUTHORIZER_CLASS_NAME: kafka.security.authorizer.AclAuthorizer
      KAFKA_AUTO_CREATE_TOPIC: 'true'
      KAFKA_LISTENERS: EXTERNAL://:9092
      KAFKA_ADVERTISED_LISTENERS: EXTERNAL://localhost:9092
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_SASL_ENABLED_MECHANISMS: PLAIN
      KAFKA_LISTENER_NAME_EXTERNAL_SASL_ENABLED_MECHANISMS: PLAIN
      KAFKA_LISTENER_NAME_EXTERNAL_PLAIN_SASL_JAAS_CONFIG: |
        org.apache.kafka.common.security.plain.PlainLoginModule required \
        username="broker" \
        password="broker" \
        user_broker="broker" \
        user_client="client-secret" \
        user_alice="alice-secret";
      KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: PLAIN
      KAFKA_INTER_BROKER_LISTENER_NAME: EXTERNAL
And the following are the JAAS files for ZooKeeper and Kafka -
zookeeper.sasl.jaas.config -
Server {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    user_kafka="kafka";
};
kafka.jaas.config -
Client {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    username="kafka"
    password="kafka";
};
I created the ZooKeeper and Kafka containers and ran the following command inside the Kafka container -
/opt/kafka_2.13-2.6.0/bin # ./kafka-acls.sh --authorizer-properties zookeeper.connect=zookeeper:2181 --add --allow-principal User:alice --producer --topic testtopic
Adding ACLs for resource `ResourcePattern(resourceType=TOPIC, name=testtopic, patternType=LITERAL)`:
(principal=User:alice, host=*, operation=DESCRIBE, permissionType=ALLOW)
(principal=User:alice, host=*, operation=WRITE, permissionType=ALLOW)
(principal=User:alice, host=*, operation=CREATE, permissionType=ALLOW)
Current ACLs for resource `ResourcePattern(resourceType=TOPIC, name=testtopic, patternType=LITERAL)`:
(principal=User:alice, host=*, operation=DESCRIBE, permissionType=ALLOW)
(principal=User:alice, host=*, operation=WRITE, permissionType=ALLOW)
(principal=User:alice, host=*, operation=CREATE, permissionType=ALLOW)
But when I try to produce an event from my Go code (using Sarama), it gives this error:
kafka server: In the middle of a leadership election, there is currently no leader for this partition and hence it is unavailable for writes.
My Go code is -
package main

import "github.com/Shopify/sarama"

var brokers = []string{"127.0.0.1:9092"}

func newProducer() (sarama.SyncProducer, error) {
    config := sarama.NewConfig()
    config.Producer.Partitioner = sarama.NewRandomPartitioner
    config.Producer.RequiredAcks = sarama.WaitForAll
    config.Producer.Return.Successes = true
    config.Net.SASL.User = "alice"
    config.Net.SASL.Password = "alice-secret"
    config.Net.SASL.Handshake = true
    config.Net.SASL.Enable = true
    producer, err := sarama.NewSyncProducer(brokers, config)
    return producer, err
}

func prepareMessage(topic, message string) *sarama.ProducerMessage {
    msg := &sarama.ProducerMessage{
        Topic:     topic,
        Partition: -1,
        Value:     sarama.StringEncoder(message),
    }
    return msg
}

func panicOnError(err error) {
    if err != nil {
        panic(err)
    }
}

func main() {
    producer, err := newProducer()
    panicOnError(err)
    msg := prepareMessage("testtopic", `{"key":"value"}`)
    _, _, err = producer.SendMessage(msg)
    panicOnError(err)
}
I also tried kafka-acls.sh with the --bootstrap-server argument (command - ./kafka-acls.sh --bootstrap-server localhost:9092 --add --allow-principal User:alice --producer --topic testtopic), but then the script would get stuck and I could observe an authentication error in the Kafka Docker logs -
[2021-05-29 16:27:46,288] INFO [SocketServer brokerId=1002] Failed authentication with /127.0.0.1 (Unexpected Kafka request of type METADATA during SASL handshake.) (org.apache.kafka.common.network.Selector)
PS: everything works fine if I use SASL only (without ACLs).
Now I am stuck at the ACL part. Does anyone have an idea what I am missing (probably in the ZooKeeper or Kafka config)?
Any help is appreciated. Thanks in advance.

For your first issue, I would try the suggestions in https://github.com/Shopify/sarama/issues/272.
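In particular, that thread points at client-side metadata retries: let the client ride out the leader election instead of failing the first request. A sketch of the question's newProducer with longer metadata retries (the retry values are assumptions to tune, and "time" must be added to the imports):

func newProducer() (sarama.SyncProducer, error) {
    config := sarama.NewConfig()
    config.Producer.Partitioner = sarama.NewRandomPartitioner
    config.Producer.RequiredAcks = sarama.WaitForAll
    config.Producer.Return.Successes = true
    // Retry metadata fetches while the partition leader is still being
    // elected, rather than returning the error on the first attempt.
    config.Metadata.Retry.Max = 10                         // assumed value
    config.Metadata.Retry.Backoff = 500 * time.Millisecond // assumed value
    config.Net.SASL.User = "alice"
    config.Net.SASL.Password = "alice-secret"
    config.Net.SASL.Handshake = true
    config.Net.SASL.Enable = true
    return sarama.NewSyncProducer(brokers, config)
}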
For the second issue, you should add --command-config /path/cmd.cfg to the command line, pointing at the admin client properties used to connect to your broker (SASL mechanism and so on). Set the JAAS file via KAFKA_OPTS; the JAAS file should contain a KafkaClient section with the user and password for connecting to your broker with the PLAIN authentication method.
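A minimal sketch of the pieces (the paths are placeholders, and the broker user is taken from the question's JAAS config):

cmd.cfg - the admin client properties:
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN

kafka_client_jaas.conf - the credentials for the PLAIN mechanism:
KafkaClient {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="broker"
    password="broker";
};

Then run the tool with both:
export KAFKA_OPTS="-Djava.security.auth.login.config=/path/kafka_client_jaas.conf"
./kafka-acls.sh --bootstrap-server localhost:9092 --command-config /path/cmd.cfg --add --allow-principal User:alice --producer --topic testtopic

(Equivalently, instead of KAFKA_OPTS you can put a sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="broker" password="broker"; line directly into cmd.cfg.)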

Related

Quarkus Kafka Streams App unable to use SASL PLAIN mechanism: Unexpected handshake request with client mechanism PLAIN, enabled mechanisms are []

I've been developing a Kafka stream processing application with the Quarkus-Framework in Java. Now I'm trying to connect to the Kafka brokers via the SASL/PLAIN mechanism, but am getting the following error:
2022-10-27 10:52:06,736 ERROR [org.apa.kaf.cli.NetworkClient] (kafka-admin-client-thread | alarms-preprocessor-dev-a8147d3e-809c-4e96-9ce0-de10e55a8d72-admin) [AdminClient clientId=alarms-preprocessor-dev-a8147d3e-809c-4e96-9ce0-de10e55a8d72-admin] Connection to node -1 (localhost/127.0.0.1:29092) failed authentication due to: Unexpected handshake request with client mechanism PLAIN, enabled mechanisms are []
Apparently, the brokers do not have the PLAIN mechanism enabled, which raises the question of why my Kafka Connect service is able to use the PLAIN mechanism.
Anyway, this is my broker configuration (approximately the same for all 3 instances), using the confluentinc/cp-kafka Docker image with docker-compose:
broker-1:
  image: confluentinc/cp-kafka:7.2.1
  hostname: broker-1
  container_name: broker-1
  depends_on:
    - zookeeper
  ports:
    - "29092:29092"
    - "9092:9092"
    - "9091:9091"
  environment:
    KAFKA_BROKER_ID: 1
    KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,SASL_PLAINTEXT:SASL_PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker-1:9092,SASL_PLAINTEXT://broker-1:9091,PLAINTEXT_HOST://localhost:29092
    KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
    KAFKA_OPTS: "-Djava.security.auth.login.config=/etc/kafka/kafka_jaas.conf"
    KAFKA_SASL_ENABLED_MECHANISMS: PLAIN
    KAFKA_AUTHORIZER_CLASS_NAME: kafka.security.authorizer.AclAuthorizer
    KAFKA_ALLOW_EVERYONE_IF_NO_ACL_FOUND: "true"
  volumes:
    - /home/larissa/Projekte/SRE/Kafka/local_dev_cluster/files/kafka_jaas.conf:/etc/kafka/kafka_jaas.conf
and this is part of the output from docker logs broker-1 | grep PLAIN:
SLF4J: Actual binding is of type [org.slf4j.impl.Reload4jLoggerFactory]
advertised.listeners = PLAINTEXT://broker-1:9092,SASL_PLAINTEXT://broker-1:9091,PLAINTEXT_HOST://localhost:29092
inter.broker.listener.name = PLAINTEXT
listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SASL_PLAINTEXT:SASL_PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
listeners = PLAINTEXT://0.0.0.0:9092,SASL_PLAINTEXT://0.0.0.0:9091,PLAINTEXT_HOST://0.0.0.0:29092
sasl.enabled.mechanisms = [PLAIN]
security.inter.broker.protocol = PLAINTEXT
The part that says "sasl.enabled.mechanisms = [PLAIN]" suggests that the PLAIN mechanism is indeed enabled. So maybe it's a problem with my Quarkus application configuration, which looks like this:
quarkus.kafka-streams.application-id=alarms-preprocessor-dev
quarkus.kafka-streams.bootstrap-servers=localhost:29092,localhost:29192,localhost:29292
quarkus.kafka-streams.topics=test_topic
quarkus.kafka-streams.security.protocol=SASL_PLAINTEXT
quarkus.kafka-streams.sasl.mechanism=PLAIN
quarkus.kafka-streams.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
username="admin" \
password="admin-secret" \
serviceName="alarms-preprocessor";
All JAAS-configs, users and passwords are correct by the way, since they work with Kafka Connect just fine. If necessary, I can provide those too, just let me know.
Thanks in advance for any answers :)
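(One assumption worth checking rather than a confirmed diagnosis: sasl.enabled.mechanisms only takes effect on SASL listeners, and the bootstrap servers above point at port 29092, which the listener map declares as plain PLAINTEXT - a SASL handshake against a PLAINTEXT listener fails with exactly "enabled mechanisms are []". A sketch of pointing the application at the SASL_PLAINTEXT listener instead, assuming port 9091 is reachable and resolvable from the application host:)

quarkus.kafka-streams.bootstrap-servers=localhost:9091
quarkus.kafka-streams.security.protocol=SASL_PLAINTEXT
quarkus.kafka-streams.sasl.mechanism=PLAIN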

Micronaut with kafka fails to retrieve metadata on first attempts

I'm trying to send messages from a Micronaut 3.6.3 application to Kafka deployed with docker-compose. On the first attempt I receive a warning like this:
[Producer clientId=producer-1] Error while fetching metadata with
correlation id 1 : {accountRegistered=LEADER_NOT_AVAILABLE}
For the following messages the problem disappears, but my requirement is to not lose any account-registration events.
My docker compose configuration:
services:
  kafka:
    image: 'bitnami/kafka:3.2'
    hostname: 'kafka'
    environment:
      ALLOW_PLAINTEXT_LISTENER: 'yes'
      KAFKA_BROKER_ID: 1
      KAFKA_CFG_ADVERTISED_LISTENERS: 'INSIDE://kafka:29092, OUTSIDE://localhost:9092'
      KAFKA_CFG_INTER_BROKER_LISTENER_NAME: 'INSIDE'
      KAFKA_CFG_LISTENERS: 'INSIDE://:29092, OUTSIDE://:9092'
      KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP: 'INSIDE:PLAINTEXT, OUTSIDE:PLAINTEXT'
      KAFKA_CFG_ZOOKEEPER_CONNECT: 'zookeeper:2181'
    ports:
      - '9092:9092'
    depends_on:
      - 'zookeeper'
  # TODO: Can be removed with future versions of Kafka (using KRaft)
  zookeeper:
    image: 'bitnami/zookeeper:3.8'
    hostname: 'zookeeper'
    environment:
      ALLOW_ANONYMOUS_LOGIN: 'yes'
    ports:
      - '2181:2181'
From the application I use 'localhost:9092' to connect.
My consumer code:
@KafkaListener(offsetReset = OffsetReset.EARLIEST)
class AccountReferenceUpdaterEventConsumer {

    @Inject
    AccountReferenceEntityRepository accountReferenceEntityRepository

    @Topic('accountRegistered')
    void receive(@MessageBody AccountRegisteredEvent event) {
        def account = event.source
        accountReferenceEntityRepository.findById(account.id)
                .ifPresentOrElse(
                        accountReference -> log.warn('Account {} already registered', account.id),
                        () -> {
                            def accountReference = new AccountReferenceEntity(
                                    accountId: account.id,
                                    username: account.username
                            )
                            accountReferenceEntityRepository.save(accountReference)
                        }
                )
    }
}
application.yml:
kafka:
  bootstrap:
    servers: 'localhost:9092'
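Since LEADER_NOT_AVAILABLE is a retriable error, one way to keep the first event from being lost is to let the producer retry until the election finishes. A sketch, assuming micronaut-kafka passes the properties under kafka.producers.default through to the underlying Kafka producer (the values are assumptions):

kafka:
  bootstrap:
    servers: 'localhost:9092'
  producers:
    default:
      retries: 10               # assumed value
      'retry.backoff.ms': 500   # assumed value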

Using Spark Streaming with Kafka docker container errors?

I am using a Kafka docker-compose setup with the below docker-compose.yml, installed in a VMware machine.
When I connect to it via pyspark.streaming.kafka.KafkaUtils, it raises some errors.
Please help me resolve these problems.
I used configuration from https://rmoff.net/2018/08/02/kafka-listeners-explained/
docker-compose.yml file
version: '3.7'
services:
  zookeeper:
    image: "confluentinc/cp-zookeeper:latest"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    ports:
      - "2181:2181"
  # This has three listeners you can experiment with.
  # BOB for internal traffic on the Docker network
  # FRED for traffic from the Docker-host machine (`localhost`)
  # ALICE for traffic from outside, reaching the Docker host on 192.168.231.145
  kafka0:
    image: "confluentinc/cp-enterprise-kafka:latest"
    ports:
      - '9092:9092'
      - '29094:29094'
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 0
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENERS: LISTENER_BOB://kafka0:29092,LISTENER_FRED://kafka0:9092,LISTENER_ALICE://0.0.0.0:29094
      KAFKA_ADVERTISED_LISTENERS: LISTENER_BOB://kafka0:29092,LISTENER_FRED://localhost:9092,LISTENER_ALICE://192.168.231.145:29094
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: LISTENER_BOB:PLAINTEXT,LISTENER_FRED:PLAINTEXT,LISTENER_ALICE:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: LISTENER_BOB
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: "false"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 100
  kafkacat:
    image: confluentinc/cp-kafkacat
    command: sleep infinity
Python code I used to connect from the VMware host machine:
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils
from confluent_kafka import Producer, Consumer
import socket
import json

if __name__ == "__main__":
    sc = SparkContext(appName="Processing_raw_data")
    ssc = StreamingContext(sc, 1)
    in_stream = KafkaUtils.createStream(ssc, "192.168.231.145:2181", socket.gethostname(), {"testing": 1}, {"auto.offset.reset": "smallest"})
    in_stream.pprint()
    ssc.start()
    ssc.awaitTermination()
Errors
21/06/19 19:44:24 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
21/06/19 19:44:24 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
-------------------------------------------
Time: 2021-06-19 19:44:27
-------------------------------------------
21/06/19 19:44:27 WARN AppInfo$: Can't read Kafka version from MANIFEST.MF. Possible cause: java.lang.NullPointerException
[Stage 0:> (0 + 1) / 1]-------------------------------------------
Time: 2021-06-19 19:44:28
-------------------------------------------
21/06/19 19:44:28 WARN ClientUtils$: Fetching topic metadata with correlation id 0 for topics [Set(testing)] from broker [id:0,host:kafka0,port:29092] failed
java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:100)
at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:73)
at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:72)
at kafka.producer.SyncProducer.send(SyncProducer.scala:113)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:58)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:93)
at kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:66)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
21/06/19 19:44:28 WARN ConsumerFetcherManager$LeaderFinderThread: [dino-computer_dino-computer-1624106667579-fbe9ab2d-leader-finder-thread], Failed to find leader for Set([testing,0])
kafka.common.KafkaException: fetching topic metadata for topics [Set(testing)] from broker [ArrayBuffer(id:0,host:kafka0,port:29092)] failed
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:72)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:93)
at kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:66)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
Caused by: java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:100)
at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:73)
at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:72)
at kafka.producer.SyncProducer.send(SyncProducer.scala:113)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:58)
... 3 more
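The stack trace shows the old ZooKeeper-based consumer being told to fetch metadata from broker [id:0,host:kafka0,port:29092] - the internal BOB listener, which is not resolvable from outside Docker. A sketch of one workaround, assuming the spark-streaming-kafka-0-8 direct API (which takes broker addresses instead of a ZooKeeper quorum) and the ALICE listener advertised as 192.168.231.145:29094:

from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils

sc = SparkContext(appName="Processing_raw_data")
ssc = StreamingContext(sc, 1)

# The direct stream talks to the brokers themselves, so give it the
# listener that is reachable from this machine (ALICE, port 29094).
in_stream = KafkaUtils.createDirectStream(
    ssc, ["testing"], {"metadata.broker.list": "192.168.231.145:29094"})

in_stream.pprint()
ssc.start()
ssc.awaitTermination()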

Dapr golang Docker Compose - running into an "errorCode":"ERR_DIRECT_INVOKE","message":"invoke API is not ready" error

I am trying out Dapr for the first time, referring to the Dapr Go SDK at https://github.com/dapr/go-sdk, and trying to host a Dapr service written in Golang with Docker Compose on my Windows 10 machine (using VS Code). I am running into an issue connecting to the service.
I have the Docker Compose file set up to do a simple configuration, as follows, and I am trying to connect to the service via the Dapr API using curl:
golang service (taskapi service) => Dapr SideCar (taskapidapr)
I based it off of the example from https://github.com/dapr/go-sdk/blob/main/example/Makefile, but using Docker Compose.
When I try to connect to the service using
curl -d "ping" -H "Content-type: text/plain;charset=UTF-8"
"http://localhost:8300/v1.0/invoke/taskapi/method/echo"
I am running into the following error.
{"errorCode":"ERR_DIRECT_INVOKE","message":"invoke API is not ready"}
And the Dapr logs in Docker show 'no mDNS apps to refresh.' - I am not sure whether this is the cause or how to handle it.
If anyone can point me to what I am missing, I would greatly appreciate it.
Thank you
Athadu
golang package
package main

import (
    "context"
    "errors"
    "fmt"
    "log"
    "net/http"

    "github.com/dapr/go-sdk/service/common"
    daprd "github.com/dapr/go-sdk/service/http"
)

func main() {
    port := "8085"
    address := fmt.Sprintf(":%s", port)
    log.Printf("Creating New service at %v port", address)
    log.Println()

    // create a Dapr service (e.g. ":8080", "0.0.0.0:8080", "10.1.1.1:8080")
    s := daprd.NewService(address)

    // add a service-to-service invocation handler
    if err := s.AddServiceInvocationHandler("/echo", echoHandler); err != nil {
        log.Fatalf("error adding invocation handler: %v", err)
    }

    if err := s.Start(); err != nil && err != http.ErrServerClosed {
        log.Fatalf("error listening: %v", err)
    }
}

func echoHandler(ctx context.Context, in *common.InvocationEvent) (out *common.Content, err error) {
    if in == nil {
        err = errors.New("invocation parameter required")
        return
    }
    log.Printf(
        "echo - ContentType:%s, Verb:%s, QueryString:%s, %s",
        in.ContentType, in.Verb, in.QueryString, in.Data,
    )
    out = &common.Content{
        Data:        in.Data,
        ContentType: in.ContentType,
        DataTypeURL: in.DataTypeURL,
    }
    return
}
docker-compose.yml
version: "3"
services:
taskapi:
image: golang:1.16
volumes:
- ..:/go/src/lekha
working_dir: /go/src/lekha/uploader
command: go run main.go
ports:
- "8085:8085"
environment:
aaa: 80
my: I am THE variable value
networks:
- lekha
taskapidapr:
image: "daprio/daprd:edge"
command: [
"./daprd",
"-app-id", "taskapi",
"-app-protocol", "http",
"-app-port", "8085",
"-dapr-http-port", "8300",
"-placement-host-address", "placement:50006",
"-log-level", "debug",
"-components-path", "/components"
]
volumes:
- "../dapr-components/:/components" # Mount our components folder for the dapr runtime to use
depends_on:
- taskapi
ports:
- "8300:8300"
networks:
- lekha
#network_mode: "service:taskapi" # Attach the task-api-dapr service to the task-api network namespace
############################
# Dapr placement service
############################
placement:
image: "daprio/dapr"
command: ["./placement", "-port", "50006"]
ports:
- "50006:50006"
networks:
- lekha
networks:
lekha:
Daprd shows these mDNS messages in the logs - I am not sure if this is the cause:
time="2021-05-24T01:06:13.6629303Z" level=debug msg="Refreshing all mDNS addresses." app_id=taskapi instance=442e04c9e8a6 scope=dapr.contrib type=log ver=edge
time="2021-05-24T01:06:13.6630421Z" level=debug msg="no mDNS apps to refresh." app_id=taskapi instance=442e04c9e8a6 scope=dapr.contrib
Additionally, I see the containers running fine on the expected ports in Docker Desktop.
{
"errorCode": "ERR_DIRECT_INVOKE",
"message": "invoke API is not ready"
}
same as yours
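For reference, the line commented out in the compose file above is the usual docker-compose pattern here: run the daprd sidecar in the app container's network namespace so sidecar and app reach each other over localhost (mDNS across separate containers is what otherwise fails). A sketch under that assumption - the ports and networks keys move off the sidecar, and "8300:8300" would be published on the taskapi service instead, since a container sharing another's namespace cannot publish its own ports:

taskapidapr:
  image: "daprio/daprd:edge"
  command: [
    "./daprd",
    "-app-id", "taskapi",
    "-app-protocol", "http",
    "-app-port", "8085",
    "-dapr-http-port", "8300",
    "-placement-host-address", "placement:50006",
    "-components-path", "/components"
  ]
  volumes:
    - "../dapr-components/:/components"
  depends_on:
    - taskapi
  network_mode: "service:taskapi" # share taskapi's network namespace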

kafka: cannot produce messages to kafka inside docker

docker-compose.yml (https://github.com/wurstmeister/kafka-docker)
version: "2.1"
services:
zookeeper:
image: wurstmeister/zookeeper
ports:
- "2181:2181"
kafka:
image: wurstmeister/kafka
ports:
- "9092:9092"
environment:
KAFKA_ADVERTISED_HOST_NAME: localhost
KAFKA_ADVERTISED_PORT: 9092
KAFKA_CREATE_TOPICS: "test:3:1"
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
volumes:
- /var/run/docker.sock:/var/run/docker.sock
Errors when trying to produce messages following https://kafka.apache.org/quickstart:
~/kafka_2.11-1.0.0$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
>gh
>[2018-01-19 17:28:15,385] ERROR Error when sending message to topic test with key: null, value: 2 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for test-0: 1566 ms has passed since batch creation plus linger time
list topics:
~/kafka_2.11-1.0.0$ bin/kafka-topics.sh --list --zookeeper localhost:2181
__consumer_offsets
test
why? thanks
UPDATE
How do I set KAFKA_ADVERTISED_HOST_NAME or the network so that my Python/Java program or kafka-console-producer.sh (outside the Docker container) can produce messages to Kafka via localhost:9092?
UPDATE
It seems that the following docker-compose.yml is working fine:
version: "2"
services:
zookeeper:
image: "wurstmeister/zookeeper:latest"
network_mode: "host"
ports:
- 2181:2181
kafkaserver:
image: "wurstmeister/kafka:latest"
network_mode: "host"
ports:
- 9092:9092
environment:
KAFKA_CREATE_TOPICS: "test:3:1"
KAFKA_ZOOKEEPER_CONNECT: localhost:2181
I had the same issue. The suggested syntax in the kafka-docker README does not match the provided docker-compose.yml, which does not work as is. I finally found this post, and a variation of BEA's updated docker-compose.yml file worked for me. Thank you!
Here are the details.
I am running wurstmeister/kafka-docker on a Ubuntu 16.04 virtual image I set up as described at https://bertrandszoghy.wordpress.com/2018/05/03/building-the-hyperledger-fabric-vm-and-docker-images-version-1-1-from-scratch/
My docker-compose.yml file:
version: '2'
services:
  zookeeper:
    image: "wurstmeister/zookeeper:latest"
    network_mode: "host"
    ports:
      - "2181:2181"
  kafka:
    image: "wurstmeister/kafka:latest"
    network_mode: "host"
    ports:
      - 9092:9092
    environment:
      KAFKA_LISTENERS: PLAINTEXT://:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://172.17.0.1:9092
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_CREATE_TOPICS: "BertTopic:3:1"
On the same VM I installed NodeJs with:
curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -
sudo apt-get install -y nodejs
cd
mkdir nodecode
cd nodecode
sudo npm install -g node-pre-gyp
sudo npm install kafka-node
Then I ran the following program to produce a couple of messages:
var kafka = require('kafka-node'),
    Producer = kafka.Producer,
    KeyedMessage = kafka.KeyedMessage,
    client = new kafka.Client(),
    producer = new Producer(client),
    km = new KeyedMessage('key', 'message'),
    payloads = [
        { topic: 'BertTopic', messages: 'first test message', partition: 0 },
        { topic: 'BertTopic', messages: 'second test message', partition: 0 }
    ];

producer.on('ready', function () {
    producer.send(payloads, function (err, data) {
        console.log(data);
        process.exit(0);
    });
});

producer.on('error', function (err) {
    console.log('ERROR: ' + err.toString());
});
Which returned:
{ BertTopic: { '0': 0 } }
And I ran this second NodeJs program to consume the (last) messages:
var options = {
    fromOffset: 'latest'
};

var kafka = require('kafka-node'),
    Consumer = kafka.Consumer,
    client = new kafka.Client(),
    consumer = new Consumer(
        client,
        [
            { topic: 'BertTopic', partition: 0 }
        ],
        [
            {
                autoCommit: false
            },
            options =
            {
                fromOffset: 'latest'
            }
        ]
    );
Which returned:
{ topic: 'BertTopic',
  value: 'first test message',
  offset: 0,
  partition: 0,
  highWaterOffset: 2,
  key: null }
{ topic: 'BertTopic',
  value: 'second test message',
  offset: 1,
  partition: 0,
  highWaterOffset: 2,
  key: null }
I also have a third NodeJs program that shows all historical messages in the topic, listed at my blog post https://bertrandszoghy.wordpress.com/2017/06/27/nodejs-querying-messages-in-apache-kafka/
Hope this helps someone out.
