I'm trying to send messages from a Micronaut 3.6.3 application to Kafka deployed with docker-compose. On first attempt I receive a warning like this:
[Producer clientId=producer-1] Error while fetching metadata with
correlation id 1 : {accountRegistered=LEADER_NOT_AVAILABLE}
The problem disappears for subsequent messages, but my requirement is to not lose any account-registration events.
My Docker Compose configuration:
services:
  kafka:
    image: 'bitnami/kafka:3.2'
    hostname: 'kafka'
    environment:
      ALLOW_PLAINTEXT_LISTENER: 'yes'
      KAFKA_BROKER_ID: 1
      KAFKA_CFG_ADVERTISED_LISTENERS: 'INSIDE://kafka:29092, OUTSIDE://localhost:9092'
      KAFKA_CFG_INTER_BROKER_LISTENER_NAME: 'INSIDE'
      KAFKA_CFG_LISTENERS: 'INSIDE://:29092, OUTSIDE://:9092'
      KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP: 'INSIDE:PLAINTEXT, OUTSIDE:PLAINTEXT'
      KAFKA_CFG_ZOOKEEPER_CONNECT: 'zookeeper:2181'
    ports:
      - '9092:9092'
    depends_on:
      - 'zookeeper'
  # TODO: Can be removed with future versions of Kafka (using KRaft)
  zookeeper:
    image: 'bitnami/zookeeper:3.8'
    hostname: 'zookeeper'
    environment:
      ALLOW_ANONYMOUS_LOGIN: 'yes'
    ports:
      - '2181:2181'
From the application I use 'localhost:9092' to connect.
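One way to sidestep the first-send race, as a sketch: create the topic up front instead of relying on auto-creation, so a leader already exists when the first message arrives. This assumes the Bitnami image's bundled CLI and the service name from the compose file above:

docker-compose exec kafka kafka-topics.sh --bootstrap-server localhost:9092 \
  --create --if-not-exists --topic accountRegistered --partitions 1 --replication-factor 1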
My consumer code:
@KafkaListener(offsetReset = OffsetReset.EARLIEST)
class AccountReferenceUpdaterEventConsumer {

    @Inject
    AccountReferenceEntityRepository accountReferenceEntityRepository

    @Topic('accountRegistered')
    void receive(@MessageBody AccountRegisteredEvent event) {
        def account = event.source
        accountReferenceEntityRepository.findById(account.id)
            .ifPresentOrElse(
                accountReference -> log.warn('Account {} already registered', account.id),
                () -> {
                    def accountReference = new AccountReferenceEntity(
                        accountId: account.id,
                        username: account.username
                    )
                    accountReferenceEntityRepository.save(accountReference)
                }
            )
    }
}
application.yml:
kafka:
  bootstrap:
    servers: 'localhost:9092'
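Another angle, as a sketch: Micronaut forwards properties under kafka.producers.default to the underlying Kafka producer, so the producer can be told to retry through the transient LEADER_NOT_AVAILABLE window instead of dropping the send. The values below are illustrative assumptions, not tested settings:

kafka:
  bootstrap:
    servers: 'localhost:9092'
  producers:
    default:
      retries: 10           # retry retriable errors such as LEADER_NOT_AVAILABLE
      acks: all             # require full acknowledgement so registrations are not silently lost
      max.block.ms: 30000   # wait up to 30s for metadata before failing the send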
I am trying to run the Activiti Cloud example in Docker.
I am using this official tutorial:
https://activiti.gitbook.io/activiti-7-developers-guide/getting-started/getting-started-activiti-cloud/getting-started-docker-compose
All the prior steps succeeded until I tried:
"To start work, execute getKeycloakToken hruser in Postman Keycloak collection. Then run startProcess in rb-my-app Postman collection."
I succeeded in executing getKeycloakToken hruser in the Postman Keycloak collection.
I failed to run startProcess in the rb-my-app Postman collection.
I get a 404 when I execute getModels in the Postman modeling collection.
I get a 500 when I execute startProcess in the Postman rb collection.
The major error message in the logs:
example-runtime-bundle | 2022-03-08 19:08:30.970 WARN [rb,,] 7 --- [nio-8080-exec-1] o.keycloak.adapters.KeycloakDeployment : Failed to load URLs from http://127.0.0.1.nip.io/auth/realms/activiti/.well-known/openid-configuration
example-runtime-bundle |
example-runtime-bundle | java.net.ConnectException: Connection refused (Connection refused)
The related configurations:
in docker-compose.yml:
keycloak:
  container_name: keycloak
  image: activiti/activiti-keycloak
  volumes:
    - ./activiti-realm.json:/opt/jboss/keycloak/activiti-realm.json
  restart: unless-stopped
  depends_on:
    - nginx

example-runtime-bundle:
  container_name: example-runtime-bundle
  image: activiti/example-runtime-bundle:${VERSION}
  environment:
    # JAVA_OPTS: "-Xdebug -Xrunjdwp:server=y,transport=dt_socket,address=8000,suspend=n -noverify"
    SPRING_JMX_ENABLED: "false"
    ACT_KEYCLOAK_URL: "http://${DOCKER_IP}/auth"
    SPRING_RABBITMQ_HOST: "rabbitmq"
    SERVER_SERVLET_CONTEXT_PATH: /rb
    SPRING_DATASOURCE_URL: jdbc:postgresql://activiti-postgres:5432/activitidb
    SPRING_DATASOURCE_USERNAME: activiti
    SPRING_DATASOURCE_PASSWORD: mypassword
    SPRING_JPA_DATABASE_PLATFORM: org.hibernate.dialect.PostgreSQLDialect
    SPRING_JPA_GENERATE_DDL: "true"
    SPRING_JPA_HIBERNATE_DDL_AUTO: update
    # ACTIVITI_SECURITY_POLICIES_0_NAME: "HR Group restricted to SimpleProcess and ConnectorProcess"
    # ACTIVITI_SECURITY_POLICIES_0_GROUPS: "hr"
    # ACTIVITI_SECURITY_POLICIES_0_ACCESS: "WRITE"
    # ACTIVITI_SECURITY_POLICIES_0_SERVICENAME: "rb-my-app"
    # ACTIVITI_SECURITY_POLICIES_0_KEYS: "SimpleProcess,ConnectorProcess,fixSystemFailure,twoTaskProcess"
    # ACTIVITI_SECURITY_POLICIES_1_NAME: "testgroup not restricted at all"
    # ACTIVITI_SECURITY_POLICIES_1_GROUPS: "testgroup"
    # ACTIVITI_SECURITY_POLICIES_1_ACCESS: "WRITE"
    # ACTIVITI_SECURITY_POLICIES_1_SERVICENAME: "rb-my-app"
    # ACTIVITI_SECURITY_POLICIES_1_KEYS: "*"
  restart: unless-stopped
  depends_on:
    - nginx
    - keycloak
    - rabbitmq
    - activiti-postgres
in .env:
DOCKER_IP=127.0.0.1.nip.io
VERSION=7.1.0-M13
KEYCLOAK_REALM=activiti
KEYCLOAK_RESOURCE=activiti
Full logs output is here:
https://gist.github.com/chang4tech/affd504809249733ee1f553da1d03763
What am I supposed to do to debug/detect the problem and eliminate the errors?
Thanks.
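One concrete check, as a sketch: 127.0.0.1.nip.io always resolves to 127.0.0.1, and inside the example-runtime-bundle container 127.0.0.1 is the container itself rather than the host running nginx/Keycloak, which would explain the Connection refused above. Assuming curl is available in the image, the failure can be reproduced directly:

docker exec -it example-runtime-bundle \
  curl -v http://127.0.0.1.nip.io/auth/realms/activiti/.well-known/openid-configuration

If that fails the same way, the direction to investigate is pointing DOCKER_IP at an address that is reachable from inside the containers, not just from the host.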
I'm running Loki for test purposes in Docker and have recently been getting the following error from the Promtail and Loki containers:
level=warn ts=2022-02-18T09:41:39.186511145Z caller=client.go:349 component=client host=loki:3100 msg="error sending batch, will retry" status=429 error="server returned HTTP status 429 Too Many Requests (429): Maximum active stream limit exceeded, reduce the number of active streams (reduce labels or reduce label values), or contact your Loki administrator to see if the limit can be increased"
I have tried increasing limit settings (ingestion_rate_mb and ingestion_burst_size_mb) in my Loki config.
I set up two Promtail jobs - one ingesting MS Exchange logs from a local directory (currently 8 TB and increasing), the other receiving logs spooled from syslog-ng.
I've read that reducing labels helps, but I'm only using two labels.
Configuration
Below are my config files (docker-compose, Loki, Promtail):
docker-compose.yaml
version: "3"
networks:
loki:
services:
loki:
image: grafana/loki:2.4.2
container_name: loki
restart: always
user: "10001:10001"
ports:
- "3100:3100"
command: -config.file=/etc/loki/local-config.yaml
volumes:
- ${DATADIR}/loki/etc:/etc/loki:rw
- ${DATADIR}/loki/chunks:/loki/chunks
networks:
- loki
promtail:
image: grafana/promtail:2.4.2
container_name: promtail
restart: always
volumes:
- /var/log/loki:/var/log/loki
- ${DATADIR}/promtail/etc:/etc/promtail
ports:
- "1514:1514" # for syslog-ng
- "9080:9080" # for http web interface
command: -config.file=/etc/promtail/config.yml
networks:
- loki
grafana:
image: grafana/grafana:8.3.4
container_name: grafana
restart: always
user: "476:0"
volumes:
- ${DATADIR}/grafana/var:/var/lib/grafana
ports:
- "3000:3000"
networks:
- loki
Loki Config
auth_enabled: false

server:
  http_listen_port: 3100

common:
  path_prefix: /loki
  storage:
    filesystem:
      chunks_directory: /loki/chunks
      rules_directory: /loki/rules
  replication_factor: 1
  ring:
    instance_addr: 127.0.0.1
    kvstore:
      store: inmemory

schema_config:
  configs:
    - from: 2020-10-24
      store: boltdb-shipper
      object_store: filesystem
      schema: v11
      index:
        prefix: index_
        period: 24h

ruler:
  alertmanager_url: http://localhost:9093

# https://grafana.com/docs/loki/latest/configuration/#limits_config
limits_config:
  reject_old_samples: true
  reject_old_samples_max_age: 168h
  ingestion_rate_mb: 12
  ingestion_burst_size_mb: 24
  per_stream_rate_limit: 24MB

chunk_store_config:
  max_look_back_period: 336h

table_manager:
  retention_deletes_enabled: true
  retention_period: 2190h

ingester:
  lifecycler:
    address: 127.0.0.1
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1
    final_sleep: 0s
  chunk_encoding: snappy
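Worth noting: the 429 above complains about the active-stream limit, which is a separate knob from the ingestion-rate settings already raised. A sketch of the relevant limits_config key, assuming Loki 2.4 where the global default is 5000 streams per tenant (the value below is illustrative, not a recommendation):

limits_config:
  # ...existing settings...
  max_global_streams_per_user: 10000   # default 5000; 0 disables the limit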
Promtail Config
server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://loki:3100/loki/api/v1/push

scrape_configs:
  - job_name: exchange
    static_configs:
      - targets:
          - localhost
        labels:
          job: exchange
          __path__: /var/log/loki/exchange/*/*/*log

  - job_name: syslog-ng
    syslog:
      listen_address: 0.0.0.0:1514
      idle_timeout: 60s
      label_structured_data: yes
      labels:
        job: "syslog-ng"
    relabel_configs:
      - source_labels: ['__syslog_message_hostname']
        target_label: 'host'
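Each unique combination of label values counts as one stream, so even with just the job and host labels the stream count grows with every distinct syslog host that ships logs. To see how many streams are actually active, one option is the ingester's own metric - a sketch, assuming the metrics endpoint is exposed on port 3100:

curl -s http://localhost:3100/metrics | grep loki_ingester_memory_streams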
I am using the wurstmeister Kafka and ZooKeeper Docker images locally to test SASL and ACLs in Kafka.
My docker-compose.yml is -
version: '3'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    hostname: zookeeper
    container_name: zookeeper
    volumes:
      - ./zookeeper/zookeeper.sasl.jaas.config:/etc/kafka/zookeeper_server_jaas.conf
      - ./zk/data:/var/lib/zookeeper/data
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_SET_ACL: 'true'
      KAFKA_OPTS: -Djava.security.auth.login.config=/etc/kafka/zookeeper_server_jaas.conf
        -Dzookeeper.authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
        -Dzookeeper.allowSaslFailedClients=false
        -Dzookeeper.requireClientAuthScheme=sasl

  broker:
    image: wurstmeister/kafka:2.13-2.6.0
    hostname: broker
    container_name: broker
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    volumes:
      - ./kafka/kafka.jaas.conf:/etc/kafka/kafka_server_jaas.conf
      - ./kfk/data:/kafka
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: EXTERNAL:SASL_PLAINTEXT
      KAFKA_AUTHORIZER_CLASS_NAME: kafka.security.authorizer.AclAuthorizer
      KAFKA_AUTO_CREATE_TOPIC: 'true'
      KAFKA_LISTENERS: EXTERNAL://:9092
      KAFKA_ADVERTISED_LISTENERS: EXTERNAL://localhost:9092
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_SASL_ENABLED_MECHANISMS: PLAIN
      KAFKA_LISTENER_NAME_EXTERNAL_SASL_ENABLED_MECHANISMS: PLAIN
      KAFKA_LISTENER_NAME_EXTERNAL_PLAIN_SASL_JAAS_CONFIG: |
        org.apache.kafka.common.security.plain.PlainLoginModule required \
        username="broker" \
        password="broker" \
        user_broker="broker" \
        user_client="client-secret" \
        user_alice="alice-secret";
      KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: PLAIN
      KAFKA_INTER_BROKER_LISTENER_NAME: EXTERNAL
And the following are the JAAS files for ZooKeeper and Kafka -
zookeeper.sasl.jaas.config -
Server {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    user_kafka="kafka";
};
kafka.jaas.config -
Client {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    username="kafka"
    password="kafka";
};
I created the ZooKeeper and Kafka containers and ran this command inside the Kafka container -
/opt/kafka_2.13-2.6.0/bin # ./kafka-acls.sh --authorizer-properties zookeeper.connect=zookeeper:2181 --add --allow-principal User:alice --producer --topic testtopic
Adding ACLs for resource `ResourcePattern(resourceType=TOPIC, name=testtopic, patternType=LITERAL)`:
(principal=User:alice, host=*, operation=DESCRIBE, permissionType=ALLOW)
(principal=User:alice, host=*, operation=WRITE, permissionType=ALLOW)
(principal=User:alice, host=*, operation=CREATE, permissionType=ALLOW)
Current ACLs for resource `ResourcePattern(resourceType=TOPIC, name=testtopic, patternType=LITERAL)`:
(principal=User:alice, host=*, operation=DESCRIBE, permissionType=ALLOW)
(principal=User:alice, host=*, operation=WRITE, permissionType=ALLOW)
(principal=User:alice, host=*, operation=CREATE, permissionType=ALLOW)
But when I try to produce an event from my Go code (using Sarama), it gives this error:
kafka server: In the middle of a leadership election, there is currently no leader for this partition and hence it is unavailable for writes.
My Go code is -
package main

import "github.com/Shopify/sarama"

var brokers = []string{"127.0.0.1:9092"}

func newProducer() (sarama.SyncProducer, error) {
    config := sarama.NewConfig()
    config.Producer.Partitioner = sarama.NewRandomPartitioner
    config.Producer.RequiredAcks = sarama.WaitForAll
    config.Producer.Return.Successes = true
    config.Net.SASL.User = "alice"
    config.Net.SASL.Password = "alice-secret"
    config.Net.SASL.Handshake = true
    config.Net.SASL.Enable = true
    producer, err := sarama.NewSyncProducer(brokers, config)
    return producer, err
}

func prepareMessage(topic, message string) *sarama.ProducerMessage {
    msg := &sarama.ProducerMessage{
        Topic:     topic,
        Partition: -1,
        Value:     sarama.StringEncoder(message),
    }
    return msg
}

func panicOnError(err error) {
    if err != nil {
        panic(err)
    }
}

func main() {
    producer, err := newProducer()
    panicOnError(err)
    msg := prepareMessage("testtopic", `{"key":"value"}`)
    _, _, err = producer.SendMessage(msg)
    panicOnError(err)
}
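A sketch of producer-config tweaks that are sometimes suggested for this transient leadership error - untested assumptions, to be added to the config in newProducer above (the Backoff value needs the time package imported):

config.Net.SASL.Mechanism = sarama.SASLTypePlaintext // be explicit about the PLAIN mechanism
config.Metadata.Retry.Max = 10                       // default 3; keep refreshing metadata while the election settles
config.Metadata.Retry.Backoff = 500 * time.Millisecond
config.Producer.Retry.Max = 10                       // retry the produce itself as well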
I also tried kafka-acls.sh with the --bootstrap-server argument (command - ./kafka-acls.sh --bootstrap-server localhost:9092 --add --allow-principal User:alice --producer --topic testtopic), but then the script would get stuck and I can observe an authentication error in the Kafka docker logs -
[2021-05-29 16:27:46,288] INFO [SocketServer brokerId=1002] Failed authentication with /127.0.0.1 (Unexpected Kafka request of type METADATA during SASL handshake.) (org.apache.kafka.common.network.Selector)
PS: everything works fine if I use SASL only (without ACLs).
Now I am stuck on the ACL part. Does anyone have an idea what I am missing (probably in the ZooKeeper or Kafka config)?
Any help is appreciated. Thanks in advance.
For your first issue I would try the suggestions in https://github.com/Shopify/sarama/issues/272.
For the second issue you should add --command-config /path/cmd.cfg to the command line, pointing at the admin-client properties needed to connect to your broker (SASL mechanism and so on). Alternatively, set KAFKA_OPTS to point at a JAAS file; that file should contain a KafkaClient section with the username and password used to connect to your broker with the PLAIN authentication method. Sketches of both files follow.
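A sketch of what those two files could look like, assuming the broker credentials from the compose file above (paths and names are illustrative):

cmd.cfg, passed via --command-config:

security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="broker" \
  password="broker";

Or a JAAS file exported before running the script, e.g. export KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/client_jaas.conf":

KafkaClient {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="broker"
  password="broker";
};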
I am running WSO2 IS version 5.8.0 in Docker Swarm, and I scripted a compose file for it that maps the files deployment.toml and wso2carbon.jks plus a directory on the servers.
After changing the keystore I receive this error on admin login:
System error while Authenticating/Authorizing User : Error when handling event : PRE_AUTHENTICATION
If I remove the mapping, the SSL cert is not valid, but I can log in.
PS: I use Traefik to redirect to the container.
The stack deploy file:
#IS#
is-hml:
  image: wso2/wso2is:5.8.0
  ports:
    - 4763:4763
    - 4443:9443
  volumes:
    #- /docker/release-hml/wso2/full-identity-server-volume:/home/wso2carbon/wso2is-5.8.0
    - /docker/release-hml/wso2/identity-server:/home/wso2carbon/wso2-config-volume
  extra_hosts:
    - "wso2-hml.valecard.com.br:127.0.0.1"
  networks:
    traefik_traefik:
      aliases:
        - is-hml
  configs:
    #- source: deployment.toml
    #  target: /home/wso2carbon/wso2is-5.8.0/repository/conf/deployment.toml
    - source: wso2carbon.jks
      target: /home/wso2carbon/wso2is-5.8.0/repository/resources/security/wso2carbon.jks
    #- source: catalina-server.xml
    #  target: /home/wso2carbon/wso2is-5.8.0/repository/conf/tomcat/catalina-server.xml
    - source: carbon.xml
      target: /home/wso2carbon/wso2is-5.8.0/repository/conf/carbon.xml
  #environment:
  #  - "CATALINA_OPTS=-Xmx2g -Xms2g -XX:MaxPermSize=1024m"
  #  - "JVM_OPTS=-Xmx2g -Xms2g -XX:MaxPermSize=1024m"
  #  - "JAVA_OPTS=-Xmx2g -Xms2g"
  deploy:
    #endpoint_mode: dnsrr
    resources:
      limits:
        cpus: '2'
        memory: '4096M'
    replicas: 1
    labels:
      - "traefik.docker.network=traefik_traefik"
      - "traefik.backend=is-hml"
      - "traefik.port=4443"
      - "traefik.frontend.entryPoints=http,https"
      - "traefik.frontend.rule=Host:wso2-hml.valecard.com.br"

configs:
  deployment.toml:
    file: ./wso2-config/deployment.toml
  catalina-server.xml:
    file: ./wso2-config/catalina-server.xml
  wso2carbon.jks:
    file: ../../certs/wso2carbon-valecard.jks
  carbon.xml:
    file: ./wso2-config/carbon.xml

networks:
  traefik_traefik:
    external: true
The password is the same one from the deployment.toml.
Thanks.
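One thing worth double-checking, as a sketch: the keystore file name, password, alias, and key password configured on the server must match the new JKS exactly, otherwise login can fail with errors like the PRE_AUTHENTICATION one above. Assuming a deployment.toml-style configuration as mapped above, the relevant section would look like this (values are placeholders, not the real ones):

[keystore.primary]
file_name = "wso2carbon.jks"
password = "<keystore password>"
alias = "<certificate alias>"
key_password = "<private key password>"

The JKS itself can be inspected with keytool -list -v -keystore wso2carbon-valecard.jks to confirm the alias and that the store password is accepted.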
docker-compose.yml (https://github.com/wurstmeister/kafka-docker)
version: "2.1"
services:
zookeeper:
image: wurstmeister/zookeeper
ports:
- "2181:2181"
kafka:
image: wurstmeister/kafka
ports:
- "9092:9092"
environment:
KAFKA_ADVERTISED_HOST_NAME: localhost
KAFKA_ADVERTISED_PORT: 9092
KAFKA_CREATE_TOPICS: "test:3:1"
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
volumes:
- /var/run/docker.sock:/var/run/docker.sock
Errors when trying to produce messages following https://kafka.apache.org/quickstart:
~/kafka_2.11-1.0.0$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
>gh
>[2018-01-19 17:28:15,385] ERROR Error when sending message to topic test with key: null, value: 2 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for test-0: 1566 ms has passed since batch creation plus linger time
list topics:
~/kafka_2.11-1.0.0$ bin/kafka-topics.sh --list --zookeeper localhost:2181
__consumer_offsets
test
Why? Thanks.
UPDATE
How do I set KAFKA_ADVERTISED_HOST_NAME or the networking so that my Python/Java programs, or kafka-console-producer.sh outside the Docker container, can produce messages to Kafka via localhost:9092?
UPDATE
It seems that the following docker-compose.yml works fine:
version: "2"
services:
zookeeper:
image: "wurstmeister/zookeeper:latest"
network_mode: "host"
ports:
- 2181:2181
kafkaserver:
image: "wurstmeister/kafka:latest"
network_mode: "host"
ports:
- 9092:9092
environment:
KAFKA_CREATE_TOPICS: "test:3:1"
KAFKA_ZOOKEEPER_CONNECT: localhost:2181
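A quick sanity check from the host, as a sketch (assumes kafkacat is installed):

kafkacat -b localhost:9092 -L    # prints broker and topic metadata if the advertised listener is reachable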
I had the same issue. The suggested syntax in the kafka-docker README does not match the provided docker-compose.yml, which does not work as is. I finally found this post, and a variation of BEA's updated docker-compose.yml file worked for me. Thank you!
Here are the details.
I am running wurstmeister/kafka-docker on a Ubuntu 16.04 virtual image I set up as described at https://bertrandszoghy.wordpress.com/2018/05/03/building-the-hyperledger-fabric-vm-and-docker-images-version-1-1-from-scratch/
My docker-compose.yml file:
version: '2'
services:
  zookeeper:
    image: "wurstmeister/zookeeper:latest"
    network_mode: "host"
    ports:
      - "2181:2181"
  kafka:
    image: "wurstmeister/kafka:latest"
    network_mode: "host"
    ports:
      - 9092:9092
    environment:
      KAFKA_LISTENERS: PLAINTEXT://:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://172.17.0.1:9092
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_CREATE_TOPICS: "BertTopic:3:1"
On the same VM I installed NodeJs with:
curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -
sudo apt-get install -y nodejs
cd
mkdir nodecode
cd nodecode
sudo npm install -g node-pre-gyp
sudo npm install kafka-node
Then I ran the following program to produce a couple of messages:
var kafka = require('kafka-node'),
    Producer = kafka.Producer,
    KeyedMessage = kafka.KeyedMessage,
    client = new kafka.Client(),
    producer = new Producer(client),
    km = new KeyedMessage('key', 'message'),
    payloads = [
        { topic: 'BertTopic', messages: 'first test message', partition: 0 },
        { topic: 'BertTopic', messages: 'second test message', partition: 0 }
    ];

producer.on('ready', function () {
    producer.send(payloads, function (err, data) {
        console.log(data);
        process.exit(0);
    });
});

producer.on('error', function (err) {
    console.log('ERROR: ' + err.toString());
});
Which returned:
{ BertTopic: { '0': 0 } }
And I ran this second NodeJs program to consume the (last) messages:
var options = {
    fromOffset: 'latest'
};

var kafka = require('kafka-node'),
    Consumer = kafka.Consumer,
    client = new kafka.Client(),
    consumer = new Consumer(
        client,
        [
            { topic: 'BertTopic', partition: 0 }
        ],
        [
            {
                autoCommit: false
            },
            options =
            {
                fromOffset: 'latest'
            }
        ]
    );

// Print each consumed message (needed to produce the output below)
consumer.on('message', function (message) {
    console.log(message);
});
Which returned:
{ topic: 'BertTopic',
  value: 'first test message',
  offset: 0,
  partition: 0,
  highWaterOffset: 2,
  key: null }
{ topic: 'BertTopic',
  value: 'second test message',
  offset: 1,
  partition: 0,
  highWaterOffset: 2,
  key: null }
I also have a third NodeJS program that shows all historical messages in the topic, listed at my blog post https://bertrandszoghy.wordpress.com/2017/06/27/nodejs-querying-messages-in-apache-kafka/
Hope this helps someone out.