docker-compose.yml (https://github.com/wurstmeister/kafka-docker)
version: "2.1"
services:
zookeeper:
image: wurstmeister/zookeeper
ports:
- "2181:2181"
kafka:
image: wurstmeister/kafka
ports:
- "9092:9092"
environment:
KAFKA_ADVERTISED_HOST_NAME: localhost
KAFKA_ADVERTISED_PORT: 9092
KAFKA_CREATE_TOPICS: "test:3:1"
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
volumes:
- /var/run/docker.sock:/var/run/docker.sock
Errors when trying to produce messages following https://kafka.apache.org/quickstart:
~/kafka_2.11-1.0.0$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
>gh
>[2018-01-19 17:28:15,385] ERROR Error when sending message to topic test with key: null, value: 2 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for test-0: 1566 ms has passed since batch creation plus linger time
Listing topics works:
~/kafka_2.11-1.0.0$ bin/kafka-topics.sh --list --zookeeper localhost:2181
__consumer_offsets
test
Why does this happen? Thanks.
UPDATE
How should I set KAFKA_ADVERTISED_HOST_NAME (or the network configuration) so that my Python/Java programs, or kafka-console-producer.sh running outside the Docker container, can produce messages to Kafka at localhost:9092?
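One variant I have seen suggested, but have not tried (the INSIDE/OUTSIDE listener names are arbitrary), keeps an internal listener for other containers and advertises a separate one to clients on the host, with 9092 still published to the host as before:

    environment:
      # INSIDE is used between containers on the compose network,
      # OUTSIDE is what clients on the host reach via localhost:9092
      KAFKA_LISTENERS: INSIDE://:29092,OUTSIDE://:9092
      KAFKA_ADVERTISED_LISTENERS: INSIDE://kafka:29092,OUTSIDE://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181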
UPDATE
It seems that the following docker-compose.yml works fine:
version: "2"
services:
zookeeper:
image: "wurstmeister/zookeeper:latest"
network_mode: "host"
ports:
- 2181:2181
kafkaserver:
image: "wurstmeister/kafka:latest"
network_mode: "host"
ports:
- 9092:9092
environment:
KAFKA_CREATE_TOPICS: "test:3:1"
KAFKA_ZOOKEEPER_CONNECT: localhost:2181
I had the same issue. The suggested syntax in the kafka-docker README does not match the provided docker-compose.yml which does not work as is. I finally found this post and a variation of BEA's updated docker-compose.yml file worked for me. Thank you!
Here are the details.
I am running wurstmeister/kafka-docker on an Ubuntu 16.04 virtual image I set up as described at https://bertrandszoghy.wordpress.com/2018/05/03/building-the-hyperledger-fabric-vm-and-docker-images-version-1-1-from-scratch/
My docker-compose.yml file:
version: '2'
services:
  zookeeper:
    image: "wurstmeister/zookeeper:latest"
    network_mode: "host"
    ports:
      - "2181:2181"
  kafka:
    image: "wurstmeister/kafka:latest"
    network_mode: "host"
    ports:
      - 9092:9092
    environment:
      KAFKA_LISTENERS: PLAINTEXT://:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://172.17.0.1:9092
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_CREATE_TOPICS: "BertTopic:3:1"
On the same VM I installed Node.js with:
curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -
sudo apt-get install -y nodejs
cd
mkdir nodecode
cd nodecode
sudo npm install -g node-pre-gyp
sudo npm install kafka-node
Then I ran the following program to produce a couple of messages:
var kafka = require('kafka-node'),
    Producer = kafka.Producer,
    KeyedMessage = kafka.KeyedMessage,
    client = new kafka.Client(),
    producer = new Producer(client),
    km = new KeyedMessage('key', 'message'),
    payloads = [
      { topic: 'BertTopic', messages: 'first test message', partition: 0 },
      { topic: 'BertTopic', messages: 'second test message', partition: 0 }
    ];

producer.on('ready', function () {
  producer.send(payloads, function (err, data) {
    console.log(data);
    process.exit(0);
  });
});

producer.on('error', function (err) {
  console.log('ERROR: ' + err.toString());
});
Which returned:
{ BertTopic: { '0': 0 } }
And I ran this second Node.js program to consume the (last) messages:
var options = {
  autoCommit: false,
  fromOffset: 'latest'
};

var kafka = require('kafka-node'),
    Consumer = kafka.Consumer,
    client = new kafka.Client(),
    consumer = new Consumer(
      client,
      [
        { topic: 'BertTopic', partition: 0 }
      ],
      options // kafka-node expects a single options object as the third argument
    );
Which returned:
{ topic: 'BertTopic',
value: 'first test message',
offset: 0,
partition: 0,
highWaterOffset: 2,
key: null }
{ topic: 'BertTopic',
value: 'second test message',
offset: 1,
partition: 0,
highWaterOffset: 2,
key: null }
I also have a third Node.js program that shows all historical messages in the topic, listed at my blog post https://bertrandszoghy.wordpress.com/2017/06/27/nodejs-querying-messages-in-apache-kafka/
Hope this helps someone out.
Related
I'm trying to send messages from a Micronaut 3.6.3 application to Kafka deployed with docker-compose. On the first attempt I receive a warning like this:
[Producer clientId=producer-1] Error while fetching metadata with
correlation id 1 : {accountRegistered=LEADER_NOT_AVAILABLE}
For the following messages the problem disappears, but my requirement is not to lose any account-registration events.
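So far the only mitigation I can think of is making the producer retry until the leader is elected. A sketch of what I mean, assuming Micronaut passes kafka.producers.default.* straight through as Kafka producer properties (I have not confirmed this actually covers the first-message case):

kafka:
  producers:
    default:
      retries: 10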
My docker compose configuration:
services:
  kafka:
    image: 'bitnami/kafka:3.2'
    hostname: 'kafka'
    environment:
      ALLOW_PLAINTEXT_LISTENER: 'yes'
      KAFKA_BROKER_ID: 1
      KAFKA_CFG_ADVERTISED_LISTENERS: 'INSIDE://kafka:29092, OUTSIDE://localhost:9092'
      KAFKA_CFG_INTER_BROKER_LISTENER_NAME: 'INSIDE'
      KAFKA_CFG_LISTENERS: 'INSIDE://:29092, OUTSIDE://:9092'
      KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP: 'INSIDE:PLAINTEXT, OUTSIDE:PLAINTEXT'
      KAFKA_CFG_ZOOKEEPER_CONNECT: 'zookeeper:2181'
    ports:
      - '9092:9092'
    depends_on:
      - 'zookeeper'
  # TODO: Can be removed with the future versions of Kafka (using KRaft)
  zookeeper:
    image: 'bitnami/zookeeper:3.8'
    hostname: 'zookeeper'
    environment:
      ALLOW_ANONYMOUS_LOGIN: 'yes'
    ports:
      - '2181:2181'
From the application I use 'localhost:9092' to connect.
My consumer code:
@KafkaListener(offsetReset = OffsetReset.EARLIEST)
class AccountReferenceUpdaterEventConsumer {

    @Inject
    AccountReferenceEntityRepository accountReferenceEntityRepository

    @Topic('accountRegistered')
    void receive(@MessageBody AccountRegisteredEvent event) {
        def account = event.source
        accountReferenceEntityRepository.findById(account.id)
            .ifPresentOrElse(
                accountReference -> log.warn('Account {} already registered', account.id),
                () -> {
                    def accountReference = new AccountReferenceEntity(
                        accountId: account.id,
                        username: account.username
                    )
                    accountReferenceEntityRepository.save(accountReference)
                }
            )
    }
}
application.yml:
kafka:
  bootstrap:
    servers: 'localhost:9092'
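One workaround I am considering is pre-creating the topic before the application sends anything, roughly like this with the tools shipped in the Bitnami image (untested sketch; the partition and replication values are just what a single-broker setup would use):

docker compose exec kafka kafka-topics.sh --bootstrap-server localhost:9092 \
  --create --topic accountRegistered --partitions 1 --replication-factor 1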
I'm running Loki for test purposes in Docker and have recently been getting the following error from the Promtail and Loki containers:
level=warn ts=2022-02-18T09:41:39.186511145Z caller=client.go:349 component=client host=loki:3100 msg="error sending batch, will retry" status=429 error="server returned HTTP status 429 Too Many Requests (429): Maximum active stream limit exceeded, reduce the number of active streams (reduce labels or reduce label values), or contact your Loki administrator to see if the limit can be increased"
I have tried increasing limit settings (ingestion_rate_mb and ingestion_burst_size_mb) in my Loki config.
I set up two Promtail jobs: one job ingests MS Exchange logs from a local directory (currently 8 TB and increasing), the other gets logs spooled from syslog-ng.
I've read that reducing labels helps, but I'm only using two labels.
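If it is really the active-stream limit that is being hit, I assume the relevant knobs are max_streams_per_user and max_global_streams_per_user in limits_config, rather than the ingestion rate settings (untested sketch):

limits_config:
  max_streams_per_user: 0            # 0 disables the per-tenant limit
  max_global_streams_per_user: 10000 # default is 5000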
Configuration
Below my config files (docker-compose, loki, promtail):
docker-compose.yaml
version: "3"
networks:
loki:
services:
loki:
image: grafana/loki:2.4.2
container_name: loki
restart: always
user: "10001:10001"
ports:
- "3100:3100"
command: -config.file=/etc/loki/local-config.yaml
volumes:
- ${DATADIR}/loki/etc:/etc/loki:rw
- ${DATADIR}/loki/chunks:/loki/chunks
networks:
- loki
promtail:
image: grafana/promtail:2.4.2
container_name: promtail
restart: always
volumes:
- /var/log/loki:/var/log/loki
- ${DATADIR}/promtail/etc:/etc/promtail
ports:
- "1514:1514" # for syslog-ng
- "9080:9080" # for http web interface
command: -config.file=/etc/promtail/config.yml
networks:
- loki
grafana:
image: grafana/grafana:8.3.4
container_name: grafana
restart: always
user: "476:0"
volumes:
- ${DATADIR}/grafana/var:/var/lib/grafana
ports:
- "3000:3000"
networks:
- loki
Loki Config
auth_enabled: false

server:
  http_listen_port: 3100

common:
  path_prefix: /loki
  storage:
    filesystem:
      chunks_directory: /loki/chunks
      rules_directory: /loki/rules
  replication_factor: 1
  ring:
    instance_addr: 127.0.0.1
    kvstore:
      store: inmemory

schema_config:
  configs:
    - from: 2020-10-24
      store: boltdb-shipper
      object_store: filesystem
      schema: v11
      index:
        prefix: index_
        period: 24h

ruler:
  alertmanager_url: http://localhost:9093

# https://grafana.com/docs/loki/latest/configuration/#limits_config
limits_config:
  reject_old_samples: true
  reject_old_samples_max_age: 168h
  ingestion_rate_mb: 12
  ingestion_burst_size_mb: 24
  per_stream_rate_limit: 24MB

chunk_store_config:
  max_look_back_period: 336h

table_manager:
  retention_deletes_enabled: true
  retention_period: 2190h

ingester:
  lifecycler:
    address: 127.0.0.1
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1
    final_sleep: 0s
  chunk_encoding: snappy
Promtail Config
server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://loki:3100/loki/api/v1/push

scrape_configs:
  - job_name: exchange
    static_configs:
      - targets:
          - localhost
        labels:
          job: exchange
          __path__: /var/log/loki/exchange/*/*/*log
  - job_name: syslog-ng
    syslog:
      listen_address: 0.0.0.0:1514
      idle_timeout: 60s
      label_structured_data: yes
      labels:
        job: "syslog-ng"
    relabel_configs:
      - source_labels: ['__syslog_message_hostname']
        target_label: 'host'
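One thing I am not sure about (an assumption, not something I have verified): label_structured_data: yes turns every structured-data element of the syslog messages into a label, which together with the per-sender host label could create far more streams than my two explicit job labels suggest. A variant of the syslog job with that option turned off would look like this:

  - job_name: syslog-ng
    syslog:
      listen_address: 0.0.0.0:1514
      idle_timeout: 60s
      label_structured_data: false   # avoid one stream per structured-data value
      labels:
        job: "syslog-ng"
    relabel_configs:
      - source_labels: ['__syslog_message_hostname']
        target_label: 'host'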
I am using the wurstmeister Kafka and ZooKeeper Docker images locally to test SASL and ACLs in Kafka.
My docker-compose.yml is:
version: '3'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    hostname: zookeeper
    container_name: zookeeper
    volumes:
      - ./zookeeper/zookeeper.sasl.jaas.config:/etc/kafka/zookeeper_server_jaas.conf
      - ./zk/data:/var/lib/zookeeper/data
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_SET_ACL: 'true'
      KAFKA_OPTS: -Djava.security.auth.login.config=/etc/kafka/zookeeper_server_jaas.conf
        -Dzookeeper.authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
        -Dzookeeper.allowSaslFailedClients=false
        -Dzookeeper.requireClientAuthScheme=sasl
  broker:
    image: wurstmeister/kafka:2.13-2.6.0
    hostname: broker
    container_name: broker
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    volumes:
      - ./kafka/kafka.jaas.conf:/etc/kafka/kafka_server_jaas.conf
      - ./kfk/data:/kafka
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: EXTERNAL:SASL_PLAINTEXT
      KAFKA_AUTHORIZER_CLASS_NAME: kafka.security.authorizer.AclAuthorizer
      KAFKA_AUTO_CREATE_TOPIC: 'true'
      KAFKA_LISTENERS: EXTERNAL://:9092
      KAFKA_ADVERTISED_LISTENERS: EXTERNAL://localhost:9092
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_SASL_ENABLED_MECHANISMS: PLAIN
      KAFKA_LISTENER_NAME_EXTERNAL_SASL_ENABLED_MECHANISMS: PLAIN
      KAFKA_LISTENER_NAME_EXTERNAL_PLAIN_SASL_JAAS_CONFIG: |
        org.apache.kafka.common.security.plain.PlainLoginModule required \
        username="broker" \
        password="broker" \
        user_broker="broker" \
        user_client="client-secret" \
        user_alice="alice-secret";
      KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: PLAIN
      KAFKA_INTER_BROKER_LISTENER_NAME: EXTERNAL
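(One thing I am unsure about and have not verified: with AclAuthorizer enabled and allow.everyone.if.no.acl.found left at its default of false, I believe the inter-broker principal also needs to be authorized, which is usually done by declaring it a super user, e.g. by adding something like this to the broker environment:)

      KAFKA_SUPER_USERS: User:broker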
And the following are the JAAS files for ZooKeeper and Kafka -
zookeeper.sasl.jaas.config -
Server {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    user_kafka="kafka";
};
kafka.jaas.config -
Client {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    username="kafka"
    password="kafka";
};
I created the ZooKeeper and Kafka containers and ran this command inside the Kafka container:
/opt/kafka_2.13-2.6.0/bin # ./kafka-acls.sh --authorizer-properties zookeeper.connect=zookeeper:2181 --add --allow-principal User:alice --producer --topic testtopic
Adding ACLs for resource `ResourcePattern(resourceType=TOPIC, name=testtopic, patternType=LITERAL)`:
(principal=User:alice, host=*, operation=DESCRIBE, permissionType=ALLOW)
(principal=User:alice, host=*, operation=WRITE, permissionType=ALLOW)
(principal=User:alice, host=*, operation=CREATE, permissionType=ALLOW)
Current ACLs for resource `ResourcePattern(resourceType=TOPIC, name=testtopic, patternType=LITERAL)`:
(principal=User:alice, host=*, operation=DESCRIBE, permissionType=ALLOW)
(principal=User:alice, host=*, operation=WRITE, permissionType=ALLOW)
(principal=User:alice, host=*, operation=CREATE, permissionType=ALLOW)
But when I try to produce an event from my Go code (using Sarama), it gives this error:
kafka server: In the middle of a leadership election, there is currently no leader for this partition and hence it is unavailable for writes.
My Go code is:
package main

import "github.com/Shopify/sarama"

var brokers = []string{"127.0.0.1:9092"}

func newProducer() (sarama.SyncProducer, error) {
    config := sarama.NewConfig()
    config.Producer.Partitioner = sarama.NewRandomPartitioner
    config.Producer.RequiredAcks = sarama.WaitForAll
    config.Producer.Return.Successes = true
    config.Net.SASL.User = "alice"
    config.Net.SASL.Password = "alice-secret"
    config.Net.SASL.Handshake = true
    config.Net.SASL.Enable = true

    producer, err := sarama.NewSyncProducer(brokers, config)
    return producer, err
}

func prepareMessage(topic, message string) *sarama.ProducerMessage {
    msg := &sarama.ProducerMessage{
        Topic:     topic,
        Partition: -1,
        Value:     sarama.StringEncoder(message),
    }
    return msg
}

func panicOnError(err error) {
    if err != nil {
        panic(err)
    }
}

func main() {
    producer, err := newProducer()
    panicOnError(err)

    msg := prepareMessage("testtopic", `{"key":"value"}`)
    _, _, err = producer.SendMessage(msg)
    panicOnError(err)
}
I also tried kafka-acls.sh with the --bootstrap-server argument (command: ./kafka-acls.sh --bootstrap-server localhost:9092 --add --allow-principal User:alice --producer --topic testtopic), but then the script gets stuck and I can observe an authentication error in the Kafka docker logs:
[2021-05-29 16:27:46,288] INFO [SocketServer brokerId=1002] Failed authentication with /127.0.0.1 (Unexpected Kafka request of type METADATA during SASL handshake.) (org.apache.kafka.common.network.Selector)
PS: everything works fine if I use SASL only (without ACLs).
Now I am stuck at the ACL part. Does anyone have an idea what I am missing (probably in the ZooKeeper or Kafka config)?
Any help is appreciated. Thanks in advance.
For your first issue I would try the suggestions in https://github.com/Shopify/sarama/issues/272
For the second issue you should add --command-config /path/cmd.cfg to the command line, pointing at the admin-client properties used to connect to your broker (SASL mechanism and so on).
Set the JAAS file via KAFKA_OPTS, and that JAAS file should contain a KafkaClient section with the user and password used to connect to your broker with the PLAIN authentication method.
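A rough sketch of what I mean, using the inline sasl.jaas.config property instead of a separate JAAS file (names and credentials are just placeholders taken from your compose file):

# cmd.cfg -- admin client properties
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="broker" \
  password="broker";

# then run the ACL command against the broker listener
./kafka-acls.sh --bootstrap-server localhost:9092 \
  --command-config /path/cmd.cfg \
  --add --allow-principal User:alice --producer --topic testtopic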
I am setting up a docker-compose with several services, all of them writing to a common syslog container/service... which is actually a Logstash service (a complete ELK image, as a matter of fact) with the logstash-input-syslog plugin enabled,
kind of as follows (using a custom port 5151, since the default 514 was giving me a hard time due to permission issues):
services:
  elk-custom:
    image: some_elk_image
    ports:
      - 5601:5601
      - 9200:9200
      - 5044:5044
      - 5151:5151
  service1:
    image: myservice1_image
    logging:
      driver: syslog
      options:
        syslog-address: "tcp://127.0.0.1:5151"
  service2:
    image: myservice2_image
    logging:
      driver: syslog
      options:
        syslog-address: "tcp://127.0.0.1:5151"
My question is: how can I add a field (or rather an option under logging) so that each log entry in Logstash ends up with a field whose value tells me whether the log came from service1 or service2?
I kind of managed to do this using the tag option, but the information ends up being part of the message, not a separate field that I can query on in Elasticsearch.
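For reference, this is roughly what I tried with the tag option (the tag value itself is arbitrary):

  service1:
    image: myservice1_image
    logging:
      driver: syslog
      options:
        syslog-address: "tcp://127.0.0.1:5151"
        tag: "service1"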
For the time being, kibana displays log entries as follows:
#timestamp:September 26th 2017, 12:00:47.684 syslog_severity_code:5
port:53,422 syslog_facility:user-level #version:1 host:172.18.0.1
syslog_facility_code:1 message:<27>Sep 26 12:00:47 7705a2f9b22a[2128]:
[pid: 94|app: 0|req: 4/7] 172.18.0.1 () {40 vars in 461 bytes} [Tue
Sep 26 09:00:47 2017] GET /api/v1/apikeys => generated 74 bytes in 5
msecs (HTTP/1.1 401) 2 headers in 81 bytes (1 switches on core 0)
type:syslog syslog_severity:notice tags:_grokparsefailure
_id:AV69atD4zBS_tKzDPfyh _type:syslog _index:logstash-2017.09.26 _score: -
From what I know, we also cannot define custom syslog facilities, since they are predefined by the syslog RFC.
Thx.
I ended up using port multiplexing and adding a custom field based on which port the log arrived on:
docker-compose.yml
elk-custom:
  image: some_elk_image
  ports:
    - 5601:5601
    - 9200:9200
    - 5044:5044
    - 5151:5151
    - 5152:5152
service1:
  image: myservice1_image
  logging:
    driver: syslog
    options:
      syslog-address: "tcp://127.0.0.1:5151"
service2:
  image: myservice2_image
  logging:
    driver: syslog
    options:
      syslog-address: "tcp://127.0.0.1:5152"
logstash-conf
input {
  tcp {
    port => 5151
    type => syslog
    add_field => {'received_from' => 'service1'}
  }
  tcp {
    port => 5152
    type => syslog
    add_field => {'received_from' => 'service2'}
  }
}
I have two Docker containers managed with docker-compose and can't seem to properly use webpack to proxy some requests to the backend API.
docker-compose.yml :
version: '2'
services:
  web:
    build:
      context: ./frontend
    ports:
      - "80:8080"
    volumes:
      - ./frontend:/16AGR/frontend:rw
    links:
      - back:back
  back:
    build:
      context: ./backend
    expose:
      - "8080"
    ports:
      - "8081:8080"
    volumes:
      - ./backend:/16AGR/backend:rw
The web service is a simple React application served by webpack-dev-server.
The back service is a Node application.
I have no problem accessing either app from my host:
$ curl localhost
> index.html
$ curl localhost:8081
> Hello World
I can also ping and curl the back service from the web container:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
73ebfef9b250 16agr_web "npm run start" 37 hours ago Up 13 seconds 0.0.0.0:80->8080/tcp 16agr_web_1
a421fc24f8d9 16agr_back "npm start" 37 hours ago Up 15 seconds 0.0.0.0:8081->8080/tcp 16agr_back_1
$ docker exec -it 73e bash
root@73ebfef9b250:/16AGR/frontend# curl back:8080
Hello world
However, I have a problem with the proxy.
webpack-dev-server is started with:
webpack-dev-server --display-reasons --display-error-details --history-api-fallback --progress --colors -d --hot --inline --host=0.0.0.0 --port 8080
and the config file is
frontend/webpack.config.js :
var webpack = require('webpack');

var config = module.exports = {
  ...
  devServer: {
    // redirect api calls to backend server
    proxy: {
      '/api': {
        target: "back:8080",
        secure: false
      }
    }
  }
  ...
}
When I try to request /api/test from a link in my app, for example, I get a generic error; the link in the error message and Google did not help much :(
[HPM] Error occurred while trying to proxy request /api/test from localhost to back:8080 (EINVAL) (https://nodejs.org/api/errors.html#errors_common_system_errors)
I suspect something weird is going on because the proxy runs in the container while the request is made against localhost, but I don't really know how to solve this.
I think I managed to tackle the problem.
I just had to change the webpack configuration to the following:
devServer: {
  // redirect api calls to backend server
  proxy: {
    '/api': {
      target: {
        host: "back",
        protocol: 'http:',
        port: 8080
      },
      ignorePath: true,
      changeOrigin: true,
      secure: false
    }
  }
}
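For what it's worth, I believe specifying the target as a full URL string (including the protocol) should also work, since the original error seems to have come from the missing protocol (untested sketch):

devServer: {
  proxy: {
    '/api': {
      target: 'http://back:8080',
      changeOrigin: true,
      secure: false
    }
  }
}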