I'm currently using kafka-node in my application, but I can't manage to connect it to the Kafka broker I have brought up beforehand.
First, I bring up the Kafka broker with the wurstmeister/kafka Docker image, and ZooKeeper with the jplock/zookeeper image. I then automatically create a topic through an environment variable on the wurstmeister/kafka image, like so:
zookeeper:
  image: jplock/zookeeper
  ports:
    - "2181:2181"
  networks:
    - bitmex_backend
kafka:
  image: wurstmeister/kafka:latest
  ports:
    - "9092:9092"
  depends_on:
    - zookeeper
  environment:
    KAFKA_ADVERTISED_HOST_NAME: localhost
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    KAFKA_LISTENERS: "PLAINTEXT://:9092"
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
    KAFKA_CREATE_TOPICS: "Topic1:1:1"
  networks:
    - bitmex_backend
I verify the broker is up by listing all the topics from Kafka, which returns the correct number and names of topics.
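For what it's worth, the same check can be scripted from Node. Here is a minimal sketch using kafka-node's Admin client, assuming the broker is reachable from the host at localhost:9092:

// Sketch: verify the broker by listing topics with kafka-node's Admin client.
// Assumes the broker is reachable from the host at localhost:9092.
const kafka = require('kafka-node');

const client = new kafka.KafkaClient({ kafkaHost: 'localhost:9092' });
const admin = new kafka.Admin(client);

admin.listTopics((err, res) => {
  if (err) return console.error('Could not list topics:', err);
  console.log('Topics known to the broker:', res);
});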
I then want to bring up a producer and verify it's up when I call an endpoint, so I do:
// Import the web framework for routing
const Koa = require('koa')
const route = require('koa-route')
const kafka = require('kafka-node');

const client = new kafka.Client();
const producer = new kafka.Producer(client);

// TODO: Generate a position/history endpoint
module.exports = async () => {
  const app = new Koa()

  // Retrieve all the open positions from all the bots in the system
  app.use(route.get('/open', async (ctx) => {
    producer.on('ready', function () {
      console.log('Producer is ready');
    });
    producer.on('error', function (err) {
      console.log('Producer is in error state');
      console.log(err);
    })

    // Response
    ctx.status = 200
    ctx.body = {
      data: "success",
    }
  }))

  return app
}
This code runs without errors, but produces no output either. When I check the logs, the endpoint is being called correctly, yet none of the console.log statements ever fire.
Any ideas or pointers are very welcome, as I've been stuck on this for a while now.
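For reference, kafka-node's Producer emits its 'ready' event only once, shortly after the underlying client connects at startup. If the listeners are attached later, inside the route handler, the event has usually fired long before the first request arrives, so neither callback runs. A minimal sketch that registers the handlers at module load instead (same setup as above):

// Sketch: attach the producer event handlers once at startup,
// rather than per request inside a route handler.
const kafka = require('kafka-node');

const client = new kafka.Client();
const producer = new kafka.Producer(client);

producer.on('ready', () => {
  console.log('Producer is ready');
});

producer.on('error', (err) => {
  console.log('Producer is in error state');
  console.log(err);
});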
Related
Here is the YAML file used to bring up the Kafka and ZooKeeper Docker containers on an AWS instance:
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    build: .
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOSTNAME: <machines private ip>
      KAFKA_LISTENERS: PLAINTEXT://:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://<machines private ip>:9092
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
When I run the docker-compose command with the above file, it leads to the creation of a Docker network called kafka-docker, with a kafka container and a zookeeper container.
Now, in the default bridge Docker network, I have another container with the following piece of Node.js code:
const kafka = require('kafka-node');

const Producer = kafka.Producer;
const client = new kafka.Client("<machines private ip>:2181");
const producer = new Producer(client);

const kafka_topic = 'hello-topic';
event = ...
event_payload = ...

let payloads = [{ topic: kafka_topic, messages: JSON.stringify(event_payload), partition: 0 }]

let push_status = producer.send(payloads, (err, data) => {
  if (err) {
    console.log(err);
  } else {
    console.log('[kafka-producer -> ' + kafka_topic + ']: broker update success');
  }
});
The console.log(err) gives me the error 'Broker not available'. Can someone please tell me what is wrong with my setup?
Notice the line:
const client = new kafka.Client("<machines private ip>:2181");
This is not the port that Kafka is listening on. Kafka is listening for connections on port 9092:
const client = new kafka.Client("<machines private ip>:9092");
It should work after this alteration.
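As a further note, recent versions of kafka-node deprecate the ZooKeeper-based kafka.Client in favor of kafka.KafkaClient, which connects to the broker directly. A minimal sketch of the same client construction with it, assuming the broker is still reachable at the machine's private IP on 9092:

// Sketch: the same producer built on kafka-node's KafkaClient, which
// talks to the broker directly instead of going through ZooKeeper.
const kafka = require('kafka-node');

const client = new kafka.KafkaClient({ kafkaHost: '<machines private ip>:9092' });
const producer = new kafka.Producer(client);

producer.on('ready', () => console.log('Producer is ready'));
producer.on('error', (err) => console.error('Producer error:', err));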
I can get my Apache Kafka producer to send messages when it is running inside a container. However, when my producer is running on the host machine, outside the container, it doesn't work. I suspect it is a Docker networking issue in my Docker Compose file, but I can't figure it out.
I tried the solutions posted online for similar problems, but they don't work for me. Help!
Docker-compose file
version: '3'
services:
  zookeeper:
    image: 'bitnami/zookeeper:latest'
    ports:
      - '2181:2181'
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
  kafka:
    image: 'bitnami/kafka:latest'
    ports:
      - '9092:9092'
    environment:
      - KAFKA_BROKER_ID=1
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092
      - ALLOW_PLAINTEXT_LISTENER=yes
Host producer
//import util.properties packages
import java.util.Properties;

//import simple producer packages
import org.apache.kafka.clients.producer.Producer;

//import KafkaProducer packages
import org.apache.kafka.clients.producer.KafkaProducer;

//import ProducerRecord packages
import org.apache.kafka.clients.producer.ProducerRecord;

//Create java class named "SimpleProducer"
public class SimpleProducer {

    public static void main(String[] args) throws Exception {

        // Check that a topic name was passed as an argument
        if (args.length == 0) {
            System.out.println("Enter topic name");
            return;
        }

        // Assign topicName to a string variable
        String topicName = args[0];

        // Create a Properties instance to hold the producer configs
        Properties props = new Properties();

        // Broker to bootstrap from
        props.put("bootstrap.servers", "localhost:9092");

        // Require acknowledgement from all in-sync replicas
        props.put("acks", "all");

        // If the request fails, the producer can automatically retry
        props.put("retries", 0);

        // Batch size in bytes
        props.put("batch.size", 16384);

        // Small delay so records can be batched together
        props.put("linger.ms", 1);

        // Total memory available to the producer for buffering
        props.put("buffer.memory", 33554432);

        props.put("key.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");

        Producer<String, String> producer = new KafkaProducer<String, String>(props);

        for (int i = 0; i < 10; i++)
            producer.send(new ProducerRecord<String, String>(topicName,
                Integer.toString(i), Integer.toString(i)));

        System.out.println("Message sent successfully");
        producer.close();
    }
}
The host producer should post messages to the Dockerized Apache Kafka, but it doesn't. It creates the topic, but the messages are never received. What am I doing wrong? This is a Bitnami image, not a Confluent image.
From my previous answer here:
What I needed to do was declare two LISTENERS, bind both inside the container, and then advertise them differently: one to the Docker network, one to the host.
services:
  zookeeper:
    image: confluentinc/cp-zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_SYNC_LIMIT: 2
  kafka:
    image: confluentinc/cp-kafka
    ports:
      - 9094:9094
    depends_on:
      - zookeeper
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENERS: INTERNAL://kafka:9092,OUTSIDE://kafka:9094
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka:9092,OUTSIDE://localhost:9094
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
Now Kafka is available on your localhost at :9094 (per the OUTSIDE listener and the ports entry in the compose file), and inside the Docker network at kafka:9092.
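To make the split concrete, here is a hedged kafka-node sketch; any client library behaves the same way given these advertised listeners:

// Sketch: one broker, two bootstrap addresses, depending on where the
// client runs. Assumes the compose file above.
const kafka = require('kafka-node');

// From the host machine, via the OUTSIDE listener:
const hostClient = new kafka.KafkaClient({ kafkaHost: 'localhost:9094' });

// From another container on the same Docker network, via INTERNAL:
const networkClient = new kafka.KafkaClient({ kafkaHost: 'kafka:9092' });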
This solution is for the Bitnami Docker image of Apache Kafka. Thanks to @cricket_007 and @daniu for the solution. I updated several lines in the Kafka environment section of my Docker Compose file.
Here's the complete, updated Docker Compose file:
version: '3'
services:
  zookeeper:
    image: 'bitnami/zookeeper:latest'
    ports:
      - '2181:2181'
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
  kafka:
    image: 'bitnami/kafka:latest'
    ports:
      - '9092:9092'
      - '29092:29092'
    environment:
      - KAFKA_BROKER_ID=1
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      - KAFKA_LISTENERS=PLAINTEXT://:9092,PLAINTEXT_HOST://:29092
      - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
      - ALLOW_PLAINTEXT_LISTENER=yes
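With this file, a host client has to bootstrap against the PLAINTEXT_HOST listener at localhost:29092 rather than localhost:9092; connecting on 9092 hands back the advertised address kafka:9092, which does not resolve on the host. As a quick host-side smoke test, a hedged kafka-node sketch (the topic name is a placeholder):

// Sketch: host-side smoke test against the PLAINTEXT_HOST listener.
// Assumes the compose file above; 'test-topic' is a placeholder name.
const kafka = require('kafka-node');

const client = new kafka.KafkaClient({ kafkaHost: 'localhost:29092' });
const producer = new kafka.Producer(client);

producer.on('ready', () => {
  producer.send([{ topic: 'test-topic', messages: 'hello' }], (err, data) => {
    if (err) console.error(err);
    else console.log('sent:', data);
  });
});
producer.on('error', (err) => console.error(err));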
I'm trying to connect Flink to Kafka as a consumer.
I'm using Docker Compose to build 4 containers: zookeeper, kafka, Flink JobManager and Flink TaskManager.
For ZooKeeper and Kafka I'm using wurstmeister images, and for Flink I'm using the official image.
docker-compose.yml
version: '3.1'
services:
  zookeeper:
    image: wurstmeister/zookeeper:3.4.6
    hostname: zookeeper
    expose:
      - "2181"
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka:2.11-2.0.0
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    hostname: kafka
    links:
      - zookeeper
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_CREATE_TOPICS: 'pipeline:1:1:compact'
  jobmanager:
    build: ./flink_pipeline
    depends_on:
      - kafka
    links:
      - zookeeper
      - kafka
    expose:
      - "6123"
    ports:
      - "8081:8081"
    command: jobmanager
    environment:
      JOB_MANAGER_RPC_ADDRESS: jobmanager
      BOOTSTRAP_SERVER: kafka:9092
      ZOOKEEPER: zookeeper:2181
  taskmanager:
    image: flink
    expose:
      - "6121"
      - "6122"
    links:
      - jobmanager
      - zookeeper
      - kafka
    depends_on:
      - jobmanager
    command: taskmanager
    # links:
    #   - "jobmanager:jobmanager"
    environment:
      JOB_MANAGER_RPC_ADDRESS: jobmanager
When I submit a simple job to the Dispatcher, the job fails with the following error:
org.apache.kafka.common.errors.TimeoutException: Timeout of 60000ms expired before the position for partition pipeline-0 could be determined
My job code is:
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class Main {
    public static void main(String[] args) throws Exception {
        // get the execution environment
        final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // read the Kafka connection details from the environment
        Properties properties = new Properties();
        String bootstrapServer = System.getenv("BOOTSTRAP_SERVER");
        String zookeeperServer = System.getenv("ZOOKEEPER");

        if (bootstrapServer == null) {
            System.exit(1);
        }

        properties.setProperty("zookeeper", zookeeperServer);
        properties.setProperty("bootstrap.servers", bootstrapServer);
        properties.setProperty("group.id", "pipeline-analysis");

        FlinkKafkaConsumer<String> kafkaConsumer =
            new FlinkKafkaConsumer<String>("pipeline", new SimpleStringSchema(), properties);
        // kafkaConsumer.setStartFromGroupOffsets();
        kafkaConsumer.setStartFromLatest();

        DataStream<String> stream = env.addSource(kafkaConsumer);

        // Defining Pipeline here

        // Printing Outputs
        stream.print();

        env.execute("Stream Pipeline");
    }
}
I know I'm late to the party, but I had the exact same error. In my case, I was not setting up the TopicPartitions correctly. My topic had 2 partitions and my producer was producing messages just fine, but the Spark Streaming application, as my consumer, wasn't really starting; it gave up after 60 seconds complaining with the same error.
Wrong code that I had -
List<TopicPartition> topicPartitionList = Arrays.asList(new TopicPartition(topicName, Integer.parseInt(numPartitions)));
Correct code -
List<TopicPartition> topicPartitionList = new ArrayList<TopicPartition>();
for (int i = 0; i < Integer.parseInt(numPartitions); i++) {
    topicPartitionList.add(new TopicPartition(topicName, i));
}
I had an error that looked the same.
17:34:37.668 [org.springframework.kafka.KafkaListenerEndpointContainer#1-0-C-1] ERROR o.a.k.c.c.i.ConsumerCoordinator - [Consumer clientId=consumer-3, groupId=api.dev] User provided listener org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer$ListenerConsumerRebalanceListener failed on partition assignment
org.apache.kafka.common.errors.TimeoutException: Timeout of 60000ms expired before the position for partition aaa-1 could be determined
It turned out my hosts file had been changed, so the broker address was wrong.
Try this logger setting to get more debugging detail:
<logger name="org.apache.kafka.clients.consumer.internals.Fetcher" level="info" />
I was having issues with this error in a vSphere Integrated Containers environment. For me, the problem was that I was advertising the hostname and not the IP, so I had to set the hostname and container name explicitly in my compose file.
Here are my settings that finally worked:
kafka:
  depends_on:
    - zookeeper
  image: wurstmeister/kafka
  ports:
    - "9092:9092"
  mem_limit: 10g
  container_name: kafka
  hostname: kafka
  environment:
    KAFKA_ADVERTISED_LISTENERS: OUTSIDE://kafka:9092
    KAFKA_LISTENERS: OUTSIDE://0.0.0.0:9092
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: OUTSIDE:PLAINTEXT
    KAFKA_INTER_BROKER_LISTENER_NAME: OUTSIDE
    KAFKA_BROKER_ID: 1
    KAFKA_ZOOKEEPER_CONNECT: <REPLACE_WITH_IP>:2181
I had the same problem; the issue was a wrong host entry for the Kafka node in my /etc/hosts file!
I am trying to build a pipeline based on this tutorial, where Kafka reads from a file with a File Source connector. Using these Docker images for the Elastic Stack, I want to register Logstash as a consumer for the "quickstart-data" topic, but so far I have failed.
Here is my logstash.conf file:
input {
  kafka {
    bootstrap_servers => 'localhost:9092'
    topics => 'quickstart-data'
  }
}
output {
  elasticsearch {
    hosts => [ 'elasticsearch' ]
    user => 'elastic'
    password => 'changeme'
  }
  stdout {}
}
The connection to Elasticsearch works because I tested it with a heartbeat input.
The error message I get is the following:
Connection to node -1 could not be established. Broker may not be available.
Give up sending metadata request since no node is available
Any ideas?
I would recommend you keep things simple and use Kafka Connect for landing the data in Elasticsearch too: https://docs.confluent.io/current/connect/connect-elasticsearch/docs/elasticsearch_connector.html#quick-start
There may be a better way to do it, but here is how I corrected the issue: I changed my ZooKeeper and Kafka images to the Confluent images.
zookeeper:
  image: confluentinc/cp-zookeeper:latest
  ports:
    - "2181:2181"
  environment:
    ZOOKEEPER_CLIENT_PORT: 2181
    ZOOKEEPER_TICK_TIME: 2000
  networks:
    - stack
kafka:
  image: confluentinc/cp-kafka:latest
  ports:
    - "9092:9092"
  environment:
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
    KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
    KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
  depends_on:
    - zookeeper
  networks:
    - stack
Logstash configuration (please note that the port is 29092):
input {
  stdin {}
  kafka {
    id => "my_kafka_1"
    bootstrap_servers => "kafka:29092"
    topics => "test"
  }
}
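For comparison, here is a minimal kafka-node consumer doing what the Logstash kafka input does here, reading the test topic from inside the Docker network. A sketch, assuming it runs in a container attached to the same stack network:

// Sketch: a bare-bones consumer on the in-network listener, mirroring
// the Logstash kafka input above.
const kafka = require('kafka-node');

const client = new kafka.KafkaClient({ kafkaHost: 'kafka:29092' });
const consumer = new kafka.Consumer(client, [{ topic: 'test' }], { autoCommit: true });

consumer.on('message', (message) => console.log(message.value));
consumer.on('error', (err) => console.error(err));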
I'm trying to query Firebase from a Node.js app in a Docker container. It works locally, but not in the container. I have port 443 open and I can make a request to Google fine, yet I never get a response back when running in the Docker container. I suspect it's something with WebSockets.
My Ports are: 0.0.0.0:443->443/tcp, 0.0.0.0:8080->8080/tcp
And in my docker syslog:
: dropping unexpected TCP packet sent from 172.18.0.3:33288 to 216.58.210.173:443 (valid sources = 192.168.65.2, 0.0.0.0)
Any ideas on what to try?
// Assuming the firebase v3 server SDK, which accepted a serviceAccount
// option here; firebaseKey and callback are defined elsewhere in the app.
const firebase = require('firebase');

firebase.initializeApp({
  serviceAccount: firebaseKey,
  databaseURL: 'https://my-firebase.firebaseio.com'
});

const userId = 'xxxxxxxxxxxx';
const ref = firebase.database().ref(`datasource/${userId}`)
  .once('value').then((snapshot) => {
    console.log(snapshot.val());
    return callback(null, 'ok');
  }, (error) => {
    console.error(error);
    return callback(error);
  });
And my docker-compose.yml
version: "2"
services:
test-import:
build: .
command: npm run dev
volumes:
- .:/var/www
ports:
- "7000:8080"
- "443:443"
depends_on:
- mongo
networks:
- import-net
mongo:
container_name: mongo
image: mongo
networks:
- import-net
networks:
import-net:
driver: bridge
In my case, the problem was that serviceAccount.privateKey was set using an environment variable. The value of that environment variable is a multi-line string, and that was causing the issue. So double-check that serviceAccount is correctly configured in order to solve this.
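One common way this manifests (a hedged sketch, not necessarily the exact setup here): the private key is stored in the environment with literal \n sequences, which need to be turned back into real newlines before handing the credentials to the SDK. The variable names below are placeholders:

// Sketch: restore real newlines in a private key stored in an env var
// with literal "\n" sequences. All variable names are placeholders.
const privateKey = process.env.FIREBASE_PRIVATE_KEY.replace(/\\n/g, '\n');

const firebaseKey = {
  projectId: process.env.FIREBASE_PROJECT_ID,
  clientEmail: process.env.FIREBASE_CLIENT_EMAIL,
  privateKey: privateKey,
};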
Edit: I had the same problem again today. The solution was to sync the time with an NTP server, because the time in the Docker container was wrong (off by a few days).