Landoop/fast-data-dev : Connection to node -1 could not be established - docker

I am trying to use kafka with the docker image Landoop/fast-data-dev
I ran the following commands
I started the docker container
docker run --rm -it -p 2183:2181 -p 3030:3030 -p 8081:8081 -p 8082:8082 -p 8083:8083 -p 9093:9092 -e ADV_HOST=127.0.0.1 landoop/fast-data-dev
then I started bash in a second container
docker run --rm -it --net=host landoop/fast-data-dev bash
then I created a topic
kafka-topics --create --zookeeper localhost:2183 --replication-factor 1 --partitions 3 --topic my-topic
then I tried to send data to the topic
kafka-console-producer --broker-list localhost:9093 --topic my-topic
but I received the following error
[2018-10-27 20:08:24,655] WARN [Producer clientId=console-producer] Connection to node -1 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
PS: because of a "port already allocated" problem, I changed the host mappings for Kafka and Zookeeper to 9093 and 2183.

You're running the CLI commands within the container, so you can't just remap the port on the host. You also need to set the port Kafka runs on inside the container, using the BROKER_PORT variable in this case:
docker run --rm -it -p 2183:2181 -p 3030:3030 -p 8081:8081 -p 8082:8082 -p 8083:8083 -p 9093:9093 -e ADV_HOST=127.0.0.1 -e BROKER_PORT=9093 landoop/fast-data-dev
Otherwise, you would still have to use localhost:9092 within the container even though the external port is 9093. In any case, you never needed the -p flags to expose the ports externally if you were going to start bash within a container to do things.
If you want to use applications outside the container, see this blog, which uses Confluent containers; the same concept applies, though the Landoop variables are different.
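With BROKER_PORT set as above, the original producer command from the question should then reach the broker on the matching port (this assumes the container was restarted with the corrected -p 9093:9093 mapping):

```shell
# From the --net=host bash container: host port 9093 now matches
# the broker's internal listener port, so the connection succeeds.
kafka-console-producer --broker-list localhost:9093 --topic my-topic
```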

Related

kafka: Connection to node 1001 could not be established. Broker may not be available

I started Zookeeper and Kafka containers using the commands below on CentOS 7.9:
docker run -it -d --net=sup-network --name zookeeper --ip 200.100.0.140 -p 2181:2181 zookeeper:3.7.0
docker run -it --net=sup-network --name kafka -p 9092:9092 \
-e KAFKA_ZOOKEEPER_CONNECT=200.100.0.140:2181 \
-e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 \
-e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://200.100.0.141:9092 \
-e ALLOW_PLAINTEXT_LISTENER=yes \
-d bitnami/kafka:3.0.0
The 200.100.0.xxx IPs are defined in Docker swarm.
But Kafka consistently printed the log below:
WARN [Controller id=1001, targetBrokerId=1001] Connection to node 1001 (/200.100.0.141:9092)
could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
How can I fix this?
Additional info:
I removed -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://200.100.0.141:9092 \, and then Kafka no longer logged Broker may not be available. But why do so many posts suggest that this line should be added?

Remote access to cassandra 4.0.1 using docker via cqlsh

My Environment:
Windows 10 Home
WSL2
Cassandra 4.0.1: Official Docker Image
Docker command:
docker run --name cassandra-node-0 -p 7000:7000 -p 7001:7001 -p 7199:7199 -p 9042:9042 -p 9160:9160 -e CASSANDRA_CLUSTER_NAME=MyCluster -e CASSANDRA_ENDPOINT_SNITCH=GossipingPropertyFileSnitch -e CASSANDRA_DC=datacenter1 -e CASSANDRA_BROADCAST_ADDRESS=192.168.1.101 -d cassandra
CQLSH Command:
docker run -it -e CQLSH_HOST=$(docker inspect --format='{{ .NetworkSettings.IPAddress}}' cassandra-node-0) --name cassandra-client --entrypoint=cqlsh cassandra
I am trying to connect to the Cassandra node using cqlsh from Ubuntu in WSL2 on the same PC.
I did not change any *.yaml files; I only used Docker environment variables.
When I set CQLSH_HOST to the node's Docker network IP, cqlsh connects to the node successfully.
But when I use my private IP, public IP, or 127.0.0.1, cqlsh's connection to the node is refused.
The same issue occurs when nodes connect from different networks.
I think I'm missing some Docker environment setting.
What settings am I missing?
[Update] I added some port forwarding rules to the firewall, but the issue remains.
[Update 2] docker ps -a result:
0.0.0.0:7000-7001->7000-7001/tcp, :::7000-7001->7000-7001/tcp, 0.0.0.0:7199->7199/tcp, :::7199->7199/tcp, 0.0.0.0:9042->9042/tcp, :::9042->9042/tcp, 0.0.0.0:9160->9160/tcp, :::9160->9160/tcp
Try adding --hostname and --network when you run Cassandra (create the network first with docker network create cassandra-node-0 if it does not exist). For example:
$ docker run --rm -d \
    --name cassandra-node-0 \
    --hostname cassandra-node-0 \
    --network cassandra-node-0 \
    cassandra
You'll find that it's easier to connect via cqlsh by adding:
--network cassandra-node-0
-e CQLSH_HOST=cassandra-node-0
to your docker run command. Cheers!
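Putting it together, a minimal end-to-end sketch might look like the following (the extra -e flags from your original command can be kept; only the networking-related options are shown here):

```shell
# Create a user-defined bridge network; containers on it can
# resolve each other by container name / hostname.
docker network create cassandra-node-0

# Start the node with a stable hostname on that network.
docker run --rm -d \
  --name cassandra-node-0 \
  --hostname cassandra-node-0 \
  --network cassandra-node-0 \
  -e CASSANDRA_CLUSTER_NAME=MyCluster \
  cassandra

# Connect with cqlsh over the same network, addressing the node
# by hostname instead of inspecting its IP.
docker run -it --rm \
  --network cassandra-node-0 \
  -e CQLSH_HOST=cassandra-node-0 \
  --entrypoint=cqlsh \
  cassandra
```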

How to create multiple Debezium connectors for a MySQL database

I am trying to run multiple Debezium connectors for a MySQL database, and my configuration is as follows.
sudo docker run -it --name zookeeper -p 2181:2181 -p 2888:2888 -p 3888:3888 debezium/zookeeper:1.5 &
sudo docker run -it --name kafka -p 9092:9092 --link zookeeper:zookeeper debezium/kafka:1.5 &
sudo docker run -it --name connect -p 8083:8083 -e GROUP_ID=1 -e CONFIG_STORAGE_TOPIC=my_connect_configs -e OFFSET_STORAGE_TOPIC=my_connect_offsets -e STATUS_STORAGE_TOPIC=my_connect_statuses --link zookeeper:zookeeper --link kafka:kafka debezium/connect:1.5 &
sudo docker run -it --name connect1 -p 8084:8084 -e GROUP_ID=1 -e CONFIG_STORAGE_TOPIC=my_connect_configs -e OFFSET_STORAGE_TOPIC=my_connect_offsets -e STATUS_STORAGE_TOPIC=my_connect_statuses --link zookeeper:zookeeper --link kafka:kafka debezium/connect:1.5 &
but when I try to run the second container, the following error occurs:
ERRO[0000] error waiting for container: context canceled
Can anyone help me with this, please?
You're not running any connectors, only containers for Connect workers.
One Kafka Connect worker can run more than one connector; connector tasks are submitted via the HTTP server on port 8083.
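For instance, you can register several connectors against the same worker over its REST API. In this sketch the connector name and database settings are purely illustrative; substitute your own MySQL host and credentials:

```shell
# Submit a hypothetical Debezium MySQL connector config to the
# single Connect worker listening on port 8083. Repeat with a
# different "name" (and database.server.id) for each connector.
curl -X POST -H "Content-Type: application/json" \
  http://localhost:8083/connectors \
  -d '{
    "name": "inventory-connector",
    "config": {
      "connector.class": "io.debezium.connector.mysql.MySqlConnector",
      "database.hostname": "mysql",
      "database.port": "3306",
      "database.user": "debezium",
      "database.password": "dbz",
      "database.server.id": "184054",
      "database.server.name": "dbserver1",
      "database.history.kafka.bootstrap.servers": "kafka:9092",
      "database.history.kafka.topic": "schema-changes.inventory"
    }
  }'
```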
Regarding the commands shown, you do not need multiple containers unless you are trying to create a Connect worker cluster.
In order to do so, they need the same topics and the same group ID.
You'd also want -p 8084:8083, since you've not changed the server port inside the second container. Also, rather than using &, you can use docker run -d; but Docker Compose would make more sense here.
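As a sketch, a Docker Compose equivalent of the commands above might look like the following (assuming the Debezium images wire up via the ZOOKEEPER_CONNECT and BOOTSTRAP_SERVERS variables; adjust to your setup):

```yaml
version: "2"
services:
  zookeeper:
    image: debezium/zookeeper:1.5
    ports: ["2181:2181"]
  kafka:
    image: debezium/kafka:1.5
    ports: ["9092:9092"]
    environment:
      ZOOKEEPER_CONNECT: zookeeper:2181
  connect:
    image: debezium/connect:1.5
    ports: ["8083:8083"]
    environment:
      GROUP_ID: 1
      CONFIG_STORAGE_TOPIC: my_connect_configs
      OFFSET_STORAGE_TOPIC: my_connect_offsets
      STATUS_STORAGE_TOPIC: my_connect_statuses
      BOOTSTRAP_SERVERS: kafka:9092
    depends_on: [zookeeper, kafka]
```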

docker container is active but port is not displayed

I am building a Docker image and running it with the following command:
docker run --name myjenkins -u root -d -p 8080:8080 -p 50000:50000 -v jenkins-volume:/var/jenkins_home -v /var/run/docker.sock:/var/run/docker.sock --net=host vm31
The container is up and running. When I do docker ps, the output is:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
22a92a3b7875 vm31 "/sbin/tini -- /usr/…" 4 seconds ago Up 3 seconds
Why doesn't it show the ports the container is using? I cannot reach Jenkins on localhost:8080.
You are using two conflicting things together:
--net=host
-p 8080:8080 -p 50000:50000
The first tells the container to use the network stack of the host; the second is the way to bind container ports to host ports. I believe you only want the second one, so try again after removing the --net=host option.
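In other words, keep the port mappings and drop --net=host; the corrected command from the question would be:

```shell
# Same command as before, minus --net=host, so the -p mappings
# take effect and Jenkins becomes reachable on localhost:8080.
docker run --name myjenkins -u root -d \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins-volume:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  vm31
```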

How do I set up a simple dockerized RabbitMQ cluster?

I've been doing a bit of reading up about setting up a dockerized RabbitMQ cluster and google turns up all sorts of results for doing so on the same machine.
I am trying to set up a RabbitMQ cluster across multiple machines.
I have three machines with the names dockerswarmmodemaster1, dockerswarmmodemaster2 and dockerswarmmodemaster3
On the first machine (dockerswarmmodemaster1), I issue the following command:
docker run -d -p 4369:4369 -p 5671:5671 -p 5672:5672 -p 15671:15671 -p 15672:15672 \
-p 25672:25672 --hostname dockerswarmmodemaster1 --name roger_rabbit \
-e RABBITMQ_ERLANG_COOKIE='secret cookie here' rabbitmq:3-management
Now this starts up a rabbitMQ just fine, and I can go to the admin page on 15672 and see that it is working as expected.
I then SSH to my second machine (dockerswarmmodemaster2) and this is the bit I am stuck on. I have been trying variations on the following command:
docker run -d -p 4369:4369 -p 5671:5671 -p 5672:5672 -p 15671:15671 \
-p 15672:15672 -p 25672:25672 --name jessica_rabbit -e CLUSTERED=true \
-e CLUSTER_WITH=rabbit@dockerswarmmodemaster1 \
-e RABBITMQ_ERLANG_COOKIE='secret cookie here' \
rabbitmq:3-management
No matter what I try, the web page on both RabbitMQ machines says that there is no cluster under the 'cluster links' section. I haven't tried involving the third machine yet.
So - some more info:
The machine names are resolvable by DNS.
I have tried using the --net=host switch in the docker run command on both machines; no change.
I am not using docker swarm or swarm mode.
I do not have docker compose installed. I'd prefer not to use it if possible.
Is there any way of doing this from the docker run command or will I have to download the rabbit admin cli and manually join to the cluster?
You can use the plugin https://github.com/aweber/rabbitmq-autocluster to create a RabbitMQ Docker cluster.
The plugin uses etcd2 or Consul for service discovery, so you don't need to use the rabbitmqctl command line.
I used it with Docker swarm, but that is not necessary.
The official container does not seem to support the environment variables CLUSTERED and CLUSTER_WITH. It supports only the list of variables specified in the RabbitMQ configuration documentation.
According to the official Clustering Guide, one possible solution is a configuration file. Thus, you can just provide your own configuration to the container.
The modified default configuration in your case will look like:
[
{ rabbit, [
{ loopback_users, [ ] },
{ cluster_nodes, {['rabbit@dockerswarmmodemaster1'], disc }}
]}
].
Save this snippet to, for example, /home/user/rmq/rabbitmq.config.
Hint: if you want to see the node in the management console, you need to add another file, /home/user/rmq/enabled_plugins, containing only the line
[rabbitmq_management].
After that, your command will look like:
docker run -d -p 4369:4369 -p 5671:5671 -p 5672:5672 -p 15671:15671 \
-p 15672:15672 -p 25672:25672 --name jessica_rabbit \
-v /home/user/rmq:/etc/rabbitmq \
-e RABBITMQ_ERLANG_COOKIE='secret cookie here' \
rabbitmq:3-management
PS: You may also need to consider setting the environment variable RABBITMQ_USE_LONGNAME.
In order to create a cluster, all RabbitMQ nodes that are to form the cluster must be able to reach each other by node name (hostname).
You need to specify a hostname for each Docker container with the --hostname option and add /etc/hosts entries for all the other containers; this you can do with the --add-host option or by manually editing the /etc/hosts file.
So, here is the example for a 3 rabbitmq nodes cluster with docker containers (rabbitmq:3-management image).
First, create a network so that you can assign IPs: docker network create --subnet=172.18.0.0/16 mynet1. We are going to have the following:
3 docker containers named rab1con, rab2con and rab3con
their IPs will be 172.18.0.11, 172.18.0.12 and 172.18.0.13 respectively
each of them will have the host name respectively rab1, rab2 and rab3
all of them must share the same erlang cookie
Spin up the first one
docker run -d --net mynet1 --ip 172.18.0.11 --hostname rab1 --add-host rab2:172.18.0.12 --add-host rab3:172.18.0.13 --name rab1con -e RABBITMQ_ERLANG_COOKIE='secret cookie here' rabbitmq:3-management
second one
docker run -d --net mynet1 --ip 172.18.0.12 --hostname rab2 --add-host rab1:172.18.0.11 --add-host rab3:172.18.0.13 --name rab2con -e RABBITMQ_ERLANG_COOKIE='secret cookie here' rabbitmq:3-management
last one
docker run -d --net mynet1 --ip 172.18.0.13 --hostname rab3 --add-host rab2:172.18.0.12 --add-host rab1:172.18.0.11 --name rab3con -e RABBITMQ_ERLANG_COOKIE='secret cookie here' rabbitmq:3-management
Then, in container rab2con, do
rabbitmqctl stop_app
rabbitmqctl join_cluster rabbit@rab1
rabbitmqctl start_app
and the same in rab3con and that's it.
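You can then verify the cluster from any of the three containers, for example:

```shell
# Prints the cluster status, including the list of disc nodes
# that have joined (rab1, rab2, rab3 if everything worked).
docker exec rab2con rabbitmqctl cluster_status
```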
