I'm trying to locally start Kafka in a Docker container, but don't seem to get the combination of options right.
I'm running on Windows 10, Docker ce version 2.0.0.3 (31259).
What I'm doing
run Zookeeper in Docker container
docker run -d --name=zookeeper1 --network=host --env-file=zookeeper_options confluentinc/cp-zookeeper
I'll leave out the environment file since zookeeper runs fine.
run Kafka in Docker container
docker run -d --network=host --name=kafka --env-file=kafka_options confluentinc/cp-kafka
with the kafka_options file containing
KAFKA_ZOOKEEPER_CONNECT=zookeeper1:2181
KAFKA_LISTENERS=PLAINTEXT://localhost:9092
KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:29092
Try to get the metadata list
kafkacat -b localhost:29092 -L
(This I run in the Windows 10 Ubuntu subsystem; the others I ran from PowerShell. I also have a small Java Kafka application which exhibits the same behavior, though.)
The result is
% ERROR: Failed to acquire metadata: Local: Broker transport failure
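To see which addresses the client actually attempts, kafkacat's -d flag enables librdkafka's debug output (broker is one of its debug contexts):
kafkacat -b localhost:29092 -L -d broker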
What I've read
Obviously I've read the Quickstart with Docker documentation, which uses docker-compose (which I don't want), as well as the Docker section of the documentation.
Aside from that, most notably this post by Robin explaining the advertised listeners concept, but I still can't see what I'm doing wrong.
I also found this issue about a difference that prevents you from using the official Quickstart steps on Windows; this led me to try the alternative of running with a separate network.
Separate network
Following the steps in the issue:
docker network create confluent
docker run -d --name=zookeeper1 --network=confluent -p 22181:2181 --env-file=zookeeper_options confluentinc/cp-zookeeper
docker run -d --network=confluent --name=kafka -p 29092:9092 --env-file=kafka_options confluentinc/cp-kafka
kafkacat -b localhost:29092 -L
That does change the outcome to
% ERROR: Failed to acquire metadata: Local: Timed out
So it looks like at least it connects, but that doesn't help much in the end.
The question is what am I doing wrong? Is it the Kafka configuration options, or is it a Docker issue I'm not aware of?
EDIT:
It does work with the sample docker-compose.yml here, but shouldn't we be able to start the containers separately?
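For comparison, the sample compose file follows the dual-listener pattern from Robin's post. Translated to a standalone kafka_options env file it would look roughly like this (a sketch, assuming the separate confluent network, the broker container named kafka, and a -p 29092:29092 port mapping; the listener names are illustrative):
KAFKA_ZOOKEEPER_CONNECT=zookeeper1:2181
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092
KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
KAFKA_INTER_BROKER_LISTENER_NAME=PLAINTEXT
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1
Other containers on the confluent network would then use kafka:9092, while host clients use localhost:29092.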
Related
When running on the host, I can get all the Kafka topics with:
docker exec broker kafka-topics --bootstrap-server broker:29092 --list
I can't run this from within a container because I'd get docker: not found, and even if I installed Docker in the container I don't think it would work anyway. Also, apparently it's hard and insecure to run an arbitrary command in another Docker container. How else can I get all the Kafka topics from within another Docker container? E.g., can I interface with Kafka through HTTP?
I get docker: not found
That implies the docker CLI is not installed, and has nothing to do with Kafka.
docker is not (typically) installed in "another container", so that explains that... You'll need to install Java and download the Kafka CLI tools to run kafka-topics.sh in any other environment, and then not use docker exec.
Otherwise, your command is "correct", but if you are using Docker Compose, you should do it like this from your host (change port accordingly).
docker-compose exec broker bash -c \
"kafka-topics --list --bootstrap-server localhost:9092"
This is gonna sound stupid probably but...
I'm trying to run airflow on a windows machine.
I'm aware that Airflow doesn't work on Windows, so I thought I'd use Docker.
So after installing Docker on Windows, I opened up my cmd and typed:
docker pull puckel/docker-airflow:1.10.9
docker container run --name airflow-docker -it puckel/docker-airflow:1.10.9 /bin/bash
That image already contains Python and Airflow (https://github.com/puckel/docker-airflow).
Then
airflow initdb
airflow webserver -p 8080
Everything seems fine.
I tried to visit localhost:8080 in Chrome but nothing shows up.
I don't know where I'm supposed to see the Airflow UI.
Should I expose port 8080 to see it?
How can I do that?
Thanks.
Other resources:
https://www.youtube.com/watch?v=20HDFbYyAY0
If you want port 8080 to be exposed to the host, you can use the -p parameter in the docker run command. You can also set the command webserver directly when starting the container. This will start Airflow with the Sequential Executor.
docker run --name airflow-docker -d -p 8080:8080 puckel/docker-airflow:1.10.9 webserver
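If you also want your DAGs to show up in the UI, the puckel image loads them from /usr/local/airflow/dags, so you can bind-mount a host folder there (the host path below is just a placeholder; adjust it to your machine):
docker run -d --name airflow-docker -p 8080:8080 \
  -v C:/path/to/dags:/usr/local/airflow/dags \
  puckel/docker-airflow:1.10.9 webserver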
I used the below command to start the splunk server using Docker.
docker run -d -e "SPLUNK_START_ARGS=--accept-license" -e "SPLUNK_USER=root" -p "8000:8000" splunk/splunk
But when I open the URL localhost:8000, I get a "Server can't be reached" message.
What am I missing here?
I followed this tutorial: https://medium.com/@caysever/docker-splunk-logging-driver-c70dd78ad56a
Depending on your Docker version and host OS, you could be missing the mapping of port 8000 from the VirtualBox VM.
This should not be needed if you are using Hyper-V (Windows host) or xhyve (Mac host), but can still be needed with VirtualBox.
The link to the Docker image is https://hub.docker.com/r/splunk/splunk/, which gives details on pulling and running the image. According to it, the right command is:
docker run -d -p 8000:8000 -e "SPLUNK_START_ARGS=--accept-license" -e "SPLUNK_PASSWORD=<password>" --name splunk splunk/splunk:latest
This works correctly for me. The image uses Ansible to do the configuration once the container has been created. If you do not specify a password, the respective Ansible task will fail and your container will not be configured.
To follow the progress of the container configuration, you may run this command after running the above command:
docker logs -f splunk
This assumes the name of your container is splunk. Here you will be able to see Ansible's progress in configuring Splunk.
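A quick way to check whether that configuration has finished (a sketch; the splunk/splunk image defines a Docker healthcheck, so docker ps reports a health status):
docker ps --filter name=splunk
The STATUS column should show (healthy) once Splunk is reachable on port 8000.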
In case you are looking to create a clustered Splunk deployment then, you might want to have a look at this: https://github.com/splunk/docker-splunk
Hope this helps!
I'm trying to run the Kafka Docker image inside my VirtualBox VM. First I run the ZooKeeper server:
docker run -d -p 2181:2181 --name zookeeper jplock/zookeeper
After that, I run kafka which is linked to that zookeeper server:
docker run -d --name kafka --link zookeeper:zookeeper ches/kafka
When I check "docker ps -a", only zookeeper is running and kafka is not (the status of kafka is always "Exited".
However, when I do those things above outside VM, which is local machine, everything work just fine. What am I missing here?
Update: I ran docker logs kafka and figured out from the last few lines of the log output (screenshots omitted) that the VM did not have enough memory to allocate for the Kafka server. The solution is quite easy: I just needed to assign more memory to the VM in the Vagrantfile; previously it was 1024 MB, and now:
config.vm.provider "virtualbox" do |vm|
  vm.memory = 2048
  vm.cpus = 2
end
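If giving the VM more memory is not an option, another approach (a sketch, assuming the image passes KAFKA_HEAP_OPTS through to Kafka's standard start script, as most Kafka images do) is to shrink the broker's JVM heap instead:
docker run -d --name kafka --link zookeeper:zookeeper \
  -e KAFKA_HEAP_OPTS="-Xmx512M -Xms512M" ches/kafka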
I'm using Docker version 1.9.1 build a34a1d5 on an Ubuntu 14.04 server host and I have 4 containers: redis (based on alpine linux 3.2), mongodb (based on alpine linux 3.2), postgres (based on ubuntu 14.04) and the one that will run the application that connects to these other containers (based on alpine linux 3.2). All of the db containers expose their corresponding ports in the Dockerfile.
I did the modifications on the database containers so their services don't bind to the localhost IP but to all addresses. This way I would be able to connect to all of them from the app container.
For the sake of testing, I first ran the database containers and then the app one with a command like the following:
docker run --rm --name app_container --link mongodb_container --link redis_container --link postgres_container -t localhost:5000/app_image
I enter the terminal of the app container and verify that its /etc/hosts file contains the IPs and names of the other containers, and I am able to ping all the db containers. But I cannot connect to the ports of any of them.
A simple telnet mongodb_container 27017 just sits and waits forever, and the same happens if I try to connect to the other db containers. If I run the application, it also complains that it cannot connect to the specified db services.
Important note: I am able to telnet to the corresponding ports of all the db containers from the host.
What might be happening?
EDIT: I'll include the run commands for the db containers:
docker run --rm --name mongodb_container -t localhost:5000/mongodb_image
docker run --rm --name redis_container -t localhost:5000/redis_image
docker run --rm --name postgres_container -t localhost:5000/postgres_image
Well, the problem with telnet seems to be related to the telnet client on Alpine Linux, since the following two commands showed me that the ports on the containers were open:
nmap -p27017 172.17.0.3
nc -vz 172.17.0.3 27017
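For completeness, the same check works from inside the app container using the container name instead of the IP, since nc ships with Alpine's busybox (-v for verbose output, -z to scan without sending data):
nc -vz mongodb_container 27017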
Focused mainly on the telnet command I issued, I believed the problem was the ports being closed or something, and I overlooked the configuration file the app was using to connect to the services (it was the wrong filename). My bad.
All works fine now.