I'm trying to run the Kafka Docker image inside my VirtualBox VM. First I start the ZooKeeper server with:
docker run -d -p 2181:2181 --name zookeeper jplock/zookeeper
After that, I run Kafka, linked to that ZooKeeper server:
docker run -d --name kafka --link zookeeper:zookeeper ches/kafka
When I check "docker ps -a", only ZooKeeper is running and Kafka is not (the status of Kafka is always "Exited").
However, when I run the same commands outside the VM, on my local machine, everything works just fine. What am I missing here?
Update: I just ran "docker logs kafka" and got this:
I figured out that the VM did not have enough memory to allocate for the Kafka server; I got that from the last three lines of the second picture above. The solution is quite easy: I just needed to assign more memory to the VM in the Vagrantfile (previously it was 1024). Now it is:
config.vm.provider "virtualbox" do |vm|
  vm.memory = 2048
  vm.cpus = 2
end
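After changing the Vagrantfile, the VM has to be restarted so the new memory and CPU settings take effect (a small extra step, assuming the VM is managed by Vagrant as above):
# restart the VM so the updated Vagrantfile settings are applied
vagrant reload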
I'm trying to locally start Kafka in a Docker container, but don't seem to get the combination of options right.
I'm running on Windows 10, Docker ce version 2.0.0.3 (31259).
What I'm doing
run Zookeeper in Docker container
docker run -d --name=zookeeper1 --network=host --env-file=zookeeper_options confluentinc/cp-zookeeper
I'll leave out the environment file since zookeeper runs fine.
run Kafka in Docker container
docker run -d --network=host --name=kafka --env-file=kafka_options confluentinc/cp-kafka
with the kafka_options file containing
KAFKA_ZOOKEEPER_CONNECT=zookeeper1:2181
KAFKA_LISTENERS=PLAINTEXT://localhost:9092
KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:29092
Try to get the metadata list
kafkacat -b localhost:29092 -L
(This I run in the Windows 10 Ubuntu subsystem; the others I ran from PowerShell. I also have a small Java Kafka application which exhibits the same behavior, though.)
The result is
% ERROR: Failed to acquire metadata: Local: Broker transport failure
What I've read
Obviously the Quickstart with Docker documentation, which uses docker-compose (which I don't want), as well as the Docker section of the documentation.
Aside from that, most notably this post by Robin explaining the advertised listeners concept; but I still can't see what I'm doing wrong.
I also found this issue about a difference in Windows that prevents you from using the official Quickstart steps; this led me to try the alternative run with a separate network.
Separate network
Following the steps in the issue:
docker network create confluent
docker run -d --name=zookeeper1 --network=confluent -p 22181:2181 --env-file=zookeeper_options confluentinc/cp-zookeeper
docker run -d --network=confluent --name=kafka -p 29092:9092 --env-file=kafka_options confluentinc/cp-kafka
kafkacat -b localhost:29092 -L
That does change the outcome to
% ERROR: Failed to acquire metadata: Local: Timed out
So it looks like at least it connects, but that doesn't help much in the end.
The question is what am I doing wrong? Is it the Kafka configuration options, or is it a Docker issue I'm not aware of?
EDIT:
It does work with the sample docker-compose.yml here, but shouldn't we be able to start the containers separately?
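For reference, here is a hedged sketch of how the listener settings are often paired for the separate-network run above; it reuses the names and the 29092:9092 port mapping from this question and is an assumption about the intended setup, not a verified fix:
# kafka_options (sketch): bind on the container port, advertise the host-mapped port
KAFKA_ZOOKEEPER_CONNECT=zookeeper1:2181
KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092
KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:29092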
On my host machine, I have installed Docker. Then I pulled a Jenkins image.
I want to run that image as a daemon service, like the other services on my host machine that start automatically every time the machine reboots. And how can I keep the Jenkins port fixed (like 8080) in my Docker setup?
docker run -d --restart always -p 8080:8080 -p 50000:50000 -v /your/home:/var/jenkins_home jenkins
-d: run the container in the background
--restart always: the container will always restart (unless manually stopped), so it starts automatically at boot.
The rest of the arguments come from the Jenkins image documentation; you may need to adapt the port mapping and volume path.
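If the container was already created without that flag, the restart policy can also be set afterwards. The container name jenkins below is only an assumption, since the run command above does not pass --name, so substitute the real name or ID:
# apply the restart policy to an existing container (name assumed)
docker update --restart=always jenkins
# confirm which restart policy is in effect
docker inspect -f '{{.HostConfig.RestartPolicy.Name}}' jenkins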
OK, I am pretty new to the Docker world, so this might be a very basic question.
I have a container running in Docker, which is running RabbitMQ. Let's say the name of this container is "Rabbit-container".
RabbitMQ container was started with this command:
docker run -d -t -i --name rmq -p 5672:5672 rabbitmq:3-management
Python script command with 2 args:
python ~/Documents/myscripts/migrate_data.py amqp://rabbit:5672/ ~/Documents/queue/
Now, I am running a Python script from my host machine, which is creating some messages. I want to send these messages to my "Rabbit-container". Hence I want to connect to this container from my host machine (Mac OSX).
Is this even possible? If yes, how?
Please let me know if more details are needed.
So, I solved it by simply mapping the RMQ listening port to host OS:
docker run -d -t -i --name rmq -p 15672:15672 -p 5672:5672 rabbitmq:3-management
I previously had only -p 15672:15672 in my command, which maps the admin UI from the Docker container to my host OS. I added -p 5672:5672, which maps the RabbitMQ listening port from the Docker container to the host OS.
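To double-check the mapping from the host side, something like the following can be used (a quick sanity check, assuming the container is still named rmq as above):
# list the ports the rmq container publishes on the host
docker port rmq
# probe the broker port from the host
nc -vz localhost 5672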
If you're running this container in your local OSX system then you should find your default docker-machine ip address by running:
docker-machine ip default
Then you can change your Python script to point to that address and the mapped port, i.e. <your_docker_machine_ip>:5672.
That happens because docker runs in a virtualization engine on OSX and Windows, so when you map a port to the host, you're actually mapping it to the virtual machine.
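Concretely, that could look like the following, reusing the script invocation from the question and assuming docker-machine is indeed what manages the VM here:
# substitute the docker-machine VM address into the broker URL
python ~/Documents/myscripts/migrate_data.py amqp://$(docker-machine ip default):5672/ ~/Documents/queue/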
You'd need to run the container with port 5672 exposed, perhaps 15672 as well if you want the web UI, and 5671 if you use SSL, or any other port for which you add a TCP listener in RabbitMQ.
It would also be easier if you had a specific IP and a host name for the RabbitMQ container. To do this, you'd need to create your own Docker network:
docker network create --subnet=172.18.0.0/16 mynet123
After that start the container like so
docker run -d --net mynet123 --ip 172.18.0.11 --hostname rmq1 --name rmq_container_name -p 15673:15672 rabbitmq:3-management
Note that with the rabbitmq:3-management image, port 5672 is (well, was when I used it) already exposed, so there is no need to do that. --name sets the container name, and --hostname obviously sets the host name.
So now, from your host, you can connect to the rmq1 RabbitMQ server.
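A quick, hedged way to verify that the fixed address is reachable is to ping it from a throwaway container on the same network (the alpine image is used here only for its ping):
# ping the static IP assigned to the RabbitMQ container
docker run --rm --net mynet123 alpine ping -c 1 172.18.0.11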
You said that you have never used docker-machine before, so I assume you are using the Docker Beta for Mac (you should see the Docker icon in the menu bar at the top).
Your docker run command for rabbit is correct. If you now want to connect to rabbit, you have two options:
Wrap your python script in a new container and link it to rabbit:
docker run -it --rm --name migration --link rmq:rabbit -v ~/Documents/myscripts:/app -w /app python:3 python migrate_data.py
Note that we have to link rmq:rabbit, because you name your container rmq but use rabbit in the script.
Execute your python script on your host machine and use localhost:5672
python ~/Documents/myscripts/migrate_data.py amqp://localhost:5672/ ~/Documents/queue/
I finished this documentation:
https://docs.docker.com/swarm/install-w-machine/
It works fine.
Now I am trying to set up EC2 instances by following this documentation:
https://docs.docker.com/swarm/install-manual/
I am at Step 4, "Set up a discovery backend".
I cannot understand what I need to do in the steps that follow.
I created 5 nodes in EC2: manager0, manager1, consul0, node0, node1. Now I need to know how to set up service discovery with Swarm.
In that document they ask us to connect to manager0 and consul0, then run ifconfig, and then they refer to an eth0 address; I don't know where that is coming from.
Ultimately I need to know where (in which node?) to run this command:
$ docker run -d -p 8500:8500 --name=consul progrium/consul -server -bootstrap
Any suggestions on how to get past this step?
Consul will run on the consul0 server you created. So basically you first need to be able to run Docker on worker0 and worker1 remotely; this is step 3. A better way of doing this is editing the daemon options directly with the command:
echo 'DOCKER_OPTS="-H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock"' | sudo tee /etc/default/docker
Then restart docker. Afterwards you will find that you can run docker remotely from master0, master1 or any other instance behind your firewall with docker commands that start with:
docker -H $WORKER0_IPADDRESS:2375
For example, if your worker's IP address were 1.2.3.4, this would run the docker ps command remotely:
docker -H 1.2.3.4:2375 ps
This is what Swarm runs on. Then start up your Consul server with the command you had; you got that one right. And that's it: you won't do anything else with the consul0 server except use its IP address when you run your Swarm commands.
So, if $CONSUL0 represents the IP address of your Consul server, this is how you would set up the rest of the swarm, running each command on the local machine of the corresponding node:
On consul0:
docker run -d -p 8500:8500 --restart=unless-stopped --name=consul progrium/consul -server -bootstrap
On master0 and master1:
docker run --name=master -d -p 4000:4000 swarm manage -H :4000 --replication --advertise $(hostname -i):4000 consul://$CONSUL0:8500
On worker0 and worker1:
docker run -d --name=worker swarm join --advertise=$(hostname -i):2375 consul://$CONSUL0:8500/
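Once the managers and workers are up, a quick way to check that the nodes registered through Consul is to ask a manager for the cluster info. Here $MASTER0 is a placeholder for a manager's IP address, in the same style as the variables above:
# list the nodes the swarm manager discovered via consul
docker -H $MASTER0:4000 info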
I'm using Docker version 1.9.1 build a34a1d5 on an Ubuntu 14.04 server host and I have 4 containers: redis (based on alpine linux 3.2), mongodb (based on alpine linux 3.2), postgres (based on ubuntu 14.04) and the one that will run the application that connects to these other containers (based on alpine linux 3.2). All of the db containers expose their corresponding ports in the Dockerfile.
I modified the database containers so their services bind not to the localhost IP but to all addresses. This way I should be able to connect to all of them from the app container.
For the sake of testing, I first ran the database containers and then the app one with a command like the following:
docker run --rm --name app_container --link mongodb_container --link redis_container --link postgres_container -t localhost:5000/app_image
I enter the terminal of the app container and verify that its /etc/hosts file contains the IPs and names of the other containers. I am then able to ping all the db containers, but I cannot connect to the ports of any of them.
A simple telnet mongodb_container 27017 just sits and waits forever, and the same happens if I try to connect to the other db containers. If I run the application, it also complains that it cannot connect to the specified db services.
Important note: I am able to telnet to the corresponding ports of all the db containers from the host.
What might be happening?
EDIT: I'll include the run commands for the db containers:
docker run --rm --name mongodb_container -t localhost:5000/mongodb_image
docker run --rm --name redis_container -t localhost:5000/redis_image
docker run --rm --name postgres_container -t localhost:5000/postgres_image
Well, the problem with telnet seems to be related to the telnet client on Alpine Linux, since the following two commands showed me that the ports on the containers were open:
nmap -p27017 172.17.0.3
nc -vz 172.17.0.3 27017
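The same kind of check can be scripted from the host by asking Docker for a container's bridge IP first; this is a hedged variant of the commands above and assumes the containers sit on the default bridge network:
# resolve the container's bridge IP from the host and probe the port
nc -vz $(docker inspect -f '{{.NetworkSettings.IPAddress}}' mongodb_container) 27017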
Because I was focused mainly on the telnet command I issued, I assumed the problem was that the ports were closed or something like that, and I overlooked the configuration file the app was using to connect to the services (it was the wrong filename). My bad.
Everything works fine now.