I want to run a "standalone" Docker container with a configured Kafka server.
I found out on the Kafka website (https://kafka.apache.org/quickstart) how to run Kafka and create a topic. :)
But when I do everything as instructed, I need to run three terminals:
One to run the ZooKeeper server:
./bin/zookeeper-server-start.sh config/zookeeper.properties
A second to start the Kafka server:
./bin/kafka-server-start.sh config/server.properties
A third to create a topic:
./bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic test
The question is:
How do I run three independent terminals inside Docker while building the docker image?
Because I want to only use the commands:
docker build . -t kafka
and then
docker start kafka
and have an up and running Kafka server with a created topic.
I've done something, but I'm stuck on trying to create these terminals.
Here's the project:
https://github.com/mpawel1993/Kafka-Docker
I want to run "standalone" Docker container with a configured Kafka server
If you want a configured Kafka server, any of the existing Docker images works fine. landoop/fast-data-dev includes Kafka, ZooKeeper, Kafka Connect, and a Schema Registry, if by "standalone" you mean having all the necessary components in one image.
How do I run three independent terminals inside Docker while building the docker image
You wouldn't. Each RUN command is a single terminal (shell) of its own.
You also should not start Kafka and ZooKeeper in one container, for fault tolerance and scalability reasons.
You also can't create Kafka topics while building the image; only once the container is running and the server is up can you create topics.
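Concretely, the flow would be something like this (a sketch, assuming your image is tagged kafka and the broker ends up listening on localhost:9092 inside the container; adjust the path to kafka-topics.sh for your image):
docker build . -t kafka
docker run -d --name kafka kafka
# once the broker is up, create the topic against the running container:
docker exec kafka ./bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic test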
Related
When running on the host, I can get all the Kafka topics with:
docker exec broker kafka-topics --bootstrap-server broker:29092 --list
I can't run this from within a container because I'd get docker: not found, and even if I installed Docker in the container, I don't think it would work anyway. Also, apparently it's hard and insecure to run an arbitrary command in another Docker container. How else can I get all the Kafka topics from within another Docker container? E.g. can I interface with Kafka over HTTP?
I get docker: not found
That seems to imply the docker CLI is not installed; it has nothing to do with Kafka.
docker is not (typically) installed in "another container", so that explains that... You'll need to install Java and download the Kafka CLI tools to run kafka-topics.sh in any other environment, and then not use docker exec.
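As a rough sketch of that approach (the base image, Kafka version, and download URL are assumptions; pick whatever matches your setup):
FROM eclipse-temurin:17-jre
# download the Kafka distribution just for its CLI scripts
RUN apt-get update && apt-get install -y curl && \
    curl -fsSL https://archive.apache.org/dist/kafka/3.7.0/kafka_2.13-3.7.0.tgz | tar xz -C /opt && \
    ln -s /opt/kafka_2.13-3.7.0 /opt/kafka
ENV PATH="/opt/kafka/bin:$PATH"
# a container built from this can then run, for example:
#   kafka-topics.sh --bootstrap-server broker:29092 --list
# provided it is on the same Docker network as the broker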
Otherwise, your command is "correct", but if you are using Docker Compose, you should do it like this from your host (change port accordingly).
docker-compose exec broker bash -c \
"kafka-topics --list --bootstrap-server localhost:9092"
I want to run nginx inside a docker container as a reverse proxy to talk to an instance of gunicorn running on the host machine (possibly inside another container, but not necessarily). I've seen solutions involving docker compose but I'd like to learn how to do it "manually" first without learning a new tool, right now.
The simplified version of the problem is this:
Say I run a docker container on my machine.
Outside the container, I run gunicorn on port 5000.
From within the container, I want to run ping ??? and have it reach the gunicorn instance started in step 2.
How can I do this in a simple, portable way?
The easiest way is to run gunicorn in its own container and expose port 5000 (not map it, just expose it).
It is important to create a network first and run both your containers on the same network so that they see each other: docker network create xxxx
Then, when you run your 2 containers, attach them to this network: docker run ... --network xxxx ...
Give names to your containers; it is good practice (e.g. docker run ... --name gunicorn ...).
Now from your container you can ping your gunicorn container: ping gunicorn, or even telnet to it on port 5000.
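Putting those steps together (the image name my-gunicorn-image is just a placeholder for whatever you build):
docker network create mynet
docker run -d --name gunicorn --network mynet my-gunicorn-image
docker run -d --name nginx --network mynet -p 80:80 nginx
# from inside the nginx container, the gunicorn container is reachable by name
# (install ping/iputils first if the image doesn't ship it):
docker exec -it nginx ping gunicorn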
If you need more information just drop me a comment.
I'm trying to locally start Kafka in a Docker container, but don't seem to get the combination of options right.
I'm running on Windows 10, Docker ce version 2.0.0.3 (31259).
What I'm doing
run Zookeeper in Docker container
docker run -d --name=zookeeper1 --network=host --env-file=zookeeper_options confluentinc/cp-zookeeper
I'll leave out the environment file since zookeeper runs fine.
run Kafka in Docker container
docker run -d --network=host --name=kafka --env-file=kafka_options confluentinc/cp-kafka
with the kafka_options file containing
KAFKA_ZOOKEEPER_CONNECT=zookeeper1:2181
KAFKA_LISTENERS=PLAINTEXT://localhost:9092
KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:29092
Try to get the metadata list
kafkacat -b localhost:29092 -L
(this I do in the Windows 10 Ubuntu subsystem; the others I ran from PowerShell. I also have a small Java Kafka application which exhibits the same behavior, though)
The result is
% ERROR: Failed to acquire metadata: Local: Broker transport failure
What I've read
Obviously the Quickstart with Docker documentation, which uses docker-compose (which I don't want), as well as the Docker section of the documentation.
Aside from that, most notably this post by Robin explaining the advertised listeners concept, but I still can't see what I'm doing wrong.
I also found this issue about a difference in Windows preventing you from using the official Quickstart steps on Windows; this led me to try the alternative of running with a separate network.
Separate network
Following the steps in the issue:
docker network create confluent
docker run -d --name=zookeeper1 --network=confluent -p 22181:2181 --env-file=zookeeper_options confluentinc/cp-zookeeper
docker run -d --network=confluent --name=kafka -p 29092:9092 --env-file=kafka_options confluentinc/cp-kafka
kafkacat -b localhost:29092 -L
That does change the outcome to
% ERROR: Failed to acquire metadata: Local: Timed out
So it looks like at least it connects, but that doesn't help much in the end.
The question is what am I doing wrong? Is it the Kafka configuration options, or is it a Docker issue I'm not aware of?
EDIT:
It does work with the sample docker-compose.yml here, but shouldn't we be able to start the containers separately?
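For reference, the listener setup in that sample compose file follows the dual-listener pattern from the post linked above: one listener for traffic inside the Docker network and one advertised back to the host. Translated to the kafka_options style used here, it would look roughly like this (a sketch, not a tested config; the host port mapping then becomes -p 29092:29092):
KAFKA_ZOOKEEPER_CONNECT=zookeeper1:2181
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092
KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
KAFKA_INTER_BROKER_LISTENER_NAME=PLAINTEXT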
I create a swarm and join a node; very nice, all works fine.
docker swarm init --advertise-addr 192.168.99.1
docker swarm join --token verylonggeneratedtoken 192.168.99.1:2377
I create 3 services on the swarm manager
docker service create --replicas 1 --name nginx --publish published=80,target=80 nginx
docker service create --replicas 1 --name php --publish published=9000,target=9000 php:7.1-fpm
docker service create --replicas 1 --name postgres --publish published=5432,target=5432 postgres:9.5
All services boot up just fine, but if I customize the php image with my app and configure nginx to talk to the php-fpm socket, I can't find a way to make these three services communicate, even if I access the services using "docker exec -it service-id bash" and try to ping the container names or host names (I even tried to curl them).
What I am trying to say is that I don't know how to configure nginx to connect to fpm, since I don't know how one container communicates with another using swarm. With docker-compose or docker run it is as simple as using a links option. I've read all the documentation around, spent hours on trial and error, and I just couldn't wrap my head around this. I have read about the routing mesh, which will get the ports published, and it really does for the outside world, but I couldn't figure out on which IP they are published for the internal containers; it also can't be a random IP, as that would cause problems managing my app's configuration, even the nginx configuration.
To have multiple containers communicate with each other, they need to be running on a user-created network. With swarm mode, you want to use an overlay network so containers can run on multiple hosts.
docker network create -d overlay mynet
Then run the services with that network:
docker service create --network mynet ...
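Applied to the three services from the question, that looks roughly like this (php and postgres only need to be reachable inside the overlay network, so they don't strictly need to publish ports to the host):
docker service create --replicas 1 --name nginx --network mynet --publish published=80,target=80 nginx
docker service create --replicas 1 --name php --network mynet php:7.1-fpm
docker service create --replicas 1 --name postgres --network mynet postgres:9.5
nginx can then reach php-fpm at php:9000 and Postgres at postgres:5432 by service name.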
The easier solution is to use a compose.yml file to define each of the services. By default, the services in a stack are deployed on their own network:
docker stack deploy -c compose.yml stack-name
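A minimal compose.yml sketch for the three services in the question (images taken from the question, everything else illustrative):
version: "3.7"
services:
  nginx:
    image: nginx
    ports:
      - "80:80"
  php:
    image: php:7.1-fpm
  postgres:
    image: postgres:9.5
The stack's default network lets nginx refer to the others by service name (php, postgres).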
Or you can just write one docker-compose file and deploy a docker stack with it.
It's easier and more reliable to combine php_fpm and nginx in the same image. I know this goes against the official way of single-app images, but for cases like php_fpm+nginx, where you must have both to return a request, it's the best option. I have a WIP sample here: https://github.com/BretFisher/php-docker-good-defaults
This is a two-part question.
First part:
What is the best approach to run Consul and Tomcat in the same Docker container?
I've built my own image, installing both Tomcat and Consul correctly, but I am not sure how to start them. I tried putting both calls as CMD in the Dockerfile, but no success. I tried putting Consul as an ENTRYPOINT (in the Dockerfile) and having Tomcat called in the "docker run" command. It could be vice versa, but I have a feeling that is not a good way either.
Docker will run on one AWS instance. Each Docker container would run Consul as a server, to register itself with Consul on another AWS instance. Consul and consul-template will be integrated to provide proper load balancing. This way, my HAProxy instance will be able to correctly forward the requests as I plug or unplug containers.
Second part:
In the individual tests I did, the Docker container was able to reach my main Consul server (the leader), but it failed to register itself as an "alive" node.
Reading the logs on the Consul server, I think it is a matter of which ports I am exposing and publishing. In AWS, I have already allowed communication on all TCP and UDP ports between the instances in this particular Security Group.
Do you know which ports I should be exposing and publishing to allow proper communication between a standalone Consul (AWS instance) and the Consul servers (running inside Docker containers inside an AWS instance)? What is the command to run the docker container: docker run -p 8300:8300 .........
Thank you.
I would use ENTRYPOINT to kick off a script on docker run.
Something like
ENTRYPOINT myamazingbashscript.sh
The syntax might be off, but you get the idea.
The script should start both services and finally tail -f the Tomcat logs (or any logs).
tail -f will prevent the container from exiting, since the tail -f command never exits, and it will also help you see what Tomcat is doing.
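A rough sketch of such a script, assuming the standard Tomcat layout under /usr/local/tomcat; the consul agent flags are placeholders for your real configuration:
#!/bin/bash
# start the Consul agent in the background (illustrative flags)
consul agent -server -bootstrap-expect=1 -data-dir=/tmp/consul &
# start Tomcat without blocking
/usr/local/tomcat/bin/catalina.sh start
# keep the container alive and stream the Tomcat logs
tail -f /usr/local/tomcat/logs/catalina.out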
Run docker logs -f to watch the logs after a docker run.
Note that because the container doesn't exit, you can exec into it with docker exec -it containerName bash.
This lets you have a sniff around inside the container.
It's generally not the best approach to have two services in one container, because it destroys the separation of concerns and reusability, but you may have valid reasons.
To build, use docker build, then run with docker run as you stated.
If you decide to go for a two-container solution, then you will need to expose ports between the containers to allow them to talk to each other. You could share files between containers using volumes_from.
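For example, a two-container sketch in compose terms (image tags and the shared volume are illustrative; a shared named volume is the modern stand-in for volumes_from):
version: "3"
services:
  consul:
    image: hashicorp/consul
    ports:
      - "8500:8500"
  tomcat:
    image: tomcat:9
    ports:
      - "8080:8080"
    volumes:
      - shared-data:/usr/local/tomcat/shared
volumes:
  shared-data:
Both services land on the compose default network, so Tomcat can reach Consul at consul:8500.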