When running on the host, I can get all the Kafka topics with:
docker exec broker kafka-topics --bootstrap-server broker:29092 --list
I can't run this from within a container because I'd get docker: not found, and even if I installed the Docker CLI in the container, I don't think it would work anyway. Apparently it's also difficult and insecure to allow running an arbitrary command in another Docker container. How else can I get all the Kafka topics from within another Docker container? E.g. can I interface with Kafka through HTTP?
I get docker: not found
That implies the docker CLI is not installed, and has nothing to do with Kafka.
docker is not (typically) installed in "another container", so that explains that. To run kafka-topics.sh in any other environment, you'll need to install Java and download the Kafka CLI tools, and then you won't need docker exec at all.
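For example, a minimal sketch of installing the Kafka CLI tools inside another (Debian-based) container; the Kafka version, download URL, and broker address are assumptions to adjust for your setup:
# Sketch: install Java + the Kafka CLI tools in the other container (no docker exec needed)
apt-get update && apt-get install -y default-jre curl
curl -sL https://downloads.apache.org/kafka/3.7.0/kafka_2.13-3.7.0.tgz | tar xz
# broker:29092 assumes both containers share a Docker network and the broker
# advertises a listener on that host:port
./kafka_2.13-3.7.0/bin/kafka-topics.sh --bootstrap-server broker:29092 --list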
Otherwise, your command is "correct", but if you are using Docker Compose, you should run it like this from your host (change the port accordingly):
docker-compose exec broker bash -c \
"kafka-topics --list --bootstrap-server localhost:9092"
I have a group of docker containers running on a host (172.16.0.1). Because of restrictions of the size of the host running the docker containers, I'm trying to set up an auto-test framework on a different host (172.16.0.2). I need my auto-test framework to be able to access the docker containers. I've looked over the docker documentation and I don't see anything that says how to do this.
Is it possible to run docker exec and point it at the Docker host? I was hoping to do something like the following, but there isn't an option to specify the host:
docker exec -h 172.16.0.1 -it my_container bash
Should I be using a different command?
Thank you!
Not sure why you need to run docker exec remotely, but anyway, it is achievable.
You need to make sure the Docker daemon on the host where your containers are running is listening on a TCP socket.
Something like this:
# Running docker daemon which listens on tcp socket
$ sudo dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375
Now interact with the Docker daemon remotely from the external VM using:
$ docker -H tcp://<machine-ip>:2375 exec -it my-container bash
OR
$ export DOCKER_HOST="tcp://<machine-ip>:2375"
$ docker exec -it my-container bash
Note: exposing the Docker socket publicly on your network carries serious security risks, although there are other ways to expose it over an encrypted HTTPS socket or over the SSH protocol instead.
Please go through these docs carefully, before attempting anything:
https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-socket-option
https://docs.docker.com/engine/security/https/
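For a persistent setup rather than invoking dockerd by hand, the listen addresses can usually go into the daemon configuration file; the path and the systemd caveat below are assumptions based on a standard Linux install:
# Sketch: persist the listen addresses in /etc/docker/daemon.json
# (on some distros the systemd unit already passes -H in ExecStart, which
#  conflicts with "hosts" here and needs a systemd override instead)
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2375"]
}
EOF
sudo systemctl restart docker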
If you have SSH on both machines you can easily execute commands on remote daemon like that:
docker -H "ssh://username#remote_host" <your normal docker command>
# for example:
docker -H "ssh://username#remote_host" exec ...
docker -H "ssh://username#remote_host" ps
# and so on
Another way to do the same is to store the -H value in the DOCKER_HOST environment variable:
export DOCKER_HOST=ssh://username@remote_host
# now you can talk to remote daemon with your regular commands
# these will be executed on remote host:
docker ps
docker exec ...
Without SSH, you can make Docker listen on TCP. This requires some preparation to keep things secure. This guide walks through creating certificates and basic usage. After that, usage looks roughly like this:
docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem \
  -H=172.16.0.1:2376 <your normal docker command>
Finally, you can use docker context to store remote hosts and their configuration. Using a context lets you talk to various remote hosts with ease via the --context <name> option. Read the context documentation here.
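For example, a short sketch of creating and using a context (the context name, user, and address are assumptions):
# Sketch: save the remote host as a context and use it per-command or as default
docker context create remote-host --docker "host=ssh://username@172.16.0.1"
docker --context remote-host exec -it my_container bash
# or switch to it permanently:
docker context use remote-host
docker exec -it my_container bash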
I want to run "standalone" Docker container with a configured Kafka server.
I found out on the Kafka website (https://kafka.apache.org/quickstart) how to run a Kafka topic :)
But when I do everything as instructed, I need to run three terminals:
One to run the ZooKeeper server:
./bin/zookeeper-server-start.sh config/zookeeper.properties
A second to start the Kafka server:
./bin/kafka-server-start.sh config/server.properties
A third to create a topic:
./bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic test
The question is:
How do I run three independent terminals inside Docker while building the docker image?
Because I want to use only these commands:
docker build . -t kafka
and then
docker start kafka
and have an up and running Kafka server with a created topic.
I've done something, but I'm stuck on trying to create these terminals.
Here's the project:
https://github.com/mpawel1993/Kafka-Docker
I want to run "standalone" Docker container with a configured Kafka server
If you want a configured Kafka server, any of the existing Docker images work fine. landoop/fast-data-dev includes Kafka, Zookeeper, Kafka Connect, and a Schema Registry, if by "standalone" you mean having all the necessary components in one image.
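For instance, a sketch of running that image; the exact ports and environment variables come from that image's documentation and are shown here as assumptions to adjust:
# Sketch: all-in-one Kafka dev environment in a single container
docker run -d --name fast-data-dev \
  -p 2181:2181 -p 3030:3030 -p 8081-8083:8081-8083 \
  -p 9581-9585:9581-9585 -p 9092:9092 \
  -e ADV_HOST=127.0.0.1 \
  landoop/fast-data-dev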
How do I run three independent terminals inside Docker while building the docker image
You wouldn't. Each RUN command is effectively its own single terminal session.
You also should not start Kafka and Zookeeper in one container, for fault tolerance and scalability reasons.
You also don't need to create Kafka topics while building the container; topics can only be created once the container is running and the server is up.
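For example, a sketch of creating the topic once the broker is reachable; this could run as a small script on the host or as a one-shot sidecar container (the bootstrap address is an assumption):
# Sketch: wait for the broker, then create the topic
until ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --list >/dev/null 2>&1; do
  echo "waiting for Kafka..."; sleep 2
done
./bin/kafka-topics.sh --create --bootstrap-server localhost:9092 \
  --replication-factor 1 --partitions 1 --topic test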
I am trying to use the python in a docker container on a remote machine as the interpreter in Pycharm. Since that is a mouthful, here is a diagram:
There is a Jupyter Notebook running in the container, which I am able to connect to through my local browser (although this is just for testing the connection). The command I am using to launch the Docker container is
docker run --runtime=nvidia -it --rm --shm-size=2g -v /home/timo/storage:/storage -v /etc/passwd:/etc/passwd -v /etc/group:/etc/group --ulimit memlock=-1 -p 8888:8888 -p 7722:22 --ipc=host latest:latest
I can forward the port 8888 which the Jupyter notebook is running on with ssh -L 8888:0.0.0.0:8888 BBB.BBB.BBB.BBB and thus use it on the local machine. But I don't much like using Jupyter for developing and would like to use the Python interpreter in the Docker Container in Pycharm.
When I select "Add Python Interpreter" in Pycharm, I get the following options:
The documentation for Pycharm suggests using the "Add Python Interpreter/Docker" tool which looks like this:
However, the documentation doesn't say how to set up the Docker container and the connections if Docker is on a remote machine.
So my questions are: should I use a Unix or a TCP socket to connect to my remote docker? Or should I somehow forward all the relevant ports from the container and use the "SSH Interpreter" option? And if so, how do I set this all up? Am I setting up my Docker Container properly in the first place?
I think I have trawled through every forum and online resource over the last two days, but have not come any closer to getting this to work. I have also tried to get this working in Spyder, but to no avail. So any advice is very appreciated!
Many thanks!
Thank you for depicting the dilemma so poignantly and clearly in your cartoon :-). My colleague and I were trying to do something similar and what ultimately worked beautifully was creating an SSH config directly to the Docker container jumping from the remote machine, and then setting it as a remote SSH interpreter so that pycharm doesn't even realize it's a Docker container. It also works well for vscode.
set up the ssh service in the docker container (subset of steps in https://dev.to/s1ntaxe770r/how-to-setup-ssh-within-a-docker-container-i5i; the port 22 stuff wasn't needed)
docker exec -it <container> bash: open an interactive admin prompt inside the container
apt-get install openssh-server
service ssh start
confirm with service ssh status -> * sshd is running
determine IP and test SSHing from remote machine into container (adapted from https://phoenixnap.com/kb/how-to-ssh-into-docker-container, steps 2 and 3)
from normal command prompt on remote machine (not within container): docker inspect -f "{{ .NetworkSettings.IPAddress }}" <container> to get container IP
test: ping -c 3 <container_ip>
ssh: ssh <container_ip>; should drop you into the container as your user; however, requires container to be configured properly (docker run cmd has -v /etc/passwd:/etc/passwd:ro \ etc.). It may ask for a password. note: if you do this for a different container later that is assigned the same IP, you will get a warning and may need to delete the previous key from known_hosts; just follow the instructions in the warning.
test SSH from local machine
if you don't have it set up already, set up passwordless ssh key-based authentication to the remote machine with the docker container
make SSH command that uses your remote machine as a jump server to the container: ssh -J <remote_machine> <container_ip>, as described in https://wiki.gentoo.org/wiki/SSH_jump_host; if successful you should drop into the container just as you did from the remote machine
save this setup in your ~/.ssh/config; follow the ProxyJump Example from https://wiki.gentoo.org/wiki/SSH_jump_host (a sketch is shown after these steps)
test config with ssh <container_host_name_defined_in_ssh_config>; should also drop you into interactive container
configure pycharm (or vscode or any IDE that accepts remote SSH interpreter)
Preferences -> Project -> Python Interpreter -> Add -> SSH Interpreter -> New server configuration
host: <container_host_name_defined_in_ssh_config>
port: 22
username: <username_on_remote_server>
select the interpreter; you can navigate using the folder icon, which will walk you through paths within the container, or you can enter the result of which python from the container
follow pycharm prompts
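As mentioned in the ~/.ssh/config step above, here is a sketch of the ProxyJump entry; the host alias and the placeholder values are assumptions to fill in:
# Sketch: append a jump-host entry for the container to ~/.ssh/config
cat >> ~/.ssh/config <<'EOF'
Host my-docker-container
    HostName <container_ip>
    User <username_on_remote_server>
    ProxyJump <remote_machine>
EOF
# then: ssh my-docker-container should drop you straight into the container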
I'm trying to locally start Kafka in a Docker container, but don't seem to get the combination of options right.
I'm running on Windows 10, Docker ce version 2.0.0.3 (31259).
What I'm doing
run Zookeeper in Docker container
docker run -d --name=zookeeper1 --network=host --env-file=zookeeper_options confluentinc/cp-zookeeper
I'll leave out the environment file since zookeeper runs fine.
run Kafka in Docker container
docker run -d --network=host --name=kafka --env-file=kafka_options confluentinc/cp-kafka
with the kafka_options file containing
KAFKA_ZOOKEEPER_CONNECT=zookeeper1:2181
KAFKA_LISTENERS=PLAINTEXT://localhost:9092
KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:29092
Try to get the metadata list
kafkacat -b localhost:29092 -L
(I do this in the Windows 10 Ubuntu subsystem; the others I ran from PowerShell. I also have a small Java Kafka application which exhibits the same behavior, though.)
The result is
% ERROR: Failed to acquire metadata: Local: Broker transport failure
What I've read
Obviously the Quickstart with Docker documentation, which uses docker-compose, which I don't want; as well as the Docker section of the documentation.
Aside from that, most notably this post by Robin explaining the advertised listeners concept, but I still can't see what I'm doing wrong.
I also found this issue about a difference in Windows preventing you from using the official Quickstart steps on Windows; this led me to try the alternative of running with a separate network.
Separate network
Following the steps in the issue:
docker network create confluent
docker run -d --name=zookeeper1 --network=confluent -p 22181:2181 --env-file=zookeeper_options confluentinc/cp-zookeeper
docker run -d --network=confluent --name=kafka -p 29092:9092 --env-file=kafka_options confluentinc/cp-kafka
kafkacat -b localhost:29092 -L
That does change the outcome to
% ERROR: Failed to acquire metadata: Local: Timed out
So it looks like at least it connects, but that doesn't help much in the end.
The question is what am I doing wrong? Is it the Kafka configuration options, or is it a Docker issue I'm not aware of?
EDIT:
It does work with the sample docker-compose.yml here, but shouldn't we be able to start the containers separately?
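For reference, the advertised-listeners post linked above describes the usual fix of separate listeners for in-network and host clients. A sketch of kafka_options along those lines follows; everything here, including changing the publish flag to -p 29092:29092 so the host listener is actually reachable, is an assumption to adapt:
# Sketch of kafka_options for the separate-network setup
# (run the broker with -p 29092:29092 instead of -p 29092:9092)
KAFKA_ZOOKEEPER_CONNECT=zookeeper1:2181
KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092
KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
KAFKA_INTER_BROKER_LISTENER_NAME=PLAINTEXT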
I have three docker containers,
java container (JC): for my java application (spring boot)
elasticsearch container (EC): for ElasticSearch
test container (TC): testing container to troubleshoot with ping test
Currently, the JC cannot see the EC by "name". And when I say "see" I mean if I do a ping on the JC to EC, I get a ping: unknown host. Interestingly, if I do a ping on the TC to EC, I do get a response.
Here is how I start the containers.
docker run -dit --name JC myapp-image
docker run -d --name EC elasticsearch:1.5.2 elasticsearch -Des.cluster.name=es
docker run --rm --name TC -it busybox:latest
Then, to ping EC from JC, I issue the following commands.
docker exec JC ping -c 2 EC
I get a ping: unknown host
With the TC, since I am already at the shell, I can just do a ping -c 2 EC and I get 2 replies.
I thought maybe this had something to do with my Java application, but I doubt it because I modified my Dockerfile to just stand up the container. The Dockerfile looks like the following.
FROM java:8
VOLUME /tmp
Note that you can build the above Docker image with docker build --no-cache -t myapp-image .
Also note that I have Docker Weave Net installed, and this does not seem to help getting the JC to see the EC by name. On the other hand, I tried to find the IP address of each container as follows.
docker inspect -f '{{ .NetworkSettings.IPAddress }}' JC --> 172.17.0.4
docker inspect -f '{{ .NetworkSettings.IPAddress }}' EC --> 172.17.0.2
docker inspect -f '{{ .NetworkSettings.IPAddress }}' TC --> 172.17.0.3
I can certainly ping EC from JC by IP address: docker exec JC ping -c 2 172.17.0.2. But getting the containers to see each other by IP address does not help as my Java application needs a hostname reference as a part of its configuration.
Any ideas on what's going on? Is it the container images themselves? Why would the busybox container image be able to ping the ElasticSearch container by name but the java container not?
Some more information.
VirtualBox 5.0.10
Docker 1.9.1
Weave 1.4.0
CentOS 7.1.1503
I am running docker inside a CentOS VM on a Windows 10 desktop as a staging environment before deployment to AWS
Any help is appreciated.
Within the same docker daemon, use the old --link option in order to update the /etc/hosts of each component and make sure one can ping the other:
docker run -d --name EC elasticsearch:1.5.2 elasticsearch -Des.cluster.name=es
docker run -dit --name JC --link EC myapp-image
docker run --rm --name TC -it busybox:latest
Then, a docker exec JC ping -c 2 EC should work.
If it does not, check if this isn't because of the base image and a security issue: see "Addressing Problems with Ping in Containers on Atomic Hosts".
JC is based on docker/_java:8, itself based on jessie-curl, jessie.
Containers in this default network are able to communicate with each other using IP addresses. Docker does not support automatic service discovery on the default bridge network. If you want to communicate with container names in this default bridge network, you must connect the containers via the legacy docker run --link option. docs.docker.com.
It should also work using the new networking.
docker network create -d bridge non-default
docker run --net non-default ...
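Applied to the containers from the question, that would look roughly like this (the network name is arbitrary):
docker network create -d bridge non-default
docker run -d --net non-default --name EC elasticsearch:1.5.2 elasticsearch -Des.cluster.name=es
docker run -dit --net non-default --name JC myapp-image
docker exec JC ping -c 2 EC    # name resolution works on a user-defined network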
There isn't a specific option which applies this behavior to the default network (AFAICT from looking at docker network inspect). I guess it's just triggered by the option "com.docker.network.bridge.default_bridge".
In the first part of another question, it's suggested this was changed in Docker 1.9. Note that Docker 1.9 was when the new networking system was turned on in the stable release. The section of the user guide that I quoted above did not exist in version 1.8. See: Docker 1.9.0 "bridge" versus a custom bridge network results in difference in hosts file and SSH_CLIENT env variable.