I am running a single-node ZooKeeper Docker image using the command below:
docker run -it -p 2181:2181 --name zookeeper zookeeper
In order to run the Kafka Docker image, I need to pass the IP address of the ZooKeeper container as an ENV variable and update Kafka's server.properties file.
Using docker inspect, I am able to fetch the IP address of the zookeeper container and pass it during docker run of Kafka.
But is there any way to automatically detect and share the IP address between Docker containers?
I saw some examples using --link, but it appears deprecated in the latest Docker documentation.
I'd appreciate any suggestions or help.
Thanks
Docker Compose is a perfect solution for your case.
By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
For your case, the docker-compose.yaml would look something like this:
version: '3'
services:
  zookeeper:
    image: <zookeeper_image_name>
    restart: always
    ports:
      - 2181:2181
  kafka:
    image: <kafka_image_name>
    restart: always
The Kafka container can then use zookeeper as the ZooKeeper hostname in its server.properties file.
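For example, assuming a Kafka image that reads a mounted server.properties, the relevant setting would simply be (a sketch):

# server.properties inside the Kafka container
zookeeper.connect=zookeeper:2181

Images that configure themselves from environment variables typically accept the same value through a variable instead; check your image's documentation.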
I have two services defined in my docker-compose file:
version: '3'
services:
  celery:
    build:
      context: .
      dockerfile: ./docker/celery/Dockerfile
    command: celery -A api.tasks worker -l info
  rabbitmq:
    image: "rabbitmq:3-management"
    ports:
      - "5672:5672"
      - "15672:15672"
    hostname: "0.0.0.0"
I can start the first service:
docker-compose run --service-ports rabbitmq
and everything works well; I can connect to port 5672 from the host OS:
$ curl 0.0.0.0:5672
AMQP
However, the second service cannot see that port. The following command errors because it cannot connect to 0.0.0.0:5672.
docker-compose run --service-ports celery
How do I set up two Docker containers so that they can see each other?
From inside the Docker container, connecting to 0.0.0.0 (or the loopback address 127.0.0.1) refers to the container itself, not to other containers. Use your host's IP address to reach the other container.
Here's an extensive explanation on how to reach your host from inside a container and various network modes that Docker offers: https://stackoverflow.com/a/24326540/417866
Another approach would be to create a Docker Network, connect both your containers to it, and then they'll be able to resolve each other on that network. Details here: https://docs.docker.com/engine/reference/commandline/network_create/
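A minimal sketch of that approach with plain docker commands (container names here are placeholders; use the actual names from docker ps):

docker network create myapp-net
docker network connect myapp-net <rabbitmq-container>
docker network connect myapp-net <celery-container>

After that, each container can resolve the other by container name on myapp-net.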
So the easy answer is to refer to each other by name. In your compose file you reference two services:
rabbitmq
celery
If you use docker-compose up -d (or just docker-compose up), it creates the new containers on a newly created network they share. Docker Compose then registers both services with that network's embedded DNS server under their service names.
So from celery you could ping rabbitmq via:
ping rabbitmq
and from rabbitmq you could ping celery via:
ping celery
This applies to all network communication, as it is just name resolution. You could accomplish all of this manually by creating a new network, assigning the containers to it, and registering aliases, but docker-compose does all the hard work for you.
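A minimal sketch of that in your compose file, assuming your app reads its broker URL from a (hypothetical) CELERY_BROKER_URL variable and RabbitMQ's default guest credentials:

version: '3'
services:
  celery:
    build:
      context: .
      dockerfile: ./docker/celery/Dockerfile
    command: celery -A api.tasks worker -l info
    environment:
      # hypothetical variable; your app must read it to configure its broker
      - CELERY_BROKER_URL=amqp://guest:guest@rabbitmq:5672//
    depends_on:
      - rabbitmq
  rabbitmq:
    image: "rabbitmq:3-management"
    ports:
      - "5672:5672"
      - "15672:15672"

Note that the hostname: "0.0.0.0" line is dropped here; it is not needed for container-to-container traffic.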
I have two docker containers:
database
app that consumes the database
I run my database container like this:
docker run --name my-db -p 127.0.0.1:3306:3306 my-db-image
And my app container like this:
docker run --name my-app --network host -it my-app-image
This works fine on Linux. I can access the DB from both the host system and the app container. Perfect.
However --network host does not work on Mac and Windows:
The host networking driver only works on Linux hosts, and is not supported on Docker for Mac, Docker for Windows, or Docker EE for Windows Server.
(source: https://docs.docker.com/network/host/)
I can still access the database via 127.0.0.1:3306 from the main host, but I cannot access it from the app container.
How can I solve this issue? How can I let the app container connect to the database, while keeping access to the DB from the main host via 127.0.0.1:3306?
I've tried using host.docker.internal and gateway.docker.internal but it doesn't work.
I've also tried to launch both containers using --network my-network after creating my-network with docker network create my-network but it doesn't work.
I can't figure out how to solve this issue.
For multiple services, it can often be easier to create a docker-compose.yml file that will launch all the services and any networks needed to connect them.
version: '3'
services:
  my-db:
    image: my-db-image
    ports:
      - "3306:3306"
    networks:
      - mynetwork
  my-app:
    image: my-app-image
    ports:
      - "8000:80"
    networks:
      - mynetwork
networks:
  mynetwork:
From the project folder, run docker-compose up (or docker-compose up -d to run the services in the background).
In this scenario, Compose provisions a user-defined network named mynetwork, and every container on it can reach the others by service name (my-db, my-app) on whatever ports their processes listen on; the ports entries only publish ports to the host. If you want to remap a published port, the pattern is HOST:CONTAINER.
I don't know that you need the ports entry on my-db for container-to-container access, but I'm trying to map your config onto the compose file, and you said you still want the DB reachable from the host. I'm also assuming you need to expose the app on some port; I'm using 8000 as it's a pretty common setup.
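With this in place, the app reaches the database at my-db:3306 from inside its container, while the host still uses 127.0.0.1:3306. A quick way to verify, assuming nc is available in the app image (a sketch):

# from the host
nc -z 127.0.0.1 3306 && echo "host can reach the DB"
# from inside the my-app container
docker-compose exec my-app nc -z my-db 3306 && echo "app can reach the DB"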
For the meaning of each parameter, see the Compose file reference: https://docs.docker.com/compose/compose-file/
Kubernetes has a concept of pods where containers can share ports between them. For example within the same pod, a container can access another container (listening on port 80) via localhost:80.
However on docker-compose, localhost refers to the container itself.
Is there any way to implement the Kubernetes network config on Docker?
Essentially I have a kubernetes config that I would like to reuse in a docker-compose config, without having to modify the images.
I seem to have gotten it to work by adding network_mode: host to each of the container configs within my docker-compose config.
Yes, you can. Run a plain service to own the network namespace, then point the other services at it with network_mode: service:<nameofservice>:
version: '3'
services:
  mainnetwork:
    image: alpine
    command: tail -f /dev/null
  mysql:
    image: mysql
    network_mode: service:mainnetwork
    environment:
      - "MYSQL_ROOT_PASSWORD=root"
  mysqltest:
    image: mysql
    command: bash -c "sleep 10 && mysql -uroot -proot -h 127.0.0.1 -e 'CREATE DATABASE tarun;'"
    network_mode: service:mainnetwork
Edit 1:
So network_mode can have the following possible values:
host
service:(service name in the same compose file)
container:(name or id of an external container that is already running)
In this case I have used service:mainnetwork, so the mainnetwork service needs to be up.
Also, this has been tested on Docker 17.06 CE, so make sure you are using that version or newer.
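As a quick illustration of the container: variant (a sketch; the container name shared is just an example):

# start a long-running container that will own the network namespace
docker run -d --name shared alpine tail -f /dev/null
# a second container joining that namespace sees the same interfaces
docker run --rm --network container:shared alpine ip addr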
Alternatively, using the (legacy) Docker links mechanism you can wire containers together, and the linked container then becomes reachable from the linking container under its declared alias. Note that linking is deprecated.
docker ps shows 3 containers: webapps, redis and rabbitmq. I want to link the webapps container to the redis and rabbitmq containers. Outside Docker, my web app can send messages to RabbitMQ and read/write Redis.
I tried using a command like this:
docker run --name rabbitmq -p 8080:80 --link webapps:nimmis/apache-php7 -d rabbitmq
but it does not work.
Here is my config.php on webapps where I am trying to send messages via rabbitmq:
define('HOST', 'localhost');
define('PORT', 5672);
I tried replacing localhost with the container name:
define('HOST', 'rabbitmq');
define('PORT', 5672);
The error message says connection refused.
It seems that my three containers need to be configured on the same network.
Linking is a legacy feature. Please use "user defined networks":
sudo docker network create mynetwork
Then rerun your containers using this network:
sudo docker run --name rabbitmq -p 8080:80 -d --network mynetwork rabbitmq
Do the same for other containers that you want connected with each other.
Using "user defined networks", you have an "internal name resolution" at your disposal (somewhat like domain name resolution when visiting websites). You can use the names of the container that you want to refer to, in order to resolve the IP addresses of containers, as long as they are running on the same "user defined network". With this, you can resolve the IP address of the rabbitmq container with its name, within other containers, on the same network.
All containters on the same "user defined network" will have network connectivity. There is no need for "legacy linking".
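Putting it together for your three containers (a sketch; names and images taken from your question):

sudo docker run --name redis -d --network mynetwork redis
sudo docker run --name webapps -p 8080:80 -d --network mynetwork nimmis/apache-php7

Then in config.php on webapps, your second attempt works as-is:

define('HOST', 'rabbitmq');
define('PORT', 5672);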
For inter-container dependencies and links, you'll want to use docker-compose where you can define the links between containers.
In the root directory where you store your Docker files, just create a new file called docker-compose.yml where you can define your containers as services that rely on each other, like this:
version: '2'
services:
  webapps:
    build: .
    links:
      - "rabbitmq:rabmq"
      - "redis"
  rabbitmq:
    image: rabbitmq
  redis:
    image: redis
Here in the definition of the webapps service, you can see that it links the other two services, rabbitmq and redis. What this means is that when the webapps container is started, an entry is added to its hosts file so that the name redis resolves to the IP address of the actual container.
You also have the option to change the name by which a linked container is addressed using the service:alias notation, as I did by giving rabbitmq the alias rabmq inside the webapps container.
Now, to build and start your containers with docker-compose, just type:
docker-compose up -d
So connecting to another container is as simple as using this alias as the name of the host.
Since you are using docker-compose in this case, it automatically creates a Docker network to connect all the containers, so you shouldn't have to worry about that. For more information have a look at the docs:
https://docs.docker.com/compose/networking/#/specifying-custom-networks
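So in your config.php, the alias from the links entry is what you would use as the host (a sketch matching the compose file above):

define('HOST', 'rabmq'); // alias declared as "rabbitmq:rabmq" above
define('PORT', 5672);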
You need to link rabbitmq and redis to your webapps container, not the other way around.
#run redis container
docker run --name some-redis -d redis
#run rabbitmq container
docker run -d --hostname my-rabbit --name some-rabbit rabbitmq
#run webapps container
docker run --name webapps -p 8080:80 --link some-redis:redis --link some-rabbit:rabbitmq nimmis/apache-php7
First run the redis and rabbitmq containers.
Then run the webapps container with links to those two containers.
Now, configuring the Redis host in webapps is easy: you can simply use the environment variable REDIS_PORT_6379_TCP_ADDR. Once a container is linked you get its environment variables, and the linked redis container exports that variable.
Regarding the RabbitMQ host, you can get the IP once the rabbitmq container is up:
RABBITMQ_IP=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' some-rabbit)
and then pass it in --env when you run the webapps container.
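For example (a sketch; RABBITMQ_HOST is a hypothetical variable name that your PHP config would have to read):

RABBITMQ_IP=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' some-rabbit)
docker run --name webapps -p 8080:80 --env RABBITMQ_HOST="$RABBITMQ_IP" \
  --link some-redis:redis --link some-rabbit:rabbitmq nimmis/apache-php7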
In my experience, working with declarative files such as docker-compose.yml is fine, but you can also simply pass --link flags to docker run:
docker run -d -P --link rabbitmq --link redis nimmis/apache-php7
You can define your services to use a user-defined network in your docker-compose.yml
version: "3"
services:
webapps:
image: nimmis/apache-php7
ports:
- "80:8080"
networks:
- my-network
rabbitmq:
image: rabbitmq
networks:
- my-network
redis:
image: redis
networks:
- my-network
networks:
my-network:
driver: overlay
Then do:
docker swarm init
docker stack deploy -c docker-compose.yml my-stack
Check out the full example at https://docs.docker.com/get-started/part3/
You could also work with the containers' internal IP addresses directly.
Start rabbitmq and capture its internal IP address:
docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' rabbitmq > .rabbitmq.ip
Now you can add an Apache configuration entry with the internal IP address of rabbitmq when starting the webapps container, or simply add an entry to the Apache container's /etc/hosts, like:
# the dynamic internal IP of rabbitmq is known once rabbitmq starts:
172.30.20.10 rabbitmq.redis.local
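Instead of editing /etc/hosts by hand, here is a sketch of the same wiring with docker run's --add-host flag (hostname as in the example above):

docker run --name webapps -p 8080:80 \
  --add-host "rabbitmq.redis.local:$(cat .rabbitmq.ip)" \
  -d nimmis/apache-php7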
In the Docker docs they set up a custom bridge network with the containers connected, like so:
$ docker network create -d bridge my-bridge-network
$ docker run -d --network=my-bridge-network --name db training/postgres
$ docker run -d --network=my-bridge-network --name web training/webapp python app.py
These two docker containers spin up and connect to the same network.
But I cannot find a way to save this configuration the way you would commit a Docker image, so that I could pull the network configuration and it would bring up the containers ready to go.
The bridge is created when you set up the Docker network by calling docker network create ..., and a container's network mode is configured when you call docker run --network=....
You cannot store in a Docker image the information that it should be connected to bridge X when it starts; the network is run-time configuration and outside the image's scope.
You can either script the docker network and docker run commands in bash, or use a docker-compose.yml file to describe the configuration each container needs to run with, e.g.:
version: "2"
services:
my-app
build: .
image: my-app-image
container_name: my_app_container
ports:
- 3000:3000
networks:
- my-network
networks:
my-network:
external: true
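Since the network is declared external, create it once before bringing the stack up (a sketch):

docker network create my-network
docker-compose up -d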