I have a container running NiFi (--name nifi) exposing port 8080 and another container running NiFi Registry (--name nifireg) exposing port 18080. I can get to both UIs, and I am able to connect NiFi to the Registry in the registry services by using the Registry container's IP (172.17.0.5). These containers are also on a Docker network called nifi-net. My issue is that the registry client is unable to talk to the Registry when using the container name.
From the nifi container I can ping by container IP as well as by name (ping nifireg), so there is some level of connectivity. But if I change the registry client to point to http://nifireg:18080 or even http://nifi-net.nifireg:18080, it spins for a while and then eventually returns this error:
Unable to obtain listing of buckets: java.net.ConnectException: Connection refused (Connection refused)
What needs to be done to allow nifi to connect to the nifi registry using the container name?
EDIT: Here is how I set everything up:
docker run -d --name nifi -p 8080:8080 apache/nifi
docker run -d --name nifireg -p 18080:18080 apache/nifi-registry
I added the networking after the fact, but that shouldn't be an issue.
docker network create nifi-net
docker network connect nifi-net nifi
docker network connect nifi-net nifireg
I don't understand why this solved the problem, but destroying the containers and recreating them with the --net nifi-net option at spin-up solved the problem.
docker run -d --name nifi --net nifi-net -p 8080:8080 apache/nifi
docker run -d --name nifireg --net nifi-net -p 18080:18080 apache/nifi-registry
The docs state that you can add them to a network after the fact, and I am able to ping from one container to the other using the name. I guess it's just a lesson that I need to use docker networking more.
I would suggest using docker-compose to manage the deployment, since you can define the network once in docker-compose.yaml and not have to worry about it again.
Plus it lets you learn about docker networking :P
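For example, a minimal docker-compose.yaml sketch based on the images and ports above (untested; the service names are just carried over from the container names):

cat > docker-compose.yaml <<'EOF'
version: "3"
services:
  nifi:
    image: apache/nifi
    ports:
      - "8080:8080"
  nifireg:
    image: apache/nifi-registry
    ports:
      - "18080:18080"
EOF
docker-compose up -d

Compose puts both services on a project network automatically, so the registry client in NiFi can point at http://nifireg:18080 by service name.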
Related
I have 3 Docker applications (containers), in which one container communicates with the other 2 containers. If I run those containers using the commands below, container 3 is able to access container 1 and container 2.
docker run -d --network="host" --env-file container1.txt -p 8001:8080 img1:latest
docker run -d --network="host" --env-file container2.txt -p 8080:8080 img2:latest
docker run -d --network="host" --env-file container3.txt -p 8000:8080 img3:latest
But this works only with the host network. If I remove the --network="host" option, then I am not able to access the application from outside (in a web browser). In order to access it from outside, I need to make the host port and container port the same, as below.
docker run -d --env-file container1.txt -p 8001:8001 img1:latest
docker run -d --env-file container2.txt -p 8080:8080 img2:latest
docker run -d --env-file container3.txt -p 8000:8000 img3:latest
With the commands above I am able to access my application in a web browser, but container 3 is not able to communicate with container 1. Container 3 can still access container 2 because there I am publishing 8080 as both the host and container port. But I can't publish host port 8080 again for container 3.
How do I resolve this issue?
Ultimately, my goal is that the application should be accessible in a browser without using the host network; it should use a bridge network. And container 3 needs to communicate with containers 1 & 2.
On user-defined networks, containers can not only communicate by IP address but can also resolve a container name to an IP address. This capability is called automatic service discovery.
Read this for more details on Docker container networking.
You can perform the following steps to achieve the desired result.
Create a private bridge network.
docker network create --driver bridge private-net
Now start your application containers with --network private-net added to your docker run commands.
docker run -d --env-file container1.txt -p 8001:8001 --network private-net img1:latest
docker run -d --env-file container2.txt -p 8080:8080 --network private-net img2:latest
docker run -d --env-file container3.txt -p 8000:8000 --network private-net img3:latest
This way, all three containers will be able to communicate with each other and also with the internet.
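As a sanity check (the --name flag is an assumption, since the commands above don't name the containers), you can verify name resolution from a throwaway container on the same network:

docker run -d --name app1 --env-file container1.txt -p 8001:8001 --network private-net img1:latest
docker run --rm --network private-net busybox ping -c 1 app1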
In this case, when you use --network=host, you are telling Docker not to isolate the network but to use the host's network instead. So all the containers are on the same network and hence can communicate with each other without any issues. However, when you remove --network=host, Docker isolates each container's network, thereby restricting container 3 from communicating with container 1.
You will need some sort of orchestration tool like Docker Compose, Docker Swarm, etc.
I've got two Docker containers that need to have a websocket connection between the two.
I run one container like this:
docker run --name comm -p 8080:8080 comm_module:latest
to expose port 8080 to the host. Then I try to run the second container like this:
docker run --name test -p 8080:8080 datalogger:latest
However, I get the error below:
docker: Error response from daemon: driver failed programming external
connectivity on endpoint test
(f06588ee059e2c4be981e3676d7e05b374b42a8491f9f45be27da55248189556):
Bind for 0.0.0.0:8080 failed: port is already allocated. ERRO[0000]
error waiting for container: context canceled
I'm not sure what to do. Should I connect these to a network? How do I run these containers?
You can't bind the same host port twice at the same time. You can change the host port on one of the containers:
docker run --name comm -p 8080:8080 comm_module:latest
docker run --name test -p 8081:8080 datalogger:latest
Then check the configuration in the containers for how they communicate.
You can also create a link between them (note that --link is a legacy feature):
docker run --name test -p 8081:8080 --link comm datalogger:latest
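To confirm the name resolves inside test (assuming the datalogger image ships with ping), something like:

docker exec -it test ping -c 1 comm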
I finally worked it out. These are the steps involved for a two-way websocket communication between two Docker containers:
Modify the source code in the containers to use the name of the other container as the destination host address + port number (e.g. comm:port_no inside test, and vice versa).
Expose the same port (8080) in the Dockerfiles of the two containers and build the images. There is no need to publish the ports, as they will be visible to other containers on the network.
Create a user-defined bridge network like this:
docker network create my-net
Create my first container and attach it to the network:
docker create --name comm --network my-net comm_module:latest
Create my second container and attach it to the network:
docker create --name test --network my-net datalogger:latest
Start both containers by issuing the docker start command.
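Concretely, with the names above:

docker start comm
docker start test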
And the two-way websocket communication works nicely!
My solution works fine:
docker network create mynet
docker run -p 443:443 --net=mynet --ip=172.18.0.3 --hostname=frontend.foobar.com foobarfrontend
docker run -p 9999:9999 --net=mynet --ip=172.18.0.2 --hostname=backend.foobar.com foobarbackend
route /P add 172.18.0.0 MASK 255.255.0.0 10.0.75.2
The foobarfrontend container calls a wss:// websocket on foobarbackend on port 9999.
PS: I work with Docker on Windows 10 with Linux containers.
Have fun!
Currently I have a personal Docker image uploaded to Docker Hub, and I am making changes to connect this application to Redis (which is running in another Docker container as well).
Now, I get the following message from my app when it tries to connect to Redis:
Error saving to redis: dial tcp 127.0.0.1:6379: connect: connection refused
I understand that they're not on the same network, so my app is pointing to 127.0.0.1:6379, where nothing is listening. I am looking for the best way to connect these containers in a way that doesn't make my app depend on the IP where Redis is hosted. From my local machine I can use Redis, but not from another Docker container. Briefly, what I did was:
docker run --name redis_server -p 6379:6379 -d redis
sudo docker run -d --restart=always -p 10000:10000 --link redis_server:redis --name my_app repo/my_app
So, I am looking for solutions to make 127.0.0.1:6379 accessible from my_app. I don't use docker-compose, by the way.
The easiest solution would be to run the containers with --network=host, which binds them to the host network. This is a perfectly fine solution for testing; however, it will expose the ports and services to the internet (if you don't have a firewall), which may or may not be secure.
Another way of doing this would be to define a network in the following way:
docker network create -d bridge my-net
And then run your containers on that same network by adding --network=my-net to the docker run command. This way you can reference each container by its name. For example, in your case you can ping my_app from the Redis container and vice versa, and your app can reach Redis at redis_server:6379 instead of 127.0.0.1:6379.
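For your containers, a sketch of that approach (my-net is an assumed name, and it assumes the app can be configured to use the hostname redis_server rather than 127.0.0.1):

docker network create -d bridge my-net
docker run -d --name redis_server --network my-net -p 6379:6379 redis
sudo docker run -d --restart=always --name my_app --network my-net -p 10000:10000 repo/my_app

Inside my-net, the app reaches Redis at redis_server:6379.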
I have a container1 running a service1 on port1
also
I have a container2 running a service2 on port2
How can I access service2:port2 from service1:port1?
I should mention that the containers are linked together.
I'm asking if there is a way to do it without accessing the docker0 IP (where the port is visible).
Thanks!
The preferred solution is to place both containers on the same network, use the built-in DNS discovery to reach the other container by name, and access it by the container port, not the host-published port. With the CLI, that looks like:
docker network create testnet
docker run -d --net testnet --name web nginx
docker run -it --rm --net testnet busybox wget -qO - http://web
The busybox command shows a sample client container connecting to the nginx container by the name web, over port 80. Note that this port didn't need to be published to be reachable by other containers.
Setting up multi-container environments with their own network is a common task for docker-compose, so I'd recommend looking into this tool if you find yourself doing this a lot.
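For instance, a rough docker-compose.yaml equivalent of the example above (the client service and its sleep command are just placeholders so there's something to exec into):

cat > docker-compose.yaml <<'EOF'
version: "3"
services:
  web:
    image: nginx
  client:
    image: busybox
    command: sleep 3600
EOF
docker-compose up -d
docker-compose exec client wget -qO - http://web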
I have a CentOS Docker container on a CentOS Docker host. When I use this command to run the Docker image, docker run -d --net=host -p 8777:8777 ceilometer:1.x, the container gets the host's IP but doesn't have ports assigned to it.
If I run the same command without --net=host, docker run -d -p 8777:8777 ceilometer:1.x, Docker exposes the ports but with a different IP. The Docker version is 1.10.1. I want the container to have the same IP as the host with ports exposed. I have also put the instruction EXPOSE 8777 in the Dockerfile, but it has no effect when --net=host is given in the docker run command.
I was confused by this answer. Apparently my Docker container should have been reachable on port 8080, but it wasn't. Then I read
https://docs.docker.com/network/host/
To quote
The host networking driver only works on Linux hosts, and is not supported on Docker for Mac, Docker for Windows, or Docker EE for Windows Server.
That's rather annoying as I'm on a Mac. The docker command should report an error rather than let me think it was meant to work.
Discussion on why it does not report an error
https://github.com/docker/for-mac/issues/2716
Not sure I'm convinced.
The docker version is 1.10.1. I want the docker container to have the same IP as the host with ports exposed.
When you use --net=host, it tells the container to use the host's networking stack. So you can't expose ports to the host, because it is the host (as far as the network stack is concerned).
docker inspect might not show the exposed ports, but if you have an application listening on a port, it will be available as if it were running on the host.
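A quick way to see this on Linux (nginx is just a stand-in, and it assumes port 80 is free on the host):

docker run -d --net=host --name hostnet-demo nginx
curl -s http://localhost:80
docker rm -f hostnet-demo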
On Linux, I have always used --net=host when myapp needed to connect to another Docker container hosting PostgreSQL.
myapp reads an environment variable DATABASE in this example.
As Shane mentions, this does not work on macOS or Windows...
docker run -d -p 127.0.0.1:5432:5432 postgres:latest
So my app can't connect to my other Docker container:
docker run -e DATABASE=127.0.0.1:5432 --net=host myapp
To work around this, you can use host.docker.internal instead of 127.0.0.1 to resolve your host's IP address.
Therefore, this works:
docker run -e DATABASE=host.docker.internal:5432 -d myapp
Hope this saves someone time!